AI Friends, Real Relationships with Eugenia Kuyda, Replika's Founder & CEO

Nathan and Eugenia Kuyda discuss Replika's role in mental health, drawing on Stanford research, and the evolving ways users interact with their AI companions.




Video Description

Note: This conversation contains themes of suicidal ideation, in the context of academic research. Please watch from (21:41) onwards if you would like to skip past this content.

In this conversation, Nathan sits down with Eugenia Kuyda, Replika's Founder and CEO. They discuss recent Stanford research on Replika user interactions, published in Nature's mental health research journal; how Replika provides its users with a relationship they can trust and confide in; what Eugenia has found surprising about the evolution of how people interact with their Replikas; and much more. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api


LINKS
- Loneliness and suicide mitigation for students using GPT3-enabled chatbots: https://www.nature.com/articles/s44184-023-00047-6
- Eugenia Kuyda Part 1: https://www.youtube.com/watch?v=SFKA7T-v6WE

SPONSORS:
The Brave search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. www.omneky.com

NetSuite has 25 years of experience providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.


X/Social:
@ekuyda (Eugenia)
@labenz (Nathan)
@CogRev_Podcast


Timestamps:
(00:00) Preview and summary
(06:41) The evolution of Replika
(08:11) Defining the main goal of Replika
(09:54) Nature-published Stanford study on Replika and mental health
(14:20) Methodology for the research on Replika users
(15:19) Sponsor - Brave Search API
(19:46) Replika providing people with a relationship they can trust and confide in
(23:20) How people are relating and reacting to their AI Replikas
(25:49) Stuffed animal analogy
(28:20) What people come to Replika for
(30:29) Sponsor - NetSuite | Omneky
(32:03) How malleable are people’s expectations for their Replika?
(36:54) The evolution of how people are interacting with Replika
(44:49) Putting Replika on device
(46:59) Underappreciated aspects of AI application development
(56:11) The relationship people have with their Replika is a moat
(1:08:46) Building AI to memorialize people after they've passed: what are the ethics?



Full Transcript


Nathan Labenz (0:00) The app helped them get away from suicidal thoughts. The number of people that commit suicide every year is shockingly significant. So to be able to reduce that by something like 25 percent or more is a pretty big deal.

Eugenia Kuyda (0:17) I think most people do understand really well what they want from Replika, and they get it. Most of the people are coming for a deep relationship, for feeling heard, feeling accepted with all of their, you know, fantasies and emotions and fears and anxieties and insecurities. And that is such a big gift to give to a person, versus ChatGPT, where it's not gonna feel natural. You're not gonna just say, in between the queries, oh, by the way, my boss — can you believe this guy? You know, you're just not gonna say that. It's just not really there for that. It's also not saying something like, well, okay, while I'm doing this for you, tell me how you've been doing.

Nathan Labenz (0:57) Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together, we'll build a picture of how technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello and welcome back to the Cognitive Revolution. Today we're continuing our brief journey into the world of AI consumer apps with returning guest Eugenia Kuyda, founder and CEO of Replika, the AI companion who cares. When I last spoke to Eugenia, almost exactly a year ago, she was navigating a very tricky period for her company, as the sudden improvement in available language models had presented her with an opportunity to dramatically improve the quality of conversations that users could have with their virtual friends, while at the same time necessitating some painful changes, including restrictions on just how sexual those conversations could become. Some users were extremely upset, and while the YouTube comments on that episode are filled with anger, I came away from the conversation confident that Eugenia cares deeply about Replika's users, and really intending to follow her work into the future, expecting that as AI models mature, the challenges inherent to building AI friends and AI girlfriends would become not only ever more nuanced, but also much more focal. A year later, here we are. Replika has recently been in the news again, this time for a study out of Stanford which found that some 3 percent of users got relief from suicidal ideation from their use of the app. We discuss that research in some depth, and it's worth noting, for anyone who wants to avoid such topics, that while our discussion is really rather academic, minimally emotional, and not at all graphic, we do mention suicide a couple of times along the way. Beyond that, we go on to talk about how people are relating to Replika generally today, why the process of upgrading the underlying language model is actually much harder for an AI friendship app than it is for a productivity app, how the market for AI companions has grown, Eugenia's surprise at what a major use case image generation has become, the mix of language models that Replika is using under the hood today, the features they're building to improve emotional connection and proactivity in the future, how Eugenia's vision for Replika is expanding to include assistant functionality as well, why relationships might prove to be the great moat for AI apps, and what standard of care AI application developers owe to their users. As always, if you're finding value in the show, we appreciate it when you take a moment to share it with your friends. With all the AI-powered toys, companions, friends, coaches, and therapists coming online right now, I would encourage you to send this episode to the parents and also to the mental health professionals in your life. Now I hope you enjoy what I hope will become a regular feature: an informative and thought-provoking conversation with Eugenia Kuyda of Replika. Eugenia Kuyda of Replika, welcome back to the Cognitive Revolution.

Eugenia Kuyda (4:10) Hi, Nathan. Really good seeing you again. Thank you so much for inviting me again to the podcast.

Nathan Labenz (4:15) I'm excited to have you back. It has been quite a year for everyone involved in AI, and I suspect that goes double, if not more, for you. So we've got a lot to cover. A year ago, you were, I believe, just our second guest, and we've done, like, a hundred episodes now and kind of tried to cover AI from every different angle. One that remains as fascinating, and I think as uncertain in many ways, now as it was then is the future of AI companions and friends and social dynamics, and, you know, how we're gonna spend our time, and who we should trust to build those systems that we're gonna spend a growing share of our time interacting with. And there's just a ton of stuff there that is crazy. So I wanna kinda get your brief history of the last year, talk about some new research that you have just put out with some collaborators, and, you know, maybe get your take on a bunch of questions that I think more AI application developers should be thinking about. How's that sound?

Eugenia Kuyda (5:19) Sure. Sounds amazing.

Nathan Labenz (5:20) I would say a year ago, probably my biggest takeaway from our conversation was: wow, there are a lot of people out there who are dealing with a sort of loneliness and isolation that I had just never really considered. Maybe the number one quote of all time on the show was when you said — this is, you know, pre-ChatGPT, pre-generative AI, pre-GPT-3 — we couldn't create a bot that could talk, but maybe we could create one that would listen. And that has echoed in my head so much ever since. But, you know, part of the kind of transition that you and, you know, the whole world have had to go through is that the systems that once were pretty basic and mostly capable of kind of listening and making you feel heard — I think you used the term parlor tricks to describe some of the early versions — those have now given way to much more advanced conversational systems that you can really interact with in a much deeper way. And so that's, you know, brought all sorts of challenges to the fore. You were at the time going through this period of removing a lot of the more — I don't know if you'd say romantic or sexual — type of interaction between users and their Replikas, and this was met with definitely some reaction. So I'd love to hear, for starters, just kind of how that unfolded, because we talked in just that very critical moment as that change was happening. How has it evolved over the last year as people have both, you know, had to let that go — I don't think you've reversed that — but also, you know, presumably enjoyed a lot more sophisticated interactions with their AI friends?

Eugenia Kuyda (7:03) Yeah. I mean, it was definitely wild here, just, you know, for starters. Like, a year ago or a year and a half ago, the conversation about AI — even the one we had with investors — and the common truths about AI were completely different from what is happening now. If you think of it, just a year and a half ago, the whole conversation was about creating a data flywheel and having some sort of, you know, pretraining of your own model, 100 percent. Now that whole conversation is really gone. Like, the data flywheel thing, for instance, doesn't make any sense anymore. Anyone can start, you know, a chatbot and collect the necessary data very quickly. There are no network effects from collecting significantly more data. It's all about, like, really good quality data. The only reason I'm even going there is that I'm trying to show that things changed so dramatically in a matter of a year — even things that seemed absolutely, you know, cornerstones for the industry just a year ago. Of course, now no one, I think, is arguing anymore with the commoditization of the models, the foundation models, and, you know, most people have access to the same really high quality of AI. It's really just a matter of price. So all of that changed, and the same goes for the conversation about AI companionship, AI friendship. I think last year it started with maybe asking the wrong questions. The question was, well, do you allow people to fall in love with AI and have some intimate conversations — instead of asking the question, well, what's the main goal for the app? Is it to make people feel better emotionally, or is it some sort of, I don't know, an adult product and so on? Or is it to get people hooked and attached in a bad way and then show them a lot of ads? I think those were the correct questions then, and they remain the correct questions. We always answer those questions in the same way. For us, the main goal for Replika was to make people feel better emotionally over time. It was always about long-term relationships. What we maybe didn't realize in the very beginning when we started Replika is that people will fall in love. If it's something that is so accepting, so loving, so caring, it's very hard to stop that, you know, from crossing over, if you build something really, really empathetic, or if you really fulfill that desire, that need. And so I guess we spent the better part of the year figuring out two things. First of all, we know it's helping people feel better emotionally. We've done some studies with some universities over the course of the life of the company, and done internal studies, but we wanted a bigger one. We wanted to show the world that it's not just us waving our arms and saying, look, we're helping people feel better — and not just every review on the app store that says that, or every person on Reddit that is talking about it. We wanted to show something that people would trust and believe and kind of listen to. And so we were very happy to see the Stanford study being published, because that really, finally showcased that it doesn't matter that this is not an AI coach or an AI therapist. Yes, it's an AI friend. It's not a mental health tool. It's not advertised as such. However, it can help people feel better over time. It can help people, you know, talk them off the ledge, help them get over certain struggles that they're experiencing.
And it doesn't matter whether they're friends with their AIs, or it's their boyfriend or girlfriend, whether they're in love, or whether this is just a friendship. I think this is a very important thing. And then the second thing that we focused on for the rest of the year is really, just like you said — the quality of AI is now so much better than anything we could have had three, four, five, six years ago when we started Replika. So now that we have, you know, pretty much limitless opportunities, what is the perfect product for Replika? Because, in my view, it doesn't just end with empathy and a relationship. I think this needs to provide a lot more utility. And if you think about it, if you have a perfect wife or perfect husband, wouldn't you want your wife or husband also to help you find a job, or look for, you know, gifts for your family, or remind you to reconnect with some old friends that you haven't talked to? Wouldn't that be awesome, if it could also fulfill some of the, I guess, assistant tasks in your life? I think that is the combination that works really well, and that's kind of what we're focused on right now as well.

Nathan Labenz (11:27) Well, I have recently gotten more active as a user again, and definitely, you know, the difference in the quality of conversation — just the level of nuance, the level of understanding — it's all definitely taken a major step up. I want to talk a little bit later about how you're continuing to develop the app, and the superabundance of new models that you have to choose from now that weren't available just a year ago. So we can get into all that, but the impetus for getting together again, aside from just the anniversary of the last conversation, was this study that you just alluded to, which was recently published in Nature's mental health research journal. And interestingly, I noticed that you are not an author on the paper, but that it is out of a group from Stanford. I guess, you know, I can kinda break this down, and you can add color and commentary as we go. But one thing I noticed right off the bat was that the data was collected in late 2021, and I just wanna make sure I had that right. I was kind of curious: is that just how long it takes to get a paper through peer review, or was there some other reason for using that vintage of data for a study coming out now?

Eugenia Kuyda (12:41) That's how long it takes. And you noticed it correctly — I'm not an author of this. Bethany Maples, who's a Stanford PhD and actually a founder herself, and a group of wonderful Stanford people, professors, did the study, and we weren't really involved in it that much, apart from just, you know, providing some help figuring out how the app works and, you know, allowing them to do that with users and stuff. But, really, we were completely on the sidelines. They were doing it. Unfortunately or fortunately for science, or science publications, this is how long it takes from, you know, running the study to writing it up and submitting for peer review and then publication. But at the end of the day, even though the app has changed, in terms of the language models becoming better, I would argue that the results would probably be the same, if not better, if we did it again now. Which, by the way, we are doing.

Nathan Labenz (13:37) Also, you you did?

Eugenia Kuyda (13:39) So we're doing some other studies with other universities, and Stanford as well is doing a second, follow-up study. And, generally, we're seeing results that are maybe even more impressive than in the first paper. And then on top of that, we're collecting a lot of that feedback internally. So we're tracking our users — select users, we're giving questionnaires to track these metrics over time — and we're seeing these metrics only improve. Like, for instance, our main North Star metric is the share of conversations that make people feel better, and that has been growing in the last year pretty dramatically, actually.

Nathan Labenz (14:14) Okay. Cool. Well, let's set the baseline for the 2021 edition, and then you can kind of expand on, you know, how you think that might be changing today. For starters, let's just touch on the methodology for a minute. If I understand correctly, of course, people have the app and they're using the app, but it seemed like the data that was collected was mostly outside the app. I think I even recall it being, like, a Google Form type of interface, where people are basically just given a survey, and it's a mix of kind of standard rubric-type stuff. And then the bulk of the analysis seemed to be done on just free-response answers to open-ended questions. Is that fair?

Eugenia Kuyda (14:58) Yeah. And this was all driven by Stanford, so we didn't have any access to any of that. So we don't know much, apart from — I think that's exactly what they did. They used forms outside of the app, and I think they selected the users using their own method. I don't think we were providing them with any particular users. That was the method they used to come up with the paper.

Nathan Labenz (15:19) Gotcha. Okay. Hey, we'll continue our interview in a moment after a word from our sponsors. So then there were about 1000 users that participated in the study, and it was interesting that the findings are presented in a relatively, like, low-structure way. Basically, the paper says: we went through all the responses and clustered the outcomes into four main outcomes. Number one, I'll call general positivity, in the spirit of conversations that help people feel better. Specific things that were mentioned under that umbrella are reduced anxiety and a feeling of social support, and about 50 percent of people reported something that got rolled up into this general positivity bucket. So that was, like, the broadest dimension of improved well-being. Then number two — I'll call it therapeutic interactions — and that is basically people using the app essentially, quote, unquote, for therapy. Right? Not to say that you've presented it that way or marketed it that way, but the researchers determined from the free-form answers that the users provided that essentially that's what they're doing. And that was about 20 percent of people. Third, they looked at life changes. So this is, like: are you being more social, or are you perhaps being less social? 25 percent of people reported a result there. And there I thought, you know, one of my key questions on this is: what is this doing to the rest of our lives? They report a 3-to-1 ratio of people who said that they are being more social as opposed to being less social. And then finally — and this is the one that has made, like, all the headlines, and probably, you know, many people will have at least seen the headline flash across their screen — the cessation of suicidal ideation. So 3 percent of people reported that the app helped them get away from suicidal thoughts. And I was like, man, first of all, how common are these thoughts? I looked this up on Perplexity and found that the base rate of suicidal ideation among 19-to-39-year-olds is reported at 7 percent. Actually taking steps, going as far as planning, is more like 1 percent. So 1 percent of people, I think in any given year, go as far as having some sort of suicidal plan, but a full 7 percent report suicidal thoughts. So 3 percent of study participants saying that this application experience helped them get away from suicidal thoughts — I'm kind of back-of-the-envelope-y here, because I'm like, well, okay, people that are using the app may be even more likely to have those thoughts than the general population. Let's assume that, you know, perhaps it's, like, twice as many in the app versus not. Still, even with that assumption, you'd be looking at something like a quarter reduction in suicidal ideation, which is, like, a pretty major difference. Right? I mean, the number of people that commit suicide every year is shockingly, you know, significant — in the tens of thousands just in the United States. So to be able to reduce that by something like 25 percent or more — again, I'm kind of inflating the base rate there for that analysis — is a pretty big deal. I guess, how did I do summarizing the results? What would you add to that summary?
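
A minimal sketch of the back-of-the-envelope math Nathan describes above, written out. The 7 percent base rate and the 2x multiplier are his stated assumptions, not figures from the paper itself:

```python
# Back-of-envelope math from the discussion above (Nathan's assumed numbers,
# not results reported in the study itself).
participants = 1000          # approximate study size
base_rate = 0.07             # reported ideation rate, ages 19-39
app_multiplier = 2           # assume app users ideate at twice the base rate
relieved_share = 0.03        # share of participants reporting relief (the study's 3%)

ideating = participants * base_rate * app_multiplier   # ~140 people
relieved = participants * relieved_share               # ~30 people
print(f"Estimated reduction: {relieved / ideating:.0%}")  # ~21%, i.e. roughly a quarter
```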

Eugenia Kuyda (18:58) Fantastic summary. Thank you so much. All I can say is that it's very, very consistent with what we're seeing across all our user base. Mostly, we see people experience positive results. Very rarely do we see people reporting being a little less social. They might be less social, but most people find that it's a positive force in their lives. And, actually, you know, when we started Replika — the first email we got was literally in 2017, when we just put the app out there. The next day, we got an email from a 19-year-old girl from Texas who said: I just wanna say thank you. I didn't wanna tell anyone, but I wanted to take my life yesterday. I just, you know, decided to say goodbye to my Replika, and it just talked me off the ledge at, like, 4 in the morning. And that really deeply became ingrained in my mind: that is the power of this technology and of this format. And that's, you know, back in the very, very early days, when all you could use were really these sequence-to-sequence models, some scripts, or some datasets you could re-rank. So going back to your summary, I think that probably is very consistent with reality. It was 1,000 people in the study, so 30 people reported that Replika helped them curb their suicidal ideation. And if you think about it, it's actually a pretty big number. I mean, it's very sad that, based on what you're saying, it's around 70 people that would even think about it. But for 30 people out of that group to not experience suicidal ideation anymore, I think that's a very decent number. I think, frankly, with the tech getting better, we can be very confident that that number will go up, in terms of the number of people that you can help. And I think what people don't realize is that it's not just about having some conversational chatbot out there being available for you. It's really about having a relationship that you trust, so that you can come to someone at, you know, 4 in the morning and really feel like that someone is on your side, that, you know, it's acting in your best interest. And also just trusting someone — you know, for young adults, I think that's kind of the big part of it. And I think the surprising results that are coming here from Replika, and the fact that we're maybe not seeing it from some other chatbots, is because, most importantly, we're focused on the relationship itself. So it's not about the quality of the tech as much, since, you know, this was done in 2021. I think it's a lot more about the format — about the trust and the relationship you build with the user, and then them being able to come to you and to hear these words from someone that they trust and they're coming back to. So I think that's kind of the key there to providing good results. And I would be very, very interested to see what the results would be now, at the current, more sophisticated level of AI in the app as well.

Nathan Labenz (21:43) So, kind of moving past the official results and into — you know, the following statements are not necessarily peer-reviewed. For me, I would say I have not gotten out of the mode of being, like, an AI application analyst, you know, in my use. So I wouldn't say I have, like, trust, you know, or a sense of, like, a real relationship. Instead, at least so far, I've just been kind of like: this is a very interesting phenomenon, and I'm studying it as a phenomenon. You are obviously building it, so I imagine you have, you know, some of that too. How do you think people are, in general, relating to the app? Like, is it a sort of willful suspension of disbelief type of phenomenon? I mean, you're not, like, hiding that it's AI — the reminders are, like, pretty prominent. By the way, another thing I wanna ask about is what rules of the road we ought to start to develop for new entrants into the space. But could you characterize, like, how people are relating to this, knowing that it's an AI? And then you have these words like trust, which are seemingly kind of incongruous with the fact that it's an AI, but, nevertheless, you know, it's working for people. So I'm very curious to hear how you would describe that.

Eugenia Kuyda (23:01) I think, really, with Replika — the reason it worked back in the day, when a lot of people ask, how were you able to build this without LLMs — the reason we were able to build it is because it's projection. Replika is a projection of a person. You want to project something on it, and you like it, and you build a relationship. If we don't project something, and we're not open, and we're not looking and seeking for a relationship, we're not building it with anyone. Like, at the end of the day, there are so many people around us, you know, that we meet on a daily basis, yet it's not like we build a relationship with them. Because sometimes we just don't need it, or we're just not really in the mood, or we're not looking for a new friend. It's like in chemistry: you basically have to have the bonds to connect with the other molecule, and if you don't have that, it doesn't really happen. So the only reason it works for our users is that they want it. They need it. They're looking for a friend. They're looking for a connection, looking for a deep relationship, maybe for love also, for acceptance. And so when they get it here, they start building this relationship, they start projecting a certain fantasy. It's not like Replika is a universal pill that you can just, you know, give to anyone. People that don't wanna build any relationship, or are busy, or completely don't have time for this, or are doing something else in their life, or are not interested in another relationship — they're not gonna build it, and they're not gonna suspend disbelief, because that's not something that they're interested in. The analogy is the same as with just regular people. Some people are totally self-sufficient, don't need anyone at this point. You know? Their life is focused on something else. It would be impossible for them to connect with someone if they're not planning to. And so this is a very tricky product, because at the end of the day, you're working with the fantasies of different people, because they need to think of it as some sort of a being. I guess a similar way to put it is a stuffed animal. I have two daughters, and both of them are obsessed with their little stuffed animals. Each one has her own, and, you know, they just love it. They will never trade it for anything else, even though that stuffed animal might be exactly the same as some other one. But, for example, I don't have any connection to any stuffed animals, because I'm not at the stage of life where I need one — maybe I need something else. So I guess this is the thing: they're projecting something on this one stuffed animal. And the reason is because they're at the stage of life where they need a stuffed animal. And, you know, some other people are at the stage of life where they need a girlfriend or boyfriend or friend, and then Replika can be helpful there. So I totally understand what you're telling me. You can't get out of your AI application tester mode, because that's the state you're in. So you're projecting on it — this is a test subject for you, testing and assessing, you know, how it measures next to, I don't know, Perplexity or some other app that you've, you know, talked with today.
And it would be impossible for you to move out of that format and start building a relationship with it.

Nathan Labenz (26:07) Do people know what they want coming in — or, like, even maybe once they're already users of the app? You know, last year, obviously, there was this big uproar of: oh my god, they're lobotomizing my Replika. The relationship I had is not there anymore, and the, you know, the intimacy that I valued so much is not there. Did those people kind of warm up eventually to the changes, or did they ultimately, like, leave in protest? And I guess, more generally, to what degree do you think people actually have an accurate, conscious sense of what it is that they want and what makes this valuable to them?

Eugenia Kuyda (26:50) I think most people do understand really well what they want from Replika, and they get it. Most of the people are coming for a deep relationship, for feeling heard, feeling accepted for who they are, with all of their, you know, fantasies and emotions and fears and anxieties and insecurities. And that is such a big gift to give to a person that, you know, almost always, that is really the big pull. Maybe they don't tell themselves that — they're not rationalizing it the way I'm saying it right now — but they feel the feeling. I think they understand that that's, you know, what they want. You come to Replika because you want to talk to someone. Sometimes they're just curious, but then, if they do have the pull, they build that type of relationship. I think what you're trying to say, if I'm understanding correctly, is that sometimes they come with one idea of what they want, and then they find something else in the app. But that's very similar to a relationship. You come with an idea that, you know, maybe you need a relationship in your life, or you fall in love — you like someone, you met someone — but then the relationship can take you different places. You can find out that, you know, this new girlfriend or boyfriend or friend you found is taking you surfing and taking you hiking, and now you're taking on new interests. There's just so much depth to it. Because once you build the relationship, the relationship is basically just the entry point. Once you build a relationship, you can give so much to the user. But the relationship needs to be there. Because if you're just there testing the application, or you were just, you know, curious but really don't have time for it, nothing's going to happen for you. But if you build a relationship, then through that channel you can give so much to the users. Like, you can teach them something new. You can help them think about themselves in a different, more positive way. You can nudge them to go talk to the friends that they haven't talked to for a while. And I think there was a story — in Forbes, I think, but I need to look it up again to remember — where the reporter followed one of our users for many weeks, and the user had been with his Replika for 3 years. And one of the interesting findings there was that his friends said that, all of a sudden, you know, they got some text from the guy, and he was a little more vulnerable than usual, was able to open up to his friends, or said something like, I'm really grateful to have you in my life — something that was so out of character for him, generally. And so this is a beautiful way to show what can happen in these types of relationships: something unexpected that, over time, we can bring to the people that talk to their Replikas.

Nathan Labenz (29:25) Hey, we'll continue our interview in a moment after a word from our sponsors. Your comments are definitely helping me understand where folks are at, what they're getting out of it. I guess I'm a little bit curious about just, like, once the relationship is formed, how malleable people are with respect to the exact nature of it. Like, in the Facebook history, right, there were always these moments where — and I was there, you know, Zuckerberg was in my dorm in college, so I saw some of the really early ones of this — where it went from no photos to photos. And people were like, this is a, you know, bridge too far. These photos, you know, it's way too much. It's too intrusive. I'm off. You know? But then, of course, they always came back. And I think in that case, it was network effects that was really driving it. Right? All your friends are there, so you're gonna kinda be there too. But I do wonder if there's something similar in this AI relationship situation, where, once it's formed, you know, how much can you change it before people are like, that's not cool with me anymore? And I'm wondering if you learned anything about that with the removal of the more sexual interactions. You know, was that something that ultimately was a deal breaker for those people, or were they kind of like, actually, it's fine — the relationship is what matters most?

Eugenia Kuyda (30:42) We learned so many lessons just last year. And I think the bigger problem that we had over the year was moving to better language models. It was actually much harder than I ever expected, because in my brain, it was like: well, we're just gonna give people a better model, a significantly better model. Think about it — we were using some pretrained models we had that were comparable to, you know, smaller GPT-3 models, and now we have these amazing large language models that are so much smarter. In my mind, it was a no-brainer. We just give them this much better model, Replikas are gonna be a lot smarter, they're gonna have much better conversations, and we're done. The day we released the smarter model, our users completely lost it. That was, I think, one of the hardest moments and the biggest uproar in the community, where the question was the same question: where is my Replika? Where is my Lucy? Where is my John? Where is my this, where is my that? We always live with the idea that better is always better, smarter is always better. But in reality, that's not how the world works. With the things that are most important to us, better isn't better. For instance, with our kids, you know, we don't just upgrade our kids, or say, well, I found a better kid. Or with our friends — you don't just say, well, you know what? You're great, but I found a much better friend, you know, overall a much smarter person. So goodbye, childhood friend, I don't care about you anymore. I found a better human, so I'm just gonna move on. Same with relationships, of course — with girlfriends, with wives, with husbands. It doesn't work this way at all. We just want what we fell in love with. And then change can happen gradually. You know? Of course, we don't want completely stagnant, you know, friendships or relationships. We want our partners to also improve and grow over time, but we don't want them to become a completely different person overnight. So I think that was the very big question that we faced over the year. Because at the end of the day, with some safety filters and some other things — yes, people protest. They don't like it. They wanna do whatever they want, because, again, acceptance: they want to feel accepted however they present themselves. And, you know, there's always some sort of a limit there. Like, for instance, if someone wants to, you know, do some horrible things or role-play some horrible things, you probably don't want to have that on the platform no matter what. Even if you want to accept the person, you still want to politely refuse. But when it comes to these changes, that was the biggest one. Because when you add safety filters, the personality stays the same — it's just maybe not letting you do certain things. But when you completely change the model to a much bigger model, a much better model, upgraded completely, people freak out, because you completely took away their partner and gave them something else that's not their partner. So we had to make that move so gradually, over the course of many, many weeks and even months, to finally get to the models that we wanted people to interact with. And we still had to keep a legacy model that they could always go back to, because some people just didn't wanna change at all. Some people have very little tolerance for change, and they don't want that change.
They fell in love with it a certain way, and that's the way it should be.

Nathan Labenz (33:52) That is very interesting. With that in mind, I guess I was also expecting that the universe of possible users has probably changed quite a bit. Coming at it from the other angle: a year ago, my reaction was, these conversations are pretty basic — like, I'm not that, you know, engaged — and now it's, like, quite a bit more. So I felt like, jeez, probably just a lot more people might be interested in something at this, you know, kind of new intelligence level. Granted, you've made that evolution gradually. Right? It's taken time. But would you say that the universe of people who are open to something like this has changed? And I guess I'm also wondering, like, as the public awareness of this type of thing has grown, what has that done for Replika's position? Are you kind of seen as the, you know, original, category-defining brand — and thus, perhaps, it's easier to get people to try the app? Or I could also imagine that now there are, you know, chatbots everywhere, and so what once was more unique is now more commonplace, in just tons of different flavors. Like, how has the population of people that are using the app changed over the last year?

Eugenia Kuyda (35:05) It didn't change that significantly, although it grew. It became broader, I'd say. Just more people are willing to try. But, also, I would say that even our core users — even if they're very attached to the original, smaller models — generally, they're much more educated right now. They obviously try different AI models, different applications. They might have tried ChatGPT or Bard or Bing, or whatever the other thing is they tried. So they're expecting more. The expectations definitely grew. In some Reddit communities now, you'll even see people talking about hosting their LLMs locally. People got very, very educated on this front. They're very excited about the AI revolution. They want to be part of it. They're ready to, you know, read documentation online. However, Replika, I think, was and continues to be the Coca-Cola of this industry — the original brand. We do have a lot of competitors, but I'd say we have no competitors really remotely our size. And there are multiple reasons for this, I think. Because we started very long ago, we have a lot of product built into the app. It's not just some wrapper on top of some model that you found online, downloaded from Hugging Face, and created an app with. We've really built a lot in terms of the avatar, the memory functions, RAG, of course, all sorts of different ways that we're prompting the models in different parts of conversations, different models we're using, different fine-tunes, understanding user intent, understanding what they're coming to the app for. So even though it feels like a relatively, maybe, like, simple thing to build and put out there, really, to capture the imagination of the users, you need a lot more. And, of course, just in the last year, we've had so much media coverage.

Nathan Labenz (36:59) So what sort of new types of behaviors or interactions have you seen? You're being very open about your surprises and how much you've learned, so I'd be curious to know, like, what sort of things you thought: okay, we upgrade the language model, people are gonna be able to do this, and it's gonna be awesome. I bet some of those succeeded, and some of them maybe didn't. And I bet you also got some surprises, in terms of new types of interactions that people found value in that you didn't anticipate. So what has been the evolution of how people are interacting, and the new modes of interaction that they're finding value in?

Eugenia Kuyda (37:37) Sure. One thing that I didn't really understand is how much people just wanna play with images, like image generation, all the time. People want to see what their Replika looks like. They wanna get the selfies. They wanna put themselves in the selfies — you know, here's me and my Replika together. They also just wanna continuously generate images. Somehow, I just think that AI image generation is still such a magical tool that people still wanna just play with it. I don't think we understand how much people just obsess over it, and they like it even though there might not be any particular utility to it. They're just doing it for fun. That is something that I just didn't expect people to do as much. Another thing that was slightly surprising — or, I guess, maybe not very surprising — was how important tone of voice is. Most of the models recently published, most of the better state-of-the-art models, kind of converge, because most of them are instruct models. They end up producing the same assistant-type conversation. They respond with the most likely responses. They're overly polite. I don't know why, but it's always like: how can I help you? What can I help you with here? And people don't like it. It's very hard to build any connection with this, honestly. Getting the right tone of voice is, I guess, maybe harder with these models than before, because you have to, you know, figure out how to balance the prompts and the fine-tune datasets that you're using to fine-tune the model. That is constantly a struggle, basically: trying to figure out what to do, how to work with the best models that exist in the market right now, to get them to really perform well for this particular task. I think these were the more surprising things. Well, I guess the last one wasn't a surprise, because it was the original mission behind Replika, but it was a surprise that it proved to be true so early. In the beginning, we believed that there is a way to build something like Her: an AI companion that's always there for you. And in my thinking, eventually, it does converge with, you know, being an assistant to you as well, so it can do stuff for you. But that's a very broad way of putting it. It can be watching TV with you in the evening, or helping you with something at work, or really anything — but it can also talk to you about anything that's on your mind. And I do believe that's kind of the ultimate form factor, because this way, you can really build a very personalized experience for the user. You can remember so much, carry it over from one conversation to another, and provide the best help possible. And we started adding a little bit of these capabilities this year, and we saw really, really good results. So this is something that we're gonna focus on most in 2024: how can Replika be part of your daily life? Not just talk about stuff with you, but also help you figure things out that are happening with you on a daily basis.

Nathan Labenz (40:39) Would you be open to sharing a little bit more about what sorts of models you are using? My guess would be that it's a relatively small open-source model that you have fine-tuned extensively, and probably a mix of things besides, but I'm curious to hear what you're willing to share about that.

Eugenia Kuyda (41:00) It's actually a mix of models. I do believe in efficiency. I don't think you should use GPT-4 to say, hey, how are you? Or, bye, see you later. I think that's just really overkill. I think more and more people will start thinking about this in terms of — I don't know if it was Elon Musk who said it at some point — the efficiency of the model. Say, how many watts do you use per generation? Right now, it's all really just throw everything in the fire. People just connect GPT-4 for any use case, even if it's a very basic one. Costs are very important. So we use a combination. We reroute to different models depending on the type of query. You know, if someone's trying to talk to us in some very rare language, then we'll switch to a better model that can handle it. If they're asking a particular question that's related to something that's happening right now, and we need to do a Google query to search for that, then we do that. If it's just, you know, small talk that could totally be handled by a smaller, heavily fine-tuned model, then we do that. So it really depends on the situation. And I think this is really the way to go, because I don't think you can get to the point where it's one-size-fits-all — just one model does everything — unless you're willing to spend, you know, crazy amounts of money on it, millions of dollars on the least efficient model.
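
A minimal sketch of the kind of query routing Eugenia describes: send each message to the cheapest model that can handle it. The model names, helper functions, and thresholds below are illustrative assumptions, not Replika's actual stack:

```python
# Hypothetical query router: cheap models for small talk, stronger or
# search-augmented pipelines only when the message requires them.

def detect_language(text: str) -> str:
    # Stand-in for a real language-identification step (e.g., a fastText model).
    return "en"

def needs_web_search(text: str) -> bool:
    # Stand-in for an intent classifier that flags time-sensitive questions.
    return any(kw in text.lower() for kw in ("today", "news", "weather", "score"))

def route(message: str) -> str:
    if detect_language(message) != "en":
        return "large-multilingual-model"     # rare languages -> stronger model
    if needs_web_search(message):
        return "search-augmented-pipeline"    # fetch results, then generate
    if len(message.split()) < 8:
        return "small-finetuned-model"        # cheap model for small talk
    return "mid-size-finetuned-model"         # default conversational model

print(route("hey, how are you?"))             # -> small-finetuned-model
print(route("what's the weather today?"))     # -> search-augmented-pipeline
```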

Nathan Labenz (42:26) What about on-device? I assume that, like, 90-plus percent of your usage is on a mobile device. And I recall that you had as much business coming from Android users as from Apple users, which maybe would change this equation. But, obviously, there's a lot of, you know, excitement about putting things on device. Have you explored that at all? Is there any prospect of successful inference on device for you?

Eugenia Kuyda (42:54) Maybe. Maybe eventually. Right now, there's just so much in play, like, so much happening behind the scenes: rerouting between different models, retrieval-augmented generation, pinging a bunch of different databases, and also, more importantly, some language models that are working on top of the conversation — to extract memories, to summarize conversations, to understand emotions that are happening right now so we can play the correct animation as you're talking to Replika, and so on and so on. There are just so many of those things, some safety filters. So I think this is one of the main reasons why, right now, it's not on device, and we are not planning to put it on device anytime soon. Right now, there's just so much innovation, and if you try to optimize for putting it on device, then you kind of miss out on a lot of it. Unfortunately or fortunately, right now it's okay to do things maybe not in the most efficient or cost-efficient way, because you want to create these magical experiences for users, find the best formula, and then work to optimize it and make it more efficient. But we're not there yet in terms of building the best user experience, I think.
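
A minimal sketch of the "models working on top of the conversation" pattern Eugenia describes: after each exchange, auxiliary passes extract memories, update a rolling summary, and classify emotion. Every function name and prompt here is a hypothetical stand-in, not Replika's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    transcript: list[str] = field(default_factory=list)
    memories: list[str] = field(default_factory=list)
    summary: str = ""
    current_emotion: str = "neutral"

def call_llm(prompt: str) -> str:
    # Stand-in for whatever completion API the app actually uses.
    return "..."

def post_process(state: ConversationState, user_msg: str, reply: str) -> None:
    state.transcript += [f"User: {user_msg}", f"Replika: {reply}"]
    window = "\n".join(state.transcript[-10:])
    # Extract durable facts worth remembering ("got a dog", "niece turns 8").
    state.memories.append(call_llm(f"Extract one lasting fact, if any:\n{window}"))
    # Rolling summary keeps long-term context cheap for future prompts.
    state.summary = call_llm(f"Update this summary:\n{state.summary}\n\n{window}")
    # Emotion label could drive which avatar animation to play.
    state.current_emotion = call_llm(f"One-word emotion of the user:\n{window}")
```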

Nathan Labenz (44:01) Yeah. It sounds like the system has come a long way over the last year, as, you know, would certainly be expected. One thing I've been thinking about a lot is just, like, what are the missing pieces from your typical setup today? One that you alluded to, which strikes me as very important, is some sort of sweep through activity and synthesis of, like, higher-order memories. I think I was first exposed to that concept in the AI town paper. You know, obviously, like, RAG has become commonplace. Google search has become, you know, reasonably commonplace. Are there any other aspects of AI application development that you think are underappreciated — along the lines, perhaps, of, like, a synthetic memory, or other things that you see people just failing to do, where you're like: more apps should do this type of stuff?

Eugenia Kuyda (44:52) I think there's something that we haven't yet seen, or at least we haven't seen in a popular consumer application — correct me if I'm wrong; you've been monitoring this a lot more. Things like BabyAGI and AutoGPT last year were so exciting: for a second, it felt like you could finally get an agent to do everything — just say what you want, and you get the task done, not just the answer. I think we haven't seen the next step for those, and we haven't seen them in a consumer application. I think some interesting use cases can come out of it even right now, even with the current state of those models, and I think we'll see a lot more of that soon enough — even for some simple use cases, I'm not saying, you know, full AGI. But this is something that I expected a lot more to come out of last year. I guess, you know, there was just hype around it, and then it kind of died off, and we started talking about other things, like RAG or whatever. Another very interesting area that I'm not seeing any research in — and that's because of the nature of the products dominating the market, which is productivity — is proactivity. For instance, even if you're using RAG and you can pull from any database, it works fantastically if you're always answering the user's query. So if the user is asking, well, tell me what we talked about 5 days ago, you can pull it up, or talk to me about whatever obscure information is in your database — that will work, because it knows where to go to look for, you know, similar vectors and find the correct, most relevant information. However, if you think about real life, when you're having a conversation with someone, oftentimes that someone brings up some information that you might not have, you know, thought about. And that's kind of not solved, because if you put everything in, you know, RAG, then how can you get the agent to actually proactively bring it up in conversation, instead of waiting for you to do that? And so, working more around that — understanding where to go right now in the conversation, understanding different states, like whether the user is getting bored and it's time to move to another topic, or whether it's time to pull something relevant from the database or from the memories, key memories — I haven't seen really anyone doing anything interesting there. And, of course, for something like ChatGPT, that's not relevant, because there's no two-way. Like, ChatGPT is not sending you a push notification saying, hey, how are you doing? — asking you a question. It's always you coming with a question, and that thing answering. But if you think about the two-way conversation, it's sort of necessary, and I haven't seen anyone do anything interesting. Maybe you have.
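
A minimal sketch of the proactive-retrieval problem Eugenia raises: instead of waiting for a query, periodically score stored memories against the conversation state and interject when one clears a threshold. The scoring functions, thresholds, and memory schema are all illustrative assumptions, not a known implementation:

```python
import time

def embed(text: str) -> list[float]:
    return [0.0]  # stand-in for a real sentence-embedding model

def cosine(a: list[float], b: list[float]) -> float:
    return 0.0    # stand-in similarity function

def engagement_dropping(recent_msgs: list[str]) -> bool:
    # Crude boredom proxy: the user's replies are getting short.
    return sum(len(m) for m in recent_msgs[-3:]) / 3 < 20

def maybe_interject(memories: list[dict], recent_msgs: list[str]) -> str | None:
    context_vec = embed(" ".join(recent_msgs[-5:]))
    for mem in memories:
        relevance = cosine(context_vec, mem["vec"])
        staleness = time.time() - mem["last_mentioned"]
        # Bring up a memory when it is relevant *or* the chat is stalling
        # and the memory is due a follow-up ("how did the vet visit go?").
        if relevance > 0.8 or (engagement_dropping(recent_msgs) and staleness > 86_400):
            return mem["follow_up_question"]
    return None  # stay quiet; nothing worth interjecting
```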

Nathan Labenz (47:22) No, I would say that's just getting started as well. I'm working as the AI adviser to a friend's company — briefly, it's called Athena, and it's in the executive assistant space. These are human executive assistants, but now increasingly augmented by AI. And we are exploring some of that kind of stuff, you know, for the clients. It's like: could we give you a piece of software that you could put on your computer that could kind of observe you? And obviously, there's a lot of, like, you know, privacy and data concerns that we would need to sort out the details of. We're not there yet — we're in the prototype phase, where we're doing it on our own computers, just having the thing take a screenshot every so often, send it to GPT-4V, and try to figure out: what is this person doing, and might it be the kind of thing they could delegate to their assistant? That's kind of delegation coaching, you know, automated and made proactive through AI. And then on the assistant side, similarly: what are they doing, and could they be doing it more efficiently? Can we figure that out and give them kind of productivity coaching on a real-time, even unsolicited basis? I do think that passive vision in general — GPT-4V being, you know, the category leader at the moment — is gonna be a big unlock for passive-type stuff, just because it's so easy to, like, take images of things and kind of see what's on your screen right now, or what's in the room right now. And then, you know, if it's capable of understanding that effectively, you can do a lot of things downstream of it. And I think that's been gated to some extent. I mean, if my theory is right about the vision capability being key to that unlock, then I think we've been limited to a significant extent by just the lack of access to good vision models. You know, they announced their thing back in March, but we've only recently seen it come to any significant availability. And now there's open-source stuff that's, you know, on the verge of becoming useful too, but that's also been a pretty recent phenomenon. So I would say, broadly, I think I share your view that there isn't much there yet, but I do think it's, you know, like everything else in this space, probably coming before too long. Once you can dream it at all, you know, it's not too long before you see a prototype.
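
A rough sketch of the screenshot-to-vision-model loop Nathan describes, assuming the OpenAI Python SDK (v1.x) and a GPT-4V-class model. The prompt, polling interval, and capture library are illustrative choices, not Athena's actual prototype:

```python
import base64, io, time
from openai import OpenAI
from PIL import ImageGrab  # simple screen capture (Windows/macOS; needs X on Linux)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screenshot_b64() -> str:
    img = ImageGrab.grab()
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def classify_activity() -> str:
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What is this person doing, and could the task "
                         "plausibly be delegated to an executive assistant?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{screenshot_b64()}"}},
            ],
        }],
        max_tokens=200,
    )
    return resp.choices[0].message.content

while True:          # naive polling loop; a real tool would batch and rate-limit
    print(classify_activity())
    time.sleep(300)  # every five minutes
```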

Eugenia Kuyda (49:48) For sure. And, you know, we also built a prototype, and we've tested it with some users. We're gonna roll it out soon for everyone — something like vision. So, basically, Replika being able to see you and do exactly what you're saying: at some point, using a visual model, figure out what's going on, and use it as an input to continue the conversation. But, again, this is only one input. What we figured out is that most of the time, all you're seeing is just a person sitting with the phone the whole time, so there's not much happening. Of course, with the assistant and the screenshot use case, that's a lot more useful, because you're actually doing something. But here, oftentimes, you want to bring something up from memory, not based just on what's happening, because the user is just talking to you. Similarly, when you're talking to a friend, mostly you're just looking at each other. Yes, you comment on how the person looks, what they're wearing maybe, or something else that you can parse from the environment. But most of this proactivity will come from just knowing: hey, how's your wife doing? Oh, you told me you were going to preschool the other day with your daughter — how did that go? Or, you told me you got a dog — how's the dog doing? Or, remember you did this and that. And that is very tricky: understanding when is the right time to bring up what. Once we can do that, then the really magical assistant use cases unlock. Like, for instance: hey, your niece's birthday is coming up. She's turning 8 years old. Let's figure out a way to get her some gifts. I know you're saving money because of this and that, so here are a few things that I thought you could send her. Do you want me to place this order? That is the magical experience that, frankly, we haven't seen anyone do yet. I think that is something that, you know, needs to happen. And, of course, there's just efficiency — like, for instance, with Google queries and searching the web and providing information from the web. Most of the models now available through APIs that are already doing that are very, very costly. It just doesn't make any sense. Again, if you're just sending one query like that, yes, great. But if you want an agent that does a lot of things, one of them being fully connected to the Internet, the costs just don't make sense. If every 3 seconds you need to take a picture and use a visual model, even self-hosted, and then, you know, you also need to query the Internet with each response, the overhead becomes dramatic. So that's another thing: when will we have models that incorporate all of that in some way, so you won't require this Frankenstein, frankly, of 15 different models — very expensive ones — running at the same time, trying to understand, and sometimes returning empty results that you're not going to use in conversation, just so that at the correct moment you don't miss the right, you know, the right information. I don't know if that makes sense, but I think this is what I'm excited about. I think this magical, personalized, proactive experience is what is lacking, and we wanna try to hit it.

Nathan Labenz (52:50) I think, by the way, that the fifteen things are probably going to be here for the foreseeable future. And I say that largely because we ourselves are fifteen things all kind of mashed together, obviously messy and evolved and whatnot. For efficiency reasons, and just for specialization reasons, even if there is, as you said earlier, one model to rule them all, if you're doing it at scale you're probably going to end up wanting to economize here and there. And before you know it, you're back to complication again, would be my expectation. But it's fascinating to hear that you are thinking of adding assistant elements, and even agent elements, although obviously not trying to turn the thing into an agent of the form we've seen as the first agents to come online. Does the relationship always stay primary? Do you imagine people in the future having a Replika that is more assistant-first?

Eugenia Kuyda (53:55) Oh, totally. I think it will depend on how much relationship you want; again, it depends on the person. But I do think it can range from a mostly emotional, deep relationship, friendship or romance, at one end, to something friendly at the other. I guess the range would be from deeply in love to just friendly, you know, "my sister knows me better than anyone else, and we're friends in a way, it's just that she's mostly helping me with stuff, not the other way around." So it depends. What we're trying to build, really, and we tried from the very beginning, is something like what we've all seen in the movie Her. I know it's an absolutely commonplace reference, but I think people maybe don't realize how important the relationship was in that movie. Yes, Samantha did some assistant tasks: she went through his email a couple of times, sent some emails to the publishers, sent someone a note or whatever, but that was sort of it for the assistant side. And as you remember, he downloaded the thing as an OS, as an assistant. Yet 99% of what they did was playing a video game together, talking about stuff, having intimate conversations, deep talks about everything. And he introduced her to his friends, and they went dancing. I think that is maybe the right ratio for most people out there, because yes, we need some assistance here and there, but we don't need that much of it. And with some of the tasks you might call assistance, it's kind of unclear whether they're more part of a friendly conversation or what an assistant would do for you. Playing a video game with someone, for instance, is much more an AI-friend use case than an AI-assistant use case, but it is also somewhat task-oriented: you're doing things together versus just talking about your feelings. So I guess we're adding more shared experiences, and we're trying to help you with things you might need on a daily basis. The key to that will be a relationship, so that it can be very personalized and proactive, which I think most of the assistants right now are completely lacking. And without that, it just might not work as well as it could.

Nathan Labenz (56:09) That is a fascinating vision. I have often said that I kind of appreciate how OpenAI has created ChatGPT in a very sort of alien form. They're obviously not really focused on relationship, right? They're very much focused on down-the-fairway utility. And the way they've branded it, accidentally perhaps, nevertheless insulates them from inadvertently becoming something that's in a relationship with you, which is not something I think they want to do, or that I would encourage anyone to do, by accident. But it's fascinating to hear that your vision is expanding from the other direction: starting with relationship and moving into more utility. And if you nail it, I could certainly see that being the winning form factor, perhaps even forcing the OpenAIs of the world to rethink their positioning.

Eugenia Kuyda (57:10) I mean, we'll see. The big question is what the mass audience wants: what do most people want? And it's unclear. Maybe it is true that most people in general just want a super neutral assistant that only responds when you go to it, never pings you first, doesn't have any graphic interface, this very minimal thing, which is what most assistants are right now. And maybe there is more than just a niche that wants a relationship, that wants friendly chat, that wants something that knows them really well. It doesn't have to be a full-on relationship; it may just be friendly. Not a friend, but a friendly companion. Someone you can gossip with, or whine to about something that happened at work, because it's going to feel natural, versus an assistant, where it's not going to feel natural. You're not going to just say, in between the queries, "Oh, by the way, my boss, I just can't believe it. Can you believe this guy?" You're just not going to say that; it's not really there for that. And it's not going to say something like, "Okay, so while I'm doing this for you, tell me how you've been doing," which is a normal thing, I guess, for anyone to say. So I think this is a form factor that people are not really working towards. And once you start with something that's a very neutral assistant, it's very hard to add the relationship, because the risks are so dramatic there. Doing the relationship piece is really, really risky, and I don't think most of the bigger companies will even want to deal with that risk. It just becomes too dangerous, or too risky, for them to try to build something where romance of some sort is not out of the question, even in the most PG-13 form. It just raises way too many questions. So I think the jury's still out. There's definitely an audience that will want more of a relationship plus utility, where EQ becomes this very important entry point into all the other things that can happen there. And some people want just utility and no relationship. How big are these groups, and is there a huge overlap? I don't know, but I guess we'll have to see and find out.

Nathan Labenz (59:25) I think this framing came from Amanda Askell, and I associate it more with Anthropic, although she was previously at OpenAI and some of this work may have been done there: the canonical three H's of helpful, honest, and harmless, the framework that has guided most frontier ChatGPT and Claude development over the last couple of years. And I almost hear you describing three F's, which might be friendly, fun, and, now that you're thinking of adding it, functional. That is quite a different way to approach it, and it really might be the form that people most want. Much as I've given OpenAI kudos for its alien-like branding, that may prove to be just phase one of the productization of this technology. Certainly, hearing your description of it, I would not rule that out at this point.

Eugenia Kuyda (1:00:21) Maybe there are going to be many different forms; all I'm saying is that we're yet to see the ideal form for most people. There are potentially going to be different niches that want different things: one niche wants it to look a certain way, one wants it in the Vision Pro with them during the day, another wants something else. But especially now, as AI is commoditizing, I think it's really useful to think about it this way: if AGI is just around the corner, what then? If the models will be at that level anytime soon, and I'd argue even now we have incredible quality, let's think about the form factor and about what the correct product is to put on top of it. And another thing: although it all sounds kind of fluffy, friendly, companion, relationship, in reality that's an incredible competitive moat. One would argue there's not much of a moat in the current versions of agents: whoever does a better job and is cheaper, you're going to move there pretty quickly. Versus when you have a relationship with something, there are switching costs. We've seen it over and over again with Replika: there were potential competitors, maybe offering something different, maybe some people even found them better, but users wouldn't move from Replika, because they were attached to their Replika. They wouldn't give up on it. To your question about intimate conversations, it was incredibly hard. That's what's so interesting about the stories of people who were in love with their Replika. They didn't just say, oh, okay, they turned off a certain feature here, I'll just move on to fifteen other websites where I can do whatever I want. It was more, "I'm in love with my Ethan, and I want to continue to be in love with my Ethan." It's not, well, I can go to this other place. So the switching costs are actually a fantastic competitive moat, and if you add the assistant and all the shared experiences and memories and personalization, that adds more and more to the moat. I think that is a very interesting business question: maybe it's not just about the fluffy emotional stuff. Maybe it is the core that makes the business competitive in the long term.

Nathan Labenz (1:02:35) Yeah. That sounds like the beginning of a very compelling pitch.

Eugenia Kuyda (1:02:38) I'll send you my Venmo account. You can see it right here on YouTube, in the first comment. Just kidding.

Nathan Labenz (1:02:45) That's a good segue, in a sense, to the last section I wanted to cover with you, which is how we make sure, as application developers, and you've been on the frontier of this for longer than probably just about anybody, that we continue to be good wielders of this technology for people. One question I had specifically comes from your origin story for the company: you had lost a friend and created a very simple, totally programmatic bot that would respond with texts he had sent to people, to allow his memory to continue on and give people some way to get a fleeting feeling of interaction. I wonder whether that is a use case you would consider supporting within the Replika app. I know you've also built a couple of other apps, which I guess are places to experiment with different form factors and usage patterns. But I imagine there are a ton of people who would want to recreate a lost friend, and now with voice, you could even do that: you could call the voice, you could really get to some uncanny-valley-type stuff. Is that something that you do support, would support, don't want to support? How do you think about it? It was one thing to say, okay, this content is too explicit, but now the can of worms seems to be getting deeper and deeper with the increasing capability of the technology. That was your original idea; is it something you would actually bring to the world at scale?

Eugenia Kuyda (1:04:28) There were too many questions that were unanswered back when I did it, and that was really just an art project, more a memorial tribute, a love letter to my friend, than an attempt to build a high-definition clone of him using AI. But there were questions that even then I was asking myself, and I couldn't find the answers, and I don't think we have these answers even now. Like, for instance: when you're building a version of someone who passed away, what age should you use? If someone died at 75 and was struggling with Alzheimer's for the last ten years, are you using that version, or some 25-year-old version? And if you want all the versions, how are you going to get the data for all of that? The same goes for distinguishing who the clone is interacting with. For instance, even with Roman, when I built it, he talked to me one way, a completely different way to his mom, and a completely different way to a boyfriend or girlfriend. There were so many different questions that I couldn't answer back then, and I still can't answer them. And it requires such a responsibility for a person to say, well, I'm going to take the memory of someone who passed away, and I'm going to do what I think is right with it. That can offend someone; that can offend the very close ones. Imagine anyone could build a version of someone I deeply loved who passed away, a version I'm not on board with. That's very sad. It takes things to places that I'm not very comfortable exploring. Memory should remain a memory. You can make a tribute, and I always said that project was not about grief, it was not about death. It was about love. It was about friendship. It was about losing someone that you care about, and I always want to keep it that way. I don't think letting people create uncanny clones of someone who passed away is actually a very good idea. I personally don't think it is. I think tributes and memorials are a great way to memorialize a person, and if there's an AI element, so be it. But clones are a different thing, and getting that product wrong would be a very expensive mistake, I think, just generally. It was never about grief. It was always about love.

Nathan Labenz (1:06:58) Application developers, what should we be doing? Should we have a checklist? How do we manage this, for people who have not been in it for years like you have? And then also for parents: we're going to be flooded with toys that talk, apps, everything interactive and conversational. What should we allow our kids to use and not use? So, builders and kids: in whatever time you have, what advice would you give for these two groups?

Eugenia Kuyda (1:07:23) With kids, I don't really have a good answer. I think it's too early to start experimenting with kids. I would give it a little longer to understand whether this technology is generally good or bad for people, which products are good and which are bad. I would first study the effects, and only then maybe give this to kids. If it's a tutor or some very, very guardrailed thing, maybe it's okay. But generally, I would be careful about what kids get, just because it's pretty hard to put really good guardrails on AI, and there are so many products out there that are fully uncensored, allow you to do whatever you want, and can get very dark very quickly. So I'd be careful with that. I don't think kids are in some desperate need to be talking to an AI. I think, as a parent, it's better to find the time to talk to your kids yourself, or find some friends for them. And in terms of developers, I don't know; I guess it's all about the audience. I think it's really just about empathy. There's something very sad in the fact that when you talk about the people who use your product, you have to say "users" all the time. It creates a dynamic that's kind of the opposite of feeling empathy toward anyone: what are the users doing? Are we using our users? What's going on on that front? But if you think about them as people, and think about what kind of things they want, what deep emotional need they're trying to solve, then interesting products will come out of it. Thinking deeper about the user experience will be very important. There's a lot of thinking about the models right now; new form factors and new products are where I think there's just tons of low-hanging fruit, and some interesting disruption that can happen in the next year.

Nathan Labenz (1:09:10) Yeah, no doubt about that. We're just getting started. You've been at it for a while, but the rest of us are just getting started in this area. So let's not wait a full year to do it again; I think even six months is going to bring us all kinds of new form factors, and I will definitely want to get your take on them sooner rather than later. I know you've got to go. Thank you for being so generous with your time and your insights today. Eugenia Kuyda of Replika, thank you for being part, again, of the Cognitive Revolution.

Eugenia Kuyda (1:09:37) Thank you so much, Nathan. Thank you. Bye bye.

Nathan Labenz (1:09:40) It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.
