Roy Lee, founder and CEO of Cluely, discusses his AI startup's $15 million Andreessen Horowitz investment and their provocative "cheat on everything" marketing approach that has gone viral across the tech industry. They explore Cluely's real-time AI assistant that provides undetectable information during meetings and interviews, Roy's philosophy of "AI maximalism," and his vision for a post-AGI world where humans are freed from economic necessity to pursue intrinsic interests. The conversation covers his controversial stance on dissolving copyright and privacy norms for efficiency gains, the resonance of his message with young people, and how he believes society should adapt to increasingly capable AI systems. Despite the edgy messaging, Roy presents thoughtful perspectives on competing with tech giants and building technology that anticipates entirely new social contracts in an AI-dominated future.
Sponsors:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) About the Episode
(03:24) Introduction and Cluely Overview
(10:55) Future Rules and Privacy
(13:20) Positive Vision for Future (Part 1)
(18:01) Sponsors: Oracle Cloud Infrastructure | The AGNTCY
(20:01) Positive Vision for Future (Part 2)
(21:23) Entrepreneurship and Impact Theory
(24:22) Anti-Establishment Marketing Strategy
(27:26) AI in Universities
(30:16) Columbia Expulsion Story
(32:48) AI Maximalism Ethics (Part 1)
(32:53) Sponsor: NetSuite by Oracle
(34:17) AI Maximalism Ethics (Part 2)
(38:29) AI Identification Debate
(46:00) Output vs Input Philosophy
(51:35) Learning and Skill Building
(56:40) Trust and Market Effects
(01:03:42) Assessment and Hiring Revolution
(01:06:47) Viral Marketing Strategy
(01:12:39) Long-term Company Strategy
(01:15:59) High-End Talent Acquisition
(01:18:56) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Nathan Labenz: (0:00) Hello, and welcome back to The Cognitive Revolution. Today, my guest is Roy Lee, founder and CEO of Cluely, the AI startup that recently announced a $15 million investment from Andreessen Horowitz after repeatedly going viral both within and beyond the tech community with their provocative invitation to "cheat on everything."
Now it might surprise even some regular listeners to learn that despite all the concern I've expressed about AI-related existential risks on this feed, I'm actually usually a fan both of edgy marketing and more generally of people who dare to challenge what are arguably outdated social norms. And yet it wasn't the viral videos or the fundraise that inspired me to have this conversation, but rather a surprisingly earnest response to Y Combinator president Garry Tan after he tweeted that he was muting the term "Cluely," in which Roy wrote: "I have big dreams for this company, Garry. I know you might be turned off by the messaging and think that I'm some stupid kid who's doing something immature and unethical. But the end goal of this company is, I believe, a giant positive for humanity."
That goal, as you'll hear, is to help humanity achieve a utopian world in which humans are free to do whatever it is they intrinsically want to do rather than what economic necessity demands. I love that vision, and I applaud the ambition. And yet I do understand why so many people are uncomfortable with such an aggressive approach to AI adoption. So I tried to channel their concerns throughout this conversation.
We begin by covering Cluely's product, a real-time AI assistant that feeds you relevant information and suggestions during meetings, sales calls, and yes, interviews, all via an undetectable screen overlay that's strikingly similar to Apple's new Liquid Glass and clearly anticipates an always-on augmented reality form factor in the future. Beyond that, we discussed Roy's experience getting kicked out of Columbia for posting a video of how he cheated on an Amazon interview, his thoughts on why his cheating-centric message is resonating so powerfully with young people today, his belief that copyright and privacy norms will dissolve in the face of efficiency gains, his philosophy of AI maximalism and personal code of conduct with respect to AI use, his thoughts on the overemployment phenomenon (which is obviously super relevant in light of this week's Soham Gate story), his advice to people who are trying to evaluate talent today, and finally, his thoughts on how Cluely can compete with tech giants in the long term.
Overall, I found Roy's answers refreshingly bold but still quite thoughtful. At a minimum, regardless of whether you're inclined to cheerlead or criticize his approach, you have to recognize that he's thinking much more seriously and concretely than most about what life will look like in a post-AGI world.
Before we dive in, just for the sake of transparency, I did want to note that a16z did not have any role in the creation of this episode. The conversation came together through a simple Twitter DM exchange before the investment was announced. And as always, I am solely responsible for the guests I invite and the questions I ask. In this case, nothing substantive was cut during the editing process.
Now I hope you enjoy this conversation about AI maximalism and building technology that anticipates and might actually help create not just new social norms, but perhaps an entirely new social contract, with Roy Lee, founder and CEO of Cluely.
Roy Lee, founder and CEO of Cluely, welcome to The Cognitive Revolution.
Roy Lee: (3:28) Thanks for having me, brother.
Nathan Labenz: (3:30) I'm excited for this conversation. You have made quite a bit of noise online in recent weeks. I think anybody who is paying attention to the AI space has certainly seen your face and your posts and your cinematic, sometimes provocative marketing come across their feed. So I think there's actually a lot to unpack here.
For starters, a16z, which recently led a $15 million investment into the company and put out a blog post about why they're doing it, opened that blog post with the provocative question: "What if breaking the rules was the unlock?" And you've got the motto "cheat on everything." So for starters, what does it mean to cheat on everything? What does Cluely help me cheat on? Broadly speaking, what is Cluely?
Roy Lee: (4:16) Cluely is the new way that humans will interact with AI. Instead of going to ChatGPT.com, we envision a future where people use Cluely instead. It functions as a desktop app that integrates very deeply with your computer: it has access to your system audio and your microphone, shows up as essentially a pane of glass over all your other applications, and feeds you relevant information in real time.

During a meeting, Cluely will take real-time notes, and at any point it will suggest answers to questions that you might have. For example: "What does cheat on everything mean?" Cheat on everything means using Cluely to gain an advantage in any situation by instantly accessing relevant info - and this is all information that Cluely is feeding me directly - so, maximizing productivity and performance with AI everywhere. Whenever you've mentioned a term, Cluely will float up a definition of it, all in real time, without you ever knowing that I'm using AI. This is Cluely.
Nathan Labenz: (5:11) I was using it the other day in a conversation with the CEO of Waymark, who's been a long-time friend of mine, and it was a pretty funny experience. I'm interested to hear a little bit more about the undetectable screen overlay, but it is transcribing and giving you this running diary of what is happening. And then I was just constantly refreshing the "what should I say now" prompt to have it spit out things for me to say. Often enough, I thought it was pretty good.
I think an obvious challenge, and I'm interested to hear how you think about this as well from a product vision perspective - a lot of the things that you have put forward in terms of use cases are sort of earlier in relationships or they're sort of like episodic moments, like you're going to do a coding test for a class or a job. These are things where there's not super deep context. I think another interesting question is going to be how do you build up context where you can actually get to the point where you can help me in a 10-year-plus relationship, which I have with my CEO. Because there was just a ton of stuff there that is commonly known and unspoken between the two of us that the thing just has never had any chance to get access to.
So yeah, two parts there at least, and you can take it in as many directions as you want. But how are you making this undetectable screen overlay thing work? I haven't tested that, but does that mean, like, if I take a screenshot, it won't show up? Help me understand that a little bit better.
Roy Lee: (6:49) Yeah. If you take a Mac native screenshot, it won't show up. If you screen record on your Mac, it will show up. The Mac screen record functions at a lower level in the OS. But I think undetectability is just a for-fun feature. Essentially, when you're, say, a sales rep and you're demoing a product and you want this overlaid on top but you don't want anybody to know that you're using it, then the undetectability feature is pretty interesting. And if nothing else, it's an interesting marketing hook.
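For the curious, the capture-exclusion behavior Roy describes lines up with a public macOS API: a window can opt out of screen capture via NSWindow.sharingType. Below is a minimal sketch of how such an overlay could be built - an illustration under assumptions (the OverlayWindow name is hypothetical), not Cluely's actual implementation.

```swift
import AppKit

// Minimal sketch, not Cluely's actual code. On macOS, a window that sets
// NSWindow.sharingType to .none is omitted from screenshots and most
// window-capture APIs, while some lower-level recording paths can still
// see the screen - matching the screenshot-vs-screen-recording split
// Roy describes.
final class OverlayWindow: NSWindow {   // hypothetical class name
    convenience init(frame: NSRect) {
        self.init(contentRect: frame,
                  styleMask: [.borderless],
                  backing: .buffered,
                  defer: false)
        isOpaque = false            // translucent "pane of glass"
        backgroundColor = .clear
        hasShadow = false
        level = .screenSaver        // float above other applications
        sharingType = .none         // excluded from screenshots and screen share
        ignoresMouseEvents = true   // clicks pass through to the apps below
    }
}
```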
And regarding context depth - so the way we're thinking about this is it is inevitable that a model will come out that has a very extended context window that is fully natively multimodal. You can feed it not just two minutes but one year of everything you've done on your computer, and it will be able to reason over it and learn everything about you and become this hyper-personalized AI assistant, like essentially Jarvis. And I think this is really what every consumer AI company in the world is building towards.
The way to win that, I believe - I think we're on the right track with the user experience. Apple just pushed out Liquid Glass, which is essentially the exact same UX that we have, the translucent glass-pane overlay on everything, and I think the world is realizing this is how AI will be used in the future. We are the first to market with this UX, and it's a land grab to force people into this interaction pattern: instead of going to ChatGPT.com, I'll press Command-Backslash and use the AI desktop assistant that I have. And once we lock people into that interaction pattern, we can collect all this data about them. And when the time comes that an open-source model comes out that is superintelligent, we will be the ones to distribute it, and we will have all the data to make this Jarvis.
Nathan Labenz: (8:24) Interesting. So the idea now is to - and obviously, this is a pretty well-established best practice in AI application development - you want to be preparing for future model capabilities and not just trying to supplement the current models and compensate for their weaknesses, but really position yourself for new strengths yet to emerge.
Like, how much data are you gathering? That is not entirely clear to me yet as a user. I know that when I do a recording, then there is a session that I can go back to later and check out on the web. When I do just a random chat with it, I'm not sure if that's all getting stored or where it would be stored, because the sessions right now in the web seem to be just the actual audio and visual recording. But just like, "Hey, here's my screen, do something right now" - is that stuff also all kind of being logged? Is there even more being logged beyond that that I haven't thought of yet?
Roy Lee: (9:22) Yeah. Right now, in meetings when you use Cluely to record audio - it is actually not legal to just keep a running recording of someone and save that recording somewhere, but it is fully possible to have an AI notetaker that will transcribe the meeting and summarize it, essentially turn it into notes. And that's what we do. We don't save the actual MP3 audio unless you're an enterprise user who specifically requested that; we only save the AI summary of what happened during the meeting. Similarly, whenever you use Cluely to trigger a response, we save the response and the question that you asked. But all the data is stored under an encryption layer, so we actually don't know which user is making which request.
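One simple way to get the unlinkability Roy describes - purely an illustrative sketch under assumptions, not Cluely's actual architecture - is to persist only the AI summary, keyed by a salted hash of the user ID rather than the ID itself:

```swift
import Foundation
import CryptoKit

// Illustrative sketch only: raw audio is never persisted; only the
// AI-generated notes are stored, keyed by a salted SHA-256 pseudonym
// so a stored record can't be traced back to a specific user.
struct MeetingRecord: Codable {
    let pseudonym: String   // salted hash of the user ID, not the ID itself
    let summary: String     // AI summary of the meeting; the MP3 is discarded
    let createdAt: Date
}

func makeRecord(userID: String, summary: String, salt: String) -> MeetingRecord {
    let digest = SHA256.hash(data: Data((salt + userID).utf8))
    let pseudonym = digest.map { String(format: "%02x", $0) }.joined()
    return MeetingRecord(pseudonym: pseudonym, summary: summary, createdAt: Date())
}
```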
Nathan Labenz: (10:01) Interesting. Okay, cool. So I think this is both an interesting technology discussion, but perhaps even more so an interesting sociological discussion. And I do think you've done a masterful job of creating some provocative marketing and stories and even just positioning yourself. And I do appreciate that, by the way. I've always kind of been a fan of provocative marketing strategies.
One of the more interesting things I've heard you say is "in the future it will all be legal." I guess I'd love to hear you unpack, like what do you think are the current rules, norms, just expectations that people have of one another that will with time be looked back on as being clearly anachronistic or clearly from an earlier time, and then we'll all just agree that it was silly to ever have thought that way?
Roy Lee: (10:55) Yeah. I think the entire concept of data privacy is going away - I can only imagine that in the future, that fear is going to go down as soon as there is one technology that will massively improve your efficiency in exchange for less data privacy. I think people will just run to that. It has been proven in the past that humans value the efficiency of their own work more than pretty much anything else. And if giving up your own data will make your model more personalized and more efficient, then people will not hesitate to give it all the data that they can, and I think that concern will be a relic of the past.
I think the whole concept of copyright laws and protection laws around what data models can and can't use to train on, I think that will all float away into the past. Right now, America is in an AI arms race against China, and right now, China gives their AI model providers full access to all the data that China has. And it is inevitable that America, in order to keep up, will have to ease up on the legal restrictions around that. And if it means that we all, in turn, get a much better model, then I think everybody will be happy.
I think the whole concept of copyright infringement, patents, protecting your own technology - I think this will all go away. I can only imagine that in the future, we will tend towards a world where everything is open source and everything is available for everybody to use and technology will be fully democratized. It is the only way to make sure that the AI coding agents are improving because they have access to the production-level codebases of the best companies. It is the only way to train models that will significantly beat out China's.
Nathan Labenz: (12:22) There's a whole other conversation we could have about the competition with China. I might bracket that one for now. Can you maybe give me - I think this is in such short supply - I have these various AI mantras. One of them is that the scarcest resource is a positive vision for the future. So what is your positive vision for the future? And I would give you all the time you need to flesh that out in as much detail as you can.
What's life going to be like? Are we going to be still working? Are we going to be working less? Are we going to be exploring nature all the time while the AIs do all the work? Are we going to have universal basic income? You're clearly thinking about major societal changes, but I would love to hear the sort of "what's in it for the average everyday person who's not at the frontier right now." Like, what sort of life should they expect and hopefully find compelling enough to be excited about?
Roy Lee: (13:20) Yeah. I mean, it's a hard question. You're essentially going back - the analogy would be, you go back 600 years in time and you ask a blacksmith, "Hey, here's a steam engine. Can you predict what the world will look like with a steam engine?" And he couldn't possibly imagine that there would be data analysts in the future. The only thing he could see, quite shortsightedly, would be, "Wow, this invention is going to completely automate away my job. People are not going to learn how to put the hammer to the metal, and they're going to lose core critical thinking skills that define what it means to be human. They're going to lose out on the core manual labor that comes with blacksmithing. This is an art." And you would think only negative things about it. It would not be possible to understand the positive implications of the technology, and I think this is something most people are shortsighted about.
In reality, if we do have superintelligence - an AI that can just cognitively do everything that a human can do but better - this will inevitably result in faster scientific progress. Cancer is no longer a thing. Alzheimer's is no longer a thing. You and I don't die at 80 years old. We die at 800 years old, and our lives are infinitely expanded.
And the whole concept of capital creation and the idea of working to add capitalistic value to society, I think that will be probably removed as a concept. In reality, assume you had all the money in the world and at the snap of your fingers, you could essentially gain anything you want, because that is the world we're looking at with superintelligence. If you want a hamburger, you can snap your fingers, and AI superintelligence will know how to optimally get you a hamburger in time. Anything you want is at your fingertips. What is there left to do?
And I think that sort of reveals the true human instincts. In this world, you won't be just some mindless puppet. You will be doing what you natively want to do, which means you don't just go out and work a data analyst job eight hours a day doing some work that you hate. Instead, you go on a walk to get a coffee because you want to go on a walk and get a coffee. You run a coffee shop because you want to run a coffee shop and you go out and meet interesting people and you talk about interesting things and you philosophize about life and you explore nature and you do all the things that you were naturally compelled to.
I think it is proven that even when AI automates away difficult things, humans will still tend towards doing difficult things, and we see that with chess. When the first chess bot came out that sort of solved chess, everyone thought, "Man, this is the death of chess." But in reality, chess is now more popular than ever because humans do not - our default states of desire do not devolve to just output for the sake of output. I think humans naturally do things because we naturally want to do things. Everyone has things that they naturally want to do.
And when the time comes for us to truly abstract away all the stuff that is unnecessary and only do the things that we truly enjoy doing, then every single day, you will get to do the things that you want to do most, and all of the lazy grunt work that you don't want to do, you will not have to do unless you truly want to do it. And I think it will be obvious that people will truly want to put effort into things. You will no longer find beauty in woodworking for the sake of having a final sculpture at the end. You will just find beauty in the process of doing it and you will do it because you intrinsically want to do it.
And I think another thing that I'm quite optimistic about with scientific progress is the ability to sort of control and turn on and off what we find joy in doing. Right now, probably one of the big societal dangers is that everyone doom-scrolls four hours a day - you're just scrolling on this mindless algorithm, you're not gaining any skills, your brain is literally decaying. What if there was some sort of biological mechanism where you could literally flip a switch so that you could no longer find dopamine in scrolling and would instead find dopamine in, I don't know, reading English texts from the 1800s? Every single person in the world would want that ability to choose what they find joy in, and I guarantee you 99% of humans would choose to find joy in something other than hedonism or mindless stuff. People would want to read literature, paint art, go for runs, live healthier lives, garden - these are the things that we are naturally and biologically inclined to want to do.
And truly, in the end state where output is no longer a scarcity at all and output is essentially infinite - you can have anything you want - I think many people are bearish on this for the wrong reason. If you can choose exactly the life that you want to have and live that, then every single person in the world will be living exactly the life that they want, and I think this will be as close as we can get to utopia.
Of course, that is the scenario where superintelligence does come out. And if superintelligence does not come out, then everything everyone's been saying is just a joke and AI just ends up defaulting to just a tool. Maybe 20% of white-collar jobs get automated away, but essentially everyone just lives the same life, and they're just 20% more productive.
Nathan Labenz: (17:56) Hey, we'll continue our interview in a moment after a word from our sponsors.
In business, they say you can have better, cheaper, or faster, but you only get to pick two. But what if you could have all three at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have found since they upgraded to the next generation of the cloud. Oracle Cloud OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds.

How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads.
Right now, with zero commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Nathan Labenz: (19:11) It sounds like you are quite AGI-pilled, a superintelligence believer, though.
Roy Lee: (19:16) Yes. Yes. I think it is inevitable that the future will come, if not now, a few years from now.
Nathan Labenz: (19:22) What role do you - I mean, I find entrepreneurship in this context interesting, almost to the point of paradox. I've spent most of my career as an entrepreneur, and that's kind of always been my instinct. And yet I find in this AI era, I'm a little less inclined toward it, partly because I'm like, "Yeah, where are the moats?" and all that kind of basic analysis. But also partly because I'm like, "Well, is it all gonna just kind of come out in the wash?" There's the idea of, "Oh, I could build a generational company and have this long-term impact, maybe even something that outlives me," but it feels like that's very, very hard to do in a future scenario where superintelligence for all comes on the scene and everybody is sort of living their best, most philosophical life.
Roy Lee: (20:15) Right.
Nathan Labenz: (20:15) What is the contribution that you want to make and on what time scale? And in a sense, who's it for? Is it for you? Is it for the public? In a world where everybody is sort of in that end state, what difference does it make what you do between now and then?
Roy Lee: (20:34) I think that future is an eventuality only because there are human beings who work tirelessly to make it one - and there are enough humans out there that it will inevitably come. And I find it very exciting that I am helping society progress towards that future.
If my only contribution is sort of planting and expanding the user experience of this translucent glass pane overlay for AI usage, then I will be very happy with that contribution. If my contribution can be much more, then I will be even happier.
And right now, I think I'm living the most fulfilled life I could ever be living. In a different world, I would be interning at Meta as some brain-dead software engineer, fearing for my job and career and future prospects and getting ready for summer classes in my junior year at Columbia next year. And that is just an existence I don't find fulfilling at all.
Right now, this is probably the most exciting time in human history. Really, nobody knows. Even these projections I'm telling you, take it with less than a grain of salt - nobody knows what the future looks like in 5, 10, 20 years, and the only thing you can do is make movement in the direction that you believe is positive. And I truly believe I'm moving in the most positive direction for human society. In my perspective, I am working towards a future where cancer is cured, Alzheimer's is cured, and we all live forever in the most optimal idealistic life that we ever want. And that is a future that I'm chasing, and I think that's a very positive future for me.
Nathan Labenz: (21:55) And the theory of change, just to make sure I'm - you know, I like to pass the intellectual Turing test and say it back to you - is basically by normalizing and creating the right mode of interaction for always-on AI assistance for everyone, that your sense is the way that you can contribute to getting us to that future?
Roy Lee: (22:19) The more that people use AI, the more productive they will be. And if this tool - if all it does is make everyone 10% more productive, then I will have single-handedly added essentially 700 million human beings' worth of productivity to the world, which would be really, really great.
And if, alternatively - I actually think that this technology is way bigger than that - and if it actually rapidly enhances AI adoption worldwide, then I think it will do much, much more than that. The world - there will be more money that is funneled into AI research and development. There will be more of a common "okay, we're all using AI, this isn't so bad, this is great." And it will increase optimism for AI. It will increase usage and consumer adoption. And every single step that is required before we hit this utopic state, I think the spread of this tool will be singularly capable of helping with.
Nathan Labenz: (23:10) So tell me about your - I don't know if this is a strategy from the beginning or if it's something you kind of stumbled into and then have found has worked and doubled down on. But even in just your last couple of responses, you took a couple shots at the sort of mainline default respectable career path for young technologists.
What is it about hating on that that you think is so resonant? Why are people so drawn to or so activated by shots across the bow of the university or shots across the idea of taking what would normally be considered a pretty high-prestige job at big tech?
Roy Lee: (23:53) Yeah. I mean, one, I think people always want to root for the underdog. Right now, I might be pretty viral, but I'm nowhere near as viral as the immortal legacy of an Ivy League or a big tech institution.
But also, I think people naturally don't want to be a cog in the wheel. Nobody wants to grow up and live a life of meaningless capital production - that's all they do and that's all they will ever do. Everyone wants to make an impact, a dent in the universe. And I think everyone is attracted to the idea of starting their own thing: it'd be great if I could make an impact on the universe, and it'd be great if I wasn't caught in the wheel.

And I think right now, I am probably a living example of what that could look like. I think people of my age are inspired and excited by the life that I am living and the alternative career path that I'm suggesting. And I think there have been very few people in this world who were happy and completely content with just being some industrial old cog.
And I also think that the glamour of school and jobs is getting lost on the newer generations. I mean, we were all told growing up, "All you gotta do is go to a good school, get good grades, and you'll get a good job and you'll be safe." But in reality, that's not what we're seeing - college graduates with a 30% unemployment rate. It's ridiculous. You graduate college, do everything that you're told, and the promise just doesn't hold.
And there's this seed of doubt in everyone's mind, like, "Hey, maybe everything the older generation told us was wrong. They've sort of put me in this terrible position where I ended up spending years and years of my life doing something that I hated doing - rote memorization and studying useless concepts - all in exchange for some semblance of stability." And now there's AI here and unemployment rates are skyrocketing - what can I do?

And I think there's a lot of frustration in my generation, and a sense that there must have been something better to choose. And I think I am a metaphorical and physical manifestation of the path not trodden.
Nathan Labenz: (25:48) It's interesting how broad that critique is, and I'm certainly pretty sympathetic to a lot of it. The gerontocracy is not serving us super well in many different ways, I would say. But that wasn't a super AI-specific account.
What is your report from the front on the university level in terms of what's going on with AI there? I know you've been kicked out for a little while now, but obviously students are using Claude and ChatGPT and stuff. But are the universities - is Columbia specifically - starting to find its footing at all, or are they still taking a bury-the-head-in-the-sand approach?
Roy Lee: (26:35) They're sort of letting the professors manage the classes as they please, and everyone has their own rules around AI. But generally, I can tell you with 100% confidence that at least 95 to 99, if not fully 100%, of all Columbia students have used AI to cheat on an exam or an assignment before. You would literally be weird if you said, "I've never used AI to cheat on an assignment before, like for homework or something." You would be seen as the odd one out.
And I think schools and just older generations have not caught on to how massive the AI hive mind has been for college students and people of my generation. Every single one of the people that will grow into high-profile roles in the future has cheated using AI and realized its potential to massively improve their output in school.
And yeah, I think people are not ready for what happens when these kids get five years older. This is truly an AI-native generation.
Nathan Labenz: (27:36) So I feel like there's an interesting distinction between the homework and the exam, and I just kind of want to hear you unpack that a little bit more. When I was in school, we didn't have AI - without revealing too much, it's been a couple of years. But there was a generally understood idea that, at the homework level, people would work in groups, they would talk to their classmates, and one way or another everybody usually would get mostly the right answers on the homework.
Roy Lee: (28:08) Right. Right.
Nathan Labenz: (28:09) And so it doesn't strike me as too different from a code of conduct standpoint that you might consult AI since you were already consulting the smartest person you knew in the class. On the other hand, when you show up to take an exam, we didn't really have means of cheating. I think cheating was pretty rare. Do you think that cheating is actually common on exams in today's world, or is it more at the homework level that you're seeing people use it?
Roy Lee: (28:38) Yeah. I mean, it depends. I think there are different sorts of exams. For a writing class, the exam might be a take-home essay, and I guarantee you most people are brainstorming, drafting, outlining, making edits with AI. That is functionally over.
But for the actual "you go into a classroom, you sit down and take an exam" format - one, I think this format is outdated, and probably in five years we won't see it again anywhere in America. But also, I think regulations on that have been as stringent as they have always been, and it is very hard to cheat on those. It's just an outdated format, though, and will probably be obsolete very soon.
Nathan Labenz: (29:13) I've seen a bunch of different accounts online, and I don't want to spend too much time on this because there is a bunch of stuff online about it already. But how would you briefly tell the story of how and why you got kicked out of Columbia?
Roy Lee: (29:23) Yeah. So earlier in the fall semester of this year, I had entertained the idea of working at a big tech job. And to get a job at a big tech company, you need to pass a programming riddle interview - they'll essentially ask you the programming equivalent of "how many hairpins will fit in the Empire State Building." And it's pretty ridiculous, because the questions are all online, so it ends up becoming not a question of critical thinking but a question of how many hours you have spent memorizing these riddles. So it's pretty stupid, pretty obsolete.
And it's quite simple: just take a picture of the riddle, ask ChatGPT, "What's the answer to the riddle?" and get the answer. That's exactly what I built - a tool called Interview Coder. The UX looks extremely similar to Cluely: it's a pane of glass that appears over your screen and is invisible to screen share, so an interviewer doesn't know that you're using AI. It'll take a picture of the screen and give you the answer to the programming problem.
I recorded myself publicly using this tool to get an offer at Amazon. I used it through the entire interview process and completely fooled the interviewer into not thinking that I'm using AI. I post it on YouTube. It goes very, very viral. An Amazon interviewer sees it, an Amazon executive sees it. They report it to Columbia essentially saying, "Hey, if you don't expel this kid, we're never gonna hire from your school again." And it was a very thinly veiled threat, and Columbia hears this, drags me into a loop of unnecessary disciplinary hearings that ultimately end in my suspension from school.
Nathan Labenz: (30:40) Interesting. I'll have to request comment from Amazon on the threat concept, but we do know that Columbia administration is at least somewhat willing to bow to threats. I've seen that on multiple dimensions. So at that point, was it basically over for you? Was there any sort of - were you offered a chance to recant or apologize, or was it kind of like "this is just beyond the pale, we just can't have this"?
Roy Lee: (31:05) I was offered chances to apologize, but I think I would have rather died than apologize in that moment. I thought what I was doing was obvious - not only was it essentially the equivalent of how a computer science major conducts a protest, but it also was, I think, generally the way the future will tend. It was ridiculous that I'm not allowed to use AI on the programming assignment when, in the job, they're using AI to write 90% of the code anyway. That's the number that Google's reporting. It's just absurd to me.

And I thought, if I bowed my head right now - I am probably one of the few people in the world right now capable of leading this, perhaps, revolution. That might be a bit high-horsey of me to say, but for this change in the industry, I'm the best person in the world to take a meaningful stab at it. And I think if I had apologized right then and there, it would have just diluted the story, and I would not have felt internally proud of myself for doing that.
Nathan Labenz: (31:57) Hey, we'll continue our interview in a moment after a word from our sponsors.
It is an interesting time for business. Tariff and trade policies are dynamic, supply chains squeezed, and cash flow tighter than ever. If your business can't adapt in real time, you are in a world of hurt. You need total visibility from global shipments to tariff impacts to real-time cash flow. And that's NetSuite by Oracle, your AI-powered business management suite trusted by over 42,000 businesses.

NetSuite is the number one cloud ERP for many reasons. It brings accounting, financial management, inventory, and HR all together into one suite. That gives you one source of truth, giving you visibility and the control you need to make quick decisions. And with real-time forecasting, you're peering into the future with actionable data. Plus, with AI embedded throughout, you can automate a lot of those everyday tasks, letting your teams stay strategic.
NetSuite helps you know what's stuck, what it's costing you, and how to pivot fast. Because in the AI era, there is nothing more important than speed of execution. It's one system, giving you full control and the ability to tame the chaos. That is NetSuite by Oracle. If your revenues are at least in the seven figures, download the free ebook, "Navigating Global Trade: 3 Insights for Leaders" at netsuite.com/cognitive. That's netsuite.com/cognitive.
Nathan Labenz: (33:26) How would you describe your own just kind of personal code of ethics as it relates to AI? I have some sub-questions there, but maybe start off with the broad one.
Roy Lee: (33:35) I guess I live principally by a theory of AI maximalism: every single chance I get to use AI where it helps me, I should use it. I think if an AI can do a task today, then you would have to be really stupid not to think that it could do that task in five years. And the models we use today are the stupidest models we will ever use for the rest of our lives.
And if you adopt that framework, then you realize that a lot of the work that I'm doing, I should not have to be doing right now because an AI is capable of doing it. I live by this individually, but I also think that everyone in the world should adopt this framework.
Nathan Labenz: (34:12) I'm also an AI maximalist generally. I feel like I'm always trying to do two things at once. One of which is accomplish the object-level task in front of me, and the other is figure out how much and how AI can actually help me accomplish that task. So I definitely share that.
I guess, is there a limit to it, though? In thinking about this, I recalled the overemployed subreddit, which, by the way, predates AI - maybe its own indictment of certain big tech companies. The thread is full of people telling stories of how they got multiple remote jobs at tech companies, try to do the minimum to not get fired at each of them, and just collect multiple comp packages. And of course, they're generally doing this without telling the companies. Does that feel okay? Like, if you're smart enough to hold down multiple big tech jobs with your AI assistant, more power to you? Or...
Roy Lee: (35:16) 100%. The companies themselves don't care. They're hiring you to eventually get to some level of output. If you're performing at that level of output, then why should they care about your input? That's also the framework that we adopt here. We don't set any strict hours on anything, but we do set strict output requirements, and the output requirements are quite strict. I think everybody works hard to fulfill the output. But if somebody could do that amount of output in an hour, then they would not need to stay in the office any longer.
I think overemployment is a fine thing. It is the fault of the company, and it is the onus of the company to make sure that their employees are performing at a level that they are happy with. And if you're happy with it, then you shouldn't be bitching about hours.
Nathan Labenz: (35:57) Okay. I know that right now, Cluely is this sort of real-time pane of glass assistant. I don't know what it might evolve into, but one could imagine if you are a maximalist and trying to help people get the max outputs, including potentially holding down remote jobs with as little input as possible thanks to AI, you could imagine just kind of taking the human out of the loop entirely. Like, I maybe have eight Cluelies running on eight laptops, one for each of my eight jobs, and at some point, the AI is just kind of doing the job. And it's - if it's today's AIs, it's probably doing kind of a not-so-great job, but it can respond to emails. Maybe it at least allows me to not get fired for long enough to make it economically attractive to try to pull this off.
I guess, is that the direction that you would imagine going? And then one thing that people are gonna start to do, I think, pretty soon is start to try to interrogate entities and figure out, are you an AI? Are you a human? Are you some sort of hybrid? What is it that I'm dealing with here?
One of the more interesting proposals I've heard for short and sweet, but I think powerful AI regulation - Yuval Noah Harari has advanced this idea that AI must always identify itself as AI. So I wonder what you think about that rule, first of all. If society wanted to put such a rule in place, would you support it? And do you imagine kind of pushing the product frontier to the point where this question of "will the AI identify itself as AI" actually becomes a real live issue for you in terms of how much people can push the cheating frontier?
Roy Lee: (37:40) Yeah. I don't think that's the solution. I think we're already seeing AI output and human output sort of converging. When you use AI to help draft an essay, are you required to say this is AI-generated? What if you just use it to add the final editing touches or help you outline? At what point is it AI-generated and at what point is it human-generated?
I think a much better framework of thinking about it is just: AI is a tool that will help you make more output in whatever capacity that means. And I think if you think about it that way, I don't say "this hat was made by a Chinese worker" and "that was made by a stitching machine" and "that was made by silicon" - there's infinitely regressive blame.
And I think the only thing that should matter is output, for everybody. Everybody involved - it should all be output. If you go on a great date and you have a great time, that is the output. And if that output is satisfactory, then so what if an AI helped on the date?
And even in cases like that - should an AI be forced to announce itself as an AI? I think it's pretty ridiculous, partially because it's completely unenforceable and partially because, truly, the only thing that matters is output.
And I think it's really hard for me to think of jobs that an AI could be able to automate that a human would be more internally, intrinsically fulfilled in doing. For example, customer service support calls. That is a shockingly big use case of AI. Humans are currently able to do customer support calls, but is there a person in the world who wakes up excited to go to their job where they are just customer service? I would argue probably not. They probably have a lot more fun and live a much better life doing maybe literally anything else. And I think, of course, then AI should be able to do that.
And I just think in the world, there's way too much focus on input and way too little focus on output. And there's also way less thought on what happens when output 10x's. And when output 10x's, I just think it will be a better future for everybody.
Nathan Labenz: (39:34) Yeah. That's fascinating. So, I mean, putting your own dating hat on, I don't know if you're on the market for romance these days, but if you went on a date and your date was later revealed to have been using an AI assistant that was telling him or her whatever to say at every given point, you feel like that's fine? No big deal? In the future, everybody will - are you envisioning a world in which spouses just say to each other what the AI is telling them to say to each other?
And then it starts to feel a little dystopian at some point, right? Like, I think people have a sense that it's not exactly all about the output. Maybe you'll just say this is something we'll look back on one day and feel was a product of an earlier time, but people do, I think, care about where it comes from. It's the idea that "it's the thought that counts." That's an old adage - people are kind of like, yeah, well, my kid may not have made a great piece of art for Father's Day, but they made some art for Father's Day, and it's the thought that counts.
And Google had this kind of step-in-it moment, right? I'm sure you recall this: during the Olympics, they had this ad where the story was that a little girl wants to write to her hero, who's an Olympian, and the dad's like, "This letter's gotta be perfect," so he has Gemini draft it. And everybody was like, "That's totally the wrong way to think about it. It doesn't need to be perfect. It's about the kid expressing themselves and figuring out who they are and what they care about."
And so I do feel like there's - I don't know. I don't think it's just the old-man romantic in me saying that. Are we really making everything better if we judge it purely on outputs and have the AI do it? It seems like there are some areas where there is this intrinsic preference that still feels right.
Roy Lee: (41:31) I think at some point things filter down, and I actually agree with you on this. 100%, there are some conversations and some things that you value only for the sentimental value, or just because some human put effort into it. And I think those things can coexist in a world where AI sort of automates away all tasks.
I think the image example is a perfect example, actually. An AI can generate an image that is great. If you ever needed an image, if you ever cared about the output of having a proper image, then you can already do that with an AI. The technology already exists, yet the child decides to do it and it makes the moment all the more meaningful. I would argue that is a very, very human experience that can exist in a world of complete AI superiority.
I think those are actually the only things that will matter - some moments, things of effort that we intrinsically value. And this is sort of the world that I was picturing: when all output is immediate and complete, when the path to getting anything is abstracted away, we will still find value in the meaningful things that we ourselves put effort into. And a child will still want to draw despite being able to have an AI automatically spawn a beautiful painting.
It doesn't matter because - and I would argue that's probably a case where the output is something different than just a tangibly good drawing. I would argue that the actual output you're wanting there is human input, and that is something that AI will never be able to abstract away, if that makes sense.
Nathan Labenz: (42:56) Yeah. I think so. I mean, it sounds like there is a distinction that you're making between things that are done for economic production purposes, in which case we really care about outputs, and then there are things that are done for - you call it sentimental, I'd call it connection, sort of overarching narrative of who we are and how we exist together and all that sort of stuff. It sounds like we sort of...
Roy Lee: (43:21) Sort of.
Nathan Labenz: (43:21) ...acknowledge, like, if my kid used an AI and tried to pass it off as their own work in giving me a Father's Day gift, that would be sort of a form of cheating that you would say is in some sense still gonna be a norm violation in the future because it wasn't supposed to be about that, it wasn't just about the output in the first place.
And so in that cheating, there's still something that - I guess, should I be upset? When my kid is a dad and their kid tries to pass off an AI-generated Father's Day card as a hand-drawn thing, should my kid be upset at their kid, or should they just be like, "Well, this is kind of the new world that we live in"?
Roy Lee: (44:02) I guess it's like, what are you actually judging the output on? It's obviously not the talent demonstrated in the painting. It is the level of input that you put in, and I think then that's just a bad judgment. It's poor output is what it is.
But I think generally, yeah, for most things that are strictly economically productive, having an AI automate them away and not reveal itself to be AI is just a positive for humanity. Really, I guarantee you wouldn't care if your shipping was all done by robots. You order a t-shirt, and instead of being made by Chinese child workers, it was made by a robot. You'd probably be pretty happy with that, in fact.

And when it's just an AI automating something like that, there's no need for it to say at every step of the process, "Oh, an AI robot did this. This is not done by a human." It doesn't matter. But certainly, there are some tasks where you value the human input - the actual output is the human input. And I think those will be evergreen and continue to last even in a world of ASI. And those will actually be the only things that we do - the only things we do of any value or spend any meaningful time on will be things like the kid drawing a picture to give to their father.
Nathan Labenz: (45:06) Yeah. I'm with you, by the way, on most of this, if not all of it.
Roy Lee: (45:10) Not all of it.
Nathan Labenz: (45:10) I think it's so often people who are in the AI discourse are extremely fortunate in terms of the employment that they have and how much they enjoy the work that they get paid to do. And I think it's a very healthy reminder that a very significant, probably super majority of people, even in a well-off country like the United States, if given the option to not have to work for that money anymore would absolutely take that.
And I do sometimes ask people that question. And depending on who I ask, sometimes people just look at me like I'm crazy. They're like, "You're telling me - just to make sure - the deal you're offering me is I get the money, and I don't have to do the job?" They're like, "Yeah, take it." In a lot of corners, you sound crazy even asking it.
So I'm with you on that. I do think there's - and I'm pretty bullish also on just kind of how adaptable people will be when it comes to finding new things to fulfill themselves and new sources of meaning, so on and so forth. In the utopian scenario, I think all that stuff is probably - there's more anxiety around it than I tend to think is probably merited. Of course, the transition from here to there is probably gonna be pretty choppy, and that's where hopefully we can make good decisions.
In practical terms, where does it pay to cheat today? And where would you say it fails? Unpacking that question slightly: one of the things that people presumably are concerned about - if they're university administrators or if they're hiring junior software developers - is, they're kind of like, "Look, we get it. The coding test that we give you, it's not fully representative. It's not maxing out your skills. We get that. However, it's a proxy metric that we have, that we use, because we need something. We can't hire you for a year to do the job and then decide whether to hire you, so we've gotta have some reduction in scope. And so we need something there to measure, gauge, and assess on. And if you're cheating on that, then we basically lose signal, and we don't know what you're going to be able to do in the actual job."
So I guess, a couple different angles on that question. The first one is just like, in what areas does the AI cheating actually get you into a place where you can then be successful? And in what maybe other domains, if you cheated through the initial hurdles, would you find yourself way over your head and end up failing because the AI can't actually help you with the real thing?
Roy Lee: (47:52) Yeah. I think this sort of goes back to the whole idea of output is the only thing that matters. I think two responses to this.
One, I think assignments should be sort of obsolete and all your work should be judged - your entire competency should be judged based on the output you have generated in the past.
And my second point is a little bit more complicated, but generally, yeah, I think the whole concept of a programming problem should be obsolete. Instead, I should have open-sourced all of the work that I've ever done in the past. You should be able to look through all my code, and if not you directly, an AI should be able to look through all the code that I've ever written or all the tasks that I've ever done in my life, and then it should be able to tell you with exact certainty, "Hey, this guy is a proficient TypeScript developer and he's lacking in this and this. He's overall a 76 out of 100 candidate."
And that should be how assignments are done in the future. Either a human or an AI judges you strictly based on the output of your work in the past, and that is used to determine the output of your work in the future, and anything else will be obsolete. Or perhaps not obsolete, but generally, any assignment that can be done by an AI should not be asked anymore. That would be response one.
And response two is, I think, when you start with the framework of thinking about output, let's suppose that my goal here is to make Google and I'm free to use any AI tools at my disposal. There's only so far that an AI code editor can take me in the task of developing Google. If it can take me all the way, then beautiful. But realistically, there will need to be - at some point, the AI will not have enough knowledge as the models stand right now.
Then my task - what cheating using AI to build Google looks like - is actually going back and manually backfilling all the knowledge that I might need to build Google that the AI can't supply. And perhaps that will go as far back as learning addition and basic multiplication so that I can get to matrix multiplication, so that I can get to the particular linear algebra that is the Google PageRank algorithm. Perhaps it is all of that, but most likely it won't be - there will be an AI that can fill in many of the gaps for me.
And oftentimes, the way to cheat - even if you adopt a "cheat on everything" philosophy - there will still be things that you have to learn. And I guess that is why "cheat on everything," I think, is a valid moral framework. If you try to do things the fastest way that you possibly can do everything, there will still be some things that the AI cannot do, and for you to make that last jump, it might require four years of learning before you can make that last jump on your own. And even if you wanted to cheat, you could not cheat there.
And I still think it is a good framework. You should take every jump you possibly can before a jump that technology cannot get you to, at which point you should take all the necessary steps to backfill the prerequisite skills it takes to complete that last jump, if that makes any sense.
Nathan Labenz: (50:35) Do you have any...
Roy Lee: (50:38) Sure.
Nathan Labenz: (50:38) Just go ahead.
Roy Lee: (50:39) Cheat on everything and learning foundational skills - both of those coexist in the world that I propose.
Nathan Labenz: (50:46) Yeah. Similar - I'm always looking for these short mantras. Mine on this point has been "you want to learn how to do hard things, but not necessarily the hard way." And I think that's a pretty similar philosophy to what you're saying.
I don't really know where I come down on this, but do you give any credence to the idea that kids need to learn to write essays in an unassisted way, as just a generally important foundational step? I mean, I'm genuinely uncertain about this, but we have tools that make many things not strictly necessary, and yet there is some fundamental strength building that feels important, right? Like in the physical realm, we have cars and we have boats, but it's still pretty helpful to run and swim to build up the physical capacity to do that. It's part of being a strong person.
Do you think that there are things that are just important to building a strong mind that kids need to do - or could it even extend to adults? I'm not necessarily limiting this to kids or young people. Are there things there that you think are potentially important, things we might run the risk of losing if people become AI maximalists too enthusiastically?
Roy Lee: (52:17) I think generally people should be output-oriented and goal-oriented. When kids get together and decide, "we're all gonna build a sand castle," they see the vision of the sand castle in their heads, and they enjoy working enthusiastically towards that vision. They will learn a bunch of random skills as they are building towards the sand castle, so this is objectively a good thing.
And I can only imagine if I was seven years old and my goal was, "Man, I just read Harry Potter, this was amazing, it was a great book, I want to learn how to write like that." If you gave me nothing but pen and paper, I'd have a really hard time, and it would probably take pages of seven-year-old-generated bullshit before I realized, "Oh, I'm not J.K. Rowling, I can't write this great book."
But imagine you gave that seven-year-old AI. All of a sudden, I can generate a million different story ideas. I can have AI draft a million different things, and all of a sudden I can explore different pipelines, and maybe I will actually come to something very interesting. As I'm writing, I will realize, "Hey, this isn't exactly as good as J.K. Rowling's," or "Hey, I feel like the Harry Potter plot was a lot cooler in this way, I can go in this direction."
And I guarantee you, I will have learned more from doing that exploration than I would have if you just gave me pen and pencil by firelight and told me, "Write a story that is comparable to Harry Potter." And I will learn different skills, sure - maybe my grammatical knowledge is a little less, maybe I don't know exactly when to put a semicolon - but I will have gotten much further towards my actual goal, which was to recreate Harry Potter.
And this is from my own life experience - something that I tried doing in seventh grade, or when I was seven - and if I had had AI, I can only imagine I would have learned so much more. Perhaps I wouldn't have learned the same foundational skills, but I would have learned a lot more foundational skills.
And if you have a child, I bet you would be excited to see them try this too. You would have loved to see them, rather than trying to write a novel with pen and pencil, just play around with AI and see how cool a story they can make. I think everybody would agree that you'd probably learn more foundational skills doing that.
Nathan Labenz: (54:07) Yeah. I think I'm very largely with you. We just did an episode not long ago with MacKenzie Price, who's the founder of Alpha School, and they're doing all-AI instruction in two hours a day. And then the afternoon is all about these exploratory projects - kids going in all kinds of different directions, often with AI as a copilot or assistant in their adventures.
And yeah, I'm probing for what the possible downsides of that may be, and I suspect we will identify some over time. But I do agree that the output seems likely to be much bigger, and that's not to be ignored, for sure.
I wonder what other externalities this sort of technology might create, specifically around the cheating positioning. One thing that I do worry about in society in general is a trend toward lower and lower trust. It seems like that's a condition that predates AI, but AI might make it more of a problem.
Even just in online discourse, right? There's always the question, "Am I arguing with an AI right now?" And that's one of the reasons I think an "AI must identify itself" rule is potentially good: here I am thinking I'm engaging in discourse, trying to move my fellow voters and help steer society in a positive direction - that's the output I'm thinking about. But the Russian troll farm on the other end of the equation is thinking about the output in terms of sucking up my time and making me disillusioned. Obviously, the Russian troll farm isn't necessarily gonna follow the rules anyway, so it's not a perfect example.
But do you worry about loss of trust? Do you worry about some markets that exist today coming under stress? I was thinking also about when I go onto Fiverr, for example, or Upwork, and I look at somebody halfway around the world - I feel like today I can trust, to a degree, my back-and-forth with them. But in the future, if they're cheating on that, I'm like, "Oh God, I don't know. What can I make of this? Is it even worth it to me to go into such a market?"
So yeah: externalities, loss of trust, any particular markets that you think might be adversely affected by this. If you were to apply the Kantian "universalize the maxim" test, are there places where it fails?
Roy Lee: (56:48) Yeah. I would propose a different sort of example. Imagine I'm speaking with someone from Mexico. They only speak Spanish, and I only speak English, and I can use AI to live-translate everything they're saying and essentially assist me in the conversation that way. Is that a loss of trust? You would probably look at that and think probably not.
Now let's take that a small step further. Imagine I'm talking with a consultant who knows a lot about the coal mining space, and I know nothing about it, and we're trying to have a conversation about it. As we're talking, I have Cluely pull up definitions so that I know exactly what he's talking about when he mentions the Amherst mining incident of 1982, and I have extra context about the conversation. Now we can have a deeper conversation about whatever it is he's actually trying to get at. Is that a loss of trust? I would argue probably not. It's just something that helps the conversation get to a point beyond the random need for memorization of facts.
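The overlay pattern Roy describes reduces, in its most skeletal form, to matching conversation text against a knowledge source and surfacing context. A toy sketch in Python, with an invented two-term glossary standing in for whatever live audio capture and retrieval Cluely actually does:

```python
# Toy skeleton of "pull up definitions as terms come up in conversation".
# The real product presumably works from live audio and a language model;
# this sketch just matches transcript text against a made-up glossary.
GLOSSARY = {
    "longwall mining": "An underground technique that shears coal in long panels.",
    "methane drainage": "Venting gas from a seam before or during extraction.",
}

def definitions_for(utterance: str) -> list[str]:
    """Return glossary entries for any known term mentioned in the utterance."""
    text = utterance.lower()
    return [f"{term}: {defn}" for term, defn in GLOSSARY.items() if term in text]

# Simulate two lines of conversation arriving one at a time.
for line in ["They switched to longwall mining after the incident.",
             "Methane drainage costs dominated that quarter."]:
    for hit in definitions_for(line):
        print("[overlay]", hit)
```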
And I think people are worried about a time when AI is more human than humans, and I think this is just the incorrect worry. They worry about a time when AI is able to romance their girlfriend better than humans, or enjoy itself better than humans, and then tell you, "you will have more fun if you go to the pool right now than if you go to the beach." But I think an AI will never actually be able to do this, because these are all our own preferences and opinions - the ultimate meaning of being human is deciding the things that you want to do. An AI will never actually make those decisions for you; it will just make those decisions infinitely easier to make.
To get back to the question of whether we are losing trust as a society: I think there are so many cases where it's not that trust is being eroded, but rather that we're getting to the actual core, philosophical meaning of the conversation - what it is you're actually trying to talk about. You have an enriched conversation thanks to AI, and as AI usage grows and grows, conversations will boil down to their deepest, most human essence: what is it you are actually trying to say, why are you here in a conversation with me?
Because at some point, two humans made the conscious decision to engage in a conversation with each other and to talk about something. Perhaps the discourse is something like, "Hey, what year did World War II start?" and AI tells you the answer. Maybe that kind of discourse goes away, but I don't think that's actually discourse at all.
AI serves best in giving you information and helping you refine the core tastes, thoughts, and opinions that you have. But at the very core, the impetus of everything is your human decision and your human impulse. And I think society will just trend towards expecting that people are using AI, but it will never, ever replace the "you think this girl is pretty, go talk to her." I make that decision.
Nathan Labenz: (59:28) It's unclear to me to what degree you cheated with that initial example of real-time English-to-Spanish translation, but not too long ago I tweeted about the Google Meet demo they showed at I/O, which was real-time simultaneous translation. And I do think that is one of the most purely aspirational AI product implementations I've ever seen, because overcoming language barriers has been a human impulse, a dream, for so long.
Roy Lee: (1:00:03) Exactly.
Nathan Labenz: (1:00:04) And they have something here that is like actually making it real. And so as you were talking there, I was kind of reflecting on, okay, how should I feel under different situations where you brought that up purely randomly and it just happens to be a coincidence that I also have specifically cited that example as something I'm excited about, versus what if you really did your homework on me and scrolled back through my Twitter feed and saw it, versus Cluely just popped it up right now.
And in this example, I don't know that I really care that much. In a way, maybe I prefer the purely serendipitous one; between you taking the time to do your homework and the AI helping you pop that up at runtime, I don't know. I think that probably comes down to being output-oriented. It's a good point.
Roy Lee: (1:01:00) Yeah. I mean, I guess right now, the reason you prefer the serendipitous way is because, as you understand it, the way that information gets gathered is by doing research on you manually via Twitter; it's not yet universal and commonplace that I could use an AI to just pull that information up.
But that's because of your understanding that knowing X implies Y effort - as soon as that logical link is broken, then in the future, you probably wouldn't think that. For example, it probably once meant a lot to someone to get a beautifully typed-up, almost calligraphy-looking letter, because they knew that if letters are beautiful, someone put a lot of effort into making them beautiful. But today that means nothing, because your default assumption is that they probably just typed it up.
And I think in the future, when there are more powerful AIs that are capable of doing exactly what Cluely does, you will start to dissociate these things and make different connections - perhaps it is no longer knowledge of your Twitter that suggests effort was put into researching you, but a deeper knowledge of something personal that you might have only told me in person; that would be suggestive of a level of effort.
But I think when you break that one link, it doesn't actually undermine the entire "I appreciate it when people put effort into me." It just disconnects that one specific link.
Nathan Labenz: (1:02:26) Going back for a second to the question of assessments: I thought you had a pretty interesting idea there. If my concern as a hiring company is that I need something I can do in manageable time to get a sense of this person's skill level, your answer is to go much deeper - use AI to go much deeper, and it's on the person to show you a lot more of who they are.
I think that's interesting and maybe could really work. Do you feel, though, like that's asking too much of people in some way? I know it's common, of course, in the software world to have an open-source profile, but it's a lot harder for a lot of other things, right? If I'm a salesperson, I can't open-source all the deals I've ever done.
And I think one concern we might have about AI and society in general is that it may sort of track us prematurely. If we become overly informed about every other person's history - we know everywhere they've been, everything they've done, and how well they've done it - it potentially starts to prevent people from upleveling and switching from track A to track B, because the long-term narrative as understood by the AI is on this particular course, and why would we take a bet on them to change it?
Whereas in today's world, I can kind of mothball my old LinkedIn profile and reposition myself as a new kind of professional. If I can pass the assessments, I can get in. I don't know - I'm always looking for these trade-offs. Any thoughts on that?
Roy Lee: (1:04:19) Yeah. I think if the AI does get to a point where it can track everything you've done - all your output - and quantitatively determine how good you would be as a customer support rep at Verizon, then that AI will hopefully also be smart enough to determine what regulations would be most optimal for society in terms of what information we're able to know about each other.
But alternatively, let's assume that the models don't get much smarter and are only as capable as they are now. I still think the current system is very, very flawed: you try to shove every single bit of your professional experience onto a one-page resume, and that gets sent to a human whose judgment depends on their subjective eye, their laziness, their mood at that time of day - a million external factors - and you could have lied on your resume. There are so many things that make the current hiring process extremely inefficient.
And I think a better thing already exists - I'm pretty sure this is sort of what Mercor does - a different type of resume where you can just speak to an AI about the things that you're good at, and the AI judges you on the relevant skills an employer specifies. For example, the employer says, "I really care about the guy's cadence and his ability to speak and to persuade me," and the AI throws a bunch of questions at you, gives you a personalized assessment, and evaluates you based on that, plus your description of all the things you've done in the past, plus an attempt at going deep to verify that you've done the things you say you've done.
But I think almost anything is better than what we have right now, partially because it's too gameable and partially because an AI can sort of do it better.
Nathan Labenz: (1:05:55) Let's change gears to your marketing. You are producing a lot of viral content. It is often quite cinematic in its look. How are you doing that? I assume you're an AI maximalist in marketing too.
Roy Lee: (1:06:10) Yeah. We actually have almost an entire in-house film studio. We have videographers. We have a really nice camera. We have lighting. We have editors. We have everything needed to produce almost a full movie in-house.
And we don't use AI for a lot of it, because AI is not yet capable of the level of videography and editing that we ourselves are personally capable of. Again, the only thing that matters is output here. What we can and often do use AI for is to storyboard exactly what every video looks like and to help us draft scripts that will be funny and comedic.
But yeah, as for how we actually come up with these ideas - we're a bunch of 20-, 21-year-olds who've spent four hours a day scrolling on IG and TikTok. We are sufficiently brain-wired to know what lands, what is funny, and what the funniest 30-second video you could see looks like.
I think most of these tech bros are just so nerdy - they don't scroll, they pride themselves on not scrolling, they pride themselves on being intellectual and, as a result, not funny. And when I come out having spent the last 20 years of my life on Instagram, I know what's funny way better than these tech bros do - or at the very least, I know what's engageable better than they do.
And all that's left is that now I've been blessed with the money to make it controversial and cinematic. The second benefit of something being cinematic is that people pause and watch way more when they sense, "Oh, someone put effort into this thing."
Nathan Labenz: (1:07:33) Are you not a tech bro? I think from the outside world perspective, you're a tech bro, right?
Roy Lee: (1:07:38) Yeah. I consider myself of a different ilk. I think I am more of a bro than I am a tech bro. What's the difference? I mean, it depends. I think tech...
Nathan Labenz: (1:07:50) You're a tech.
Roy Lee: (1:07:51) Tech bros are a specific type of nerdy - in front of a computer, coding all day, talking AI SaaS and all this shit. But in reality, I feel like I am much more of a bro. I'm more human and I'm more tech-agnostic. But I have no idea - random personality descriptions are interesting to me. The world will probably see me as a tech bro, but I think no tech bro would see me as a tech bro.
Nathan Labenz: (1:08:15) I might have a proposal for you from my company Waymark. Our core product is an AI video creator for small businesses, and it wouldn't hit on the level that you guys are shipping at, but it does work really well for local and small businesses in their communities.
But with all the new stuff that has come out, we recently launched a managed-service line, Waymark Cinematic, and I would be very interested to see if we can use AI tools to hit your level. We just did one with the company Better featuring an AI Jake Paul. It's funny - part of the reason I started this company and created an AI video maker for small businesses is that I myself am not good at this sort of thing. I very much had to partner with people who really know it. But I'd be interested to see if our creative team could use an AI process to create something that might actually fit in with the cinematic universe you're creating.
Roy Lee: (1:09:16) I would be very curious to see that too.
Nathan Labenz: (1:09:18) Yeah. How about the influencer marketing game? I honestly haven't seen too much of this content myself. Just in preparing for this, I saw comments online from people saying, "All I'm seeing is Cluely UGC campaigns."
Roy Lee: (1:09:34) Yeah. Yeah. Yeah.
Nathan Labenz: (1:09:37) Is this another area? I assume you're indifferent to AI influencers versus real human influencers versus hybrids, but what's your kind of approach to the affiliate market?
Roy Lee: (1:09:50) I will also say, there are two big types of consumer marketing done on IG and TikTok. One is influencer marketing and the other is UGC - well, among many, those are sort of the two most popular ones.
With UGC, you just have some 20-year-old - literally, these videos are barely 10 seconds long. These accounts start completely fresh, and the only thing you want to get at is "this feels like a real person talking." They'll come up with some viral hook and just talk about your product, and it's really simple - maybe two sentences of dialogue for the whole thing. It'll be ridiculously simple and it'll be 10 seconds, but these are the videos that generate 5 million views - the highest-converting videos that we have on the platform.
And for UGC, we know exactly which hooks will grab the attention of 20-year-olds in college; we have this down to a science. We know exactly what sticks, what works best, what converts best, and all the other metrics. But the reason you don't see it on your timeline is...
Roy Lee: (1:11:05) ...you will see some of our other campaigns because believe me, we are going to take over the Internet.
Nathan Labenz: (1:11:10) Okay. We'll definitely watch for that. I pride myself on staying reasonably connected to youth culture, so it sounds like I got a little work to do to steer the algorithm to get me into the Cluely demographic.
Anything else - I mean, we talked about the pane of glass. You shared a vision for how you're on the same page, in a sense, as Apple with this sort of overlay AI experience. My general sense of your trajectory, if I understand it correctly, is that you can't sustain this insurgent position forever, right? If you're right, things that are cheating now won't be seen as cheating later, and this stuff won't resonate in the future.
So is the idea that you're going to blitzscale as much and as fast as possible on the asymmetric advantage you have now, push past what the polite companies like Anthropic and Google and OpenAI can do to achieve market share, and then mature alongside them? And that they will also bend your way as norms resettle, so you'll be competing with them on a less asymmetric basis long term?
Roy Lee: (1:12:34) I...
Nathan Labenz: (1:12:35) How do you see that long-term future unfolding for you?
Roy Lee: (1:12:37) Yeah. This is actually the part of the future I'm less certain about, because everything depends on how much of corporate culture we're able to change with our marketing right now. I think you're already seeing tremors of this launch-video effect on the culture. All of a sudden, launches are becoming way more cinematic. People are trying to crack the controversial thing way more.
And if this actually becomes more than just a wave in the water - if it becomes an actual gigantic thing, and all of a sudden corporate professionalism is dead and no longer appreciated - then I think we will carry this rebellious counterculture, "fuck it, I'm gonna do whatever I want" attitude with us well into the company's maturity.
But if it's the case that this is just a bump that lets us hit escape velocity, and the standard among all companies is still that you're professional, corporate, and buttoned up, then perhaps there's a future where we button up. And all of this rests on...
Nathan Labenz: (1:13:33) All...
Roy Lee: (1:13:33) ...all of this is downstream of whether our marketing takes us to escape velocity. The only question is: will our marketing take us to escape velocity and change the game of marketing forever, for everyone? If it does, we'll keep it going; if it doesn't, we'll button up.
And to answer the point about "cheat on everything" - will that specific "we are the rebels, punching up" energy fade as we become the dominant player in the market? I actually don't think it's us being the little guys punching up that people are so captivated by. I just think the marketing generally - whether it was me that did it, or Sam Altman or Elon - would be just as controversial and engageable.
And I don't actually think that the core reason people are engaging with us is because we are the underdogs. In fact, I think to many people, we are no longer the underdogs after we did the big raise.
I think the reason it's resonating with people is because, more than anything, it's super transparent. Everything I do seems controversial, but in reality, this is just me being honest. And even in cases where I'm not being super controversial - sometimes I'll be on Twitter and post a heartfelt "I respect you a lot" - that is also just me being honest.
And I think all the controversial shit that I do is just a slice of my life and my thoughts, and I just happen to have polarizing opinions. But I don't actually think we aim to be super polarizing. We just aim to be honest.
Nathan Labenz: (1:14:58) You mentioned the raise, and I've also seen you comment online that you're hiring with comp packages up to a million dollars plus equity. Tell me what you're looking for, for a million dollars plus equity, and maybe just a little bit about your experience trying to hire at the very high end of the AI talent pool right now.
Roy Lee: (1:15:23) Yeah. I feel like people are so wrong about this right now. Revenue timelines are so compressed, VC money flows quite freely, and if you're actually making a meaningful attempt to swing big, there will be someone to fund you to do it. And everyone seems to agree that the best way to swing big is by having the best talent.
And for some reason, people are still trying to pay at the bottom end of compensation ranges, for some stupid reason like, "Oh, you need to be bought in by the company." Bro, nobody knows about your company. You were founded two months ago. Why would anybody go die for your company? You need to give them something to come in for, and then once they're in the company, that's when you convince them that you guys are the ones who are going to win, and earn their utmost loyalty.
Right now, there's not an employee here who would sell a dollar of their equity because they're so convinced that the company is going to grow massively. And I think this all happens downstream of us being willing to pay at the top end of the compensation packages and convincing them of the culture.
Nobody knows your company culture if they haven't been there for a month - and if they have been there for a month, they're already employed. So I guess, in my experience, paying at the top end of comp packages allows for a really, really strong top of funnel. Even if the most money-hungry guy comes in saying, "I'm only here for the cash," I promise you, they'll be captivated by the culture; they will not bring themselves to leave, because it is fun.
This company is more fun than any other company in the world. We literally exist like a frat house. We have conversations late at night. We eat steak together. We go to the gym together. We talk about girls together. We talk about dudes together and it's like this is a fun, exciting, fulfilling life. And even if you might have come in for the million-dollar-a-year comp package, you will stay because you cannot do anything other than Cluely. It is the most fun you will ever have in your adult life.
And generally, there's surprisingly a lot of noise among programmers. We're looking for really, really competent full-stack engineers, and there are a lot of people who are really good at research or really good at low-latency optimizations, but that's not the thing we're looking for. That's something that has surprised me since trying to hire: there are a lot of people with really great backgrounds, but the background is in the wrong thing. As it turns out, we're actually looking for a very specific type of engineer who's capable of a very specific type of task.
Yeah. I think that answers your questions.
Nathan Labenz: (1:17:28) This has been fascinating, and I appreciate the earnest and just kind of minimally AI-assisted conversation. Although I wouldn't have minded if you had been relying on Cluely from time to time. I know you were at least at the beginning. Any thoughts you want to leave people with before we break?
Roy Lee: (1:17:48) It's the most interesting time in human history. I think you should take more risks. And right now, the riskiest move is probably what you determine to be the safest move.
Nathan Labenz: (1:17:58) Yeah. Food for thought. Roy Lee, founder and CEO of Cluely, thank you for being part of The Cognitive Revolution.
Roy Lee: (1:18:03) Awesome, man. Thanks for having me.
Nathan Labenz: (1:18:06) If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network.
The Cognitive Revolution is part of the Turpentine Network, a network of podcasts which is now part of a16z, where experts talk technology, business, economics, geopolitics, culture, and more.
We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcasting.com.
And thank you to everyone who listens for being part of The Cognitive Revolution.