Calm AI for Crazy Days: Inside Granola's Design Philosophy, with co-founder Sam Stephenson

Granola co-founder Sam Stephenson discusses how the AI note-taking app uses minimalist design to focus on one task, and how note sharing, user research, and unexpected use cases shaped its growth. He also addresses privacy, transcription costs, and hopes for AI that reduces screen time.


Watch Episode Here


Listen to Episode Here


Show Notes

Sam Stephenson, co-founder of Granola, explains how a deliberately minimalist design philosophy helped turn the AI note-taking app into one of the fastest-growing products in the market. He shares why Granola focuses on doing one job exceptionally well, how note sharing drives growth, and what they’ve learned from surprising use cases, recipes, and constant user research. The conversation also covers privacy and consent, transcription and cost choices, team collaboration, and Sam’s hopes for AI products that create less screen time and more space for reflection.

Google: Try Gemini’s image creation model, Nano Banana, to create original art in seconds with easy iterative prompting via Google AI Studio or the Gemini app.


Sponsors:

Roboflow:

Roboflow gives robotics and embodied AI teams the visual AI infrastructure to turn messy, real-world perception into precise, reliable action. Learn how leading robotics companies solve bin picking and more at https://roboflow.com

VCX:

VCX, by Fundrise, is the public ticker for private tech, giving everyday investors access to high-growth private companies in AI, space, defense tech, and more. Learn how to invest at https://getvcx.com

Claude:

Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro’s full capabilities at https://claude.ai/tcr

Tasklet:

Build your own Cognitive Revolution monitoring agent in one click.
Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:

(00:00) About the Episode

(03:52) Special Sponsor

(05:52) Granola growth and users

(17:14) System 2 goals and context (Part 1)

(17:19) Sponsors: Roboflow | VCX

(20:15) System 2 goals and context (Part 2)

(33:09) Costs, pricing, and transcription (Part 1)

(33:22) Sponsors: Claude | Tasklet

(37:12) Costs, pricing, and transcription (Part 2)

(47:38) Meeting privacy and consent

(54:13) Agents, memory, and simplicity

(01:03:49) Recipes, use cases, and growth

(01:11:52) AI product design culture

(01:28:08) Future risks and vision

(01:33:33) Episode Outro

(01:36:59) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution!

Today my guest is Sam Stephenson, co-founder and designer at Granola, the breakout AI note-taking app that has recently raised $125M at a $1.5B valuation. 

As a user of the app myself, I’ve been struck by how streamlined, even minimalist, the Granola product experience is.

Considering how easy it is to code up new features these days, I figured this had to be a very deliberate choice on Sam’s part, and wanted to use this conversation to hear what he’s learned about designing AI products for mass market adoption.

We begin with Granola’s product design philosophy, which Sam playfully calls "surprisingly unambitious", at least in terms of the number of jobs they aim to do for users.  Taking inspiration from the kitchen tool company Oxo, which designs products that work for people with disabilities and delight everyone else, Granola aims to provide a calm product experience for people with crazy workdays.

In practice, as you’ll hear, the keys to Sam’s success are simple, timeless design disciplines.  Spend lots of time with lots of users, understand their challenges deeply, do one thing extremely well before adding more features.  

All very familiar advice to any product leader, but increasingly counter-trend in the vibe-coding era.

As always, we get into a lot of detail along the way.

  • Sam says that Granola’s rapid growth has been driven overwhelmingly by the single core mechanism of users sharing call notes with teammates and partners.

  • He also highlights a number of popular and surprising use cases, and explains how they are using Recipes, such as the “blind spot finder” I created when Granola sponsored a number of episodes earlier this year, both for marketing purposes and to inspire users to use the product to its full potential.  

  • He tells us what partners they are using for transcription, why Granola has no limits or concept of credits, and how they think about managing inference costs as they add new features over time.

  • He explains their decision to make Granola work at the operating system audio level, rather than joining calls as a participant like most other note-taking apps do, and how this relates to privacy and consent, their decision to store call transcripts but not raw audio, and the idea that we might want to engineer at least some AI systems to forget certain details over time, just as we humans do.  

  • He also unpacks how Granola is thinking about their new team collaboration features, the tricky balance between the immense upside of better information sharing across teams and organizations and the risks of over-sharing sensitive information, and also how much we can trust AI to decide what should be shared with whom. 

  • We discuss how Granola’s team works today, including why he thinks Figma remains valuable but should be nervous, and more importantly, how they use a combination of internal demo days and dog-fooding to inform what are ultimately mostly vibes-based decisions about what to launch.  

  • And toward the end, we get Sam’s greatest hopes and fears for Granola, namely that it will help people live something closer to their best lives, with less screen time and more space for reflection and strategic thinking, and on the other hand, the possibility that a few leading AI hyperscaler platforms could end up dominating an ever greater share of the market and capturing most of the value AI creates.   

Clearly there are multiple ways to build a successful AI product in today’s world, but for me, as someone who is constantly obsessing over frontier capabilities, this conversation was a very useful way to calibrate myself on how mainstream users understand and relate to AI, and a great reminder that timeless design discipline continues to work.  

With that, I hope you enjoy this inside look at the design thinking powering one of the fastest-growing AI products in the market today, with Sam Stephenson, co-founder of Granola.


Main Episode

Nathan Labenz: Sam Stephenson, co-founder and designer at Granola. Welcome to the Cognitive Revolution.

Sam Stephenson: Hey, thank you very much for having me on. It's fun to be here.

Nathan Labenz: I'm excited for the conversation. I have been using the product a bit over the last few months, and then, scrolling Twitter as I am constantly doing, something popped up not too long ago that really caught my attention, which was a report from Ramp. It said that, I believe it was in the month of January, Granola was the number two company out of every company tracked by Ramp, which I assume is pretty much everybody, in terms of the number of new customers added, trailing only Anthropic in that month, which I thought was an incredible feather in your cap. And so I wanted to start there and just say, how are you doing it? That's a lot of new customers to be adding, and clearly something is really clicking. So what would you say is really clicking and driving that growth for Granola?

Sam Stephenson: Yeah, I was taken aback when I saw that too. I guess we know what's happening to us internally, but to see yourself among that company is a great feeling, and it's very validating. So, yeah, I feel like Granola is growing fast, and it's almost entirely through word of mouth, people recommending it to each other or sharing notes with each other. And I guess we kind of knew, when we were starting, that if we could build a thing good enough that people would talk about it with their friends, it would hopefully have some inbuilt product virality. I mean, it took us a while to unlock some of those viral loops and to get that to start happening. But I feel like the thing's really starting to snowball at the moment, and we're growing a lot month over month.

Nathan Labenz: I definitely want to get back to the viral loops and get into more detail on that. Maybe just start with, who are your customers? How do you understand them? How do you go out to meet them? How do you think about how they understand AI, how they relate to it, what they want from it? What's the sort of character sketch of your ICP?

Sam Stephenson: Yeah, I think this is interesting, because I think the character that we hold in our heads is probably more extreme than the average actual Granola user. When we set out to build Granola, I think we were actually most interested not in meeting notes. We wanted to be playing in the space of inventing what are going to be the UIs and the interfaces that let ordinary people, people who are using computers to do their work but see the computer as a means to an end, not as this thing to geek out on and spend loads of time fiddling with, access the power of this new technology. And then notes were, I think, our way of getting our foot in the door into their lives and starting to build a product that could be habitual and could let us do more over time. We spoke to a lot of people in the early days in a very open-ended, very explorative way with no real agenda, just trying to learn about their work lives, and I think the archetypal person that emerged as a good target user for us over time was basically somebody who's in back-to-back meetings all day. And that could be many kinds of jobs. That could be a salesperson, an account manager, a recruiter, could be an investor, it could be a founder of a company, it could be anyone in client services who is doing calls all day long, talking with clients and things like that. And I think they appealed not because we want to build a product only for them, but because there's a great analogy my co-founder Chris brought in that I really like, which is this kitchen utensils brand. I'm going to **** it up. I think it was Oxo, but I might be wrong. And they had this thing where they would deliberately design products for handicapped people, people who struggled with gripping utensils, or only had one arm, or had some kind of disability. 
And they deliberately designed products for those people, not because they just wanted them to be used by those people, but because if you succeed at designing a product for the most extreme kind of user like that, then you end up creating a very friendly, easy-to-use product for the rest of us too. And if you work in accessibility in UI design, this is a common thing people talk about too, but I think our version of that was somebody who is running between back-to-back meetings all day long, frazzled by the context switching, barely having time to go to the bathroom between meetings. Their calendar is just a constant block of stuff all the way through the day. These people exist in real life, but they also are just the most extreme portrayal of a reality that all of us have to deal with pretty often in our lives. If we set our sights on building a great product for a person like that, then hopefully we achieve it, but hopefully we also just create a really simple, easy-to-use thing for the rest of us who still have those moments of frantically jumping between things.

Nathan Labenz: Yeah, I like that. I've had a little bit of experience in my day as a product designer. Usually I've been so frantic and, you know, hustling to try to get to some level of product market fit that I'm not yet thinking of the extreme long-tail user. And maybe that's a mistake. Maybe I should be reversing the order in which I'm thinking about it, but it's an interesting kind of flip from the way I usually have approached things, which is hit the center of the distribution first and then work out from there. Could you give me a little bit more? I've seen some comments from other places around system one, system two thinking, but could you maybe describe a little bit more? Obviously people are busy, right? Everybody's short on time, but what does that translate to in terms of what they are unable to do that you are specifically trying to design around?

Sam Stephenson: Yeah, yeah, yeah. I think us tool builders, myself included, fall into this trap way too often of assuming that the user of your software is coming to it in this calm state of mind, where they're going to give it their full attention, and they're going to think through their actions, and they're going to be capable of stringing together quite complicated sequences of maneuvers in the software to do what they want to do. And because of that, we feel like we can get away with pretty sophisticated, complicated bits of software that people have to learn and develop mastery in over time. I think sometimes that's the case, but I think it's probably way less often than we would like to think. The reality of work for most people who work on a computer is way more reactive and way more chaotic. So much of the time, people are not operating with the rational, methodical parts of their brain. They're in a reactive, system one kind of brain, you know, if you're familiar with that, where you wake up in the morning, you check your inbox: ah, ****, there's three things there that are burning fires, and I've got to deal with them right now. But I also have a meeting starting in 20 minutes, and, oh, ****, how am I going to juggle all of this? And then they're just in a constant state of feeling behind, feeling overwhelmed, never quite getting on top of things. I think that's the reality for a lot of people at work, and as software designers, we need to assume that reality and design our software to fit into it.

Nathan Labenz: Yeah, that immediately calls into question the way that most people do user testing, right? It's been a little while since I've done this, but when you sit down with somebody and you're like, I would just like to watch you use my software and I'm not going to interfere at all, you are immediately putting them in a state of full focus on this, because they don't want to look stupid to you, if nothing else. So that I think is immediately insightful. Do you have a tip for how to user test in a way that captures that real-world frazzled state of mind, as opposed to the kind of best self that you don't actually expect to get from new users most of the time?

Sam Stephenson: Yeah, it's a great point. I feel like I'm guilty of the same thing all the time, too. You want to watch someone use your thing and get validated that they understand it. I think one thing we found incredibly helpful is, whenever possible, when we're talking to a user, I'll just try and get as grounded as possible in: what's the reality of your working day, and how does Granola fit into it? This is hard when watching them use the product live, but for example, when we were thinking about introducing folders in Granola, or some kind of organizational principle, the way we started interviewing people about it was we would ask them to share their screen and bring up Granola. We go to their Granola home screen, where they have a list of all the meetings they've done, and we go meeting by meeting, and I click on one and ask them: What was this meeting about? Who was there? If you could share these notes with someone, who should see them? How would you think about organizing this in your head? They're in an interview, they're in a rational state of mind, but they also can't lie about the facts of what's on the screen in front of them and what notes they took and what meeting actually happened. The trap, which I'm at least constantly trying to fight when I'm talking with a user, is as soon as you get into layers of abstraction, and them talking theoretically or generally about what they do, then you can't trust any of that. You're getting the person's imagined view of themselves, which is often so different from how they actually behave in the real world. I did a similar thing with people's calendars: getting people to pull up their calendars and using that as a forcing function to talk about the reality of their day and what's happened in their day. Really helpful for the same reason. 
And then I guess that's helpful for discovery, talking to people and learning about their work and how Granola might fit into it. In terms of evaluating stuff we've built, I think we're very dependent on how it feels for us to use it ourselves. And now that the team's a little bigger, we can observe people in the team using it in the wild. And I find that's maybe more useful and telling than getting somebody to talk about their experience, because I can literally sit in on a sales call and quietly watch somebody on our team using Granola on the call, and I get a much less filtered version of what they're actually doing with the product.

Nathan Labenz: That's interesting and helpful. I'm imagining an even more comical version of that: you sit down with somebody and immediately a fake fire alarm goes off in the background, and then you spill your coffee. Though you might have to get through the university review board before you can deploy something like that.

Sam Stephenson: Yeah. Yeah.

Nathan Labenz: I want to get into how this is all translating into actual form factor design decisions of the product. But before going to that, just on a philosophy level, I see a big tension, well, tension might be a little bit of an overstatement, but we hear a lot in the AI space about how AI is going to do the routine work. It's going to do the stuff we don't want to do. It's going to take care of the drudgery. And the upside of that is going to be that we get to be our better, more system two, more strategic, higher-level-of-thinking best selves. Okay, that's great. But then the other reality of product design, which I think you're embracing, is meet people where they are. So I'm wondering: do you think that, in the fullness of time, or in the most successful form of the product, you are still meeting people with a jam-packed calendar who are stuck in system one and helping them manage that reality? Or do you think you are ultimately helping them get into that system two state, which they're not in today? If so, that would maybe imply a different regime of product design in the future. Do you come down on one side of that debate, or do you manage to find a balance between those two competing lines of thought?

Sam Stephenson: It's a good question. It's a good question. I think both. I feel like we'll be able to make progress in some areas faster and in other areas not. I think, for example, there are so many tools out there trying to be an executive assistant for you at work, or an agent helper that understands what's going on and helps you do your knowledge work. And I've tried a lot of them. And I think the thing that they struggle with every time is that knowledge work is just so messy and nuanced. And I just think we're still a long way off from AI having access to all of the information about a person's life and understanding all of the social nuance of: what's my relationship with this person? And how should I write when I write to that person? And how should I rank the importance of what they're asking for versus the importance of the other things they're being asked for? And therefore the, I don't know, the to-do lists that these things create for you, or the drafted emails that they go and write for you. You can look at a screenshot of it and be like, oh yeah, that looks like it's going to really help me. But then when you actually engage with the content, so often if they miss the mark even slightly, then they're useless. It's almost there, but I'm just not quite comfortable using that. And they fail. And I think we'll get there. I think models will get better and it'll get easier to connect all the tools you need to connect. But I think it might take a surprisingly long time before we get there. We want to be playing in that space too. And so the game for us is, how do we... I think a principle we found helpful often with this stuff is to almost be surprisingly unambitious in the kind of tasks we promise to help you with. I think it'll be a while before Granola can help you with the kind of thorny, top-of-mind, most intense burning fires going on in your work life at any time. 
But I could definitely see Granola helping you with the menial stuff that you keep putting off because it's not that important, and also there's a lot of it and it's boring. And I think just being very deliberate about where we focus our attention there will mean we can be more effective in helping you more quickly. So in terms of helping people get into System 2 mode, I think that's the lens we have to take, and that's how we have to approach it. I think there'll be surfaces in the product that are designed for you to spend more time in them and to do more thinking work. The full-screen chat interface we have in Granola now, which has access to all of your meetings and all your company's meetings, is designed to be sat with and used in long-form conversation. And I'll use that to write posts or job descriptions, or I'll use it to analyze what's going on in a certain part of the company. And it's a very different mode to the meeting notepad, back-to-back meeting moment, and means we can do different things design-wise.

Nathan Labenz: Yeah, I like the multi-meeting feature a lot. It was one of the biggest things that I found to be particularly cool about using the product. Let's talk about the actual design a little bit. Actually, let me do one more beat on that first: what do you think the barriers are to that kind of deep context? I would say, personally, I've made progress, and I'm obviously an early adopter and willing to put in the work. Regular listeners have heard me talk about this a little bit, so I'll keep it brief. But basically, over the last couple of months, I've been really investing in making sure that whatever agent I'm using has enough context that it's not for lack of context that it fails. And that has involved a pretty tedious process of exporting everything from the systems where it lives. All my emails from the last five years, all my Slack messages from the last five years, DMs across all these different channels. Actually all calls that I've transcribed over time, even all the podcasts with all the speaker recognition, diarization, all that now lives in a single database on my computer, and the agent can query it. And so it's not for lack of access at this point to my general circumstance that it fails. And I'd say it definitely has moved the needle in terms of how often I can actually say, hey, can you write an intro to this person? Look up my history with each one and my pattern of intros, and actually get something out that is, if not immediately ready to send, at least: yes, that definitely worked, and with a little tweak, it'll get there. So I feel like I'm starting to see the promised land, even if I wouldn't necessarily say I've entered it. I can see over the hill at least a bit. What do you think is going to make that slow for most people? 
Is it going to be trust? Or, I guess we could also imagine, and I think we see a little bit of this, that some systems are going to try to hoard their data and prevent people from exporting it all and doing that. Exporting stuff from Slack is not a ton of fun, to pick on one particular platform. What do you see as the fundamental barriers there that keep that from happening, say, in the next quarter or two?

Sam Stephenson: Yeah, I mean, everything you've said resonates. It's the things that you've done, that you're doing, like connecting everything: the machine is only as good as the context you've given it, as you're saying. I think a lot's possible nowadays on a personal level, getting all your stuff into one place. I think it's much tougher if you work at a company, and a lot of the context that needs to come together is inside a company with all of the complicated permissions, and you've got to get security to sign off on connecting all the tools together and things like that. I think things are way more fragmented there still for most people. A lot of Granola customers are surprisingly progressive, I think, in how much they're willing to let employees explore with this stuff and figure out good workflows. I think everyone kind of realizes that we're in a moment where, if you don't, then you're going to get left behind really quickly. But I think context is a big, big one. And then just the level of personalization and memory that's necessary for the agent to do a good job is maybe underestimated by a lot of software. I think the agent just has to know a lot about you, right? A lot about you, about the people you work with, the projects you're working on, the priority level of things. And I think it's possible with any of the general-purpose agents to teach them those things if you're extremely proactive today. But the average computer user is not extremely proactive in setting these things up. I think the job for us is to make as much of that happen in the background as possible, so that you don't have to think, and you just get useful stuff put in front of you every time you turn on Granola.

Nathan Labenz: The point about permissions: it's funny how simple that point is, and how, in theory, easy it would be to resolve, but also how fundamental it really might be for a while yet. Notably, I am an admin on all of the systems that I was using. And I do think the ability, at least for me, to get that unified view that crosses personal and professional and just puts all of me into one database has been huge. And I just wouldn't have it if I weren't the admin on all of the systems. And I think it probably would be pretty hard to talk the admins into allowing me to do it if I wasn't myself the admin. So I definitely see that as a real challenge. Presumably you could get around it in the not-too-distant future with computer use. You don't necessarily need the Slack API if your computer-use agent can just go troll through, but still, you're probably going to be told explicitly you're not allowed to do that. So that'll be another barrier: just outright rules. Yeah, that's really interesting. It makes me think of a context-as-a-service startup as an interesting opportunity.

Sam Stephenson: Yeah, I feel like innovation in this area would unlock so much for so many companies, and for all of us, at the moment. How do you make the agents understand what's okay to be shared versus not okay to be shared, and who should know what kinds of information, and things like that? Granola defaults to private for everything. So every note you start is only visible to you unless you take an action to put it in a shared space. And that's because conversations are potentially really personal, right? And you only have to say one personal or dodgy thing to pollute a transcript and make it not okay to share with the wider organization. But that gives us a really tricky design problem, in that so much of the value of transcripts in Granola is them being in a shared space where the company can collectively access them. How do you help those end up in the right places, while also not putting information that shouldn't be shared into public places? It's a tricky one. We haven't nailed it yet. We're trying stuff and it's getting better, but we could do a lot better still.

Nathan Labenz: Yeah, that's interesting. This is an obviously AI-pilled take on the question, but my first instinct would be to run the transcripts through an AI, and I actually do this sometimes with the podcast, right? Occasionally, especially if we get into politics or whatever, I'll run the transcript through the AI and say, is there anything in this episode of this podcast that the guest might regret having said publicly, or something like that? And I would say they're pretty good at coming back with sensitive things. These are meant to be public conversations, so it's probably a little bit more obvious in general what is going on and what would be sensitive or not. In the context of private company internal conversations, you've got a lot that the AI doesn't have background knowledge on. But is that kind of the direction that you're headed? Try to use the AI to help surface these potentially sensitive things for people?

Sam Stephenson: We're tinkering with this a bunch at the moment. Yeah. I feel like there probably is a world where we could automatically put stuff in the right places 99% or 98% of the time. And the jury's out as to whether that's enough for people. If there's even a tiny risk that something sensitive goes into a public place, then that's not okay for a lot of people. I guess we have a basic version of this at the moment, where when you finish a note, Granola will auto-suggest the folder it should go into, and that's an LLM on the back end. It's looking at the call, looking at the list of folders you have and the characteristics of things that usually end up in each folder, and suggesting the one it should go in. It's not accurate enough for us to turn that on automatically at the moment, but there's a world where it could be, with more work and with better models.

Nathan Labenz: Yeah, I hear better models are coming soon, so that won't, that trend doesn't seem to be stopping anytime in the immediate future. Okay, I do want to, maybe let's just talk about inference budget, because some of the things you're saying there are really interesting, very much feels like the future, and also I'm not sure how you're gonna make it work at the current price point. So I'm wondering how you're thinking about that. Like I, it's my full-time job to be a student of AI and it's a very justifiable business expense to me to pay whatever my inference bill ends up being. And it's just me. When you are a company, you've gotta be a little more disciplined about that. And when you are offering a product at a fixed monthly per seat price point, that is. another level of that problem. Maybe I've missed it or maybe I haven't hit certain limits or warnings or whatever, but one thing that jumped out at me as I was kind of reflecting on my use of the product is I've never seen a you have this many credits this month or you're like approaching that limit or this action is going to use this many things. As far as I have experienced, it's always been just use it and it feels like it's unlimited. So maybe tell me if I haven't hit certain limits or I'm missing things, but that's an interesting design choice. And it also has me thinking like, are you guys in the Uber phase of who cares what it costs? Let's just make it great and we'll worry about our margins later. Or are there tricks or maybe it's maybe usage patterns are such that it actually is working economically. And how much especially when you want to go into these really deep contexts, right? And you're getting into easily now, it can be. A dollar plus for a single API call. If you throw 10 different meeting transcripts into Sonnet alone, you're getting to a dollar. 
So if you're going to then say, okay, I'm going to try to organize your folders with this really full contextual awareness, you're starting to spend real money, right? You're starting to see why the model providers' revenue is going the way it is. So I guess there are a lot of directions to go there, but broadly, how are you thinking about inference budget and how it relates to driving the futuristic value that you're describing?

Sam Stephenson: Yeah, yeah. I think where we're at today is really a function of where we're at as a company and how we've prioritized things over the last couple of years. When we were tiny and pre-launch, and around launch, we explicitly told ourselves there is no budget. Whatever makes us create the best product, we should work on that, and just creating a good product is going to be hard enough. When the user numbers are small enough, it's a waste of time to focus on optimizing cost. And Granola was actually really expensive when we launched, because so much of the cost was in transcription APIs, and the cost of that has gone down a lot as we've scaled and as the technology's got more commoditized and cheaper. So that was the early days. Today, where we're at is: most usage of the product is the notes, and that's a pretty predictable cost. The number of meetings someone does has physical limits to it, and it averages out over the course of a month, and you can figure out what margins you're comfortable with there. I do think sophisticated users, who have lots of meetings and want to be able to chat and do stuff with large amounts of meetings, do rack up pretty big bills, which we swallow at the moment. And over time, as Granola transitions to more directly doing work for you, rather than just being a companion or aid, I think we'll have to look at the pricing plans, and usage-based pricing, like a Cursor or Claude model, makes a lot of sense. But there's something very nice and transparent about paying a price and getting a thing, and especially as people think of us as a meeting notes app, where you bring it to your meetings and you get meeting notes out, and that's the core of it, people really like the simple seat-based thing.

Nathan Labenz: Yeah, that echoes a lot of my experience and the general advice I've given over time, although you've taken it to quite a bit larger scale than I ever have, that's for sure. But I do think people in general shouldn't optimize too soon on cost. And then the question just becomes, can you get all the way to a unicorn valuation before you really have to get serious about that? And I guess the answer is yes, you can.

Sam Stephenson: We have a line of our burn rate that we keep up to date: if user growth keeps going at the rate it's going, given how much the product costs. There was a point about a year ago, maybe six months after we launched, where from where we were then, with what Granola cost, if you plotted it out a year forward, it got terrifyingly expensive. It was kind of distressing to look at. But the transcription costs have gone down a lot for us; I think there was a time when half of our burn rate as a company was going on transcription. It's a lot better and more under control now. The pendulum might actually swing back again as we do more LLM stuff and work-assistant-type stuff over the coming months. It will probably get more expensive for us, and at some point that's going to trigger a "we should think about usage-based pricing," but for now we're comfortable with where we're at.

Nathan Labenz: Yeah, and then there could be countervailing forces there too, in the same way there have been with transcription. The trend in model cost is obviously dramatically down, for a given level of capability anyway.

Sam Stephenson: We all like to tell ourselves that things are gonna get cheaper, but also the new good models are always so enticing.

Nathan Labenz: Yeah, so far at least. I do think at some point... I think of this as being an Emad Mostaque idea; he was the CEO of Stability AI and now has the Intelligent Internet project, and he's a huge believer in satisficing. He's like: yes, so far, every new generation of model, everybody's kind of found new utility in it. But how much longer is that really gonna go before you get to a point where something really will be good enough for a large majority of general knowledge-work cases, and the best models will really be doing the genius-in-a-data-center thing? And do you really need a genius to turn your meeting notes into a follow-up e-mail, or is there a point where you can let the cost come down again? I suspect that there probably is. On transcription, I don't know if this is something you can share or want to share, but how does transcription work today? It's pretty fast. I noticed that in using it. And it was so fast, I was like, is this running locally? I think there are small models that maybe could run locally, but I wasn't sure. So what can you tell us about how transcription works, or what have you learned about transcription that you think is maybe underappreciated?

Sam Stephenson: Yeah. The way it works is pretty simple. We record audio from the microphone and the system audio of your computer, and we pipe those to real-time transcription APIs from third-party providers. We work with a couple of those. It's all cloud-based APIs. You transcribe in real-time for a bunch of reasons. It's very comforting to see the transcript come in in real-time; you trust that the thing is working. It means we can generate notes as soon as the meeting ends; we don't have to wait for the transcript to come back first. It also means we can discard the audio, so we deliberately don't hold on to any audio from the conversation. I just think it's far less creepy to only be keeping a transcript and not the full-on audio recording. And there are a lot of trade-offs there. Real-time transcription is a lot worse than if we were to send all the audio in one go and wait a minute for the transcript to come back: the quality is a bit worse, and the speaker separation is a lot worse. But overall, I think it's the right move for us. We're constantly looking at transcription, because it's maybe the most fundamental of the technologies in Granola, and if you improve it by 10%, there are a lot of knock-on effects for the rest of the product. I think it probably makes sense for us to move to on-device models at some point, for speed and cost. And speaker separation is something that, if we can make it better in the product, will make everything else 10 times better, so I can also see us chasing whatever provider or solution gets us there. We have to stay nimble. We've architected the product so we can swap out transcription models fairly quickly and easily, and we're constantly trying new ones and trying to figure out what the best strategy is.
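The transcribe-then-discard flow Sam describes can be sketched as a small session object: audio chunks from the mic and system audio are forwarded to a streaming transcriber, only the returned text is retained, and the audio is never written anywhere. This is a hypothetical sketch with the transcription client stubbed out; Granola's actual implementation is not public:

```python
# Sketch of the transcribe-then-discard flow: audio chunks go to a real-time
# transcription API, only text segments are retained, and audio is never
# stored. All class and method names here are hypothetical.

class TranscriptOnlySession:
    def __init__(self, transcriber):
        self.transcriber = transcriber  # e.g. a streaming STT API client
        self.segments = []              # text only; no audio is kept

    def on_audio_chunk(self, chunk: bytes, source: str):
        """Handle a chunk from the microphone or the system audio.

        The chunk is forwarded for transcription and then dropped; only the
        resulting text, tagged with its source, is appended to the transcript.
        """
        text = self.transcriber.transcribe(chunk)
        if text:
            self.segments.append((source, text))
        # `chunk` goes out of scope here: no audio is written to disk.

    def transcript(self) -> str:
        return "\n".join(f"[{src}] {txt}" for src, txt in self.segments)

class FakeTranscriber:
    """Stand-in for a real-time STT API (Deepgram, AssemblyAI, etc.)."""
    def transcribe(self, chunk: bytes) -> str:
        return chunk.decode("utf-8", errors="ignore").strip()

session = TranscriptOnlySession(FakeTranscriber())
session.on_audio_chunk(b"hello from the mic", "mic")
session.on_audio_chunk(b"hello from the call", "system")
```

Tagging each segment with its source (mic vs. system audio) is one cheap form of the speaker separation Sam mentions: you at least know which side of the call a line came from, even before any diarization.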

Nathan Labenz: I don't know if you want to name names, but I'll tell you on my end, I'm a pretty happy Deepgram customer. I don't do real-time, because I don't really need to, but I have a whole set of skills that run after we record. I'll get the audio out of Riverside, send it over to Deepgram to transcribe, and then do much else downstream of that: clip ideation and clip creation, so on and so forth. These days I'm even making songs in Suno that pull key phrases, fun phrases, out of the transcript. So I'd say Deepgram has worked pretty well for me. Is there anything else that you would suggest people try that, again, you think is underappreciated, or anybody you would want to shout out as being particularly good?

Sam Stephenson: Deepgram and Assembly are the two main providers we work with. Both are great. I think we settled on Deepgram before launch, and every time we evaluate, the models are good and we can't find a better one. And Assembly we've been using for a while now too. We actually split traffic in the desktop app between the two of them, because they're comparable and it's helpful to have redundancy for such a core infrastructure thing. So yeah, we're happy with them. It's just that we keep our eyes open all the time for how we make this faster, or how we make it have better speaker separation or better quality.

Nathan Labenz: Yeah, totally. Makes sense. Even just for API uptime, we're definitely entering a world where uptimes ain't what they used to be. And I haven't had any problems with Deepgram on the API uptime front, to be clear, but 98-point-something means you want to have a fallback, if for no other reason than you don't want to go down just because one of your API providers goes down.
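The redundancy being discussed here, splitting traffic across providers so one outage doesn't take the product down, is at its simplest a failover wrapper. A minimal sketch, with both provider clients stubbed (real integrations with Deepgram or AssemblyAI would wrap their SDKs behind the same interface):

```python
# Minimal failover across two transcription providers: try the primary,
# fall back to the secondary on error. Clients here are stubs; the names
# and exception type are hypothetical.

class ProviderDown(Exception):
    pass

class FailoverTranscriber:
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback

    def transcribe(self, audio: bytes) -> str:
        try:
            return self.primary.transcribe(audio)
        except ProviderDown:
            # Primary outage: keep serving users from the fallback provider.
            return self.fallback.transcribe(audio)

class FlakyProvider:
    def transcribe(self, audio: bytes) -> str:
        raise ProviderDown("simulated outage")

class SteadyProvider:
    def transcribe(self, audio: bytes) -> str:
        return "transcript from fallback"

t = FailoverTranscriber(FlakyProvider(), SteadyProvider())
result = t.transcribe(b"...")  # primary fails, fallback serves the request
```

A production version would add timeouts, health checks, and the kind of traffic splitting Sam mentions (routing a share of requests to each provider even when both are healthy, so you always know both paths work), but the core shape is this try/except.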

Sam Stephenson: Yeah. People really squeal when Granola goes down. You notice very quickly, and it's a very painful thing. Yeah, the redundancy is important.

Nathan Labenz: Okay. You mentioned this idea of not storing audio. That was definitely something I also wanted to dig in on from a design perspective. I think there are multiple different layers; give me all of them, but the one I'm noticing immediately, as you said, is that it's less creepy if it doesn't keep the audio. I would be really interested to hear your thoughts on the way people are thinking about AI and being surveilled. We're entering this realm where certain powerful entities are indicating their interest in processing large amounts of data on individual citizens. There's that out there. And then there's this kind of "we're doing it to ourselves," which we obviously have the right to do, but then we're also doing it to everybody around us as we're doing it to ourselves. And this also relates, I think, to the decision to have the product work at the operating-system audio layer, as opposed to joining the call. Everybody has seen the bot note takers, and I actually use both in today's world: a call-joining note taker, and Granola on my computer, and they both do their thing. And again, this is something I do because I'm weird and try to use all the AI products; I don't think most people need multiple note takers on every call. But it stands out, right? It works on your computer, at the audio level. This does mean it can happen without people on the other end of the call knowing that it's happening. You do have guidelines and some features for how you should talk about it; there's one feature that allows you to automatically put a notice on the calendar invite that Granola will be used for this meeting. I don't know how many people actually do that. What are your thoughts on privacy, surveillance, disclosure, consent? What really matters in this case? Is it really that big of a difference if it's audio versus the transcript?
I don't have a great intuition for that, but it's not immediately obvious that I should be that much more comfortable just because it's a transcript versus the original audio. So I'm sure you've come at this from every angle. Tell me everything.

Sam Stephenson: This is a sticky question, and a thing we had to grapple with at the very beginning, when we were making the decisions about being a desktop app and so on. It continues to be a thing we have to keep looking at and improving today. Now we have much bigger enterprise customers using us, and they have much harder requirements than an average person who spends a lot of time on Twitter, finds us, and downloads us. At the beginning, there were a few things we wanted to optimize for. We wanted Granola to work everywhere. You shouldn't have to think about, "Oh, can I use Granola for this meeting?" or "Should I use Granola for this meeting?" It should just work, wherever you're having a conversation. At the beginning, actually, we thought: notes, we just need to get notes, that's all. And we toyed a lot with, should we even show the transcript, or should the transcript self-destruct after a little while? Because really, notes are the value we're giving to people. I think we learned pretty quickly that notes are good for humans, but the transcript is amazing for LLMs to do stuff with. So we keep the transcript. But we're not interested in having a word-for-word, legal-grade record of the conversation that would hold up in court as "he said this" or "she said this." All Granola needs is the notes of what happened, plus enough detail to give the LLM the knowledge to go and do the work it needs to do. We don't need the word-for-word record, and I think because of that, audio doesn't really matter.
Initially, and I think this lasted a week, when we first started giving the product to people, we stored the audio in an S3 bucket, and we were like: we're not gonna use it now, but at some point in the future we can probably train a model on all of this audio and it's going to make everything way better, so let's just keep it, just in case. And we lasted about a week, and then we were like: no, this is too creepy. It makes the product feel heavy and serious, like a thing that you're using against people. None of that felt good, so we turned it off. And that's how we got to just keeping the transcript. On the question of consent, of letting people know: Granola is a tool that you can use on your computer, and if you want to use it quietly, when nobody knows about it, then you can. Our stance is that it's a tool, like voice memos on your phone or a notepad, where you can record what's happening. So it's on you to follow the laws of wherever you are and to disclose to people that you're using it. There's no reason from our side why Granola should be hidden. It's in our interest for Granola to be out in the open, and for everybody to be on the same page that it's being used. It's great for us for growth, and for people discovering us and learning about it. We handicap ourselves by not being a bot that joins the meeting, this huge glowing orb with a logo on it that people can discover. We've built some ways to help users disclose it, and we're working on a bunch more to make it more transparent that you're using it, because it's in everybody's interest for that to be out in the open. That's where we're at today. I do think Granola is a tool for work, primarily. People use it in their personal lives, but it's primarily for work. And I do believe that over the coming years, it's just going to become more and more normal that people transcribe conversations at work.
There's so much upside to it, and I think we and others will figure out how to help mitigate the downsides, such that it will become a default. We used to get a lot more questions about privacy and consent when we started two years ago, and we get far fewer now. People are much, much more comfortable with it already, and we only see that continuing. I do think it's very different talking about a work context versus a personal context. The always-on wearable thingamajigs are gonna have a real hard time navigating the privacy question in people's personal lives. We have it much easier. Meetings are a set piece where you sit down and you all agree that you're gonna do work together. It's a nice social moment where you can have a social contract around recording the conversation that doesn't exist in the rest of your life. That's roughly how we think about it today, but there's still a lot to figure out there.

Nathan Labenz: It's interesting how much of a shift you have seen already, and I do feel like you're probably right that there's just going to be more and more of it, because the value is really hard to pass up. You were talking about how you've potentially got three layers, right? Raw audio, which you don't even keep; the transcript, which you sort of thought maybe you didn't need to keep, but it is really valuable. And I do see in the product today there's grounding of answers and notes to specific moments in the transcript, which I think is huge for confidence building, if nothing else, and for the ability to dive in and sanity-check. And then the most superficial layer, which in theory should have full value but sometimes doesn't, is the notes themselves. I wonder how you think about that in the context of the ever more AI-ified future. One thing I'm doing right now is, like many people, I've got my main personal computer and then I've got my new Mac mini that's kind of riding shotgun with me. My main computer is where the database of all my contacts that I mentioned previously lives, and that's where Claude Code is kind of my go-to, because I do trust it the most, certainly relative to God knows what OpenClaw setup with who knows what model in there. And on this computer I'm logged into everything too, so there's another consideration. I basically run my agent on my computer as kind of an extension of myself, where I'm giving it real-time instructions. It's kind of the copilot model: it's agentic in the sense that it can do stuff, but it's copilot in the sense that I don't have it doing super-long-running loops or waking up when I'm not around. It's taking explicit direction from me as the main mode. And because of that, and because I generally trust the provider of the model, I'm pretty comfortable with it having such deep access.
A question I'm thinking about a lot right now is: okay, for this agent on the other computer, what sort of access should it have? What should the model be for how it gets information about me? One of the things I'm experimenting with is basically creating summaries: right now, going through the full five years and creating a monthly summary. Then I plan to create an annual summary, and I have a few other views in mind. I'm thinking about a project-level view over all time: what were the projects, as discrete things, and which of those are active? And then relationships across the last five years: what is the survey-level view of all the relationships that matter, and the deep context on those? And I'm thinking maybe that's what I provide to the agent that I'm thinking of as more autonomous. That's the one that I do plan to let go off and do work without me: it will wake up and check its inbox, and it's gonna have its own e-mail and the ability to send e-mail, perhaps without my review. So maybe that more abstracted understanding of me is the right thing. Or the other option I'm toying with is: maybe that agent should call the personal agent and they can talk to each other, with the one that represents me asking, what is the least information you really need to do a good job, and maybe handing over just that. But I'm still not quite sure I trust that dynamic either, right? Because models are all very foolable; they're not adversarially robust to attacks aimed at information extraction. I don't know, any tips for me on how you think I should be thinking about this in a more sophisticated way?
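The rollup Nathan sketches, meeting notes abstracted into monthly summaries and those into an annual view, is a simple hierarchical summarization. A stub version, where `summarize` just concatenates (a real version would prompt an LLM at each level, and all names here are hypothetical):

```python
# Hierarchical rollup of meeting summaries: month-level summaries feed an
# annual summary, so a more autonomous agent sees an abstracted view of you
# rather than raw transcripts. `summarize` is a stub standing in for an LLM.

from collections import defaultdict

def summarize(texts, label):
    """Stub summarizer: a real version would call an LLM with `texts`."""
    return f"{label}: " + " | ".join(texts)

def rollup(notes):
    """notes: list of (month, text) pairs -> {'months': {...}, 'year': str}."""
    by_month = defaultdict(list)
    for month, text in notes:
        by_month[month].append(text)
    # First level: one summary per month.
    months = {m: summarize(ts, m) for m, ts in sorted(by_month.items())}
    # Second level: the annual view is built from month summaries only,
    # never from the raw notes, which is the abstraction/privacy trade-off.
    year = summarize(months.values(), "year")
    return {"months": months, "year": year}

notes = [
    ("2024-01", "kickoff with Alex"),
    ("2024-01", "budget review"),
    ("2024-02", "hiring sync"),
]
view = rollup(notes)
```

The point of the hierarchy is exactly what the conversation circles around: each level up discards detail, so the autonomous agent gets enough context to act on your behalf without holding a word-for-word record of everything you said.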

Sam Stephenson: No, I don't know. I think you're already more sophisticated than me. Thinking about it in terms of humans I'd be interacting with really helps me. We have an executive assistant on our team who works with Chris and me on a bunch of stuff. They're only as good as the context you give them and how up to date they are, but I wouldn't want them with me every moment of the day, and not every moment that Chris and I are together. I like being able to talk with just Chris in the room and to be able to speak freely, without worrying about anything outside the relationship we have. That comes at the cost of our executive assistant not having the full picture of everything we talked about, but we have touchpoints that give them enough to do their work and go and do things. I'm not sure anyone's really figured this out yet, but I feel like we need to figure out those kinds of ways for agents to interact with each other. I think forgetfulness is a thing we don't think about very much with these systems, but it's baked into all of us as humans for the fidelity of our memories to drop off pretty quickly. I feel like there could be a lot of cases where that's a feature, not a bug. For agents too, it's an elegant way of defusing the dangerous stuff, from a transcript or from your emails or from anything, if things get fuzzy pretty quickly in the agent's memory.

Nathan Labenz: Yeah. I like that a lot, actually. I've been toying with this idea of agents that, and there are a lot of open, unanswered questions on this idea, would get smaller over time: literally, the weights of the model being gradually pruned down so that the agent settles into its niche and can be very good in that niche. Over time, as it gets smaller and smaller, you get efficiency out of that, and you also get the assurance that at some point it really can't do anything else. And I do think there's a lot of design space here. In general with AI, we're doing this depth-first search on one particular architecture, with one particular vision of the everything-AI. I really wish we were doing a lot more breadth-first search before going so deep on one particular thing, because it seems like we're baking in a lot of trade-off decisions by having one main thing that everybody's chasing, when there are all these different scenarios and contexts, and different people would probably want to make decisions very differently. I do wish we had a wider view before we get to AGI. It's odd that we're taking the first AI that worked straight to AGI, as opposed to exploring a wider sampling of the possible AI design space. Coming back to Earth: features. I would say of all the AI products I've used, Granola really stands out for maybe being the most disciplined about feature bloat. And that discipline, I assume, has got to be something that you are holding the line on incredibly purposefully, because obviously in today's world, we can vibe-code our way to new features 10 times before lunch, right?
In a world where you could code anything, you guys have made the very disciplined choice to keep the number of features to a relative minimum. How do you think about that? Why are there not more features? Obviously this has something to do with people being overwhelmed as they're encountering the software. But then how do you think about which features carry enough weight that they actually get promoted and put into the product?

Sam Stephenson: Yeah, it's a good question. I think you're right, though. A lot of it, honestly, is what you said about us being cognizant of keeping the product feeling stress-free for people who are operating in quite a stressful environment. We have different approaches for different parts of the product. For the core flow, it pops up before the meeting, you click the button, you have the notepad with you during the meeting, and then that turns into generated notes afterwards. I'm almost paranoid about adding stuff to that view. It's got to pass a really high bar; it's got to be really useful to justify the expense of making the notepad feel like less of a calm place, less of a place you want to spend time in, which is tricky. It means we end up hiding features that are actually really useful, like the templates thing that lets you structure the notes in any way you want. A lot of users haven't even discovered that; it's way too hidden. But it's hidden because most users shouldn't have to think about it, I think. It's for people who want to go deep and really optimize their workflow. There are a lot more features like that, where we're paying a price by making them small and tucked away, but I think it's worth it in the bigger picture to have a very calm, nice-feeling app. We're more lenient in other parts of the app. If you're looking at a folder, or if you're chatting with a lot of meetings, like I was talking about earlier, you're usually in a different frame of mind. When you're in a meeting, we can expect to have 2% of your attention, and the other 98% is on the person you're talking to. Whereas when you're chatting with Granola, I think we can have more like 80% of your attention, and therefore we can afford to throw more stuff at you. So we're a bit faster and looser with the stuff that we try and put in there.
And honestly, I would like us to be more experimental; I think we can afford to be, especially in those areas of the app. We err on the side of keeping things very calm and Zen, but sometimes at the price of not getting good things out quickly.

Nathan Labenz: One thing you'd mentioned at the top was viral hooks working, or I don't know if you used the word viral exactly, but features that are driving growth. And you also mentioned that the most obvious one, having the bot join the call in a visible way, is not one that you're using. What I have used is the recipes feature, where I created a blind-spot finder, because my personal mission for my AI study in general is to have no major blind spots in the AI landscape. An increasingly impossible mission, but nevertheless, I try. So I love the fact that it can go back into deep context and pull all of that in, and then try to take this higher-level view. And as we've discussed, this is something I can engineer on my own, but obviously the vast majority of people can't. It does feel very consistent with the "help me with the System 2 side of my life" idea. So I guess the two questions are: what are some of your favorite other recipes, other use cases you've seen people create that are cool? And is that a big driver of peer-to-peer growth, or if not that, what are the other things that are really helping you propagate the product through users?

Sam Stephenson: Yeah, yeah. Recipes have been really interesting, and a lot of use cases like the one you described have come out of the woodwork that weren't really things we were anticipating when we built the thing. The motivation for it was pretty simple: power users of Granola had figured out that you can chat with the body of meetings and learn some pretty profound things, or get it to do some really interesting things, and we just wanted a way to make that repeatable and very easy to trigger. So we built recipes, which are essentially a prompt that you can run at the click of a button. The use cases that have surprised me, or seem interesting, are the cases where it's hard to sit down and write something, but it's very easy to just get in a room with people and shoot the ****, talk about it. And then, with Granola's help, you can corral that messy transcript into something useful. Our CX team, who are fielding bug reports and customer issues all day long, will pretty regularly do a meeting where they just bring up the main issues people have talked about, or the things people are getting confused by. And after the meeting, one of them will go and hit the recipe that converts that transcript into a bunch of suggested updates for our documentation, for example. It can take the transcript, go look at our documentation, figure out where the holes are, and then suggest a bunch of edits. We do similar things with the job descriptions on our website. If we're looking to hire for a new role, it's often easier to just go talk about it with the people involved in that role. And we have enough examples on our website that you can usually point Granola at those examples, plus the transcripts of the conversations, and it'll do a pretty great first draft of a job description.
In terms of frequency of use, at least for us internally, those are the biggest ones and the most time-saving. The other one that really caught me off guard was the personal coaching type stuff, where Granola can look across the history of conversations you've had and pick out patterns about how you are as a person and how you could be better. I've never worked on anything, I think, that's had such a strong emotional reaction as those things. When we were doing user interviews for the recipes feature, we'd use the "Coach Me" recipe written by Matt Mochary, who's a famous CEO coach, and we'd ask someone to get on a call with us, go to their recipes page, and click this recipe, and then we'd watch them react to the result. I don't think anyone quite got moved to tears, but there were real, profound emotional reactions to this thing, which is incredibly validating and satisfying as a product builder. I think people get real, deep value in that. Granola understands a surprising amount about you, just by dint of being with you in all these conversations, and I'm continually surprised by the depth of its understanding there.

Nathan Labenz: That reminds me of another one that I thought was really cool, from Dan Shipper, who created, I forget exactly what he called it, but basically the implicit-company-culture recipe: it looks at calls across your business and tries to say what the culture of the business is in practice, as opposed to what you may have put on your website or in your handbook. I suspect for a lot of leaders that would be a really incredible source of reality checks in some cases, and ideas, and who knows what else. One quick double-click on how you're using it: are you an in-person company? And when you talk about getting people together, are you putting a laptop on a table in a conference room and just hitting record, with everybody in the same space along with the recording? Or... you're nodding. I guess that is it.

Sam Stephenson: Yeah, pretty much. We're 100% in person, which is ironic: we've built a thing that's designed to be used mostly on video calls. But now we have the mobile app, and the mobile app is the main thing we use in person. Someone will turn Granola on, put the phone on a table, and then we talk for half an hour. You don't have to be super structured. People can just air out their thoughts, debate things, and leave it up to the robots to figure out the structure and how to format that information.

Nathan Labenz: Yeah, I love that for a job description in particular. The idea that you could go to the team, collect random unstructured thoughts, and then make sure all of that is really represented. I think that could raise the level of the job descriptions most companies put out quite significantly.

Sam Stephenson: Our design and product team is growing at the moment, and I'm going through a process of trying to write down some of our design and product principles for everyone internally. The next designer that joins, how do they ramp up quickly on how we think about product? And it's been super helpful for that, because I want those principles to be written in the language our team understands, and to feel like they've come from us as a collective. Being able to get in the room and talk about them as a group, but also one-on-one with each other, and then amalgamate all of that together has been super helpful. And you can get the AI to come up with pretty punchy principles that are worded in a more interesting, different way than me trying to capture the average of everything everyone was saying.

Nathan Labenz: So are these recipes the thing that is driving the growth, or are there other viral feature design principles you could share?

Sam Stephenson: I think, I mean, recipes do well on social media or public sharing, 'cause people create a recipe, they wanna show it off to the world and talk about it with people. And so that's been pretty good for us. But most of our virality is either just people recommending it to each other or people sharing notes with each other. We try and make sharing notes as quick and seamless as possible. So often I'll be talking to a user and I'll ask how they found out about Granola, and they'll say, I was in this meeting, and then 30 seconds after the meeting, these beautiful notes just popped up in Slack. And I was like, how did you do that? There's no way you could have written those in that amount of time, and how did they end up in Slack so quickly? And then they dig and they find out that it's Granola, and they want to try it for themselves. Those are honestly the main ones. As we've worked on building more team functionality into Granola, we're starting to unlock something else: there's now a benefit to having all of your team using Granola, not just each individual making themselves more productive. You can pull stuff together and use that shared context. So we have a bunch of teams now actively encouraging each other to use the thing, and procurement teams buying a license for everyone in the company so that everyone's on it. But most of it is pretty basic. People like the product and they tell each other about it.

Nathan Labenz: You only need one viral growth mechanism if it really works. That's one of the profound lessons of growth in general: it usually doesn't come from that many different hooks. One hook really works, and that drives it. Okay, let's talk about design. You said your team is growing. There are a lot of good candidates, but I would say product design, product building, and the relationship between design and building is a pretty good candidate for one of the jobs that has changed the most over the last year or so. So what does that look like for you? How big is the team? Do you still have traditional design and/or product manager and separate engineering roles? How much have those blurred together? How helpful are AIs when it comes to going from an idea to a candidate design in your design system? How well do those things work for you? What have you learned about actually making the AIs useful as a sort of designer assistant? In short, how has AI transformed your product creation practice at Granola?

Sam Stephenson: It feels like slow change day to day, but then if I compare what we're doing today to what we were doing last year, it's completely different. So as a company, we're 60 people now, I think. If I break that down, that's 25 to 30 engineers, then three product people, three designers plus me, a couple of design engineers, and then everyone not on the product, engineering, and design side. As I've been hiring designers especially, and I guess I'm closest to design and lead the hiring for that, it's a prerequisite, or a necessary thing, that you're at least curious and actively playing with code, or you're using AI to help you build stuff. I think that's important in some areas more than others. For Granola's core interfaces, like the Notepad, I find Figma mocks really hard. It's really hard to evaluate whether a thing is good by looking at a mock-up of the Granola Notepad in Figma, just because so much of what matters is how it feels with your real content in it, and how it feels ergonomically to have it open in a meeting and be glancing at it out of the corner of your eye while you're also trying to talk to somebody. There's no substitute for just building a prototype version of a feature in the real app and using it for real in a meeting. And so our designers are really biased towards using AI coding tools to hack stuff together into the real app, and all of them are doing that at this point. I think there are other parts that are more traditional, where mocking up a flow in Figma and evaluating it that way still makes sense. The stuff around our paywall: how do you sign up to Granola, what's the sequence of screens we show you, and what things do you need to understand as you do that?
It's just really useful being able to see the whole thing in its entirety. And the design problems there are all around copywriting and whether we're putting the right ideas in your head at the right moments. That's just a much easier thing to evaluate and move quickly on in a canvas, where you're not having to build everything into the product. So yeah, it's a mixed bag. But the lines are really blurry these days. Designers are submitting code and building prototypes in code. Engineers are doing design, as you would have thought of it a few years ago. It's a real hodgepodge, in a good way. I'm having more fun, I think, than I ever have as a designer-builder person.

Nathan Labenz: Yeah, I'm right with you on that. It's unbelievably empowering and unbelievably fun. How would you say it has impacted, and I don't know if you have metrics on this sort of thing, or you could just give me a finger-to-the-wind estimate, but what's the before and after when it comes to idea-to-ship, and whatever intermediate milestones you think are most important between those starting and ending points?

Sam Stephenson: Yeah, what has it improved? I think it's been immensely helpful in evaluating whether a thing is worth pursuing or worth building. What's an example? So in the Granola Notepad, the chat floats at the bottom of the app, basically everywhere in the app now, right? And in the same way everywhere. That wasn't always the case. I think we introduced that in September last year, and before that it was only in some parts of the app, and mostly in a sidebar view. We debated for a long time: should we change this chat, make it bigger, make it more prominent, make it more globally accessible, blah, blah, blah. I'd been mocking up versions of it for a while, in different places, in different parts of the app. And the trigger point for being like, oh yeah, **** this is obvious, we should do this, was being able to just prototype it: vibe-code our way from a sidebar chat into a floating chat in the app that we could all live with internally. Within a day of everybody having it internally, it was obvious that it was just way better and we should move to it. So it's really sped up the time from idea to a thing that we can start using and feeling in the app for ourselves. I think there are still lots of pieces of UI and lots of flows that need a lot of thinking through all of the states and edge cases, and there, some of the old ways are still very useful. Figma's great for laying everything out and being able to see all the options and states and stuff. But yeah, idea to evaluation is so much faster.

Nathan Labenz: So you're not off Figma entirely. The death of Figma is perhaps exaggerated, in your mind.

Sam Stephenson: I think they've got their work cut out for them. At least for me, Figma's just become a more specialist tool. I use Figma now for ideating. Whereas Figma used to be the start, middle, and end of my design process, it's now a thing I pick up and use every now and again. I'll often start just building the first thing that's in my head in the product, and then I'll get it working, but then I'm like, ah, this bit doesn't feel good. It's faster to jump into Figma and try 10 different versions of it side by side and figure out which one feels good, then jump back into code, implement that, and rinse and repeat until the whole thing feels good. Yeah, I think the days of having full app schematics, with all of the screens and all of the flows laid out in Figma, are totally gone for us. It's way more of an ad hoc thing that you pick up and put down.

Nathan Labenz: Okay, I'm not sure where that bottoms out for me in terms of the future of Figma, but if I'm Figma, it makes me a little nervous.

Sam Stephenson: Yeah, I'd be nervous too. The question is, I don't think that need is going to go away. If they can hold onto that and cement themselves as the best place for that kind of work, for ideating, then great. But other people are going to come at it: I presume Claude Code is going to run at it from its direction, and others will too.

Nathan Labenz: Yeah, there's a lot of candidates trying to be the everything app these days, so it's going to be very interesting to watch those dynamics. In terms of the team dynamic that you have, I was sort of guessing coming in that it would be a challenge to manage product discipline and also to manage people's feelings about their product ideas, because the ratio of random ideas that people have and vibe-code, to things that actually get launched, has got to be, and correct me if I'm wrong, something like a hundred to one. Which in a sense is maybe good; I can certainly see the argument that that's what users ultimately benefit from and deserve. From the standpoint of the individual engineer, or whoever came up with the idea, I could also imagine it being frustrating. But then you also said that when something really hits, if you can get it into the internal version of the app quickly, it can become obvious whether it's working or not. And I think maybe what would create frustration in a lot of cases is: I have this idea, and we're not even trying it. Whereas if I had this idea and we tried it and nobody really cared, then that's at least closure. So how is the team dynamic there? And I guess the other question you've got to keep in mind is, how do I make sure people keep actually trying? If their ideas are only getting through to product at sub 1%, how do I avoid people getting discouraged and feeling, why even bother, it's probably not going to work? So I see a bunch of competing challenges there. I'm wondering how you're dealing with that from a team culture and management perspective.

Sam Stephenson: Yeah, it's definitely a thing. One, I think we're very mindful that Granola, as a company, is a company of people who like to build great product experiences. We're not an exceptionally technically deep company, and we're not exceptionally sales-driven. Building good experiences is the thing we like to focus on, and we look for people who feel like they fit that mold when we're hiring. Humility and open-mindedness are two huge characteristics that we really look for in people who join. We want people who are going to think freely and come up with good ideas, but also have the humility to assume that nine times out of ten, your idea is not going to work when it hits the ground and is put in front of real people. And that's not because Chris or I say it won't work, but because it just doesn't land with the end users. So we try and get the right people in the door, basically, people who think and feel like us and approach building software like us. And then I think the next thing is: if you have smart people, a lot of the time they'll make good decisions if they have good information about what's happening, what the state of the product is, and what needs working on. So every engineer is involved in talking to users, as much as any product person or designer; there's no kind of waterfall effect going on. And while this doesn't always happen, a lot of the time that leads to engineers being the ones with the ideas that end up going into production and solving the user's problem. So: give people good information and keep them close to the users. And then we just try and encourage this nature all the time.
Everyone should feel that if you have an idea, you should try it, put it out in the company, and see what people think. We do demos every Friday, which is one of the highlights of my week, always. Sometimes it's stuff that's on the roadmap and people are working towards, but sometimes it's a completely left-field idea. What if we just got rid of the Granola sidebar and had it be the one interface? What if transcription worked in this radically different way? People are trying stuff like that all the time. And we try and cultivate this idea that the trying is good: it's okay to build a thing, see if it works, show everybody so that they can learn from it, then put it to rest, take what you learned, and move on. I try and exhibit those qualities too. If I have an idea about how the app should be, I'll try and make time to flesh it out and share it on Slack, and usually it doesn't go anywhere. Usually my ideas are terrible too, and that's okay. Hopefully, collectively, we all inspire each other and move our thinking forward, and that ends up putting us in a good place.

Nathan Labenz: As we're talking, I just went back to the Labs tab in Granola and joined the beta, and I wonder what you could tell me about what I should expect as a beta user. This seems like something that could really grease away a lot of the friction we've just been talking about, especially at any real scale, if you can get to a critical mass of people who want to be those beta users. How do you get real responses? How do people feel like their ideas got a fair shot, et cetera? How are you using the beta program?

Sam Stephenson: Yeah, not enough, honestly. It's probably a bit disappointing. It's big; I think we have 10,000-ish people on it. But we make product decisions almost exclusively qualitatively. We do some measurement, especially in the growth stuff, where we'll measure things in A/B tests. But a lot of the core product stuff is purely decided based on gut feel and observing users using the thing. And so all of those decisions about what we're building, and whether a core feature is right, tend to happen either from us using it and talking about it internally, or from us working with very specific people. When we were pre-launch, we deliberately didn't try and go out and get users. We just iterated and iterated with a handful of people and tried to be as close to them as we could and observe them using it in the wild. The high-fidelity feedback you get from being really close to someone, I think, outweighs the quantity you get from talking with a lot of people really quickly. And we still try and follow that mandate today. So for most features, we're working very directly with a small number of people to iterate, and by the time things go to beta, they're usually pretty well baked already. The beta is for bug catching, stability testing, and sense checking that we didn't, you know, **** ** some obvious thing. Yeah.

Nathan Labenz: That's interesting. I do wonder if we're going to soon get to the point where AIs will be able to make enough sense out of usage logs that you could accelerate that process, and potentially even raise the quality of the decisions, by launching a ton of stuff to a small number of users and seeing what happens. Then you get a real ground-truth measurement, right? I mean, everything you're saying sounds enlightened in the sense that, and I feel like I've encountered this many times with successful product builders, what you need to do is not really that complicated conceptually, but it is still something that a lot of people just don't want to do, or don't find intuitive, or don't have the disposition to do. Just being really focused on who the users are, what they want, listening to them, spending a lot of time with them: those themes come up over and over again. Can AI do it? Maybe soon, I don't know. It'll be interesting to see if we cross that threshold in the not-too-distant future. Any thoughts on that? Are you holding your breath for that day, or maybe dreading it if it comes?

Sam Stephenson: Definitely, that'll be the moment it takes my job, yeah. But I don't know. We'll see, but if humans are still the ones making the decisions, I feel like you're going to want pretty raw input to feel good about making that decision. Who knows, that might become very outdated thinking very quickly, but I think that's where I'm at today. I could see it for features where the usage is really disparate, where people use it for many different things, like open-ended chat, for example. That's one where we go to beta earlier, because you learn from having a volume of people using it in different ways, and you learn what the patterns are. That feels like an area where, if an AI was understanding the themes of what people were asking and could therefore bias the interface towards being good at those things, that does seem interesting.

Nathan Labenz: What scares you the most in terms of Granola's future? And I'll give you one candidate: my friend Andrew Critch coined the term the big tech singularity, which in his mind is a future state where, because a few companies have such powerful AIs, they can turn their focus to any particular niche or even industry. He's thinking at the macro scale, like pharma and materials science, but it applies big and small, right? If they have such a dominant AI advantage, then potentially they can turn their focus to any particular area, steal what you've learned, clone what you've built, and even potentially offer it at subsidized prices or whatever. That's not a Granola-specific risk, but I do worry about it a lot myself. Do you worry about that? And what else keeps you up at night as you think about Granola's future?

Sam Stephenson: That's the main one, I think. I've worried about this since day one. The moment ChatGPT came out, you could plot some hand-wavy line into the future where this one tool does everything and no startup or specialized service ever needs to exist again. It's a depressing version of the future, and it feels like a version that could pan out, but I certainly hope it doesn't. And I don't have great answers as to why it won't. I do what I do because I enjoy it, and even if we are marching towards a big tech singularity, I'll be much happier building Granola in the meantime. But I also think, especially in tech, we love to assume that the next big thing that comes along is going to swallow everything and take over. I remember feeling this about Alexa, and I think even Facebook Messenger had some chatbot, like, I don't know, 2013 or back in the day. And I remember thinking, damn, interfaces, they're cooked. We're just going to be chatting with these bots back and forth. And maybe that's a case where the technology was too early and it really is coming true now. But I feel like we over-index on one solution being the be-all and end-all, when in reality life is messy and complicated, and there's still a lot of merit in having specialized tools for different things. I think all we can do at Granola is be really good at the stuff we're really good at, and then architect the product in a way where we get to stand on the shoulders of the giants building the big models. If Granola doesn't get better when the next version of Claude or GPT comes out, then over time we're cooked.
We have to go with the rising tide and just be a little bit better than them in our particular area, and trust that people will pay for that. I think back-to-back meetings are painful enough for enough people that they will.

Nathan Labenz: That plus maybe a little bit more vigorous antitrust enforcement, and you'll have a place in the future. How about your general positive vision for the AI future? One of my common refrains is that the scarcest resource is a positive vision for the future, so I'd love to hear yours.

Sam Stephenson: Yeah, I think when you sit down and study somebody's work day, if you're a laptop-and-email knowledge worker, so much of it is you being a slave to your computer. You're reduced to the machine that needs to move the mouse around, click buttons, and move information from one place to the next. And I think we are going to a world where we're going to have help, and so much of that menial, mindless work is going to be taken care of. That sounds like it could move work towards being a much more interesting place for a lot of people. In meetings, the number one piece of positive feedback we get about Granola is that it helps people feel more present in the meeting and more engaged; your brain is able to fire on all cylinders because you're not caught up frantically trying to note down everything you need to remember. I feel really proud to have created that for some people, and I think that, played out across a bunch of other parts of our work life, is quite an exciting future.

Nathan Labenz: I just asked Granola how I should bring this conversation to a close, and it said to thank you for the specifics. I do think that's a great suggestion: you've definitely shared a lot of really interesting details, so thank you for that. The second point was to call back the positive vision, and on that point, I'm definitely with you. If we can be less enslaved to our computers... I'm trying to measure myself by whether I'm getting outside more this year, and if I'm not, is AI really serving me? I wouldn't say I've moved the needle on that yet, but I do love the vision of untethering ourselves from our computers. Thank you. This has been an excellent conversation, and a great window into the thinking behind how to make a viral hit AI product that everyone can use and enjoy. Sam Stephenson, co-founder and designer at Granola, thank you for being part of The Cognitive Revolution.

Sam Stephenson: Thanks so much, Nathan.
