Zapier's AI Revolution: From No-Code Pioneer to LLM Knowledge Worker

Join Nathan for an insightful episode of The Cognitive Revolution with Wade Foster, co-founder and CEO of Zapier. Discover how this no-code pioneer is evolving into an AI-powered platform for the future of work. Learn about Zapier's ambitious vision, their integration of AI throughout their product, and how they're adapting as a company. From AI-driven lead qualification to innovative customer use cases, explore the cutting edge of automation at scale. Wade shares valuable insights on effective AI prompting, internal AI adoption strategies, and his perspective on recent AI advancements. Don't miss this fascinating look into the future of AI-powered automation with one of the industry's leading innovators.

Check out the ZapConnect 2024 event: https://zapier.com/zapconnect

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork....

SPONSORS:
WorkOS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpenti...

Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr

80,000 Hours: 80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/cogniti... to accelerate your career and contribute to solving pressing AI-related issues.

Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/

RECOMMENDED PODCAST:
This Won't Last - Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel featuring their hottest takes on the future of tech, business, and venture capital.
Spotify: https://open.spotify.com/show/...

CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: WorkOS
(00:01:22) About the Episode
(00:03:41) Introduction and Zapier's Competitive Edge
(00:07:20) AI as Knowledge Worker Companion
(00:10:27) Impressive AI Use Cases
(00:16:25) Sponsors: Weights & Biases Weave | 80,000 Hours
(00:19:05) AI Implementation Challenges
(00:19:13) LLM Performance and Prompting
(00:22:42) AI Adoption within Zapier
(00:31:00) Sponsors: Omneky
(00:31:23) AI-Assisted Workflow Creation
(00:36:07) AI Culture and Adoption at Zapier
(00:43:03) AI Impact on Zapier's Productivity
(00:48:06) Zapier's AI Integration Strategy
(00:54:43) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/na...
Youtube: https://www.youtube.com/@Cogni...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...


Full Transcript

Nathan Labenz: (00:00) Hello and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together, we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello, and welcome back to the Cognitive Revolution. Today, my guest is Wade Foster, cofounder and CEO of Zapier, the automation platform that's been a no code pioneer since its founding in 2011. Zapier has long been known as an easy way to connect apps and automate workflows. But as Wade explains, they're now pursuing a much bigger vision to be, as Y Combinator president Gary Tan recently put it, the AI powered knowledge worker of the future. In this conversation, Wade shares insights into how Zapier is embracing AI in their product and also adapting to AI as a company. More than 400,000 customers have already delegated more than 100 million tasks to AI through Zapier. And Wade highlights some of the most common and also some of the most original use cases. From AI powered lead qualification and SDR style outreach to a voice memo to invoice system that one customer built for his wife's small landscaping business. We also explore how Zapier, like so many other companies right now, is evolving its own product strategy, moving from a DIY platform to a done for you by AI platform with AI functionality woven throughout. While the AI functions that design Zaps are still a work in progress, it's clear that Zapier hopes to lower the barriers to entry by allowing anyone to design new workflows with a natural language interface. Along the way, we also touch on the challenges of decomposing tasks that are intuitive for humans into the sort of discrete steps that AI can handle effectively. 
Zapier's strategy for encouraging AI adoption internally, including their very founder mode decision to stop work across the entire company for a week long AI hackathon after GPT-4 was released last year, Wade's tips for effective prompting and pushing the boundaries of what's possible with current models, and his perspective on the state of AI progress since GPT-4 and how that relates to his cofounder and previous guest, Mike Knoop's, perspective on progress toward AGI more generally. As always, if you're finding value in the show, we'd appreciate it if you take a moment to share it with friends or write us a review on Apple Podcasts or Spotify. And don't forget, you can always share your feedback or suggestions via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. Now I hope you enjoy this inside look at AI powered automation at scale with Wade Foster, CEO of Zapier. Wade Foster, CEO of Zapier. Welcome to the Cognitive Revolution.

Wade Foster: (02:46) Yeah. Thanks for having me, Nathan.

Nathan Labenz: (02:47) I'm excited for this. You guys obviously have built a very well known company in the no code space, one of the early category definers. And I wanted to start off with just big picture stuff about the company, and then definitely we'll have a chance to go deep on some of the AI products and features and cultural adaptations that I'm sure you're dealing with over the last couple of years. For starters, obviously, the competition has grown up around you, right, as they've seen your success. How would you describe Zapier today in an increasingly crowded market? What's the key competitive advantage? Why do people choose you? Why is Zapier still leading in the space?

Wade Foster: (03:22) You're right. There's a lot more options for everyone, which I think is good for everybody to have. But I think the thing that Zapier still does better than everyone else is it's just so easy to get started with. It's not hard to come in and go, like, you know what? I wanna make a new payment from Stripe post into Slack. I wanna take this attachment I get in from Gmail, push this into Dropbox, and just start with these, like, real basic integrations and just start to get your wheels turning. And then I think the thing that we have increasingly been doing at Zapier is saying, hey. It's not just integrations. It's, how do you really build, like, a full blown automation platform out of this stuff? And so more and more folks get started with Zapier on these easy things, but then find that the depth of the platform just keeps getting bigger and bigger. And so they end up just being able to do more and more with Zapier, and it's not necessary to go look for other things to solve an increasing number of problems, at the end of the day. And so I think that easy to get started and just more depth for use, like, just keeps people in and around the ecosystem.

Nathan Labenz: (04:20) When you talk about depth, is that like number of other things that you integrate with or how do you measure depth?

Wade Foster: (04:25) Yeah. We've launched things like tables and interfaces in the last couple of years. And so now it's not just simply, like, an integration you can do, but you can actually build full-on applications. We had a customer earlier this year that I met whose wife started a gardening business. You know, she always wanted to start a gardening business. And turns out she liked the gardening part more than the business part and would never send out invoices to customers. Just wouldn't do it. And so husband was like lovingly like, hey, sweetie. I'm glad you're having fun with this, but it would be great if maybe we could get paid. And he started thinking like, how could I make this easier for her to get paid? And turns to Zapier and says, I wonder if I can do something that makes it a lot easier to build these invoices. Turns out he could. And he made what he calls voice to invoice. And what it is, he made a little app on his phone, and it's powered by JotForm, which has a field where you can just press record. That's all the form does. You just press record, and you just talk into it. And so when you talk into it, he says, hey. You know, met with this client today, Sally. We agreed on a budget of $150 for tulips and $200 for daisies or whatever. And that then goes through Zapier, sends it to the Whisper API for transcription, then that gets sent to ChatGPT, which runs a couple prompts that extract out the actual terms of the invoice. And then that gets sent to a Google spreadsheet that starts to enumerate, okay, $100 for this item, $200 for that item, and then eventually gets sent to QuickBooks to generate the invoice, and it gets sent. And now you can build these full-on applications. 10 years ago, you know, you could raise a million bucks on an idea like that. Nowadays, he's able to build that on Zapier in 30 minutes, an hour. 
That to me is what's increasingly awesome about the combination of no code and AI is the types of experiences that just savvy people can build. Not necessarily engineers, just people who are using their brain a little bit can build pretty impressive stuff. I think that's the thing that just gets me excited right now.
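The voice-to-invoice flow described above can be sketched in a few lines of Python. This is a hedged illustration, not the customer's actual Zap: the Whisper transcription and ChatGPT extraction calls are stubbed out, and every name here (`build_extraction_prompt`, `rows_from_llm_json`, etc.) is an assumption for the sketch.

```python
import json

# Illustrative sketch of a "voice to invoice" pipeline:
# audio -> Whisper transcript -> LLM extraction -> spreadsheet rows -> invoice.
# The transcription and chat-model calls are represented by a prompt and a
# fake reply so the deterministic glue can be shown on its own.

EXTRACTION_PROMPT = (
    "Extract the invoice line items from this transcript as JSON, "
    'formatted as [{"item": ..., "amount": ...}].\n\nTranscript:\n'
)

def build_extraction_prompt(transcript: str) -> str:
    """Prompt that would be sent to the chat model after transcription."""
    return EXTRACTION_PROMPT + transcript

def rows_from_llm_json(llm_output: str) -> list[tuple[str, float]]:
    """Deterministic step: turn the model's JSON reply into spreadsheet rows."""
    items = json.loads(llm_output)
    return [(entry["item"], float(entry["amount"])) for entry in items]

def invoice_total(rows: list[tuple[str, float]]) -> float:
    """Sum the line items before handing off to the invoicing app."""
    return sum(amount for _, amount in rows)

if __name__ == "__main__":
    # Simulated model reply for the example in the conversation.
    fake_llm_reply = (
        '[{"item": "tulips", "amount": 150}, {"item": "daisies", "amount": 200}]'
    )
    rows = rows_from_llm_json(fake_llm_reply)
    print(rows, invoice_total(rows))
```

The point of the sketch is the division of labor: only the fuzzy extraction step goes to a model; everything downstream of the JSON is ordinary, reliable code.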

Nathan Labenz: (06:24) Yeah. One of the things that gets me excited is when I saw Gary Tan tweet about Zapier that Zapier could be the AI knowledge worker of the future. And this is sort of one early, you know, example of this in the wild. Is that how you think about the kind of big picture potential for the company today? Do you want it to be something that feels like a knowledge worker companion, or is that too out there for what you would define as your goal?

Wade Foster: (06:50) Well, we've had this line on our about page for a long time, which is we're just some humans who think computers should do more work. And I think where that comes from is that, certainly us as founders, but, like, the people we hang out with, the folks who are around us, it was not uncommon for us to have ideas of, oh, I'd like to build this, or I'd like an app that does this thing. And to realize pretty quickly, it's harder than we thought it would be. There's more hassle. The technology is not quite there. And the distance it takes to go from, like, idea to execution is pretty big still. And at the end of the day, what we're just trying to do at Zapier is shrink that distance, make it so much quicker for a person to just go from idea to, like, living, breathing thing at the end of the day. So I do get pretty excited about the potential of Zapier to be this just sidekick, assistant. Like, you insert whatever word, but, like, it's gonna be this tool that just sits by you to help you solve a whole host of problems that you might have. Generally, at work is where we're spending most of our time thinking about this stuff.

Nathan Labenz: (07:46) Yeah. It's really interesting how over the last year, right, we've all been on quite a journey of figuring out what more work computers can do broadly than they used to be able to do. And we know that there's a pretty big answer to that, but the exact shape of it and the sort of nooks and crannies are often quite surprising. Obviously, you can do a lot more with Zapier now than you could in a pre, let's say, GPT-4 world. Do you have kind of stats or ways of describing how much of the individual task flows that are flowing through Zapier involve some AI element today? I guess you could just measure that in terms of, yeah, people calling OpenAI or Claude or whatever.

Wade Foster: (08:26) We're at about 400,000 customers that have delegated now over 100 million tasks to AI, and the fastest growing category on Zapier is AI related workflows. I'm very bullish on, like, the potential of this as a, you know, thing that's gonna continue to solve problems for a lot of folks.

Nathan Labenz: (08:44) Yeah. That's fascinating. So 400,000 customers and 100 million individual tasks. So it's 100 million, like, trigger, logic, and output that had some language model call.

Wade Foster: (08:56) Some AI was involved somewhere. Yeah. Yeah.

Nathan Labenz: (08:58) Mhmm. Okay. Interesting. So let's talk about the shape of that. I think this is one of the hardest things for people who are new and not obsessed with it as I am. And I sometimes think it took me thousands of hours to map out for myself and develop my intuition. Maybe you can give us some shortcuts, and you have the benefit of obviously all these customers. Yeah. I guess for starters, what are some of the most impressive, hardest, highest stakes successes that you've seen where people are able to get AI to do something that, like, really moves the needle for them?

Wade Foster: (09:27) Yeah. Well, I mentioned the voice to invoice problem there. I think another example that comes to mind is Siqi Chen, who's the CEO at Runway. Runway sells this; you can almost think of it like Notion meets, like, financial planning. It makes it really easy for people who aren't in finance to, like, understand the finances of the business and figure out how it works. Siqi and the Runway team, they are great at building product. And as great as they are at that, they might be even better at marketing product. And so when they announced, like, their new site, it kinda went viral. And they pretty instantly had dozens of people signing up for access to the product in a minute. And this is a pretty small team, and the engineers are all 100% focused on building the actual product. And the sales team is, like, two people. There's one person on success. So there's just not that many people here, and they're getting dozens of folks coming in every single minute. So, you know, what do they do? Siqi says, hey. I wonder if I can build something with Zapier, ChatGPT to help us better qualify these folks coming in and send out prospects for them. And one of the things he builds is a quick Zap that takes the lead that hits their database, pops it over to Zapier. They then iterate through the latest batch that comes through. And in doing that, it hits ChatGPT and filters out anything that's a free email domain. And so instead of having to enumerate all the free email domains, they can just tell ChatGPT anything that looks like free, just figure it out. So they can kind of vaguely tell it, and then ChatGPT figures out the rest. Then they have the leftover ones hit Clearbit and say, hey. I wanna go enrich all these because these feel like they could be actual legitimate prospects. I think they then come back and hit ChatGPT one more time and say, for the qualified folks, we want you to write an email draft. 
And they have a very specific prompt they use in ChatGPT. It's like, hey, make it witty, but not cringe, and some guidance for, like, how they wanna send this. And then they have that land in Gmail as a draft. And so in the morning, those sales reps can pop into their Gmail drafts and go, like, send, send, send, send, you know, quickly review them, because you wanna just double check it a little bit. But they can move through these things in a way that if they were trying to do all that stuff manually, you couldn't have done it with the headcount they have. And so when you ask Siqi, like, the impact of that, it's: I would've had to hire another 10, 20 people to get that job done. And instead, this was something he built in 30 minutes on Zapier, ChatGPT, a few other tools, etcetera. So I think you see stories like that are pretty common, honestly. The biggest hurdle I think folks have is just actually coming up with the ideas for them. The creativity is, I think, the hardest part. And I certainly see this pretty regularly. You probably come across this too. Like, go on Twitter, X, whatever. And you'll see folks that are, like, in the scene, the AI scene, and they'll be like, what do you actually use AI for? What actually are you using it for? And I think we're in that period where some of the novelty of the demos has worn off and folks are still sometimes searching for, yeah, but where's the sticky use case? Where's the impact? Where's the repeatable stuff? And so I think it's really useful to get Danny and Siqi and all these other use cases out there for folks, because they can start to go, okay, I don't have that exact problem, but I have a version of that. And now I can figure out maybe how do I go replicate it for myself.
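The lead-qualification flow described above can be sketched as a small pipeline. This is a hedged illustration of the pattern, not the actual Zap: the ChatGPT classifier is passed in as a pluggable function (a deterministic stand-in is used below), the Clearbit and Gmail steps are omitted, and all names here are assumptions.

```python
# Sketch of an AI-assisted lead-qualification step: an LLM (or any
# classifier function) decides whether an email is a free personal domain,
# and only the surviving leads move on to enrichment and drafting.

FREE_DOMAIN_PROMPT = (
    "Is {email} a free personal email address (gmail, yahoo, etc.)? "
    "Answer YES or NO."
)

DRAFT_PROMPT = (
    "Write a short outreach email to the contact at {domain}. "
    "Make it witty, but not cringe."
)

def classify_prompt(email: str) -> str:
    """Prompt a model would receive to classify one lead's email."""
    return FREE_DOMAIN_PROMPT.format(email=email)

def qualify(leads: list[dict], is_free_domain) -> list[dict]:
    """Keep leads whose email the classifier says is NOT a free domain."""
    return [lead for lead in leads if not is_free_domain(lead["email"])]

if __name__ == "__main__":
    # Deterministic stand-in for the LLM call, for illustration only.
    FREE = {"gmail.com", "yahoo.com", "hotmail.com"}
    leads = [{"email": "sally@acme.com"}, {"email": "bob@gmail.com"}]
    kept = qualify(leads, lambda e: e.split("@")[1] in FREE)
    print(kept)
```

The interesting design choice Wade describes is that the real workflow replaces the hard-coded `FREE` set with a vague instruction to the model ("anything that looks like free, just figure it out"), trading a maintained allowlist for one fuzzy classification call.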

Nathan Labenz: (12:35) To go a little bit deeper in the advice that you would give people, what are the takeaways from that? I think creativity, or having the ideas, is definitely one limiting factor. I think another is not having a good sense for what the AI is gonna do well and what it's not gonna do well. Recently, I've heard a number of people say that you only wanna use the AI where it's strictly required. Anything that can be done in a traditional code way should be, because it'll be better, faster, cheaper.

Wade Foster: (13:02) Mhmm.

Nathan Labenz: (13:02) Then there's some things that are kind of in between where it's like classification or, like, deciding on, like, a fork in the road. How sort of detailed do you recommend people or do you find people need to be in terms of exactly how prescriptive they're gonna be? Another way to say that is, like, how much do the AIs actually have any autonomy in deciding what to do versus being told exactly like, this is your input, this is your task, and we want the output in this format.

Wade Foster: (13:32) I think at the stage we're at right now, you definitely have to be more prescriptive. There is a very real last mile problem based on the state of the art that we have today. And so I do think I would subscribe to the advice you gave there, which is if you have a job to be done that can be done in a deterministic way with code, or you can use a no code tool, but at the end of the day, it's all code under the hood, you're probably gonna wanna have it run that way, because it's gonna be more reliable. It's gonna be cheaper. It's gonna be faster. You're gonna have all those things that you get there. And then there are a handful of, like, things that an AI or LLMs are quite good at, certain tasks that you actually can't do the other way. Or maybe you could, but it's exceptionally difficult. And that is things like, how do you extract information from unstructured text? Or, you know, categorization, you mentioned. So if you gotta go categorize a whole bunch of stuff, it's gonna be better, faster, cheaper than a human. If you want it to generate text or generate something novel, it's gonna be better at those types of things. So there's a whole host of use cases that we actually didn't have a way to do pre-LLMs, but you can inject an LLM in the middle of a deterministic workflow and now have a whole new type of thing that you can build. And so today, that is, I think, where we're seeing the most success. And I'll be honest, we had to figure that out too. We launched a product, Central, that was inference everywhere. It was trying to use AI to delegate a whole bunch of tasks, and it is having some success. But for a lot of those 400,000 users and 100 million tasks, that's happening through our traditional workflow engine. 
And I think it's because in that traditional workflow engine, you can mix and match deterministic steps with an AI step, and you can basically delegate exactly what you want to the spot that you need it. And that improves the overall reliability of the end to end task at the end of the day.
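The pattern Wade describes, deterministic steps with an LLM injected only where needed, can be sketched as a tiny workflow engine. This is an illustrative sketch, not Zapier's internals: the step names and the engine itself are assumptions, and the model call is a pluggable function so a stub can stand in for it.

```python
# Minimal sketch of mixing deterministic steps with one AI step in a
# workflow: each step is a plain function payload -> payload, and the LLM
# is injected only for the fuzzy part (categorization here).

def run_workflow(steps, payload: dict) -> dict:
    """Run the steps in order, threading the payload through each one."""
    for step in steps:
        payload = step(payload)
    return payload

def parse_amount(payload: dict) -> dict:
    """Deterministic step: cheap, fast, and reliable, so no model is used."""
    payload["amount"] = float(payload["raw_amount"].strip("$"))
    return payload

def make_categorize_step(llm):
    """Wrap a model call (any prompt -> str function) as a workflow step."""
    def step(payload: dict) -> dict:
        payload["category"] = llm(
            f"Categorize this expense in one word: {payload['description']}"
        )
        return payload
    return step

if __name__ == "__main__":
    fake_llm = lambda prompt: "gardening"  # stand-in for a real model call
    steps = [parse_amount, make_categorize_step(fake_llm)]
    out = run_workflow(steps, {"raw_amount": "$150", "description": "tulips"})
    print(out)
```

Because only one step is probabilistic, the end-to-end reliability is dominated by the deterministic code, which is the point Wade is making about the traditional workflow engine.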

Nathan Labenz: (15:26) Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (15:30) How often are you seeing your users need to really push for LLM performance to get it to where they need it? Like, for Siqi, for example, is he able to do this on just a good prompt? Does he need, like, few shot examples, or does he have to go as far as, like, fine tuning a model to do certain key tasks?

Wade Foster: (15:52) Yeah. Most of our folks are just doing it with basic prompting. You probably wanna spend a little bit of time on your prompting, but it's not like you're having to spend hours crafting the perfect prompt at the end of the day. You can usually prompt it, maybe add a few examples if you feel like it, but you don't even have to do that in a lot of cases. And so that's where I think folks are seeing a lot of the success today. No one's having to go train their own models or anything like that for the use cases we're seeing on Zapier.

Nathan Labenz: (16:16) Are there use cases that you've seen many people want to do and try to do and just, for whatever reason, not be able to achieve? Any sort of no-go or "there be dragons" sort of zones you would describe?

Wade Foster: (16:29) I think we like to communicate much more, like, colloquially, and there's a delta between that and how computers work today. There's a real skill even still in describing stuff to computers. And so that's like what we just touched on with prompts. The best prompts are pretty specific, and so there's a real skill that comes with authoring prompts. I think most folks, they get tripped up because they haven't actually really worked on that skill set at the end of the day, and so their prompts end up kind of vague, or so vague that it's really hard for the LLM to figure out what it's going to do. But if you actually step back and think about it, it's probably hard for a human to figure out what you wanna do too if you're being that vague. But I think the real magic is gonna come when these models can get to a point where they can handle increasingly vague statements, and where they're figuring out how to take a just, like, really messy human statement and then actually transcribe that into a better prompt under the hood, or at least prompt the human back to say, hey. Did you want more like A or more like B? You know, that's what you would do if you're working with an assistant. You're just like, hey. Can you go get me lunch or whatever? And they'd be like, sure. Do you want this or that? And it's actually neither. Could you do that? Like, a great assistant kinda knows how to interact with you in that way. And I think that's really what we're waiting for with these models: to get to where they can kinda learn from you a little bit and extract your preferences and have a little bit more memory and context and use that to keep getting better and better over time.

Nathan Labenz: (17:52) Yeah. Tyler Cowen says context is that which is scarce. I'm working on a project right now to try to get some model to write as me, and trying to do that in a way that hopefully I will find compelling. I assume it's going to ultimately be a fine tuned model; I'm not sure exactly which one will be best. So my strategy for this is to combine a huge amount of context. And I'm currently in the process of trying to extract my data out of all these different places where it is locked up, Gmail, Twitter, and whatever, and then try to summarize that into something where it's hopefully still gonna be pretty long, pretty thorough, pretty fact rich, so that the AI has enough to go on. And then also train it on my actual output so that hopefully it can pick up my style and patterns and whatnot. And then that's when I'm gonna start putting things through Zaps and being like, okay. Can you respond to this email as me, knowing that you have 50,000 words of context on me, so that you don't have to hallucinate things. And I don't 100% yet know if that's gonna work, but some of the stuff we've seen just in the last couple weeks with GPT-4o fine tuning on the software engineering benchmark definitely suggests that there is a lot of juice still there.

Wade Foster: (19:08) Yeah. You might be able to do that without fine tuning. We did a similar version, my assistant and I, where we took a bunch of my own emails, posts I'd written internally, Slack messages, all that sort of stuff, and fed it into one of these things and said, hey. The prompt was effectively, like, describe the writing style here. And then we just did a couple iterations of that to where now we actually have a pretty, like, consolidated prompt. It's not that long, and it basically describes my own personal writing style. And we use that. We actually ran an internal test where we had a bunch of questions that were submitted by employees. And I went through myself and answered those questions. Then we had an AI version of me answer those questions. And then we posed back to the company, which answer is Wade? Which answer is AI Wade? And it wasn't that easy for the company to tell. I think folks still picked up on me a little bit more often, but it was still pretty clear that this prompt effectively was doing a pretty darn good job of that.

Nathan Labenz: (20:13) The CEO Turing test is maybe the next default.

Wade Foster: (20:16) It's pretty fascinating, though, to think about because how many folks would love to actually get feedback from the CEO but are maybe a little scared for whatever reason? And just being able to talk to somebody, it's like, you can get the advice of Wade, but maybe without the judgment of Wade. It might be a perk at the end of the day.

Nathan Labenz: (20:33) That's funny. So just to make sure I have this set up correct, it was purely using a prompt.

Wade Foster: (20:40) You basically just go to ChatGPT or Claude or any of these things and feed it a bunch of documents, data, etcetera, and then you just ask it. Can you generate a prompt for me that would cause you to write in the style of all this stuff? That's effectively what you're trying to get it to do. And then you might have to loop through it a handful of times, ask it a couple different ways, and then you'll get, like, a nice prompt that you can save, and we can make it your system prompt then if you want in ChatGPT. And it works pretty good.

Nathan Labenz: (21:07) I get that process. But do I understand correctly that you didn't include any actual samples?

Wade Foster: (21:12) We had a no. We gave a bunch of samples.

Nathan Labenz: (21:14) 2. Okay.

Wade Foster: (21:15) Well, no. No. So let me back up. We gave it a bunch of examples that generated the prompt, but then that prompt itself stands alone.

Nathan Labenz: (21:22) Disposal and you can just do that. Yeah. I've tried that a bit. And Mhmm. I would say my current state of the art pre fine tuning for write as me Sure. Is give it just a bunch of samples and ask it to write as me. And I find Claude does the best at that for me, and it works pretty well also. 1 of the challenges that I've had is I get a lot of hallucinations if it is just kind of working with all these samples. And it's like, from the LLM's perspective, I feel like I appear to be making up facts. Right? It has a hard time separating fact from fiction. What I'm writing is, like, plausible enough, and then it will write plausible enough stuff, but it'll be wrong. Do you think that there is a a way in which perhaps just the generally, like, well known status of Zapier and you is trained into the model in such a way where you're benefiting from base training perhaps? Because otherwise, like, it would need context. Presumably, I don't know what questions you're answering here, but, like, you know, what's the deal with the team retreat? Right? It's not gonna know how to answer that. So how do you make sure that it is, like, grounded enough to be effective?

Wade Foster: (22:30) Yeah. I do think that is where it falls down today. If you're asking about something very specific to Zapier that would not be in the training data, like, some internal knowledge, that's not gonna be in the training data anywhere. It's not gonna be in the samples, at least from the standalone prompt, and so it's just not gonna do a good job on those kinds of answers. It's gonna do what all these AI models do. It's gonna hallucinate. It's gonna give you a very confident sounding answer that looks correct and is just wildly not.

Nathan Labenz: (22:55) Yeah. So going back to what people do on the platform and the sort of limitations: you said it's easy to get started, connect one thing to another thing. It's, you know, presumably a lot harder for most people to think about how they take something that they do intuitively and break it down. I think Siqi's thing was 16 steps. We had him on the show not too long ago.

Wade Foster: (23:19) Oh, did he share the

Nathan Labenz: (23:19) same thing? Get into the SDR thing in-depth. We talked more about, like, how they're using AI Yeah. Yeah. In the runway product. He did mention it very briefly. But, anyway, that sort of decomposition and layering of structure onto something that people do in a kind of muscle memory sort of way, I assume also must be, like, a real challenge for a lot of people. And I wonder how do you help them with that, and to what degree can AI help them with that? Like, there is the product experience where you go in. You can either just start with a blank canvas and start building steps 1 after the other, or you can say, here's what I wanna build and have AI generate a draft for you. Are people really attracted to that? Like, how well is that experience working for folks today?

Wade Foster: (24:04) Yeah. People are definitely attracted to that and using it today. I think it is still fairly immature, to be honest, but it is working. And pretty much daily, we'll get notes from folks that are like, holy cow. That actually worked. I think the magic of it is, as easy as we've made Zapier, if you're gonna try and set something up without an AI, there's a lot of configuration that you have to do. You have to think through triggers and actions and connect apps and map fields, and just, there's kind of a lot of buttons you gotta click. Now, it's easy to click the buttons, but it's a lot of buttons, a lot of knobs to turn. And so the idea of being able to describe in natural language what you want, or to have a little bit of a thought partner to say, hey. Can I do something like this? How might I do this? And it can feed you back ideas. It acts kinda like a better rubber duck for you on these things. I do think there's two vectors of improvement that we're working on. One is the, like, better coach, better guide for these things. So you're still in control as the human, but you can ask it questions, and it can say, why don't you try this? Or why don't you go this way? Or might this work for you? And it's helping you come up with ideas. Helping you get your creative juices flowing. So that's vector one. I think the second vector is then, how do you actually have it take over some or all of the building for you? Because that's another big hurdle, where it can say, hey. Okay. You said you wanted this. I've already done it. Do you want me to go ahead and insert that in for you? And that's a different sort of, like, lane of improvement to go on. I think both are pretty necessary to help folks reap the fullest potential. I think if you're just doing building, you're only gonna be successful with folks that sort of show up with ideas. And if you're just doing the ideas only, the building still is sophisticated enough that folks get tripped up along the way. 
And so you have to improve on both dimensions to really reach its fullest potential.

Nathan Labenz: (25:49) Have you used Gamma, gamma.app?

Wade Foster: (25:53) I haven't yet.

Nathan Labenz: (25:53) No. I definitely recommend them. I think Gamma, whose founder is another former guest, is great at providing a sort of guided experience at the beginning. What do you want to say? Give you an outline. Okay, iterate on that a little bit. It's a slide maker, so now we translate this thing to a series of slides. And then when you get into the slide level, the AI is also kind of at that level. Like, do you want me to revise this one slide? And they also do a really nice job on accept-or-reject changes. I think they've got a really tasteful UI on that front. So, anyway, I recommend them. Do you have others that you look to, that you think are out in front right now in terms of their maturity for AI-guided experiences?

Wade Foster: (26:36) I think what, like, Claude is doing with artifacts is pretty impressive right now. I think you're just gonna see a lot more of that kind of experience, where you have AI on one side, a builder motif on the other. And it would not surprise me if the next generation of the application layer feels a lot like that, where you almost have this idea of, like, disposable experiences, where you spin up a thing, use it, and then that's it. It lasts for a moment in time. I think those experiences are just pretty freaking incredible when they work. How often are you, like, on your phone looking for a mobile app to do some utility thing? You're like, it would just be better if I could create it. And if the time it took me to create it was, I don't know, under 10 minutes? Under 5 minutes? I don't know. Like, what's the time frame at which you'd go, this is actually a better experience than going through the App Store and trying to figure out what I would do? So that's the kind of thing I think we're gunning for here.

Nathan Labenz: (27:27) Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (27:31) Yeah. It's happening fast. I was just complaining earlier today that I don't have a good personal search engine. Like, I wanna just search through all the stuff that I've previously seen, effectively. And that's still oddly difficult, I find. Somebody responded, maybe you should just build one. And I was like, you know what? Maybe I should. It does feel much more in reach.

Wade Foster: (27:50) Yeah. There was that product, Rewind. I think they rebranded.

Nathan Labenz: (27:55) I was super excited for that, but they've pivoted away from that vision and narrowed in a little bit more. I'm not sure exactly why, but in my experience, I did have challenges with that thing just, like, consuming a lot of resources on my computer. It sort of would bog down the machine. That was a barrier. So, you have a cofounder who's out with a big prize, who is saying that progress toward AGI has stalled. Cards on the table, I don't quite share that perspective. I wonder how you would address that disagreement from the standpoint of just what is happening within the Zapier platform by users. So if we look back at GPT-4 18 months ago, first version, and then we fast forward to today, there's been a lot of things that have launched, right? Like longer context windows, multimodality, fine-tuning of advanced models, just way cheaper, for one thing. And from my perspective, it seems like those things have unlocked a lot, but maybe for practical purposes, for most people, they haven't unlocked so much. Have you seen the frontier of the sort of difficulty or the value of tasks move a lot over the last 18 months? Or would you say it has been more like, of course it has diffused, I'm sure more users are doing stuff, but in terms of the most impressive examples, how much more impressive are they than 18 months ago?

Wade Foster: (29:16) They're hugely more impressive. Like, the progress has been unquestionably strong, I think. What Mike points out, and I think sometimes gets misconveyed, is he really focuses on that G in AGI. And the definition he would use of AGI, which obviously, depending on who you talk to, you're gonna hear a million different definitions, is its ability to learn something new from scratch. It's basically being able to pick up new domains. And right now, I think what he would say is, what's happening in LLMs is that it's trained on a set of stuff, and then it's effectively pattern matching on top of that. And I think there's some truth to that. But the reality is, with bigger, cheaper, multimodal models, the commercial value, the utility of automation has grown immensely. And I think that's where Zapier is playing most right now: how do we actually help people just build more useful automation inside their workplace? And on that dimension, compared to where we were 18 months ago, it's like night and day difference. Working with unstructured data was nearly impossible. I remember we built an email parser. It's been years ago now. That thing was, like, the bane of our existence. People wanted that thing to do so much, but the then-current state of the art in email parsing tech was just not good enough. Now you can just do a basic prompt, and it's better than any parsing engine that you would use two, three years ago.  So, yeah, the progress is incredible here.
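
To make the "basic prompt beats the old parsing engine" point concrete, here is a minimal sketch of prompt-based email extraction. The `call_llm` function is a hypothetical stand-in for whatever model API you use, stubbed here so the example runs offline; all field names and sample content are illustrative.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion API call; stubbed so the
    # sketch runs offline. A real implementation would send `prompt`
    # to a model and return its text response.
    return '{"sender": "Jane Doe", "order_id": "A-1042", "issue": "refund request"}'

def parse_email(body: str) -> dict:
    # Ask for structured JSON instead of writing brittle parsing rules.
    prompt = (
        "Extract the sender name, order ID, and issue type from this email.\n"
        "Respond with JSON only, using the keys: sender, order_id, issue.\n\n"
        f"Email:\n{body}"
    )
    return json.loads(call_llm(prompt))

fields = parse_email("Hi, this is Jane Doe. Order A-1042 arrived broken; I'd like a refund.")
print(fields["order_id"])
```

The whole "parsing engine" collapses to a prompt plus a JSON parse, which is why unstructured data work got so much easier.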

Nathan Labenz: (30:43) Yeah. That's really interesting. I was just thinking about an email parsing project earlier. And, yeah, especially HTML emails. Like, for folks who don't know, it's the gnarliest thing in the entire world. Somehow, we have ended up in this state where the worst vestiges of old technology have been grandfathered into HTML emails. And it is a real nightmare to try to code against. One might say you might need to be AGI-complete to just deal with HTML emails.

Wade Foster: (31:06) Modern email.

Nathan Labenz: (31:08) But I guess that definition is probably a little too permissive. So what do you hope for next? Obviously, there's a lot of speculation around Strawberry, Q*, whatever. Zapier strikes me as being in sort of an interesting position, where if I'm in the workflow business as you are, it's like, man, this is amazing, because all these hard things that we just used to not be able to program, now we can just kinda sprinkle a little AI into this otherwise structured workflow, and that works. Amazing. But a lot of people also have this vision, that might not be too far off, where they're like, I just don't even want to think about that sort of structure. I just want to hand the AI a project-level thing and say, go do this, figure it out, come back and show me when you're done. More like the classic intern who you just give something to and wait. How does that play with Zapier, you think? Like, do you want something that can do mid-scale, longer-time-horizon tasks? Do you want something that's just much more reliable in terms of very local things? Is there a form of that sort of agent thing that becomes a competitive force vis-a-vis Zapier?

Wade Foster: (32:16) Yeah. What do I want? What do consumers want? We all want a world where we can increasingly describe tasks to be done in simpler, vaguer language and have an assistant just go do those things. Like, that sounds like the dream, doesn't it? The question is then, what type of product experience do you need to deliver on that? And I still think under the hood you're gonna have some form of workflow engine that's powering a lot of this stuff, because you're gonna have to route data to different services. You're gonna have it execute various tasks within that. It's gonna have to feed the output to some other service, and then you have to deliver it back to the user. I think you're also gonna have different types of jobs to be done. There are some things where you want super low latency, and cost is gonna be a real important concern, and so that's gonna have to work in one particular way. You have other areas where accuracy is really, really, really important, and you're willing to spend a lot of money on it. So that experience is gonna look quite different too. I think you go into any business, and it's full of workflows that all have different shapes, sizes, contours, what have you. And so I think we're gonna see an enormous number of different ways that these things can be put together. I think the question is, are we gonna have one godlike AI that you can just interface with, that just magically knows behind the scenes how to route and do all that sort of stuff? Maybe. I don't see a ton of evidence that that's what's coming next. And then, of course, if we get there, all of us who build this kind of stuff, who knows what we're gonna do at that point in time. You start to get to AGI territory, where your guess is as good as mine as to what the world will look like then.
But in the meantime, I'm pretty bullish on how we at Zapier can take these smarter, more flexible models and inject them into, like, your existing toolkits to help you solve bigger, better problems through easier-to-use interfaces.

Nathan Labenz: (34:02) For your AI sort of builder experience, the sort of, what do you wanna build? Okay, here's a somewhat vague description of this app that I wanna create. How far have you pushed that today? And how much farther do you think there is to push it before you would be maxing out what current models can do and would only see benefit from the next step up in reasoning capability or whatever?

Wade Foster: (34:29) I think there's still quite a bit more we can do with the existing models, but we're definitely starting to feel the boundaries of some of this stuff. There are certain types of tasks that we can do pretty predictably if they're simple enough and small enough. But when it breaks outside the bounds of those things, it definitely feels like self-driving cars, where you can get the demo sort of working, but that last-mile stuff ends up being exceptionally difficult. It's all the edge cases, the weird anomalies, the things like that, where humans can spot those things pretty easily, but the AI will just trip up on them left and right. And that's where I do think we're just waiting on the models to see, hey, are they gonna have a step-function improvement in what they're capable of? And if so, great. That's gonna unlock a whole new set of tasks for us. But even within the existing models, I still think there's more we can do to make that experience just better across a whole number of workflows.

Nathan Labenz: (35:23) My number one tip for people generally is: fine-tune on chain of thought, or reasoning traces, as they're sometimes called. Basically, demonstrate, with not necessarily a huge number of examples, exactly how you want to approach the task, kind of all the stuff that happens in the internal monologue that typically, in work, doesn't get captured, right? Because we often just have, like, a ticket definition and then the result, and the thought process that happened in between is usually not captured. But getting some of that stuff down in explicit form, and then fine-tuning on that, means the model is now imitating not just your outputs, but also the reasoning process to get there. That's my number one thing, usually, for pushing product experiences like that. What have you found to be the biggest step up in performance, of all the things?
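
What a reasoning-trace training example looks like in practice: the assistant turn spells out the internal monologue before the answer. The chat-style JSONL schema below is illustrative of common fine-tuning formats; the exact field names depend on your provider, and the ticket content is made up.

```python
import json

# One fine-tuning example that captures the reasoning, not just the output.
record = {
    "messages": [
        {"role": "user",
         "content": "Ticket: customer charged twice for order A-1042."},
        {"role": "assistant",
         "content": ("Reasoning: two charges on one order usually mean a retried "
                     "payment; check the payment log for duplicates, then refund "
                     "the newer charge.\n"
                     "Resolution: refund the duplicate charge and notify the customer.")},
    ]
}

# Each record becomes one line of the training JSONL file.
jsonl_line = json.dumps(record)
print(jsonl_line[:60])
```

The point is that the "Reasoning:" portion, which would normally live only in someone's head between the ticket and the resolution, becomes explicit training signal.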

Wade Foster: (36:15) I think what you just described is pretty accurate. I think in some ways it's almost like the opposite of what you wanna do for Google search. Like, in Google search, you're just so lazy. It's the shortest little keyword or whatever snippet that you can get away with. When you're working with these models, you wanna describe it in its fullest version. So that definitely stands out, for sure. More examples is good. I think narrowing the task is also really helpful, and then starting to chain those narrower sets of tasks. Like, the more you try and do multiple sets of things in one, the more you're gonna see it trip over. And so, to the extent that you can say, hey, I only need you to extract x here. And then the next step is, okay, now I want you to do this next thing here. That, to me, is the thing that has really helped improve performance, especially within the workflow engine: get hyper-specific about that stuff. Break it down into the most granular bits that you can think of.
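
The "extract x, then do the next thing" pattern is just a pipeline of narrow steps. A toy sketch, with plain functions standing in for what would each be a separate, narrowly scoped LLM prompt in a real workflow (all names and routing rules here are invented for illustration):

```python
def extract_company(text: str) -> str:
    # Step 1: pull out exactly one field. In practice, one small prompt.
    return text.split("from ")[-1].rstrip(".")

def classify_size(company: str) -> str:
    # Step 2: a separate classification step rather than one mega-prompt.
    return "enterprise" if company.endswith("Corp") else "smb"

def route(size: str) -> str:
    # Step 3: deterministic routing on the classifier's output.
    return {"enterprise": "sales-team", "smb": "self-serve"}[size]

lead = "New signup from Acme Corp."
queue = route(classify_size(extract_company(lead)))
print(queue)
```

Each stage is small enough to test and debug on its own, which is exactly why chaining narrow tasks tends to trip up less than one broad instruction.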

Nathan Labenz: (37:09) Yeah. Task decomposition, definitely also really big.

Wade Foster: (37:12) Yeah. And it's tricky, because we don't walk around as humans usually thinking about, like, the smallest bits of a task. It's kind of a nerdy concept.

Nathan Labenz: (37:20) Yeah. I was just joking to somebody: you wanna pretend like you're Dickens, and you're being paid by the word for your reasoning between the input and output.

Wade Foster: (37:29) Like, what's that old exercise? I feel like everyone does this in elementary school, where there are two people. One person is gonna make, like, a peanut butter and jelly sandwich, and you have to give them instructions on how to make the peanut butter and jelly sandwich. And the person on the other side is instructed to only do exactly what the person tells them to do. And chaos always ensues, because it turns out it's a lot more complicated to describe how to make a peanut butter and jelly sandwich than we think it actually is. So it kinda feels a little bit like that experience at the end of the day.

Nathan Labenz: (37:56) That's a good exercise that maybe schools should be doing more of, because clear communication certainly seems like it's always valuable, and only heading upward in importance and returns.

Wade Foster: (38:08) Yeah. It's not lost on me that the people with the best communication skills might end up being really exceptional managers. Like, folks that are just really well equipped for this might actually end up being some of the folks who have already developed some of the best skills for working in this world, because they're used to delegation. They're used to describing their intent really well, and they might end up really thriving in this next chapter of the economy, because they've already tuned this engine really well. Versus maybe the rest of us, who are still trying to figure out exactly what delegation means, might have some work to do.

Nathan Labenz: (38:42) Shifting focus a little bit to how you guys are working internally at Zapier. How would you describe the productivity gains that you've seen within the company? Obviously, software development is one seemingly very natural use case. Are you guys able to qualitatively or quantitatively say, we're producing more software in a given unit of time? Do you have tools, best practices? Is everybody using Cursor? What does it look like now within Zapier, just on the development front?

Wade Foster: (39:11) The biggest step-function change we saw was probably 15, 16 months ago, which would have been around the time of the GPT-4 release. We stopped the company for a full week and said, hey, we're gonna put everything down; we're gonna do an AI hackathon. And it's for everybody, not just engineers, everybody in the company. So if you're an engineer, we want you to go build with this stuff. If you're a non-engineer, we want you to go use it. Get used to these things, figure out what they're doing, etcetera. And in the weeks and months after that, we saw the adoption of AI go from probably 10% of the workforce to past 50%. I think it got up to maybe 70% of the workforce using it as a day-to-day tool in their job. Qualitatively, it is, like, a very common tool in people's tool chest. It's hard to measure exactly, quantitatively, where that's showing up. You know, I think engineers are using, like, Copilot. There's a lot of buzz around Cursor. Claude Artifacts are a lot of fun to play with. There's a lot of people just using it to write code. They're using it to write integration code. They're using it to write just any sorts of features. So that's, like, super common. Take a data analyst: that's a team where, if you said, hey, what if we took ChatGPT away from you, what would you do? You're gonna pry it from our cold dead hands. So there are huge step-function gains in functions like that. I think on the GTM side, I'm seeing a lot on just, like, various forms of content creation. I think you gotta be careful there. A lot of companies I see use it in ways that are borderline spam. It's not actually gonna help them stand out and do interesting things. But I think, again, if you're using it as an assistant, and if you're really trying to make sure, like, here's what your quality bar is, here are the types of things you want out of this, if you have really good prompts, you can find a way to generate, like, hyper-personalized emails at scale.
You can find a way to generate really good templates around how your product works. Or if you're in ecommerce, you can generate great product descriptions for the products you have. There are these use cases on the content generation side that are really quite good, but I find the degree of quality there to be quite wide right now. So there are a lot of folks where I'm like, don't do that. You're gonna give the industry a bad name, if it doesn't already have a bad name. But the people who are using it very deftly, I think, are having a ton of success. Our HR team has been a huge adopter of this stuff. They're using it all throughout onboarding to help guide folks into the company. We have chatbots for all of our handbooks, documentation, things like that. It's a super common experience, probably in any company, where you drop into a Slack channel: can anyone give me the answer to this? In the past, there was a fleet of humans answering those questions, even though they'd been answered before; it's in some document somewhere. So our HR team has done a really good job of consolidating those into chatbots and setting up Slack bots to answer that stuff on their behalf. Using it, again, as an assistant for helping you write really good performance reviews. I think performance reviews in every company are everybody's worst nightmare headache to do. And so we've got little chatbots that help people just toss in, like, their accomplishments, the things they gotta work on, and it'll just write it for you. And so it saves everybody time, to just be like, do the sloppy version of it, and it ends up actually being quite good, versus wasting all this time on, like, polish and presentation and all that stuff that doesn't matter. I think we've just seen, across the company, in every department, people are going through the learning curve of figuring out where we can use these tools to up-level the work we do.
And not everything's a winner. There's certain stuff, like, a lot of companies are talking about how much success they're having, for example, in customer service. I think for us, we're actually finding it harder to get gains in customer service than others are. And maybe that's because the diversity of our support, the technical depth of our support, is pretty high. Now, we're still having success there, but it's not like the Klarna example that made waves, what was it, six months ago, where it was like, we were able to get rid of 700 jobs, or what have you. And I looked at that and was like, yeah, I think that's probably because they weren't doing much with automation to begin with. And so they just had a lot more gains to be had, where I think we were already pretty sophisticated on this front, and so the gains that we could eke out of it were just not as much. I don't know. We're still working on it. We still think there are a lot of improvements that could be had. We just haven't quite cracked the nut yet. You know, I think your mileage probably varies, and it just requires a lot of experimentation, is what we're finding.
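
The handbook-chatbot pattern Wade describes boils down to retrieval plus an LLM answer. A toy sketch of just the retrieval step, using keyword overlap where a production system would use embeddings; the handbook entries and questions here are invented for illustration:

```python
# Tiny in-memory "handbook" standing in for a real document store.
HANDBOOK = {
    "pto": "PTO policy: unlimited vacation with manager approval.",
    "expenses": "Expenses: submit receipts within 30 days.",
}

def retrieve(question: str) -> str:
    # Score each doc by word overlap with the question; a real Slack bot
    # would embed both and rank by vector similarity, then pass the best
    # match to an LLM as context for the answer.
    words = set(question.lower().split())
    best = max(HANDBOOK, key=lambda k: len(words & set(HANDBOOK[k].lower().split())))
    return HANDBOOK[best]

context = retrieve("how do I submit receipts")
print(context)
```

Once the right passage is retrieved, the "fleet of humans answering the same question" becomes a single prompt: answer the question using only this context.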

Nathan Labenz: (43:32) Yeah. Certainly, everything is extremely context-dependent. You mentioned stopping the company for a week. I love that anecdote. Beyond that, have you done anything else? For example, I see companies with all kinds of different intuitions, right? Some are like, we wanna have a bottoms-up experimental culture; we wanna give prizes to the people that come up with the best AI automations. Others are like, thou shalt not put anything into ChatGPT that's not officially sanctioned. How do you approach that? There's also management, right? Do we set an expectation that everybody's gonna do something? You're gonna be accountable on a quarterly basis for some AI adoption milestone or whatever. How have you approached that from a cultural standpoint? What would you recommend to others?

Wade Foster: (44:14) Yeah. I think you gotta feel it out. We started with sort of, like, a little bit more of a bottoms-up approach. Around the ChatGPT launch, we were just like, yeah, hey, this is cool. Like, you should check it out. Everybody should try it. Just trying to soft-influence to get folks doing it. When the GPT-4 launch came out, we felt, okay, this needs to be a lot more of a tops-down mandate. And so that's when we stopped the company for the week and said, hey, you gotta go do this, and we gotta go figure this out. I think now we have a bit more of a healthy mix, where in some places we're like, hey, there is a strategic imperative for us to go figure this out, so we are gonna say this needs to happen. In other areas of the company, it's more just like, hey, you're smart. You know tools. You're gonna figure it out, and we're just keeping tabs on that stuff. In terms of things that have worked to keep the drumbeat going, we've done a couple things. When we ran the hackathon, we worked with both legal and our technical teams, and data privacy and all that sort of stuff, to set up some rules of engagement, to make sure it was done in a way that we felt took really good care of customer data and all that sort of stuff. We felt like it was important to have guardrails around some of those things. I would not suggest just going wild west without working with your teams to create the proper environment to do this. That was a really important first step for us, which we did in combination with legal, engineering teams, things like that. And then, since we've gone on from that, we do an all-hands every Thursday, once a week. Every month or so, we'll usually do, like, an AI showcase as part of those all-hands. We'll just curate maybe half a dozen use cases, and folks will record little videos or come in and show off things live. And it'll just be, like, showing off what they do.
And so that surfaces some cool use cases and gets people's minds churning. We have a fun AI channel inside of Slack, where people are showing off stuff that they're building, things that they're playing with. We have a handful of both engineers and non-technical but, like, very sophisticated AI builders that will do weekly AI office hours, where anyone can just show up and share stuff. So we just try to foster these forums, whether they're official forums, like, tops down, do it this way, or just employees taking initiative and figuring these things out. And we just generally try to put more oomph behind any of these things, because we think that there's just so much goodness that can come from that. It does require the company to, like, encourage it and nudge it along a little bit. I think the biggest thing that I would tell other leaders is: don't be shy about saying this is important. I think that's why you've seen a lot of the founder-led businesses probably do better during this period: we're able to just come in and say, let's go do it. I think more companies have that ability than they realize. Just say, this is important. If we sleep on this stuff, it's gonna be a problem. Maybe it's not a problem today, but give it a year, give it five years. I think for almost any business, it's gonna be a problem.

Nathan Labenz: (47:03) 5 years is a long time in this space.

Wade Foster: (47:05) Oh, yeah. Super long time. 5 years, everybody's stuff is gonna look different.

Nathan Labenz: (47:10) When you referred to some areas being top-down strategic mandates, is that, like, core product experiences, or anything else outside of that? And then, I always ask of entrepreneurs: what would a 10x more expensive version of the product look like, where you spared no expense on the AI? You also have the ZapConnect event happening. So maybe tie all those together. What are the sort of new strategic initiatives product-wise? Where are you applying more AI than ever before? And whether or not that translates into a higher price point?

Wade Foster: (47:44) Yeah. I think in terms of a 10x higher price point, generally for us, that's gonna mean moving from, like, a suite of tools to actually solving bigger sets of solutions. And for us, where that sort of intersects with AI is: we've launched things like tables, interfaces, chatbots, in addition to our workflow engine, and we have one that's coming around code. And I think in all of these cases, we're thinking through what an AI-first interface looks like in those motifs. I think the world was captured by ChatGPT, and we all felt, oh, chat is the new user experience. But the further I get from that, I actually think a lot of the more traditional UIs we're used to interacting with are still great user experiences. You know, a table is a really good way to interact with stuff. A workflow is a really good way to interact with this stuff. But what if you reinvented it with AI as a first-order primitive in these experiences? So that's a thing that we're paying a lot of attention to. And then on the enterprise side, we're trying to figure out how we package all these things together to solve a 10x, 100x type of problem for you, so that you as the end user don't have to walk in and figure out how to configure all this stuff from the ground up. We can come in and say, hey, here's more of an end-to-end solution out of the box. Maybe you have to tweak some things on the edges, but you're not having to build the whole dang thing from the ground up. Right now, Zapier is a DIYer's haven. Like, it's the best place to hang out. But I think we can do a little bit better job of just packaging some of the best use cases up for folks and saying, hey, this right here is gonna change the way your marketing team works, or this is gonna change the way your HR team works, or this is gonna change the way your finance team works, if you use Zapier in this specific way.

Nathan Labenz: (49:31) Done-for-you by AI beats do-it-yourself.

Wade Foster: (49:33) Services are one of the best places for AI right now, because services are premium-priced experiences where you can offload a good chunk of the work to an AI. So you can still sell at a premium, but the labor to deliver it just went way, way, way down. Does that collapse?

Nathan Labenz: (49:53) It seems like that collapses.

Wade Foster: (49:55) That's capitalism in a nutshell. Right? What's the Jeff Bezos saying? Your margin is my opportunity. Right. So eventually, capitalism competes away all this stuff. And who wins? Consumers. We all win at the end of the day.

Nathan Labenz: (50:06) High consumer surplus is definitely one of my main predictions for the next few years.

Wade Foster: (50:10) Oh, totally. It's gonna be fun for the consumer. I think for those of us that are built in software, it'll be fun too, but I'm sure there's gonna be some stressful moments as well. Yeah.

Nathan Labenz: (50:18) Well, this is great. I appreciate the time. Anything else you wanna talk about in terms of event, new launches, or just anything else we didn't get to?

Wade Foster: (50:28) Come check out Zapier. We are not the same old product that you all maybe remember as, like, that simple integrations tool. You can build a lot of crazy stuff on it right now. We serve small business to enterprise: tables, interfaces, automation, full-on across the board. It's been a fun couple years of shipping over here.

Nathan Labenz: (50:44) Love it. Wade Foster, CEO of Zapier, thank you for being part of the Cognitive Revolution. Thanks, Nathan. It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.
