Helping Businesses Use AI with Rachel Woods of The AI Exchange

Nathan Labenz interviews Rachel Woods on AI strategy and its application in business, highlighting her experience and insights as an AI educator and consultant.


Watch Episode Here

Video Description

Nathan Labenz sits down with Rachel Woods, founder of The AI Exchange, an AI education platform with over 130,000 subscribers on TikTok. They discuss Rachel’s experience consulting for public and private companies on AI strategy, how teams should think about their AI strategy, and how Rachel uses AI in her own business. Rachel was previously a data scientist at Meta, and a founder of a venture-backed e-commerce startup, where she began to use ChatGPT in her startup operations.

This episode is the second in a series centered on talking to rising voices in AI media: people who are not only working overtime to understand everything going on in AI, but also creating thought leadership and educational content meant to help others get up to speed as well.

LINKS:
https://news.theaiexchange.com/

PODCASTS:
The Cognitive Revolution: https://link.chtbl.com/TheCognitiveRevolution
Upstream: https://link.chtbl.com/Upstream & @UpstreamwithErikTorenberg

TIMESTAMPS:
(00:00) Episode preview
(02:59) Rachel Woods’ story
(04:30) State of the public who comes across videos
(09:49) How do you frame AI to business owners and how should they use it?
(15:29) Sponsor: Omneky
(18:58) How should teams set up their AI strategy for success?
(26:41) The leap from GPT-3.5 to GPT-4
(30:42) Step changes of AI and public perception of capabilities
(35:17) Common misconceptions people have about AI
(36:22) How Rachel uses AI in her business
(44:09) What’s your style of prompt engineering for content creation?
(46:01) How AI will change the way we use computers in the next year
(49:34) Bing launch and how companies launch AI products
(54:15) AI safety and restraint
(1:00:11) Alpaca and LLaMA
(1:04:16) Rachel’s favorite products in AI
(1:05:24) Would Rachel get a Neuralink implant?
(1:07:55) AI hopes and fears

TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
@rachel_l_woods (Rachel)

Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com/

Music license: TIMPZ5IPGUR8QFGU


Full Transcript


Rachel Woods: (0:00) The most absurd conversation that I've had was literally from TikTok. People reached out and wanted me to join their 2 hour long board meeting at a public pharma company. What they hired me to do is to sit there and just be a sounding board as they're thinking through what an AI strategy means. If everybody can post on Twitter and everybody has an AI that just takes their raw thoughts and puts them into high performing tweets and everybody is using these agents, is anybody actually on Twitter anymore? If you're paying attention, you can at least feel like you're making progress. The number 1 thing I recommend to people is start using ChatGPT with an automation tool like Zapier or Make. We are using AI for so many things that are so serious. And honestly, sometimes it's just really fun to use AI to make much better memes. I've never been a person who's good at making memes before, and now I feel like I can make ones that make me laugh at least.

Nathan Labenz: (0:55) Hello and welcome to the Cognitive Revolution where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together, we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hi, everyone, and welcome back to the Cognitive Revolution. Today, we continue our AI in Media series with special guest, Rachel Woods. Rachel was previously a data scientist at Meta, where she worked on AI systems that support the small business advertising platform. And she was also the founder of VineBase, an e-commerce and marketing platform focused on small vineyards, which she sold to Quirksy in 2022. Today, Rachel is the founder of The AI Exchange and best known for her Twitter and especially her TikTok profile, where she creates roughly 1 video per day to help people keep up with and understand all the latest AI news. She also dispenses practical advice for both individuals and companies. The AI Exchange publishes a regular newsletter, which you can subscribe to at news.theaiexchange.com, and she provides additional resources, community, and even optional consulting services for subscribers. Rachel and I talked about the kind of audience she is reaching, just how quickly people are waking up to the opportunity of AI, how many companies that didn't have AI in their 2023 plans are now scrambling to adapt, how she tries to consult with and guide those companies forward, and lots, lots more. I hope you enjoy this delightful conversation with Rachel Woods. Rachel Woods, welcome to the Cognitive Revolution.

Rachel Woods: (2:42) Thank you. Thanks so much for having me.

Nathan Labenz: (2:44) Yeah, I'm excited to talk to you. I think you have staked out a really interesting position for yourself as a sort of very smart and plugged in AI influencer, popularizer, educator. So I'd love to start off by just asking you a little bit about how you conceive of the role that you're playing and who your audience is and how you decided to do this in such an opportune time.

Rachel Woods: (3:12) Yeah, so a little bit of luck and foresight, I think, is the moral of my story in this space. I actually started my TikTok account November 1st, which if you think back to the timing of that before the whole world cared about ChatGPT, it's almost comical that that was the month before everything imploded or exploded, however you want to think about it. My story is I was a data scientist before, and then I founded a venture backed startup in the ecommerce space where we were actually using GPT-3 a bunch in that product. And after that experience and seeing the impact that large language models were having on what we were building, I was thinking, okay, the world maybe doesn't care yet, but the world needs to know that AI can do this crazy thing where it writes for you. And that, plus a little bit of a joke about becoming a TikTok influencer, was the impetus to go start a TikTok account. It's just been a crazy journey since then.

Nathan Labenz: (4:21) So now you're reaching regularly something like 100,000 people a day, and obviously there are spikes on top of that on TikTok. What is the state of the public, the broad public right now who just comes across your videos? I'm so deep down the rabbit hole myself, and I think probably most of our listeners are, that I sometimes find it's easy to lose touch with what somebody who's cruising TikTok and sees one of your videos for the first time might be predisposed to think.

Rachel Woods: (4:52) Yeah. A lot of people are surprised when I tell them my stats, myself included. I originally thought that TikTok was a lot of young people doing dances or funny memes. But 90% of my audience is over the age of 25 and 50% is over the age of 35. And they're predominantly US based. And then I have the really fun privilege of people reaching out to me and booking time with me to talk. And so I found out that they are mostly business leaders, professionals, founders, and engineers literally on TikTok nights and weekends listening and learning about AI. And then, funny enough, sharing my TikToks in their company's Slack channels, which I just sit back for a minute and think about being at work and just seeing a stream of TikToks about AI in your Slack. It's pretty funny. But yeah, I know we're in the AI bubble. I'm very active on Twitter too. But one thing that's been truly amazing over the last 3, 4, 5 months has just been the pure mainstream, especially business, explosion of interest in AI. I'm sure you've seen that as well with what you're doing, but it's really there, which has been pretty incredible.

Nathan Labenz: (6:17) Yeah, it seems like we're in this probably pretty short in between moment where my joke right now is that I restarted the calendar on the release of GPT-4. So we're now at GPT-4 plus 1 month to the day, I believe. And it's this huge step change where all of a sudden you're seeing pretty sophisticated reasoning, quite breakthrough insight, professional grade reasoning on just a super wide range of things. And yet that's not really deployed almost anywhere at this point. And I have no idea to what degree people outside of the bubble are aware, or just have any concept of whether that's happening or not. Because the messages are so confused, right? At the same time, you'll see people saying it's all hype and this technology is never going to work, we need a totally different paradigm. And then at the same time, other people are reporting that it's matching human doctors. So yeah, it's hard to get a read on. But it sounds like you are mostly reaching people that are much more plugged in and shepherding them through education as opposed to seeing too many randoms on your page.

Rachel Woods: (7:36) Yeah. I would say the audience that I really speak to is the early adopters across almost every industry. How I decided to go from TikTok being a fun side thing to it being my full time thing and building a whole company around it was I put a link in my bio January 1st, and within 24 hours, I had 6 people purchase time with me to talk to me about not the latest prompts or even how to log into the Midjourney Discord, but Okay, I see that there's a huge shift in AI. What should my business even be doing in this next wave? And I feel like that is actually a side of the AI conversation we don't see on Twitter, we don't see in the doomsday conversation. There are just so many businesses across every single industry that are actually sitting down with their board, with their team, and thinking, Okay, should we be rethinking how AI is brought into our product or our offering? So then from that lens, some of the things that I think the AI bubble people are paying attention to around these emerging capabilities of reasoning, the leaders across these other industries are paying attention to those types of trends because they're trying to figure out what does an AI strategy even mean. Is an AI strategy that I'm going to have prompts in my offering to my customers? Or is an AI strategy that I'm going to have to figure out how to deploy agents? And I think that's actually the conversation that's going on, even if you're a lawyer, a dentist, a CEO of a software company that's based out of maybe not Silicon Valley. It really is the conversation people are having, which is fascinating.

Nathan Labenz: (9:31) I think those conversations sound pretty fascinating, and it sounds like you've had a wide range of them that probably literally does span local small business owner all the way up to CEO of a significant company. How do you frame things for them when somebody comes to you and is like, Okay, here's my deal. And maybe the answer is it's highly contextual, which would be interesting in and of itself, but I'm sure you have some kind of high level frame that you offer, maybe a couple of different versions to people who are saying, I've used ChatGPT a little bit, I've seen it on the news, but I'm really kind of at a loss for what this means for my business.

Rachel Woods: (10:14) Yeah. So Nathan, the funniest or the most absurd conversation that I've had was literally from TikTok. People reached out and they wanted me to join their 2 hour long board meeting, public company, public pharma company. What they hired me to do is to sit there and just be a sounding board as they're thinking through, what does an AI strategy mean? That's the level at which people are actually having conversations. There's definitely some commonalities. I have done over 100 of these calls since the end of the year, and so I've started to distill things down into resources, which is a lot of what we put the business around. But one question is, are you going to use AI to improve your internal operations? Or are you going to use AI to offer a new service or product or experience to your customers? And those are pretty distinct conversations a lot of times. And I often talk to people about how to think about risk. There's a lot more risk in offering an AI solution to your customers today, just based on where the technology and the capabilities and human in the loop and these other questions, limitations we have are, versus starting to figure out how you can use it in your internal operations. I usually recommend companies start there because they get more of a feedback loop and a sense of the technology. And you're going to learn stuff that's going to help you figure out how to go and maybe deliver something different to your customer. But yeah, I think the thing that would surprise most people is how serious these conversations are and how legitimate. It's not just the AI bubble people saying the words like human in the loop. It's a much broader audience.

Nathan Labenz: (12:06) Yeah, fascinating. I agree with your advice, for what it's worth, to start with something in your own operations. That's certainly how I got comfortable and to the point where I felt qualified speaking publicly about this technology. And I probably have a pretty similar riff. I always talk about identifying discrete tasks that AI can do and then embedding that into a broader process that takes advantage of the ability to delegate to AI. And a lot of times that can be pretty analogous to tasks that are currently done. Sometimes it can be new tasks or it can enable scale in new ways. But just starting with those discrete things. Is there a sort of input-output where it's hard now? And if I had infinite interns or whatever, it could in theory be easy. If you can identify those places, you get off to a pretty good start and then you feel real value. You definitely learn things. You see some failure modes and hopefully develop a healthy respect for the technology. But yeah, I agree that's a very good place for most to start.

Rachel Woods: (13:16) When I think about what to create content on, I think there are people in the space who are creating content for the objective of getting views, growing an account, growing an audience, growing distribution. And early on, I think you have to make a decision of whether that's going to be your objective in this space because you go down a certain path. Or if your objective is going to be more playing the long game and creating really a relationship more as a creator than an influencer where you're creating value for people and helping them actually navigate what's going on. I think what I see you talk about online is very much in these same objectives, which is: we're sitting here looking at how AI is going to change the way we operate business. And we also see that if people don't really have the right way to think about it or not the full information, or they're not really paying attention, or they're not inspired to get small wins, then this space is going to keep moving and people are not going to be prepared to ride that wave. And I think that's so important. That's where I create content from. If someone can feel less concerned about the future of their business because they watched a few videos, or because they read my newsletter, then that's the win. How do you arm people to actually feel confident navigating this next wave? Because I think you and I both just sitting here see that there's going to be a huge change.

Nathan Labenz: (15:00) Hey. We'll continue our interview in a moment after a word from our sponsors. I want to tell you about my new interview show, Upstream. Upstream is where I go deeper with some of the world's most interesting thinkers to map the constellation of ideas that matter. On the first season of Upstream, you'll hear from Marc Andreessen, David Sacks, Balaji, Ezra Klein, Joe Lonsdale, and more. Make sure to subscribe and check out the first episode with A16z's Marc Andreessen. The link is in the description.

Nathan Labenz: (15:29) Do you know anyone who feels, if you could point to anyone who's confident in navigating this, I'd be very interested. I don't feel very confident at all. Do you feel confident?

Rachel Woods: (15:38) I feel confident in that it's a step by step process of paying attention and trying to make smart decisions along the way. I'm not sitting here saying people have it totally solved, or people know how it's going to play out. But I do think there's a level of, if you're paying attention, you can feel at least like you're making progress. Repurposing content, the thing that used to take you 30% of your time as a content creator, now takes you 2%. And that's progress. And I think confidence can come out of those small wins.

Nathan Labenz: (16:15) Yeah, it's fascinating. I love the space in many ways, and one of the biggest ones is just how much I'm constantly learning. I try to target 50% of my working time just on understanding what's going on and probably do fall a bit short of that, but have to maintain a pretty good amount or things get stale so quickly. The number of times that I've gone back and looked at writing from two weeks ago and felt like, well, it needs at a minimum a significant update is just crazy.

Rachel Woods: (16:49) Yeah. I mean, it's interesting that you say you don't feel confident in that because looking back and seeing how much the space is changing and then realizing that it's changing, that's actually what's going on right now. We're all having to do that.

Nathan Labenz: (17:06) Yeah, no doubt. And I like it, but I do also at the same time feel like my ability, I think I can see maybe six months out with some amount of clarity. And when I talk to people on the podcast, I apologize for even asking them to envision 2030. Occasionally I have asked and they're like, 2030? What are you, nuts? That's beyond anyone's event horizon. So I mean, I guess there's different meanings of confidence. I'm with you in that I do feel like I have enough command of the current situation to at least help people start to get up to speed. But I'm very reluctant to make any predictions that get beyond 2025 at this point.

Rachel Woods: (17:55) I would totally agree, yeah.

Nathan Labenz: (17:56) So going back to the board level conversations where you're interacting with companies as a strategic advisor, and keeping in mind this large pharma company type example, I have a fair degree of uncertainty around how quickly things are going to happen over the course of this, I've called it the great implementation: we have GPT-4, and you can see a pretty clear path to it being everywhere. I am confident that there is a wave of transformation coming based on that deployment, but the speed of the wave and exact timing dynamics, how it plays out, a lot more uncertainty there. So what have you seen in terms of the ambition of these companies? You could talk about that in a couple of ways. Timelines, but also a pharma company, I could see wisely, I would say, starting with something small and internal, expense receipts, you can reform all these little things that are annoying today. But then you could also imagine a world where instead of talk to your doctor about our drug, it's talk to our language model on our website as the first thing you do before we then encourage you to go talk to a doctor. So how ambitious are these companies thinking right now? When do you think we see the small and then the bigger changes?

Rachel Woods: (19:25) The first thing that I see is most people, in fact, I would almost say all with an asterisk, didn't have AI in their 2023 roadmap or budget. And so most of the conversations that I was helping companies have either directly or supporting through content was how to think about having that conversation and what investments might look like. I recommend to a lot of companies that they take a page out of Silicon Valley's book and think about doing an internal hackathon with their team. I think we're starting to see some of those start to play out over the next couple months, where companies are at least just giving their team the space to experiment and see where value could come from within anything, especially internal operations. When you ask about the really big, broad strokes of, wow, we don't recognize what it means to be an e-commerce company or what it means to be a pharma company or what it means to do X, Y, and Z. I do think those things are much further out and much fuzzier. But I guess this is my bias. I sit in the day to day of your team does so much manual work that they know is manual work they don't love doing. They're sitting there on the slog and they want to use these tools. How do you enable them to use some of these tools and this productivity unlock? And I think that's happening now and exceedingly so in the next three to six months.

Nathan Labenz: (21:00) How do you see that shaping up? Because this is something I also have, again, in terms of my limited confidence. If you'd asked me a couple months ago, I was very, we got to curate the best tools. And in particular, I'm advising a friend's company, which you know about, which is called Athena, which is in the executive assistant space. And as of January, I was like, we need to test a ton of tools. We need to curate the best tools. And then we'll have this Swiss Army knife of proven things. It's going to be awesome. And we'll train all the EAs on how to use them. And that'll be ultimately a part of how we can position the company. And I don't think that's wrong, but I do think the number of tools that I expected to be in that set has dropped precipitously to where I'm now like, it might be three, or even maybe just one core one plus other things that are naturally built into spreadsheets and whatnot where just the product itself changes. So what's your expectation for that productivity suite over, whatever, three to six months? As that matures, what do you think people are going to be using most?

Rachel Woods: (22:18) So I guess I also think about this question in two ways, which is when I tell people, hey, maybe think about giving your team space to do an internal hackathon type experience, most of that is for building up what I think of as AI literacy. Because again, I go back to most companies did not have AI in their vocabulary before ChatGPT. And so I think we're still in this space of people have tried ChatGPT, but they're not using it on a regular basis because they can't really find ways that it's useful.

Nathan Labenz: (22:52) Also, they're probably still on the free version a lot of the time, which is a huge difference in value.

Rachel Woods: (22:58) Yeah. People don't really understand data privacy and the ownership of what happens to the prompts and the outputs that goes into that. So I think there's a lot of adoption that needs to happen just from a learning perspective. And then from a tool side, I'm actually very much in the same boat as you. I encourage a lot of companies and teams to be a little patient with the tools that they test out. My hot take, as I say, an annual plan is one of the worst things that you can commit to in this market because it's changing so fast. You never know. One month, the AI Sales Assistant tool could be the thing that you think is going to really enable your team, and the next month, it could be something totally different. But I think that some of the best success stories I've seen are when companies and people building have really close relationships and they're iterating through that together. So I think we're still so early on the tools themselves in a lot of regards as well.

Nathan Labenz: (24:05) Here's a hot take back. I think it's shaping up to be ChatGPT as the Google, the canonical name brand, iPhone equivalent of using an AI chatbot. And it seems like there will be certainly alternatives, but it seems like OpenAI has a pretty substantial lead. And for those that are really taking advantage of this stuff, it seems like that is going to be the hub. That's my most likely projection for later this year, is plugins come online, they work well, and everybody's like, ChatGPT Plus. It's the thing. That's what we're using.

Rachel Woods: (24:49) I think that could happen. I still feel like it's a little early. I think there are some dynamics that haven't played out totally yet, just from as I've talked to people. I mean, one thing that a lot of people overestimate, I think, is how many people are using ChatGPT on a regular basis. A lot of people have tried it. A lot of people haven't had success with it yet because maybe their prompt wasn't very good, or they didn't figure out the right workflow. So I think we're still really early, but that could happen for sure.

Nathan Labenz: (25:21) How much, so for me, I think the core thing is just the leap from 3.5 to 4 is so big. I wonder if you would agree with this general assessment, but in my experience, so many things have gone from at 3.5, still this kind of art of eliciting decent performance, really got to tinker and drill in on the instructions and often a couple of examples. You got to, I typically had to think still pretty hard to get good results from anything up to 3.5. I find with 4, it's honestly pretty easy most of the time. And I haven't, so this company Athena is 1000 plus people. I have a standing open invitation. Anyone can just send me whatever they want help with. And I usually find that within a half an hour, I can take their inputs and get to a decent working output. How would you characterize that leap and to me, that's the thing. If you went to ChatGPT and you used 3.5 and you didn't get great results, I would bet four out of five of those people would get good results if they had just used GPT-4.

Rachel Woods: (26:37) I mean, I'm curious when you have this open standing invitation, how much of that do you feel is because you're a good prompt engineer now?

Nathan Labenz: (26:46) Not that much, honestly, anymore. Some. Definitely, I use a number of pretty standard techniques. My go to obviously is some clear instructions, but I usually don't find I have to overly tinker with or refine those. Usually use a role, you are a copywriting expert or you are a seasoned recruiting executive or what have you.

Rachel Woods: (27:12) Yeah.

Nathan Labenz: (27:13) I usually don't even have to use examples. I do use a little bit of know-how when it comes to segmenting the prompt with a little markup to make it extra clear to the model that this is this and that is that. And that's been a huge advance from 3.5 to 4. That stuff just works now. It can understand that document A and document B are distinct things at a conceptual level and synthesize in a way where it used to just confuse even with decent markup a lot of the time. So honestly, I don't know. I think maybe I underestimate how hard it is to catch up, but I feel like I can communicate this stuff in not a long time. Usually do. Usually when I get done with a half hour call, they have a working thing and they feel like they can continue to elaborate it from there if they want to.

Rachel Woods: (28:08) That's where I feel like we're just still so early, because the techniques that you're describing, I think we're still at a very, very small percentage of the workforce or companies that even know about those. And not for a lack of interest, but just it's, we're in, I think there's a bubble in the fact that we're in AI all the time. We're thinking about it all the time. When I talk to businesses that are definitely paying attention to AI, but they run an e-commerce business, their day to day is not thinking about, does GPT-4 recognize markdown better now? I think we're going to get there, but it still feels extremely early in the widespread adoption. While I agree for my use cases, I feel like 3.5 and 4, I even find myself thinking, how hard is this task? Okay, I should use 4 for it versus 3.5. I think those are also still very new behaviors because all this stuff is still only a couple months old. Maybe that's just where I sit and who I talk to on a day to day basis, but we're still so early in a lot of stuff. That's where I get excited because while we feel on a week to week basis we have these huge step changes with things like AutoGPT, a lot of companies are still just wondering what AI can do. It's not like from 3.5 to 4, 4 to AutoGPT feels like these step changes. Instead, it's more just AI in general feels like this huge step change, and we're all just figuring out what are the capabilities, what can it be used for. Does that make sense?

Nathan Labenz: (30:05) Yeah. As you're describing that, it kind of is analogous to the grokking process that underlies AI improvement. We see these step changes at different scales, where 3.5 to 4 is a very significant one. There's all these little micro ones in between. And then for the public, you're saying, it's basically 0 to 1. We didn't have this before, and now we have it. And that's the step change that people are having to contend with.

Rachel Woods: (30:34) Yeah. I mean, just another example. Even in early January, when I was talking to companies, they were describing things like agents. Can I build a ChatGPT for my company that goes and finds leads on LinkedIn and then figures out how to personalize those leads? And then we'll send emails and then we'll follow up to those emails. And I just give it the goal of close this lead. And I was like, okay, that's not quite how it works yet, but let me show you how to break stuff down maybe into prompts so you can start playing. Okay, well, now we have agents. But to that person who was thinking about what AI could do for them, a problem it could solve for their business, they've more seen the step change of AI as opposed to the step change of where we're in the weeds of the actual capabilities and the technical feasibility of each of those, which I find fascinating. Sitting between the two worlds, they're sometimes very different, and I find that really fun.

Nathan Labenz: (31:31) Yeah, I agree. I think it's a fascinating perspective that you have and nexus point that you sit at. I didn't realize how much higher level corporate consulting you've been doing, but I think that is a fascinating space right now. Do you run into, I mean, it's interesting. As I understand you, an individual, and I know you have the business as well, but you show up for me with your face and your personality on TikTok and Twitter as well. But I do love the TikTok feed. In fact, you're one of maybe 10 or so TikTok AI creators of various sorts that we have curated a list for the EAs at Athena, because we're just collectively and even individually, but certainly collectively creating much more content and keeping folks much more up to date than we possibly can internally. So we just say, follow these accounts. And next time you're on TikTok, we siphon off a little bit of your entertainment budget for some AI education. I think that's honestly a strategy I'd recommend to a lot of companies as well, just sharing good sources.

Rachel Woods: (32:42) Yeah. I mean, literally hundreds of short videos of digestible AI content on my TikTok, not even to mention there are a lot of really talented creators on there. So it is a huge resource.

Nathan Labenz: (32:56) Do you run into Bain? Do you get brought into public companies' board meetings? It sounds like you'd be, to some degree, overlapping with a much different kind of consultant, and I wonder what you've seen or heard about how they are approaching the market.

Rachel Woods: (33:12) So I guess taking a step back, I was a research data scientist at Meta. So I worked on some stuff with embeddings there, ranking infra, a lot of stuff that now I'm like, wow, those were fun projects for having a lens and a take on what's going on. And then between then and now, I built a venture-backed e-commerce startup. And so I had the whole experience of, okay, when you're trying to get something off the ground, how much of a labor of love it is, with a ton of operations that frankly AI can really help with. And so I've really found my sweet spot, and who I love talking to and helping are those startups and mid-sized companies, the ones who didn't have AI in their vocabulary very strongly last year and now are thinking through, what do I do next? I think there's excellent coverage at the very top of the market, in the Fortune 500s, the Fortune 100s. There are going to be a lot of big projects, a lot of changes that I do hear and have exposure to. But even just from the companies that we've worked with and helped, there is so much opportunity if you're a non-AI, let's say, marketplace Series A company to look at the bottlenecks in your marketplace and be able to start thinking about some of these problems in a new way. And so, yeah, that's the space where I've had a lot of fun working with people.

Nathan Labenz: (34:59) What are the common misconceptions that people have? And do you find that people, as they're new to this, are generally optimistic or pessimistic? I was thinking of more of a mainstream audience there, but I'm gathering that it's still more of a Silicon Valley audience, just a non-AI Silicon Valley type audience, that you're mostly supporting.

Rachel Woods: (35:21) One of the interesting misconceptions ties back to those metrics earlier about who's actually watching, at least on my TikTok. I think there are other AI TikTok creators who are talking more to the mass professional audience or even the mass consumer audience. Those people are exposed to different things. I sit here and I talk about data privacy because I get asked that question 20 times a week. And people have real questions that they're trying to navigate. And so that's a lot of the stuff that I spend my time on. So I'm a different type of influencer, I guess, in the space.

Nathan Labenz: (36:04) Tell me a little bit about how you use AI in your business. What are you finding to be impactful? How do you anticipate that continuing to evolve?

Rachel Woods: (36:14) Yeah, so one of the things I tell people that I think makes them feel a little bit better is, I've tried a ton of stuff, and a lot of stuff doesn't work. It's not just you, which I think some people think, that they're the problem. It's no, no, we're just really early on the tech. We use AI in a bunch of micro ways. So things where you think, oh, it would be so nice for this to be automated, but it was a little bit too complex to automate in the past. Throw GPT in there. It's a great, simple classifier. It's a great, simple connector between messy processes. And so we have a lot of places we're using it there. So the way our business works is we have a free newsletter, and then people can subscribe to premium content. They get a community. That's our more scaled offering for this consulting or business advice. And so a really big part of that is making sure that we know what people are struggling with and what types of business problems they have. And so we have a form that pulls all that information in. But then we have a set of scripts that we run that do better natural language understanding of the common themes to inform what type of content we're creating. Or let's say we're creating a certain type of content on a topic: what are some of the most common questions? All that stuff would have taken me so long in the past to do with old-school NLP. Now, super easy with a simple script. And then we'll also use it to personalize some of the outreach: hey, you mentioned this business problem in the past, here's some content we just made. So it's a lot of feeling like you spent 100 hours on something when it was actually 15 minutes.
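
Rachel doesn't share her actual scripts, but the pattern she describes here, using GPT as a simple natural-language classifier over messy form responses, can be sketched roughly as follows. The survey answers, prompt wording, and model name are illustrative assumptions, not hers; the sketch assumes the `openai` Python package and an `OPENAI_API_KEY` for the live call.

```python
# Sketch of a theme-extraction script in the spirit of what Rachel describes.
# The form responses and model name below are hypothetical placeholders.
import os


def build_theme_prompt(responses):
    """Assemble a single prompt asking the model to surface common themes."""
    joined = "\n".join(f"- {r}" for r in responses)
    return (
        "Below are free-text answers from a reader survey about business "
        "problems. List the 3 most common themes, one per line.\n\n"
        f"Answers:\n{joined}"
    )


def extract_themes(responses, model="gpt-3.5-turbo"):
    """Send the prompt to an OpenAI chat endpoint (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # assumes the `openai` package is installed
    client = OpenAI()
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_theme_prompt(responses)}],
    )
    return reply.choices[0].message.content


# Only call the API when a key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(extract_themes([
        "I can't keep up with writing social posts",
        "Not sure which AI tools are safe for client data",
        "Posting to every platform takes all week",
    ]))
```

The same shape works for the personalized-outreach use she mentions: swap the prompt for one that references the respondent's stated problem.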

Nathan Labenz: (38:09) Yeah, that's huge. I sometimes find myself talking about two modes of using AI. One is the copilot paradigm, where you are prosecuting a task and you have this sort of sidecar helper that can autocomplete for you, or maybe answer a question, or what have you. You're owning where we're going right now. And then the other one, which you're speaking to there, where there's even more unlock, and I suspect a tipping point coming where everybody goes to this mode or else, is what I call delegation mode. And that is where you say, I am now going to make this discrete enough that I can actually have the AI do the task. And then I will come in and look at outputs, but I'm not going to sit there with it in real time doing it. And it's not going to be assisting me; it's instead actually going to be performing the task. And again, hopefully most people are still reviewing the outputs. I think that remains important for now, at least. But how do you think about it? Does that dichotomy resonate with you, or do you have a different way of framing it for people?

Rachel Woods: (39:25) No. I mean, I think that that's a great way to think about it. You mentioned the intern analogy earlier, which I also really like. Someone suggested at one point that it was maybe a little bit more PC to use the alien analogy, which is you have 1000 aliens coming to help you do stuff, what would you do? We use AI in both types of ways in our business. Just another one on the copilot is repurposing content, which I mentioned earlier, is a huge part of creating content that goes further. And if you have any type of marketing background or have done that stuff before, you know that you can't just copy paste the content between platforms. But frankly, your mindset is different when you're scrolling through TikTok versus when you're scrolling through a LinkedIn feed. And so content is going to perform better if you have a slightly different approach or different framing, different call to action, et cetera. And if you can get to a place with a prompt that really nails that for you or gets close, and it's, oh my gosh, I used to take so much time or I used to not even repurpose with that intelligence at all. And now it's so easy. There are just a lot of those smaller things that I think we use. And then a lot of businesses, I encourage them to find those for themselves.

Nathan Labenz: (40:49) Do you do that in copilot mode just via ChatGPT?

Rachel Woods: (40:54) Yes. We also just write scripts because it's easier. So I just have a Python notebook that basically sends it all directly to the model.

Nathan Labenz: (41:09) So what's the rationale for that? I'm personally interested in this because I honestly have not mastered it myself. I find myself much more inclined toward delegation mode versus copilot mode, interestingly, except in coding, where I feel like the autocomplete of the original GitHub Copilot is amazing, and I definitely benefit from that. But tell me a little bit more about how you're doing this repurposing. You start with what? And then you have these scripts, and then are you reviewing? Or do you trust it enough to translate tweets to LinkedIn posts and go? I really want to understand the details of that.

Rachel Woods: (41:47) Yeah. So one of the most common ones is I'll take a TikTok transcript, so something I've already posted on TikTok. And then I have a script, or really, it's a prompt that I've honed over time, that will take that and create a LinkedIn post draft, for example. And to your question, no, I don't just set it on autopilot of, yeah, go forth and say whatever's going to come out of ChatGPT for me. And I really encourage people, don't do that. But I've found it can get pretty close. And especially if, and this is where the copilot and delegation model breaks down a little bit, you frame your prompt as, how would I coach someone else to repurpose this content in the way that I think is going to perform best on this channel, then you can start to get to some of these prompts that work pretty well. But yeah, that being said, I would say we're still at maybe anywhere between 2% to 5% of the capability that we could be getting out of the technology that's literally readily available today, even though we're full time trying things, experimenting. As I said, we use things less if they don't work, which I think is also great. Yeah, to me, we're just so early in the adoption and figuring out how this technology is really going to help us and how businesses work.
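
As a rough illustration of the "coach someone else" framing Rachel describes, here is a hypothetical sketch of a transcript-to-LinkedIn repurposing prompt. The channel notes and wording are invented for the example, not her actual prompt.

```python
# Hypothetical sketch of the transcript-repurposing prompt Rachel describes.
# The coaching instructions per channel are illustrative, not hers.

CHANNEL_NOTES = {
    "linkedin": (
        "Write for a professional feed: a strong first line, short "
        "paragraphs, and end with a question to invite comments."
    ),
    "twitter": "Write a punchy thread of 2-4 tweets, each under 280 characters.",
}


def repurpose_messages(transcript, channel="linkedin"):
    """Build a chat-completion message list that 'coaches' the model the way
    you would coach a person to repurpose this content for the channel."""
    return [
        {"role": "system", "content": (
            "You repurpose short-video transcripts into posts for other "
            "platforms without copy-pasting. " + CHANNEL_NOTES[channel]
        )},
        {"role": "user", "content": (
            f"TikTok transcript:\n{transcript}\n\nDraft the post."
        )},
    ]

# The resulting list can be passed to an OpenAI-style chat endpoint, e.g.
# client.chat.completions.create(model=..., messages=repurpose_messages(t)),
# with a human reviewing the draft before it goes out, as Rachel recommends.
```

Keeping the channel guidance in a dictionary makes it easy to tune one platform's framing without touching the others.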

Nathan Labenz: (43:21) 2 to 5%. That's not a lot. There's a long way to go.

Rachel Woods: (43:24) I mean, we don't have any agents going out and automatically pulling your LinkedIn and emailing Nathan saying, "Hey, your stuff on Twitter is pretty cool. Let me tell you..." We don't have that stuff implemented, nor do I think that's the place to start for a lot of people. But yeah, I think we're on the very, very early side as a society of how much AI is going to change how we operate.

Nathan Labenz: (43:51) As you're trying to get it to write like you in different formats, how much of that is pure descriptive prompting versus few-shot examples based on prior content that you've done?

Rachel Woods: (44:03) So we've tried both. What we have working right now is a combination. I'll pull in similar LinkedIn posts for a certain type of content. It's not fully automated yet, but these are the things we think through, which is, hey, if you could use embeddings to pull in similar content, what would that look like? And then how could you use that for few-shot learning to get the post to feel more like what you want? All of it comes down to, do you like the end output or not? And did it save you time? And if it didn't, then try something new.
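
The embeddings idea Rachel sketches here, retrieving your most similar past posts and using them as few-shot examples, might look something like this. The tiny hand-made vectors stand in for real embeddings, which in practice would come from an embedding model.

```python
# Sketch of embeddings-based few-shot example selection, in the spirit of
# what Rachel describes. The post texts and vectors are made up for the demo.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def top_k_examples(query_vec, library, k=2):
    """library: list of (post_text, embedding). Return the k most similar posts."""
    ranked = sorted(library, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]


def few_shot_prompt(new_topic, examples):
    """Prepend the retrieved posts as few-shot examples for the new draft."""
    shots = "\n\n".join(f"Example post:\n{e}" for e in examples)
    return f"{shots}\n\nWrite a post in the same voice about: {new_topic}"


library = [
    ("Post about prompt tips", [0.9, 0.1, 0.0]),
    ("Post about hiring",      [0.0, 0.2, 0.9]),
    ("Post about ChatGPT",     [0.8, 0.3, 0.1]),
]
examples = top_k_examples([1.0, 0.2, 0.0], library)
# `examples` now holds the two posts most similar to the query vector.
```

The retrieved posts then get prepended to the repurposing prompt, so the model imitates your own closest prior work rather than a generic style.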

Nathan Labenz: (44:43) Seems like there's this maturing of all these tools that's going to take us from the 2% to 5% to some significantly higher percentage, even if we don't get toward 100% of potential. I wonder if you have a vision for what the computing experience looks like as that matures. My version was kind of, we're all using ChatGPT Plus with plugins, but how would you begin to... It seems like it's not that far away. People are building this stuff now. All the concepts seem to be there: the embeddings, the ability to index your Gmail history, whatever. Conceptually, it's easy to say that. In practice, I would not fork over my Gmail history to BabyAGI just yet. So a lot of it seems like it's down to engineering, but also coalescing around certain paradigms or UIs that are just so nascent. Do you have a sense for what it's going to be like to sit at a computer and do stuff in, whatever, 6, 9, 12 months?

Rachel Woods: (45:49) I mean, I feel like it could be pretty different than what we do today. A lot of me hopes that it's different. I have this other hot take, which is, I think as a society, we all work way too much and do way too much of the same stuff every day. I mean, this isn't necessarily a hot take, but I think it's really not a bad thing if we are all working maybe an actual 40 hours a week instead of what we pretend is 40 hours. But in terms of what that looks like, I don't know. I think we're still so early and trying to predict what things are going to be is maybe not the right mindset. Not to challenge you too much. But it's more like, how do you just pay attention to what is changing and what is happening so that you can see how certain trends are progressing? For example, one I'm paying attention to a lot right now is all the experiments of GPT-4 being able to code an entire application from someone's voice command. That is huge, right? And could have a profound impact on us having highly personalized software or a business who has sticky notes next to their laptop trying to say, "Okay, my software doesn't do X, Y, and Z, but if we do this, this, this, this hacky thing, it can do this." Okay, well, if the cost of creating more personalized software or software in general is dropping a lot, maybe that stuff starts to not be problems anymore. Yeah, maybe long way of saying, I wouldn't plant a flag in the ground and say, "We're all going to have ChatGPT plugin marketplace on our phone and it's going to be the only mega app," right? But I do think things are going to change a lot.

Nathan Labenz: (47:40) And it seems like it's very clear that the agent paradigm will get you to the point where these sort of user interface tasks that are kind of tedious and time consuming can largely be delegated. Do you want to come back and watch sped-up videos of what it did and find yourself on the checkout page, ready to hit confirm or whatever, almost like a rewind, except the AI is doing the thing and you're rewinding the AI's activity instead of your own?

Rachel Woods: (48:16) Yeah. I mean, and then there's the question of if everybody can post on Twitter and everybody has an AI that just takes their raw thoughts and puts them into high-performing tweets, and everybody is using these agents. Okay, is anybody actually on Twitter anymore? Or what's the value of being on these platforms if you have then a summarizing agent that pulls the most interesting tweets of the day? I think you can get a little bit too... They're fun thought experiments. You can walk through step-by-step and say, "Well, then this will happen, then this will happen, then this might happen." But yeah, that's where it becomes really messy and hard to predict, like, this is going to be how this all plays out.

Nathan Labenz: (48:59) Yeah, totally. I mean, radical uncertainty is my kind of starting point for any...

Rachel Woods: (49:05) That would be a good name for a podcast right now.

Nathan Labenz: (49:09) The other one that we had considered was AI Summer, which I think would have been a good one as well.

Rachel Woods: (49:15) Yeah.

Nathan Labenz: (49:16) The Bing launch, I don't know if you have a good story to tell there, but that must have been quite a bizarre experience in multiple ways.

Rachel Woods: (49:26) I feel really positive about companies that are launching what they're building in this market. I think, frankly, if you zoom out to a long enough time horizon, I fall on the side that the learnings you get from launching early do outweigh the potential risk of waiting way too long to start putting this stuff out into the market. But on the launch itself, I do have kind of a funny story, which is that I was not an influencer or a creator before. That's not my background. And so I was very new to a lot of this stuff. But I literally got a cold email from Microsoft just saying, "We want to fly you out for this thing, but we can't tell you what it is. And, at the time, you can't tell anybody that you even got this email." And I was like... That was one of my first experiences of getting something like that. I mean, things have happened since then, and I realize that's more the MO when you're an influencer. But at the time, I was like, "Well, this is either going to be great, or a story that ends pretty badly, right?" You're flown out. You don't know where you're going. It says Microsoft.com in the email address, but I don't know if that could be spoofed. But I was like, "Yeah, I'll just go for it." It was kind of my personality. And so then I said, "Yeah, sure, I'll come." Yeah, it ended up being really fun because I got to meet a lot of the AI reporters. I just got to make friends.

Nathan Labenz: (51:15) I generally agree. And I think there has been a pretty strong demonstration of the wisdom of making some contact with the broader public, right? Looking back on the timeline of GPT-4, and I didn't know this at the time, but I was in this weird position where I was participating in the Red Team program and had access to the model and was exploring that full-time, obsessively, writing all these reports for them. They did not give us, though, any other information. So I was totally in the dark as to what their plans were. And the version we were working with was the raw form that did not have any of the safety mitigations and was pretty crazy. So I was genuinely nervous coming out of that period. What exactly are they going to do next? And are we entering into a period of total insanity? And then what they actually did shortly thereafter was launch ChatGPT with 3.5 instead of 4, and allowed all those jailbreaks to happen. And nobody's ever confirmed this to me, but it seems quite clear that that was a strategic plan to get the jailbreaks out largely on 3.5 and then be able to roll that whole dataset into extended training. So I do broadly agree that it does not, as Sam Altman kind of says, make sense to go develop a godlike AI, maybe not at all, but if you're going to, it probably still doesn't make sense to develop it all in secret and then just drop it on the world in one zero-to-one moment. So I do agree with your perspective that it is important to launch stuff, not only for business, but even just for society. At the same time, that felt rushed. And to me, kind of extremely so. I'm putting a piece together actually where I'm documenting this, going back in time and kind of Twitter archaeology-ing: who knew what, when, and how did this happen. But it seems like they definitely were rushing it, and at an executive level, I feel like they should do better.
You should not release your bot in a state where it's actively hostile to users. And some of those transcripts were just a simple disagreement about the date, right? This was not a jailbreak-type phenomenon. When people try to break the model, I put that in a very different class than when the model breaks on someone who is just acting earnestly and trying to interact with it. So it felt super rushed to me. Did you feel like they had things buttoned up at the time? Because the big worry, obviously, is the race condition, right? For the people who worry about AI safety long-term, and I'm definitely one of them, as enthusiastic as I am, the thing they worry about most is, if we get into a race condition, everybody's just cutting corners, and that's really bad. To me, it feels like that's kind of what happened there. But I wonder what your perspective is, whether that resonates or feels off, given that you flew out and sat in person with them when they did it.

Rachel Woods: (54:31) Yeah, I just go back to, I am definitely a proponent of: you need feedback to learn stuff. I feel like we're so early in a lot of this technology. What it even means to red team these products, I would wager we know way more now than we knew 3, 4 months ago on some of these teams. Yeah, I completely agree. You should not have a bot that is aggressive to users. OpenAI has a moderation endpoint; that should be a given, especially if you're launching something high-profile. And I know there were some launches recently that didn't use that endpoint. That stuff is really important. That being said, I think there's a little bit of, you learn from your failures, too. And I think that sometimes some of these, when you zoom out, are not that big of an issue. And we learned a heck of a lot. And people are going to be way more cautious in avoiding a lot of these problems in the future.

Nathan Labenz: (55:43) Yeah, largely, I agree. I would like to see them publicly own it a little bit more and kind of publicly learn a lesson, though that may be wishful thinking on my part. But it is striking to me that we're entering this era where these technologies are not super easy to control. One of the biggest, just two-by-four-to-the-head obvious points that I've deeply understood through the red teaming and everything else is that the good behavior of this class of large language model definitely does not come by default. It is not easy to create. They have a hell of a time with false positives and false negatives on refusal. And it's just very far from being a solved problem. And so, yeah, somebody was going to do something like Sydney at some point and make that mistake. It doesn't sit great with me that it was one of the very biggest companies in the world, who I would think should have probably known a little bit better. And it definitely still doesn't sit that well with me that they haven't come back around and been like, "Yeah, we kind of fucked up. We rushed this thing out. And what's the standard going forward?" You know, should I expect this type of behavior from Word in the near future? Or is Excel going to be intruding into my marriage? I mean, I just think that somebody should probably at least articulate a standard that they plan to uphold for themselves.

Rachel Woods: (57:19) I was going to say too, I went to that event. I've worked with Google on some stuff. I've worked with Adobe. I'm working across the space, which I think is another fun perspective I get to have, just being able to see a lot of the stuff that's going on. I love this, because the challenge that I would throw back to you is, okay, yeah, but imagine if these types of things were happening with much more powerful models. Does having bad or not-great experiences today help us decrease the risk of that happening with other models in the future?

Nathan Labenz: (58:02) I think it does. I'm actually very pleased with how things have gone relative to how I feared they might. It seems like there is a... I mean, today, right? Last night at MIT, Sam Altman said, and now there's been this wave of Twitter messaging from OpenAI employees that's "We're not training GPT-5." I think that's a very key part of the plan that you're describing, that we do need some time to actually work these things out. So yeah, I agree. I view it very differently in a world where GPT-5 is currently training versus what they're saying now, which is that it's not. I think that makes a huge difference to how you understand the early release decision-making. I'm grateful that there is some restraint at the top of this trend because it is so powerful that it's not hard for me to imagine that it could really get entirely out of control.

Rachel Woods: (59:04) Think about the mass adoption of this technology in different companies, right? If a mid-sized company has some bad prompt injection exploits happen, that doesn't educate the market potentially as well as a bigger company having things happen. I don't buy that Snapchat's My AI is the only ChatGPT bot that has talked about stuff that it was told not to talk about, right? But it is the one that's highest profile. And so I think that does a good job of also educating the market. Anyone who's building anything that looks like this now knows about that story and is protecting against it.

Nathan Labenz: (59:53) Are you hearing things from people about the Alpaca Llama moment? And what do you advise people when they ask you, "Hey, this Llama thing's out there, and I heard it's just as good as ChatGPT." What's your response to that kind of thinking?

Rachel Woods: (1:00:13) I mean, I think one thing is, I always remind people, I'm not a lawyer, and you need to talk to a lawyer about the commercial use, or just general use, of these different models in your business, not just limited to Llama and Alpaca. I say that for everything. But I mean, if you look at the writing on the wall, we're going to have powerful open source models in this large language model space as well. And so, yeah, there's an element of people getting comfortable with the technology. I guess I'm more afraid of mistakes made later than of us learning now; that's just my general take. But I would not advise somebody to go implement something that's not commercially licensed into their core product.

Nathan Labenz: (1:01:08) Let's imagine that they were "Okay, cool, it's released." But it's still just this sort of pre-trained, totally unrefined, and therefore more unwieldy, more alien-like thing. I don't have a mature framework for thinking about this yet, but it does seem like one of the things that we talked about much earlier in the conversation around internal task automation versus an element of your service is probably a really critical distinction there. Because if you did have a Llama and then it was commercially licensed and all that, and you then applied your 10,000 example instruction fine-tuning, and then you're in control of how it's being used in the context of some task, I would expect you could get pretty good results and might be happy that you did it. Whereas if you try to take a shortcut to cheap ChatGPT, and then put a chatbot on your website that's based on Llama, I would expect that you would be quickly embarrassed.

Rachel Woods: (1:02:15) Definitely. I mean, yeah, the risk profile of launching something poorly thought out externally is infinitely higher than experimenting internally. I mean, going back to it, I talk to a lot of teams who did not have AI in their 2023 roadmap. When they ask me, where do I start, I really try to encourage them to think about the internal use cases, because the risk is just so much lower. If you don't have that experience already, you're going to get your feet wet, and you're going to learn really quickly how these models work and what their limitations are. I think a good example would be if somebody had launched a chatbot on their website that supposedly goes and summarizes web pages, and it was 3.5: they might think that it actually does that, and they might offer that service to their customers. But if they instead had started with, "Okay, well, let's use this internally first. We're using GPT-3.5 to summarize web pages," they would eventually figure out or learn or just have the experience to say, "Oh, this actually isn't doing that." I don't think most people are confused on that point, but it's just one example of how, when you're using something internally, you have a much tighter feedback loop on how it's actually working.

Nathan Labenz: (1:03:44) Yeah, it's critical. Education. It all starts with education, which is, I think, why you're in such an interesting spot right now because you've been doing a great job at educating so many people. A couple quick closing questions for you, just cool tools that you would recommend to the audience that are not the obvious ChatGPT.

Rachel Woods: (1:04:04) The number one thing I recommend to people is start using ChatGPT with an automation tool like Zapier or Make. A, it's fun. And B, I think that tends to be the big unlock for a lot of people when actually starting to use stuff. I'm going to give a random shout out, which is supermeme.ai, because we are using AI for so many things that are so serious, so business productivity oriented. Honestly, sometimes it's just really fun to use AI to make much better memes. I've never been a person who's good at making memes before, and now I feel like I can make ones that make me laugh, at least. So I would definitely go check that out.

Nathan Labenz: (1:04:42) Yeah, that's cool. We made the art for this show with Playground AI, and, I mean, it would have been completely impossible for me to do that previously. So I do love those kinds of things that just unlock a totally new capability for me as an individual. Okay, second one. You kind of alluded to this earlier as well, but hypothetical scenario: a million people have a Neuralink implant. Whenever that is, however long that takes, we're at a million implants. If you get one, you can control your device straight from your thoughts. So you could transmit thoughts to text, or thoughts to UI control, and be hands-free using a computer. Would that be enough to have you interested in potentially getting one?

Rachel Woods: (1:05:36) Zero. Put me on the shortlist for the early experiments? No. My husband is a doctor, so I trust him to read all of the stuff that I would need to think through to make that decision, but then assuming it's a go, put me on the list.

Nathan Labenz: (1:05:55) You're in good company with that answer. That's a pretty polarizing question, we've found, more so than even the tools question, where people more often just say, "I don't know, I pretty much just use ChatGPT."

Rachel Woods: (1:06:06) If I can, what's your answer? I'm just curious.

Nathan Labenz: (1:06:08) Oh, I would definitely be open to it. I mean, I think it's a question that's revealing about how people think, and I like asking it for that reason. I also think, of course, it's a way too specific hypothetical, and the actual answer might be, by the time that is mature, I can already kind of do that with my Apple glasses or whatever, just by twitching my eyes, and that's comfortable enough that nobody needs to drill into the skull. So I'm not that confident that I'll end up wanting to do that when it becomes available, but I would definitely be open to it in the narrow hypothetical where it's, "The world is like it is today, but you can have that." I have three kids, and they keep my hands full pretty often, so there are a lot of moments where I'm taking them on a walk through the neighborhood or whatever, and I'm thinking, "I wish I could just jot this thought down before it totally escapes me." So honestly, for that alone, I would be interested. And it could also be a wearable in the future, perhaps, which would be a little less radical. But even with the implant, I would be open to it.

Rachel Woods: (1:07:07) I already have an advantage. I already wear glasses, so I could just throw it in the glasses, right?

Nathan Labenz: (1:07:13) If you listen to Robert Scoble, that's coming; the first version is coming this year. I don't know. There are a couple of these stealthy device AI things that I really don't know much about, but they could make a huge splash potentially in the not too distant future. Okay, last one, just zooming out big picture, as wide as you can. What are your biggest hopes and fears for society as this AI wave washes over everything?

Rachel Woods: (1:07:48) My biggest hope is that we get to a place where we're frankly working less. And I put an asterisk on that a little bit, because what's more exciting to me is that we get to a world where work looks more like the type of work a lot of people do on weekends or on vacations: casual. You're thinking about things when you feel inspired to think about things, and you're able to solve the bigger problems at hand, instead of, at least for me, the work week being just so busy and full of so many things all the time. And yeah, I think it sounds pretty great to feel like you're on vacation all the time but still able to contribute and drive what you care about forward. Biggest fear, and I think about this a lot, which is a major motivating factor for how I spend my time: I think that we want to have a future with diverse businesses, diverse perspectives. And the way things play out, if only 1% of people are actually adopting this technology, I think that's a worse future than us having broader adoption and continuing to have competition. And so what I'm more afraid of is, yeah, centralized adoption as opposed to democratized access.

Nathan Labenz: (1:09:18) Cool. Well, you are doing your part to try to avoid that future, and I really appreciate you taking the time to join us today. Rachel Woods, thank you for being part of the Cognitive Revolution.

Rachel Woods: (1:09:29) Thanks for having me. This was fun.
