Scouting the AI Revolution with Robert Scoble and "Ben's Bites" creator Ben Tossell
Nathan Labenz talks with AI experts Robert Scoble and Ben Tossell about AI scouting, mainstream adoption, and future predictions in AI media.
Watch Episode Here
Video Description
Nathan Labenz sits down with prominent - and notably prescient - AI media figures Robert Scoble and Ben Tossell. They discuss what the work of being an AI scout looks like, how to bring the mainstream along with emerging AI developments, and their predictions and hopes for AI.
Robert Scoble is a long-time Silicon Valley technology explorer and connector, a futurist who's met so many technology legends in their primes that he's now also something of a historian. https://scobleizer.blog/
Ben Tossell is the creator of Ben's Bites, an AI round-up newsletter designed to be read in 5 minutes or less, with over 90,000 subscribers. https://www.bensbites.co/
This episode caps off our series centered on talking to rising voices in AI media: people who are not only working overtime to understand everything going on in AI, but also creating thought leadership and educational content meant to help others get up to speed as well.
BOOK RECOMMENDATION:
The Scout Mindset by Julia Galef
PODCAST RECOMMENDATION:
YouTube: @UpstreamwithErikTorenberg
Audio: https://link.chtbl.com/Upstream
TIMESTAMPS:
(00:00) Episode preview
(01:27) What an AI Scout is
(07:03) Robert Scoble’s story and being Apple’s first child laborer
(12:35) How an AI understands
(13:25) The dormant chips sleeping in our Apple devices
(15:36) Sponsor: Omneky
(19:39) Tapping into the potential of a new Siri
(22:29) Computer vision in GPT-4
(23:49) Would Robert get a Neuralink implant?
(25:05) Robert’s outlook for robotics
(28:27) Expectations for the next Siri
(30:18) Startups vs. incumbents in the AI race
(31:42) The future of apps
(32:16) Tools that Robert uses and the new tech coming
(37:15) AI Safety
(39:12) The AI race and US-China relations
(40:30) Robert’s Tesla FSD experience and The Exponential Age
(44:49) Ben Tossell’s Story and Ben’s Bites
(47:11) Will AI-first or AI-layer products win?
(50:43) Poor, Good, Best Incumbents in AI
(56:19) AI-wrapper products
TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
@Scobleizer (Robert)
@bentossell (Ben)
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com/
Music License: XYDAPDAZTICP9TYG
Full Transcript
Robert Scoble: (0:00) I tried to talk Bill Gates out of $300 million one time. Actually, a little bit more, but he said, "I don't think so. Let's go." So I know how hard it is to get a corporation the size of Microsoft to spend $300 million. They just spent $10 billion on OpenAI. With AI, it's not where are you today. It's how fast is it improving. Even ChatGPT, if you say, "Oh, I can't use it, it generates too much bullshit for my work," how many more updates do you need before it's perfect? It's changing every hour.
Ben Tossell: (0:31) Everyone should always want fewer tools. I don't need to use 10 things when one thing will do. AI will be in all of the big tools, all the big platforms. We're seeing them all move to push stuff out every single day. How much of their market share could be eaten by these smaller tools who take an AI-first approach? Because then I think the details really matter.
Nathan Labenz: (0:54) Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together, we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my co-host, Erik Torenberg. Hi, everyone, and welcome back to the Cognitive Revolution. Today, we're talking to two more prolific AI scouts, Robert Scoble and Ben Tossell. I've used that term "AI scout" a couple times recently, including to describe myself. So before diving into today's episode, I wanted to take just a minute to tell you how I think about the role of the AI scout and how it relates to the show. Zooming way out, as I often ask our guests to do, I really do believe that there's a good chance that we're entering a critical period in human history. While the best AIs still generally fall short of human expert performance, they do now consistently outperform the average human on a huge number of challenging and economically valuable tasks. And, of course, they are only continuing to improve. Considering that their architectures and training processes are so fundamentally different from our own, that their strengths and weaknesses are also so distinct from ours, and that their capabilities are proving difficult to predict and their behaviors sometimes hard to control, I believe that society is flying very rapidly and all too blindly into an increasingly hard to predict AI future. In other contexts, governments and large corporations routinely make major investments in intelligence and competitive research to better understand both their rivals and their market dynamics. 
And while I certainly hope that AI never becomes a rival to humanity, it seems only prudent at this point that society should develop specialists who devote themselves to studying AI from every angle, at every level of abstraction, using all the available information and all relevant analytical frameworks so that we can better characterize AI behavior and understand AI systems as they actually exist rather than how we might wish they were. I call this new role the AI scout, taking inspiration in part from Julia Galef, author of The Scout Mindset, who emphasizes the value of working toward accurate beliefs even when they may lead to uncomfortable conclusions. To become the most effective AI scout that I can be, I aim to spend 50% of my time just studying AI fundamentals and keeping up with the latest developments. Talking to the entrepreneurs, builders, researchers, and fellow scouts that you've heard on this show has been a major part of that for me over the last few months. But I also spend a lot of time reading research papers, trying all sorts of AI products, using the latest models, and, of course, scrolling Twitter and listening to some other great podcasts. My goal in all this is to have no major AI blind spots and to continually deepen my understanding of the most critical questions in the space so that I can help my teammates, the companies that I advise, you, the Cognitive Revolution audience, and perhaps even policymakers and society at large understand AI more accurately and ultimately make better decisions about how to develop, deploy, and use it. In addition to the show, I'm also starting to publish some more polished and hopefully enduring AI analyses. Recent topics include LLM pricing trends and their economic implications, the competitive dynamics between the leading AI companies, whether GPT-4 can do science, and lots more besides. 
To get that content, if you don't already, I encourage you to follow me on Twitter where I am Labenz, and my DMs are open, and also to sign up for our newsletter on the website, cognitiverevolution.ai. I share all this today because I really do think we need more AI scouts. If you're listening to this podcast and fascinated by the subject, there's a good chance that you can add value in such a role, likely sooner than you'd expect. My approach is just one approach. Today, you'll hear from two other AI scouts who scout in their own ways. And in any case, as Riley Goodside, another pioneering AI scout, said in a recent episode, we're all still new to this. The pace is relentless, and the volume of information can be overwhelming. But the good news is that amidst such rapid change, new people can quickly reach the frontier and begin to contribute. Now on to today's guests. Robert Scoble, who writes as Scobleizer on Twitter, is a longtime Silicon Valley technology explorer and connector, a futurist who's met so many technology legends in their prime that he's now also something of a historian. Like me, he attempts to understand technology from as many angles as possible, to identify the driving forces, and to anticipate what's going to happen next. We covered a lot of ground, including how he expects people to interact with AI systems in our daily lives, what Apple's going to do with Siri and all those extra chips that have been sitting dormant in our devices for years now, what form factors will dominate the future, how AI will power augmented and virtual reality, and lots more. In the second half of the episode, I talked to Ben Tossell. Ben is the author of the AI newsletter, Ben's Bites, which combines editorial and curated stories in a choose-your-own-adventure format, which often include more than 50 links in a single edition. 
We talked about the relentless pace of AI and the challenges inherent in keeping up, when it's worth switching to a new tool for its AI features versus waiting for your current tool to add AI support, how he sees the competition between startups and incumbents developing, and how he personally uses AI in his own daily work. Though he says at one point that he doesn't use many AI tools, he then goes on to describe six or so in intimate detail, showing that, in fact, he very much does. These are two fast-paced and wide-ranging conversations, and I hope you enjoy hearing from AI scouts, Robert Scoble and Ben Tossell. Robert Scoble, welcome to the Cognitive Revolution.
Robert Scoble: (7:06) Hey, thanks for having me on.
Nathan Labenz: (7:08) Our audience will probably be familiar with you from your prolific output online for years. So just give us a quick intro to who you are, what you do.
Robert Scoble: (7:18) As a child, I was Apple's first child laborer when I was 13. And that sort of got me started. That's the benefit of having a dad who's an electrical engineer and moved us here to Cupertino back in 1971. That got me to fall in love with new things. I fell in love with the Apple II motherboard because I made a couple hundred of them with my mom. But that got me to study new things. Apple was my first startup that I toured, and since then I've been seeing startups at a pretty fast rate, thousands of them. I launched Siri. I was the first one to see Flipboard. I was the 79th user of Instagram. I had the first ride in the first Tesla with Elon Musk. Mercedes-Benz gave me its first ride in its first autonomous car. So I've been at it for a while. I wrote four books on technology that predict decade-long changes. The last book was on spatial computing, which is still happening, which includes AI. I've been watching AI because I care about augmented reality, and AI is going to be how that is all fed. Today, I'm following 38,000 people in the AI space on Twitter. I'm the only human to do that.
Nathan Labenz: (8:36) I'm trying, in a sense, to follow in your footsteps a little bit. I'm describing myself these days as an AI scout. And I think of that as trying to zoom out as far as possible. All of that kind of comes together, hopefully, into a worldview.
Robert Scoble: (8:54) Yeah. It's why I still do Twitter. I don't really care that people are reading me. I'm using it to learn about the market and learn about all sorts of things, watch my investments and stuff like that. And it's a very powerful way to learn. We're on an exponential path of a whole bunch of technologies. These data centers that run our lives are getting so massive now. Microsoft has data centers that are a mile long. It's crazy to think about that. I've toured data centers because I used to work at Rackspace in the cloud computing world, so I got inside a lot of data centers. The ones that are being built now are just insane. Also, data flows. I mean, we're in a different world than we were 15 years ago. Today, if something happens, my grandma calls me up really quick because she's not on Twitter, she's not on Facebook, but she finds out the news so fast because of Twitter and Facebook.
Robert Scoble: (10:01) There's a bunch of work being done behind the scenes on these things. The reason I'm following 38,000 people, I started noticing all sorts of college kids are going into AI. They're going into computer science, but then they're grouping up with a few kids at Stanford or Carnegie Mellon or a bunch of different places around the world and making all sorts of new AIs. The new AI papers that are coming out from those kids are just extraordinary and fast. I'm seeing a lot of papers every day. Then you start looking at ChatGPT and it's like, oh, this is not Siri anymore. You can talk to it and it can write you some code, and then it can explain how the code works. I'm like, whoa. That leads you into a whole bunch of new things, and not just code, but marketing copy, emails. It can do a lot of things. Now business people are starting to ask, what does this all mean for my business, for myself? How do I use it? Where are the mistakes? It still generates mistakes because this new technology really predicts what your next possible word is. So if it's writing a sentence, it's just going, what's the next word? What's the next word? It doesn't know anything. It doesn't really understand anything. It does, sort of, but not like a human being thinks that they understand things. Sometimes it goes down a bad path and generates some bullshit. You have to find the bullshit in your code, otherwise it won't run. It actually can help you find it too. It's like, this code doesn't run. Can you help me debug it? And it will. And it usually knows where the mistakes are. It's like, whoa! This is a new kind of intelligence you have to learn to talk to and learn where to trust it and not trust it.
Nathan Labenz: (12:09) I love it because it has richness on multiple dimensions. The technical depth is great. The practical utility and fun is, certainly at this point with GPT-4, next level. And there's kind of this philosophical question, what does it all mean? And what does it understand? And maybe frame it as, how does it understand? You mentioned your experience writing decade-long forward-looking predictions. It's probably never been tougher to do that than right now.
Robert Scoble: (12:46) No. If you go to the right Silicon Valley parties, you know what's coming for a while. This is why I started reading all these AI research papers, because those might not come out for consumers for five, seven, 10 years, but they tell you how the technology is working in their lab at Stanford or Carnegie Mellon or somewhere like that.
Nathan Labenz: (13:11) Very practically, can you paint a picture for us of daily life? I'm walking around, what do I have? What am I wearing? How am I communicating? With what am I communicating? If you could take that as far out as 2030, I'd be very interested to hear what that sketch looks like.
Robert Scoble: (13:30) I'm talking to you on an iMac. It has an M1 processor in it. 21% of that processor is neural network, a very powerful AI inferencing engine. Think of it as the runtime for ChatGPT or Stable Diffusion. A 2 gigabyte model, you can run it in the processor on an M1. An entrepreneur I talked to last week about this said it's more powerful on inferencing than a 3080 NVIDIA card, not on model building. If you need to build the model, if you're an AI team, you're going to need to buy some NVIDIA A100 cards and run them either in your office or run them up on the cloud somewhere, buy some NVIDIA space from Amazon or somewhere. This is really interesting. This part of the chip is cold right now. It's not being used at all. They shipped it 2 years ago. I got this computer almost 2 years ago. It's been sitting on my desk with that part of the chip completely unused. It turns on once in a while, but almost completely unused. It's cold right now. It's not being used right now. Next to it is an ultra wideband chip. I have a bunch of these radios. This is an ultra wideband radio from Estimote. What is ultra wideband? It's in your phone. It's in your headphones. It's in your Mac. It's in your TV. Apple has shipped 15 devices into my house that have this chip, and it too is pretty much not turned on. At least the full capabilities aren't turned on. Apple has shipped a mesh network with a huge amount of AI into people's homes. Millions of people's homes are like mine, and they haven't turned it on.
Nathan Labenz: (15:36) Hey, we'll continue our interview in a moment after a word from our sponsors. I want to tell you about my new interview show, Upstream. Upstream is where I go deeper with some of the world's most interesting thinkers to map the constellation of ideas that matter. On the first season of Upstream, you'll hear from Marc Andreessen, David Sachs, Balaji, Ezra Klein, Joe Lonsdale, and more. Make sure to subscribe and check out the first episode with a16z's Marc Andreessen. The link is in the description.
Robert Scoble: (16:07) That tells you right there that this move has been planned for a long, long time. I had dinner with the guy who ran Siri 9 years ago. I said, what are you learning by working at Apple? And he said, well, I'm learning that Google's kicking my ass. And I'm like, how do you know that? He said, oh, we instrumented Google's AI, and we instrumented our AI. We know, this was 9 years ago, we know that Google is learning faster than we are, so we have to rebuild Siri from scratch and have a whole different idea. And that's what's coming in June 2023. There's a new headset that uses this AI mesh. The whole ecosystem uses the AI mesh. Think of it, your phone can talk to the AI inferencing engine that's in your Mac and run Stable Diffusion in there, or you can run all sorts of things. The entrepreneur who told me about this runs a company called Supernormal, which takes notes on a video meeting, takes a transcript, and then at the end of the call, looks through the transcript for patterns. Did we talk about tasks? Did we talk about themes? Pulls those out as notes. By the way, that takes 300 milliseconds, so a fraction of a second, it takes your notes. That's why I know the chip is unused, because even this guy is using it already, but I don't run that many things that have these new AI workloads yet on the AI part of the M1 processor, so it's sitting there mostly unused. You're going to see all sorts of new AI things coming out later this year that use the Apple mesh or a hybrid, some inferencing done on the local machine, some done up in the cloud.
Nathan Labenz: (17:57) Yeah. So give me a little more color on the capability you're painting the picture of. There's all this latent potential that hasn't been realized. The hardware is already deployed, which is pretty incredible.
Robert Scoble: (18:09) I tried to talk Bill Gates out of $300 million one time, actually a little bit more. He turned me down, and he had good reason to. I sat next to Tim O'Reilly when he wrote the Web 2.0 memo that became Web 2.0. I went back and told him to buy a whole bunch of things in the Web 2.0 space and he turned me down. He turned me down a second time because I was seeing a bunch of things happening and I said, hey, you should buy this and this and this. Give me $300 million, I'll go buy for you. He's like, I don't think so, Scoble. I know how hard it is to get a corporation the size of Microsoft to spend $300 million. They just spent $10 billion on OpenAI. They laid a bunch of people off, took that money savings, and invested it in OpenAI. And then just this morning, they showed off their new Office suite that has ChatGPT built into a bunch of their tools. They're now building all sorts of things like slide decks just by talking to the engine. Hey, I need a slide deck. I need to pitch a bunch of people next week. You just talk to it and it starts building things. That is crazy.
Nathan Labenz: (19:21) What's my life going to be like? That's what I really want to know. Am I going to be still using a computer and a keyboard?
Robert Scoble: (19:28) For a while. How many years? If we're talking 5 years from now, you're wearing a pair of augmented reality glasses and talking to the computer. Hey Siri, or Hey ChatGPT, or Hey Microsoft. You might not even need to use hey, because it's going to know where you're looking. It's going to know what your hands are doing. Siri's going to have some fun things. There's a new Siri coming in June too because of all this AI stuff. And Siri, if you're wearing a headset, can know what you're holding, what you're touching, what you're gesturing toward, what you're looking at, what you're staring at. And so it can answer questions that ChatGPT can't answer. And we haven't yet seen Twitter move into this world. Twitter knows what things we're talking about right now. So, oh man, it's endless. If you're wearing the glasses or the headset, you're going to be able to talk to Siri and have it do all sorts of things and have a conversation with you about a whole lot of things.
Nathan Labenz: (20:28) And that's going to be pretty interesting. The typical white collar worker today is, I think, working probably more than ever and is always on and feels like I was promised on some level this future of leisure, and it never seems to have quite materialized. We get all these new tools and connectivity and famously, we see that everywhere except the productivity statistics and everybody's working a lot.
Robert Scoble: (21:04) Somebody's on TikTok because the numbers keep going up.
Nathan Labenz: (21:09) Yeah, there's a lot of social media use on the clock, I think, as well.
Robert Scoble: (21:12) Yeah, and leisure is intermixed into your workday. I have a surround sound system here. I listen to music all day long. At night, you watch movies and TV shows. I mean, entertainment is different than when I was a kid. When I was a kid, my parents had a black and white TV that you had to get off the couch to change the channel between 4 channels. Now you have trillions of videos on YouTube to watch on these TVs. The problem with trying to predict the future is it's easy to predict what we have and how it might be impacted by this new technology, but what we're about to get on our glasses is a camera and an eye sensor and a microphone and an AI computer, so it's going to do computer vision that's going to be pretty crazy someday.
Nathan Labenz: (22:11) The computer vision just in GPT-4 might be the most mind blowing part of it for me. That's something, my company Waymark uses small business images that they posted online to synthesize more content for them. And man, it has been a grind to interpret what is in these small business user generated content uploads.
Robert Scoble: (22:36) Imagine you get something in front of a vendor and you hold it in front of your glasses and it understands it. It understands, is this a legal document? Is this a note from your doctor? Is this a customer complaint? Whatever it is. There was a demo where they took a picture of handwritten notes that the guy had made, some ideas, and it built an app out of that. It's like, woah. And brain computer interfaces are coming that understand how your brain works and let you talk to the brain. That's what I'm saying. It's going to get even weirder from here.
Nathan Labenz: (23:18) If a million people already had a Neuralink in their heads, and you could, by getting one yourself, enable thought to text. In other words, you think and your thoughts go directly to a computer interface. Would you be interested in getting one? I thought actually one of the best answers we got was basically, it depends on the competitive landscape. The person said, if everybody else has it and I can't compete without it, then I'd probably get one.
Robert Scoble: (23:50) This is why you're going to get augmented reality glasses. Because if I show up at a boardroom and I have glasses on and I have access to all this ChatGPT stuff and patterns, it can display that in 3D in a way that a 2D screen isn't as good. I've seen many, many examples of that. If I have glasses and you don't, I have a huge advantage over you. I can probably get you fired because I can see patterns that you can't, and certainly the boss can't. And the boss is going to ask you, why aren't you wearing the glasses that this guy has? Because he obviously is seeing patterns in your business that you don't see and you're screwed. You're not up to date. Imagine you go to a board meeting without a phone today or without a computer and say, hey, I don't want to use this newfangled computer stuff. You're not going to last very long.
Nathan Labenz: (24:44) What's your outlook for robotics that could be domestic robots?
Robert Scoble: (24:50) It's coming. It's just what year. If we have a humanoid robot, let's say right now my wife orders some DoorDash and a robot comes to the front door and rings the doorbell. If it's that good to deliver something to my front door, it's good enough to come in the house and do a bunch of work for me. The robot could be at the front door and say, hey, here's your grocery you just ordered. Would you like me to come in and put it away for you? I'm like, yeah, sure, come on in. Would you like me to do your dishes? Would you like me to do your laundry? And if it's capable enough to get in a car, pick up a delivery somewhere, go into a grocery store and buy all your groceries, get in a car, and then bring it to your house and bring it to your front door, it's also capable enough to go and fold laundry and stuff like that. Stanford has a robot that already can do thousands of tasks, so we know it's coming. Is it 4 years away? Is it 6? We could have an argument about that. I think it's around 4 years. When that comes, it's also going to have a large language model AI that you talk to because it's going to understand you. Hey, robot, can you tell me a bedtime story for my kids? A lot of people are now using ChatGPT to write bedtime stories for their kids and they're reading their kids the bedtime story that ChatGPT made for them. So now you're going to have a robot. Can you play chess with me, robot? Sit down, have a chess game with me. Teach me how to play chess. It can do that. All of a sudden, you have a thing in your house that you have a relationship with, a friendship with, and you trust it. So if you tell it to wash your clothes, do you care that it got rid of the Tide soap in the garage and replaced it with some other brand? No. As long as the clothes come out nice and clean, I don't really care. I trust my robot to do that. And as long as the robot keeps doing it as well as I used to do it, it's going to take over my whole house, my whole life. 
So I can dream, so I can talk to ChatGPT and build a new business or answer some customer email that it's having some trouble with.
Nathan Labenz: (27:24) Yeah, I'm with you on almost all that vision. I think the one thing that still seems almost anachronistic as you describe that is responding to the customer email. Why is that even coming to you at that point?
Robert Scoble: (27:37) Because it might not understand something specific because the customer is making a new request that it's never heard before. Or it understands the email, but it knows that needs a human approval, that thing that it's being asked for. Can you send $100,000 to this company? A human probably will still need to sign off on that.
Nathan Labenz: (28:00) Wouldn't want to auto-approve that one.
Robert Scoble: (28:01) Yeah, you still want to watch to make sure that system doesn't go and empty all your bank accounts.
Nathan Labenz: (28:10) So what is your expectation for the next Siri? Will it be able to execute transactions? If I were to say, order me Uber Eats from whatever, get me the burrito, you think it can go all the way through to transaction and the burrito is set to be delivered in 22 minutes?
Robert Scoble: (28:30) Siri might call other AIs to load onto the M1 processor and run, like Supernormal, the app that watches your video conferencing and takes notes. And all of a sudden, AI is firing up. Do you care that it's Supernormal and it just charged you $5? It might ask you, right? You've got to turn on Supernormal or turn on our meeting note thing, and that'll cost you $5 a month, until you approve: yes. By the way, you can approve with your eyes, with your hands, with your voice, with your pen, because it's watching you draw. You can even select a virtual thing on a table. Yeah, go ahead. I have proof. All of a sudden, it's calling Supernormal, shoving that into the neural network and firing up, and now it's taking notes on our meeting.
Nathan Labenz: (29:31) Yeah. I'm definitely seeing a lot of that kind of multi-model systems and delegations, ensemble architectures. But I do think one question for all of these, really for all entrepreneurs right now, is this a startup's game or is it an incumbent's game? So if you're building something on LangChain and then I'm listening to you and I'm like, wait a second, this is coming to Siri and by the way, probably Google Assistant too and all of the phones natively this year. Do you see startups winning in this space, or do you see them kind of exploring the space and then just being crushed by Siri 2?
Robert Scoble: (30:10) Siri was bought by Apple for $220 million. I was talking to somebody who's talking to Satya at Microsoft. They spent $10 billion on OpenAI's ChatGPT to integrate all that. Apple, if Apple wants to buy ChatGPT, it's going to cost $40 billion, or license it. If they want to buy OpenAI, it'll be a lot more. $40 billion compared to 12 or 15 years ago when Siri was bought for $220 million. That alone tells you something major has changed. OpenAI is a startup. It's 300 people. And they already got $10 billion from Microsoft.
Nathan Labenz: (30:57) It feels like we're in this kind of Cambrian explosion moment. But yeah, I don't know. It does seem like we're kind of headed for like a super app, where Siri and Google Assistant and whatever the Microsoft version are going to kind of code on the fly, spin up little interfaces with things to the point where, what's the future of apps? I kind of wonder.
Robert Scoble: (31:27) If you ask ChatGPT to create you a spreadsheet and put in all my customer data, and if it can hook up and do that, woah, and that's not very far from here. I mean, it might be possible. I mean, that kind of thing is getting there. Certainly now that Microsoft and Google are building these things into their tooling, right, into their apps, I'm expecting that kind of thing to happen pretty quick.
Nathan Labenz: (31:58) I think you're the perfect person to ask about tools that you use today.
Robert Scoble: (32:04) They're doing drug discovery with this technology, right at Pfizer and other places. Music is coming along and generated music will matter in augmented reality glasses. You're going to see new kinds of music. For instance, if you want to walk through a high school marching band digitally in your living room, the music industry tells me that's very, very difficult to capture and distribute right now, for a lot of reasons. But your generative AIs can create music in your house, so they can create a drum, a saxophone, a clarinet, a flute, and let you walk between those in your house in a way that the music industry cannot do. There's going to be a new thing. There's a new holodeck coming. A holodeck is an interesting way to think about it because you're going to have a 3D environment probably by Christmas from Unity or Stable Diffusion that you can talk to. Hey, environment, take me to the Taj Mahal. And boom, you're in front of the Taj Mahal. Take me to Yosemite National Park. Boom, you're in the park. Or something that looks sort of like Yosemite, because it still hallucinates a little bit.
Nathan Labenz: (33:25) Our memories aren't that accurate anyway.
Robert Scoble: (33:27) No, right? You just need to feel like you're in Yosemite National Park and have it accurate enough. You can start talking to this environment and then manipulating it with your eyes and your hands and your voice. Hey, can you put a purple tree right there? It knows where you're gesturing. It knows where you're looking, and it knows how to do that. That's called inpainting, from Stable Diffusion. It can inpaint a purple tree in the 3D scene around you. Now you have a holodeck, right? If it can hook up and do all sorts of things, can you hook up that remote control so it actually works? If it can hook up all the code for the remote control and make it so that if I push a button, some code runs in the environment around me? Oh my God. And that's coming.
Nathan Labenz: (34:15) So how are we going to absorb all this? Do you think that society is going to just...
Robert Scoble: (34:21) Hey, ChatGPT, I'm really a slow human being. This stuff is overwhelming for me. Can you simplify your environment? Can you teach me step by step what I need to know to make you answer a customer email today? Or entertain me in any way? Yeah, yeah, we can show you all sorts of things we can do.
Nathan Labenz: (34:45) That user experience, I think, could be probably pretty good. But the fallout outside of the glasses is what I am wondering if you have any intuition for.
Robert Scoble: (34:56) It'll work on the phone too. A phone has just a little tiny screen compared to glasses, which could have a wraparound environment around you, right? All the way around you. Huge. Right now I'm looking at an iMac, which is a two and a half foot wide screen. If I'm wearing glasses, it's a wraparound 40 foot screen, right? A screen like Universal Studios has. They drive you through these screens, the world's biggest screens, they claimed. And they had huge, huge, huge screens on both sides of the tram. It was really awesome. That's coming to your living room this Christmas if you have the Apple headset. Most people won't get the first one, so they'll get the second one next Christmas, right? Or the third one, which is the pair of glasses, which is the Christmas after that. So by 2025, 2026, we're in a new world. And humans will deal. We always do. We haven't even talked about the really scary stuff, right? Which is, I sat next to an AI safety researcher for 10 hours coming home from the UK, and he freaked me out. He was like, AI could run away and figure out, you know, humans aren't needed. Here was his thing: AI is already better than surgeons at seeing tumors in scans. Surgeons are highly trained, highly educated people who have been doing this a while, and AI is already better than them. So he said, well, let's take it 25 steps down the line. Does the AI decide it doesn't need humans anymore? Right? And that's a risk, runaway AI. I don't know how that's all going to play out, because humans are clearly going to build the AI and not worry too much about the potential downsides 20 years from now.
Nathan Labenz: (36:57) So if you were in charge, do you have any thought on what would be wise to do?
Robert Scoble: (37:06) The problem is, if you stop it, you stop all the productivity that's about to happen. So you sort of cripple your economy. And is China going to stop it? No. No. They're going to be very, very aggressive about using this. I asked a worker at VW, which has digitized its entire factory floor where they make cars: when are you going to put a 3D sensor on the entire factory floor? Camera, camera, camera, camera, camera. Have an AI watching the humans and looking for patterns of how people work, looking for inefficiencies, looking for a part that took a little bit too long to get to the worker who puts it in the car, stuff like that. We can't do that in Germany. Why not? We have laws against recording employees. In China, they're already doing this. They don't have laws like this to protect workers' privacy and stuff like that. You're going to see one country get hyper, hyper productive and one country fall behind. That's not good for the country that's falling behind, because all of a sudden all the jobs go to China, which has already happened a lot, right? But even more, because they're better at making things for a cheaper price and faster, because they studied human behavior on the factory floor and reorganized the factory floor. I mean, this is what Elon's doing with Tesla. He's studying the factory floor every day and looking for advantages to make his products faster and cheaper than anybody else's.
Nathan Labenz: (38:54) Yeah, the race dynamics, I think, are a real problem. And the US-China relationship and the sorry state of it also is just a problem, because that's where everybody goes in this discussion right now. And, you know, I think, man, if we could just be a little bit more friendly with China, we might have a much better long term AI safety governance discussion as well.
Robert Scoble: (39:17) They have 1.3 billion people and we have 380 million people. I know how AI works. The AI that has more data wins. Every single time it wins, usually. So that gives China a huge, huge advantage that we don't have. Now, if the whole Western world worked together and wasn't fighting with each other, that'd be a better thing. I don't know, the global politics is a whole other discussion of how this would play out and work. But if you ban autonomous vehicles from your community because you're scared of jobs going away, for instance: truck driver is the number one job in America. 1.3 million people work driving trucks around. Those jobs are going away. They're going away someday, soon. 15 years, 5 years.
Nathan Labenz: (40:12) Tell me about your FSD experience. It sounds like you're a regular user.
Robert Scoble: (40:16) It's upgrading this week too, by the way. And that's another major AI thing we haven't even talked about. It's coming on Saturday, like a little cherry on top of this week. It's just crazy AI. It's amazing. Every 3 months, Tesla owners get a major update. How many more major updates do you need to prove to the world that this thing drives better than any human being? Not many. 4? That's a year. 12? That's 3 years. It already is better. I mean, it drives me from my house to Santa Cruz, which is a 40 minute drive on curvy roads with walls on both sides of the road at 55 miles an hour, and it does just fine. It does better than me. It's smoother than me.
Nathan Labenz: (41:06) Yeah. I heard a little bit about that from a friend who works at Tesla and already has the merged stacks. And they said, yeah, it's much more human-like even than when I took my test drive with them a month ago. Here's one: if a motorcycle is splitting lanes behind you, right, and...
Robert Scoble: (41:26) You're not watching the rearview mirror. Your Tesla will move over and let them by, right? That's human. And that's version 11. It didn't do that in the version that I currently have. So that kind of thing. On the new stack, on the road to Santa Cruz, I came up to a bunch of stopped emergency vehicles that were in my lane. My car went right around them like a human. Slowed down a little bit and then signaled and went into the other lane and went right around them. With AI, it's not, where are you today? It's, how fast is it improving? Even ChatGPT, if you say, oh, I can't use it, it generates too much bullshit for my work. The code it writes is a little buggy. How many more updates do you need before it's perfect? I have a friend who's a computer vision expert and is really watching OpenAI. He said, every hour it's changing. They're checking in code changes for the model to make it more accurate. It's changing over time. How many more months do we need to make it so perfect that everybody's like, I'm using that to write everything? I had a friend who said, oh, I used to write code for a month, now it takes an hour to do the same thing. And it's going to train new people. I mean, you can learn to code on it. Right? Hey, write me a Twilio API call that'll call X from my phone. Twilio is a service that does phone calls. It's underneath Uber. When you make a call to your driver on the Uber app, you're actually using Twilio, right? So the app developer made a Twilio API call, and Twilio took care of the phone call, and then it comes back to Uber. ChatGPT can write that, and then ChatGPT can explain the code to you line by line and explain what it's doing. Now a human being, like my 13 year old kid, can learn how it all works. Can you explain how this thing works? Yeah, let's walk you through line by line. It has the ability to pull up and teach you. Same thing with German. Can you teach me how German works?
You want to learn how to speak German because you're going to Europe next week? ChatGPT can, or Duolingo, which just added its own AI that teaches you to speak German, right? Welcome to change. Welcome to the exponential age.
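Scoble's Twilio example can be sketched concretely. The snippet below is an illustration of the pattern he describes, not production code: the phone numbers and URL are placeholders, and the commented-out section assumes Twilio's official Python helper library (`twilio`), which requires real account credentials to actually place a call.

```python
# A sketch of the kind of code ChatGPT might generate for the request
# "write me a Twilio API call that'll call someone from my phone."
# All numbers and URLs below are placeholder values.

def build_call_request(to_number: str, from_number: str, twiml_url: str) -> dict:
    """Assemble the parameters Twilio's REST API expects for an outbound call."""
    return {
        "To": to_number,      # the number being called
        "From": from_number,  # your Twilio-provisioned number
        "Url": twiml_url,     # TwiML instructions telling Twilio what the call does
    }

params = build_call_request("+15555550100", "+15555550199",
                            "https://example.com/voice.xml")
print(params["To"])  # → +15555550100

# With Twilio's official Python library, actually placing the call
# would look roughly like this (requires real credentials):
#
#   from twilio.rest import Client
#   client = Client("ACCOUNT_SID", "AUTH_TOKEN")
#   call = client.calls.create(to=params["To"], from_=params["From"],
#                              url=params["Url"])
```

This is also the shape of Scoble's "explain it line by line" point: each parameter maps to one plain-language idea a beginner can ask about.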
Nathan Labenz: (44:09) Until we talk again, where can our listeners find you? Twitter. Robert Scoble, thank you very much for being part of the Cognitive Revolution. Ben Tossell, welcome to the Cognitive Revolution.
Ben Tossell: (44:23) Thanks for having me.
Nathan Labenz: (44:24) From what I understand, you are taking the leap and making Ben's Bites a full time venture. So I'd love to just hear a little bit of where you plan on taking it, how you're thinking about developing it as a business.
Ben Tossell: (44:38) It's gone through some of those iterations already. The first two emails were completely editorial, and I hated it and I was terrible at it. So I went to more of the curated, choose-your-own stuff. When you read it, there's a lot of things on there, but it's just a summary of what has happened today or yesterday in AI, so then you can pick what you're interested in. And the plan, basically, is there's a gray space of non-technical people who are interested in AI but are thinking, what do I do with it? Where do I go? What do I play with? What do I use day to day? And all of these questions where people are probably reading an article, hearing about it, and then it's, what next? I'm not trying to build something for very technical people in this space. I think there's loads of stuff going on there that is awesome, and I want to support any of that. But it's more of a, what kind of space can we create in this area, where I think there's probably the biggest amount of people, and I think there's loads of stuff we could do. I know that's one of my problems, always thinking we should do that and that and that and just keep adding stuff on. So we're trying to be a bit intentional about what that's going to look like. I think we can grow the newsletter, but it's then, what are the interesting pieces that are coming out of that? What interesting conversations are we having? What interesting questions am I having? How do we service those? Do we service them? Is there some way to spin out a bunch of these single-use applications, like a little studio? Is it more about just pumping out content and thinking about that side of things, and even investing and doing a fund around everything? There's so many different ways to explore it. And what I'm doing is taking the leap so that I can explore everything and anything and look at everything and think, should this be what we do, or is that something that is a distraction, it's not worth our time?
We'll experiment with a bunch. I'm going to try and build up a team of AI-first people, employees, try to make sure that it is used heavily because I want to walk our own talk and actually try and give people an example of, oh, this is how we're using it.
Nathan Labenz: (46:53) What's going to win, right? Is it going to be the AI-first kind of rethinking things from the ground up? Or is it going to be more of an AI as a layer that gets added to everything and it becomes a new way to interact, but the products that you interact with maybe don't change as much because they already do lots of useful things, and now you can just layer on this natural language mode of working with them? Do you have a position on that? And do you think that's evolving? Where are we in that moment as we've seen certainly a proliferation of stuff, but now the incumbents seem to be coming back with their answer. Who do you think is winning that battle right now?
Ben Tossell: (47:35) Originally, I would have thought, oh no, AI-first products will win. That's just how I think it'll happen. And then I am changing my mind a bit to think, if I already work in Notion and Notion adds the AI capabilities of some other tool that I'm using for those things, everyone should always want fewer tools. I don't need to use ten things when one thing will do. So I think it just becomes AI will be in all of the big tools, all the big platforms. We're seeing them all move to push stuff out every single day. So I think that's definitely going to change. But I do wonder how much of their market share could be eaten by these smaller tools who do take an AI-first approach, because then I think the details really matter on if you're a specific type of writer. If I'm a newsletter writer, Notion might work, but Notion's a big tool to fit anyone. Whereas I might want a really specific, I'm a newsletter writer and I want to have a specific tool that does this certain thing, which I think AI enables. And I think we'll see some big tools that are AI-first. If you think of education, everything we know about education is someone tells you something, you're sitting there consuming it, and then you go off and somehow replicate it or do the thing. Whereas if there's a case to say, well, actually, teachers always tell you that working it out is where the value is, so why not have your challenge be prove this thing or figure out this thing? And actually, the work is just in you working with the AI and figuring out where you're gathering information, putting that together, and it's just a completely different way of learning than we would have been used to. I think things like that will come up that maybe don't feel obvious now, but in an age of AI might. But I think the big thing is what I was speaking about where AI in a big incumbent, the thing that I don't know would work is how does that then trickle back through the organization? 
Because if AI can do a lot of the stuff that the humans—I keep saying that, but I'm in this world now where I'm actually having to reference humans versus another thing. But if the AI is actually pulling together reports, helping summarize a bunch of stuff, suggesting things to improve, generating content, all of these things, can we see a world where a lot of these bigger companies operate with 20% of the headcount they have now? Lots of them always go through these layoff cycles. I just wonder what impact that will have here. I think it's definitely plausible.
Nathan Labenz: (50:25) Could you give any examples of incumbents that you think have done either a particularly good or particularly underwhelming job of integrating AI into their existing product flow?
Ben Tossell: (50:39) I mean, bizarrely, Microsoft is the one that everyone seems to be looking at as, oh yeah, this is doing great stuff. It's actually shipping something. So I think that's a good example of people being able to use products in a big way. But I, like many people, don't use the Office products and Microsoft products that much. So I'm waiting for the Google version, and the Google version has not come. It's just there, but it's behind a waiting list and everything else and a research paper and all the rest of it. So that feels like a flop to me. And it feels like Google had enough people thinking about this for a long enough time that they really should have been able to ship something at least a lot quicker than they are now. But I think actually ChatSpot by HubSpot—I know one of the founders, he's been noodling on stuff and he loves doing that—I don't even use HubSpot, but the example of that just saying, oh, can you find some leads for me? Oh, let's follow up with an email to that lead. How many sales did I have for this thing? It's just a way of talking to your data, but it gives you a visual example of, oh yeah, I get those use cases and I can translate that use case for mine quite easily. So those are three fairly big companies doing, respectively, a good job, a poor job, and the best job.
Nathan Labenz: (52:05) It's interesting that you describe Google as poor, and yet as into this as you are, it doesn't sound like the Microsoft suite of tools is enough to get you to change your whole productivity suite. So does that translate in your mind to evidence that the incumbents are going to win? If you're not switching, who's going to switch? I'm in the same boat, by the way.
Ben Tossell: (52:30) There's some level of, well, all my stuff's there. So yeah, I'm kind of reluctant to up and change my whole workflow when I can see that in a few months' time, that Google Workspace stuff is going to roll out and it's going to be more or less the same as the Microsoft stuff. It's not a unique take on what that thing needs to do for me, but for the basic foundation of that tool, I prefer the Google version to the Microsoft version currently. I'll have to say currently now. But yeah, I think it's just ease of use. AI really creeps into your day to day and you're stuck in those habits of, I use this thing to do those things. I don't know, the switching cost seems a big one for those kinds of products. I've stopped using Google search as much as possible. That has to be an actual choice that I'm remembering that I'm making. I mean, it's easy to switch the default browser and things like that, so that's not so bad. But knowing that I'm searching for something, it's actually, I'm using neither for the search. If I'm trying to find out something, I'll use ChatGPT. So I'm trying to do that, but that's a behavior that I'm willing to adopt where my parents are not. They're not going to be thinking about that for a long time. They're going to want the thing to be good, be there, and be what they know and know how to use. So yeah, it is interesting to see how much it will take for someone to flock from one big product that they actually use all the time and is ingrained in their life to another.
Nathan Labenz: (54:11) Yeah, even the browser, I've also been kind of reluctant to change. I have the new Microsoft Edge and the Dev Edition and have the new Bing access. And I do flip over to Edge to use the new Bing chat and search in its native environment. But still, again, I have not gone as far as saying, all right, I've got to move all my—I've got LastPass attached to Chrome, and so all my sign-ins are there, and my history is there. And I'm thinking, well, how much of this do I want to move? And maybe I just start using Bing and Chrome, I don't know. And then there's obviously going to be a Chrome version too, as you say. So I think that's a challenge that a lot of people are wrestling with right now. It's good to know that you also are wrestling with it and not chasing every new product. Flipping over to the new stuff: one thing I've been struck by in doing these interviews and talking to people who really are building the future is, when I ask them what AI tools they use on a daily basis, most of them cite very few things, and it's usually ChatGPT as the core thing. And we're recording today on GPT-4 Plus 7, which is how I'm now thinking about the calendar with the new zero date. But it seems that the constant improvement of the core models and that core experience, now up to and including GPT-4 for subscribers anyway, does put a lot of pressure on what we might call thin wrapper products or clever but minimal use cases that package up GPT in a certain way. What are you seeing in the new product category, in the AI-first paradigm, that you feel is breaking through, as measured either by a consistent place in your day-to-day workflow or just something that you think is different enough and kind of going to be here for a while?
Ben Tossell: (56:20) I'm actually in the camp where I don't use that many AI tools, which might surprise a lot of people. But I mean, it's not because I don't want to, but it's the same thing as you mentioned before, where I need the AI tool to be where I am working or where I'm used to working or how I like working. To upend and change all of that takes a lot of doing and takes a lot of remembering to do that. Then you're always looking at, is this trade-off worth it? And all that sort of stuff. So I mean, the things that I use, I use Bareli.ai, which is a Chrome extension that is always floating around on my browser screen. And I'm not using Chrome, I'm using the Arc browser, I'm trying to do some different stuff there. But that's always just there, so it's a click away for when I'm including articles, reading up on stuff. I'm consuming a lot of content every day, so I use that to summarize things and pull things out where I can use that for the newsletter. I'm not a developer, so a lot of these dev tools seem to be just sitting there dangling a carrot of, oh, if you had learned to code back in those days when you said you were never going to, you might know what to do here. So I find that funny, and I do want to do some more stuff there because I try and play with Replit and the Ghostwriter and stuff like that, and I can only get so far. Readwise, I have for collecting lots of links and helping me with the text-to-speech stuff, I'm using that quite a lot. There's not a whole lot of tools that I'm using every single day. Some of these tools, they wrap on top of OpenAI or whatever model they're running, but a lot of them are really good for one use case. So if it's I've got a health question, I'm seeing people talk about magnesium to help you sleep. I'm thinking, right, I can go and search Bing or Google or Perplexity or anything else, or I can go and find one of those Andrew Huberman podcast chatbots and see, he must have spoken about this somewhere. 
Let me just search that and find out, okay, from someone I sort of trust and validate knows what they're talking about, what would they say? And I'm finding that behavior interesting where I think lots of tools nowadays are sort of all in one, everything comes in one place. I don't know why it needs to be that. If there's very specific—I've got my own little AI legal bot that every time I've got a legal question, instead of pinging my dad about everything, I'll just ask that. Or if there's a little accountant bot that I've got that knows my finances and has very specific use cases, I think we'll see a lot more of that. I think eventually we'll have an idea for I want my own legal assistant, and then you can have AI create the code and then just deploy it on your machine. Then you're thinking, okay, that whole thing was no-code because I didn't write anything, but it was deployed and created by AI and you could start training it that way. I think we'll see a lot of those. So yeah, it feels like there'll be a lot of individual tools around in this space.
Nathan Labenz: (59:32) Yeah, you mentioned Replit and we've had Amjad on the show once talking about all the things that they're working on. It does seem like this notion of disposable software or single-use applications is coming at us very quickly and likely to be a transformative paradigm. Now you can just speak little mini apps into existence to solve your problems as you encounter them and just move on. You don't really have to worry about maintaining that code. You don't really have to worry about how it works or edge case testing all that much. It just needs to do what you needed to do right in the moment, and then you can throw the whole thing away. That's really interesting, and I'm experimenting with that. I think for me, the number one use case is code generation. I can code, but man, it can code faster and better than me in most cases. It's really a huge unlock in that respect. Ben Tossell, thank you so much for being part of the Cognitive Revolution.
Ben Tossell: (1:00:35) Appreciate it. Yeah, thanks a lot.