What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit

This keynote from the Michigan Virtual AI Summit explores the current state and rapid trajectory of AI, discussing its implications for K-12 education. It emphasizes a balanced perspective of excitement and fear, advocating for a societal effort to manage the AI transition.


Show Notes

In this keynote from the Michigan Virtual AI Summit, Nathan Labenz speaks directly to K-12 educators about the current reality and rapid trajectory of the AI frontier. He explores why a balanced mindset of excitement and fear is crucial for navigating this technology, drawing on personal history to emphasize a "whole-of-society" effort. Discover key insights into AI's impact and its profound implications for the future of education.


Sponsors:

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Shopify:

Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) About the Episode

(02:38) An Ambassador From Silicon Valley

(07:00) The Forrest Gump of AI (Part 1)

(13:19) Sponsor: Tasklet

(14:31) The Forrest Gump of AI (Part 2)

(14:43) The Cognitive Revolution

(18:09) Debunking AI Misconceptions

(24:15) Recent AI Breakthroughs (Part 1)

(24:27) Sponsor: Shopify

(26:24) Recent AI Breakthroughs (Part 2)

(30:08) The Future of Work

(34:56) AI's Deceptive Behaviors

(44:18) Revolutionizing Education

(48:59) New Skills to Focus On

(56:14) Education's Greatest Generation

(01:03:24) Outro

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

Introduction

Hello, and welcome back to the Cognitive Revolution!

Before getting started today, I just want to take a moment to say thank you to everyone who reached out to wish my family well after our recent episode about the role AI has played in navigating my son Earnie's cancer diagnosis and treatment. Today is Day 21 in the hospital, and things remain pretty much on track, which is to say that it's a brutal process for a kid to go through, but the odds remain extremely good that he is headed for a long-term cure. We appreciate all the prayers and positive vibes that people have offered, and we'll keep you posted on his progress.

In the meantime, today I'm pleased to share a keynote presentation I gave on October 15th, just before all this started, at the Michigan Virtual AI Summit, an event for K-12 educators and administrators designed to foster thoughtful integration of AI into education. 

My goal for this talk was to act as a sort of ambassador from the Silicon Valley AI bubble, speaking candidly to educators about the reality of the AI frontier: where the technology is today, how far and fast it’s likely to go, and why I believe that a mix of excitement and fear is the appropriate mindset with which to approach this technology.

As you’ll hear, toward the end, I share a personal story about my grandfather, who worked as an engineer in a tank factory during WWII, and his brother, who fought in the Pacific. I’ve been returning to this family history often recently, because—while I certainly hope we never have anything like a war between humans and AIs—I do think managing the AI transition is going to require a "whole-of-society" effort.

We need everyone to do their part, and I was genuinely inspired by the Michigan Virtual team and presenters like Mr. Herman, a classroom teacher from the small town of Marlette, Michigan, who provided outstanding examples of how people everywhere are starting to take the initiative, generally without mandates or even formal training, to figure out what AI means in their own local context and how best to use it.

One exciting outcome from this event is a potential speculative fiction-writing contest meant to encourage students to develop their own concrete, positive visions of an AI future. I am super excited about this idea and plan to support the contest with a personal contribution to the cash prize.  If you'd like to help support or expand on this idea, please do reach out.

For now, I hope you enjoy this presentation on the current state of AI, and what it means for the future of education, recorded live at the Michigan Virtual AI Summit.

Main Episode

All right. Well, thank you very much. Truly honored to be here and excited to spend the next hour with you. I want to say thank you to the Michigan Virtual team, first of all. My appearance here has been in the works for over a year. It was Ken and Justin who originally reached out to me over a year ago, came down to Detroit, and had lunch in my neighborhood. My immediate impression was, these guys are smart. And I remember thanking them at that lunch, saying, you guys could have gotten away with doing a lot less. So I really appreciate, as a parent of a Detroit Public Schools student, how much work you are putting into this and how hard you are evidently trying. And actually, in my ChatGPT deep research reports for this presentation, Michigan Virtual's work has come up a couple of times. So you guys are really in the right place to be learning about this, and learning from the right people. So how about a round of applause for the Michigan Virtual team?

Okay, so we have got a lot to cover, and I'm going to go pretty fast. I'm going to take just a couple minutes to introduce myself so you kind of know where I'm coming from. The way I'm thinking about this, because there are a lot of great sessions here: I sat in Mr. Herman's session first thing in the morning and was really impressed by just how forward-thinking and visionary he has been, coming from Marlette, Michigan, a small town, one guy figuring it out and really doing an excellent job. I am mindful that I'm not an educator, so there's a lot that I don't know, and I want to be humble about that. But I kind of want to approach you today as sort of an ambassador from Silicon Valley, and I'll tell a little bit of my own story just so you know where I'm coming from. My role is to, for better or worse, tell you that no matter how much preparation you're doing for this AI wave, it probably can't be enough. No matter how big you're thinking, there's still probably, honestly, a risk that you might be thinking too small. And that's a test that I apply to myself all the time as well. So to do that, I'll, again, tell a little bit of my own story and give you some of the things that I think you really need to know about AI. Not all of it is super actionable, but it at least is provocative and should have you leaving here thinking about just how big of a deal this is really going to be. And then toward the end, I'll have some reflections, implications, and recommendations for the education space. But again, that's coming from a place of quite deep humility, because I know that you guys are doing it every day, and all I can really offer is the perspective of someone who is deeply immersed in the technology but doesn't have the experience applying it where the rubber hits the road, as you all do.

OK, so this is me going way back. I'm a graduate of Chippewa Valley High School, class of 2002. And of course, we all have these stories of great teachers who made an impact on our lives. These four are really responsible for a huge portion of my life. It was Mr. Vance, on the upper left, who assigned my now wife, Amy, and me to be in the same English group project in ninth grade. That became sort of an origin story for our relationship. Miss Wojicki assigned us to be husband and wife in Death of a Salesman the next year in speech class. At the time, we were kind of rivals, but maybe she saw something we didn't. And Mrs. Voss and Mr. Modorski took us to Washington, DC on two separate trips, where we sat on the bus next to each other and really got to know each other.

So that's just a little bit about me. I was fortunate to have the chance to go to Harvard as an undergrad. Really, my only direct educational experience: first, I created an accelerated math program back at Chippewa Valley the summer after my freshman year and taught kids that were looking to skip a year of math one year's worth of math in like eight weeks. So that was an exciting opportunity. And I was also a peer writing tutor for three years as an undergrad in college. But really, that's it, and that's been a long time. So I can't say I'm deeply in touch with the classroom as it exists today.

What have I been doing over the last couple of decades? Well, I've really been watching technology, watching artificial intelligence development, up close. And I have a weird amount of lore. I sometimes describe myself as the Forrest Gump of AI, because I've found myself over and over again in these really important scenes, usually as an extra character, but with kind of a front-row seat to the people that are making it happen, and, I think, hopefully a decent insight into what they are thinking. One example of that: Mark Zuckerberg and all the other Facebook founders were in the same dorm that I was in as an undergrad. That's him and co-founder Chris Hughes way back in the day.

Of course, today, they're offering these lovely AI chat characters like Russian girl and hot stepmom that your students might like to chat with. So in a way, they've come a long way, I suppose.

Another bit of deep lore: I'm sure everybody has heard recently of the book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky, known in some circles as the Prophet of AI Doom. I've been reading his work for basically 20 years now. I started reading him in 2006, 2007, when there was no AI and this all seemed like science fiction, but even then I thought, well, jeez, this seems like something somebody should be taking seriously, so I'm glad that he is. It turned out that very few people were taking this seriously, and it's my wife, Amy, actually, who became the COO of his nonprofit and would work with him. Initially, his first big breakthrough hit was a Harry Potter fan fiction called Harry Potter and the Methods of Rationality, which he set out to write because he realized that solving the AI safety problem was so difficult that he couldn't do it; he thought he needed to inspire the next generation of math geniuses to do that. How better to do it than to write a Harry Potter fan fiction? That actually worked. And believe it or not, the heads of mechanistic interpretability research at Google and Anthropic today both got into the field because they read his Harry Potter fanfic. To this day, I actually still recommend writing fiction as maybe one of the most impactful ways that you can shape the future of AI. I'll circle back to that in a little bit.

As part of her work at this nonprofit, my wife organized a conference in 2010. I was in the auditorium when a young Demis Hassabis articulated his vision for actually going out and building AGI. This was at a time when nobody thought we were anywhere close, nobody thought it was possible. He met Peter Thiel at that event and got his first funding from Peter Thiel, at a time when it was tough to raise money. I went back and watched the talk in preparation for this, and one of the things that was really striking is what he was saying about neuroscience at the time: that if you haven't looked at it in five years, then you are woefully out of date, and by the way, it's also going to take you five years to catch up. These days, I would say, if you haven't looked at AI in the last one year, then you are woefully out of date. But the good news is you can get to the frontier in about a year's time as well. And there are examples of people pivoting their careers into AI and taking on significant roles at frontier companies, just because the frontier is moving that fast. Old knowledge becomes obsolete pretty quickly, but racing to the frontier is something that you can do in a pretty short period of time these days.

Okay, a little bit about me. I started this company, Waymark.

We're in Detroit. We used to call ourselves a done-for-you... well, I got the order wrong. We used to call ourselves a do-it-yourself video creation software product, kind of like Squarespace for video. But what we found was that our users often, even though the software is easy to use, didn't have anything to say. Or they would tell us, well, I have sort of a vague idea, but I don't know how to translate that into something concrete; that's kind of hard for me to do. And we had no technology that could help them until, of course, modern large language models came on the scene. So we pivoted our company from a do-it-yourself video creator to a done-for-you-by-AI video creator, and we were pretty early at that. This is my favorite page on the internet. It's a case study that OpenAI did with us, because we were an early successful user of their product. Going back three years ago, they had no big companies as customers. They had almost no revenue. And even a little podunk startup with like 30 people, just a few million dollars in revenue, and just paying them a couple thousand dollars a month was enough to get a case study on their website. These days, they wouldn't even notice us.

Okay, one more bit of lore. Everybody's familiar with the story of Sam Altman being fired. I would say I was maybe like 5% of the contributing reason that that happened. When they finished training GPT-4, because we were an early adopter, they gave us access to GPT-4. I was immediately totally blown away by how powerful it was relative to everything that I had seen, and literally dropped what I was doing and asked them if they had a safety review program and could I be a part of it. To their credit, they did, and they allowed me to be a part of it. I spent two months working nonstop just to try to understand: what could GPT-4 do? How powerful was it? Is this something we need to be worried about yet or not? I ultimately concluded, correctly, that no, it wasn't really that powerful yet, but also that the safety processes that they had in place at OpenAI were woefully inadequate. So I actually did escalate that to the board, just to say, hey, you are a non-profit after all. You guys should know that this stuff is going really fast. I just want to make sure you are aware of what I'm aware of now. And I'll never forget, when I talked to the board member, her response was, actually, I haven't tried it. And I had been working with it nonstop for two months. So I was like, somebody is not being consistently candid with you. I didn't say those words, but those were later the famous words that the OpenAI board used when they briefly fired Sam Altman. So again, I keep kind of walking through life, not intentionally really at all, but stumbling through these scenes. And this is why I sometimes call myself the Forrest Gump of AI.

Okay. So this is about me today. We do the podcast. Waymark is still in business. I've become a venture investment scout for Andreessen Horowitz and make very small investments in AI startups. And these are my kids. Theodore Vance Labenz is actually named after Mr. Vance, our ninth grade English teacher who brought my wife and me together. And they go to Palmer Park Elementary School in Detroit, a Montessori program right in our own neighborhood. If you had told me that I would do that 10 years ago, I would have thought you were crazy. But things can change, and history is alive. So my kids are going to Detroit schools.
Okay, in preparing for this talk, and again, being very mindful that I'm deeply steeped in technology but don't know what I don't know about education, I leveraged the podcast to do a couple of what I thought were very interesting conversations: first, with Mackenzie Price, who's the founder of Alpha School.

I'm sure most, if not all of you, have heard of Alpha School. We'll talk a little bit more about that as we go. And also this guy, Johan, who I've been in correspondence with for a long time, who works at Sweden's Education Agency as an AI specialist there. He's also making lots of videos and stuff introducing AI to teachers. And then I went back and actually talked to my high school classmate, Tommy Akim, who's now a principal of a public school in Indiana, just to try to make sure I was as grounded as possible about what's really going on in schools, since, again, I'm mindful of what I don't know.

Okay. Let's get to the AI part. This is all happening really fast. Just a couple of years ago, we had GPT-2. I read about GPT-2 while I was in the hospital as my first son was about to be born. That was 2019. At that time, you really couldn't get any useful work from AI, and it was basically terrible at everything. But it could at least string together some kind of uncanny-valley language, and that alone was a big deal. Fast forward, not even to the present, but just to GPT-4, and you've got AIs that are closing in on human expert performance across a very wide range of domains. I call this the cognitive revolution, and I do think it's going to have as profound an effect on society at large as previous revolutions have.

Just to kind of ground what a big deal that could be: what did people used to do? Well, at one point, we all walked around the savannah as hunter-gatherers and literally lived hand to mouth. Then we settled down and learned how to farm our food. And this graph on the left basically starts when the farming lifestyle started to give way to a more urban and industrial lifestyle. The proportion of people that were on farms has dropped dramatically, as we all know. The number of horses, interestingly, also has dropped dramatically. I recently saw somebody from Anthropic, the makers of Claude, one of the frontier AI companies, speaking. He said, if you went back a couple hundred years and talked to somebody who's a blacksmith, who makes horseshoes all day, and told him, in the future, there's going to be one factory that can make more horseshoes in a day than you can make in a lifetime, that blacksmith might think, jeez, that sounds like a big deal. And he might ask questions like, what's going to happen to my guild? What the guy from Anthropic said is, he almost could not possibly have imagined what a big deal it really was. He could not possibly have imagined that horses themselves would be basically relegated to a pastime, because they would have been totally surpassed for productive purposes, and now they're basically just a leisure activity. So I think, again, probably the most dangerous failure mode is to be thinking too small. What is the horse of our era that may be rendered obsolete by AI? Let's hope it's not us, but I do think we're going to see some paradigm-changing things, and I'll dig into why you should believe that as well.

A couple of little caveats before I get into the most frontier, hair-raising stuff, though. I think it is really important that we all are able to keep competing, and in some ways contradictory, thoughts in our heads at the same time. The way I summarize this is: AI defies all binaries. This is the ultimate dual-use technology. It is both very good, can help with productivity, all these things, and it also can be bad. There's never been a better time to be a motivated learner. I experience this every day.
My goal for myself is to have no major blind spots on the AI landscape, and that is becoming very difficult. AI is now intersecting with biology and material science and these things that I know nothing about. How do I get up to speed to even have a decent conversation on those? Well, increasingly, I use AI. I'm really motivated to learn so I can show up not sounding stupid. And it does help me learn; there's no question about that. If you have the right mindset, AI can be an amazing tool for learning. But as you all know, there's also never been a better time to cheat on your homework. So this is just one example of these competing realities, both of which are true. I would encourage you to reject any sort of polarization, even in your own mind. You don't want to be the person that's sort of entirely focused on cheating on homework and trying to ban AI.

But you don't want to be a person who's living in denial of that and thinking that AI will solve all your problems, either. The truth, on almost all of these questions, is going to be somewhere in the middle. And we see this playing out in real life. This guy on the right built a nuclear fusor in his apartment using Claude AI to help him. This is like an 18-year-old kid. The guy on the left is a famous industry analyst, and he says chat is breeding agency into kids. Basically, the people that he's hiring: he used to have to teach them how to do all this stuff, and they used to come to him with questions all the time. He's kind of a rough-around-the-edges sort of guy: how annoying is that? But now, they just go to AI, and they figure it out on their own. So again, this is the motivated side. This is what people can do if they have the right mindset. But again, you guys all know that kids, if they're not motivated and they're just trying to find the easy way out, have plenty of ways to cheat on their homework. It's obviously become very common. This survey was done by this company, Scholarship Owl, so I wouldn't call it representative, but that's just a list of some of the things that people are using today.

Another big caveat, just from some of the conversations that I had in preparing for this: I think there are some misconceptions that are worth addressing upfront and trying to get out of your head, if anybody has them, and I'm not accusing any individual of having any specific misconceptions.

First, the hallucination problem. I talked to my neighbor, who's a teacher. He said, you know, these hallucinations are so bad. It makes the AI pretty much unusable. It's like garbage, right? I said, well, GPT-3 was like that. That is true; that's how the technology started. But again, the one-year thing: if you haven't been very deeply engaged with AI in the last year, your attitudes, your perspectives, your takeaways are way out of date. The hallucination problem is not entirely solved, but it is dramatically reduced. And in many studies, you'll find that the AIs are actually less error-prone than humans doing the same task. So, to quote the previous president: don't compare me to the Almighty, compare me to the alternative. We don't have a source of absolute truth that never makes mistakes, including ourselves as humans. The AIs are now competitive at that level. Keep watching out for hallucinations, but don't see that as a fundamental reason not to use the AIs. I'm going to pick up the pace.

Another big idea is that they don't really understand. You may have seen this Golden Gate Claude example. Using mechanistic interpretability techniques, which I won't get into here, there are now ways to get inside the AI, look at the concepts, and dial those concepts up or down. They, at Anthropic, identified the Golden Gate Bridge concept. How they did it is beyond the scope of this talk, but they did. And they were able to artificially dial that up. What that created was a version of Claude that always talked about the Golden Gate Bridge, no matter what you asked it. There are a lot of really funny transcripts about that, but it does show, with an ability to intervene in the AI and artificially turn this concept up, that there is real conceptual understanding inside the AIs. So don't let anybody tell you that they don't understand concepts. Previous generations, sure, you could make that critique. Modern ones, no.
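As an aside for the technically curious: the core trick behind Golden Gate Claude, often called activation steering, can be sketched in a few lines. This is a toy illustration only; the model handle, layer index, and golden_gate_direction vector below are hypothetical stand-ins for things Anthropic derived with far more sophisticated interpretability methods.

```python
import torch

def steering_hook(direction: torch.Tensor, strength: float):
    """Dial a concept up by adding its direction to one layer's output."""
    def hook(module, inputs, output):
        # Assumes this layer returns a plain activation tensor.
        return output + strength * direction
    return hook

# Usage sketch (model, layer choice, and golden_gate_direction are all
# hypothetical; Anthropic found the real feature with sparse autoencoders):
# handle = model.layers[20].register_forward_hook(
#     steering_hook(golden_gate_direction, strength=10.0))
# From then on, every reply drifts toward the Golden Gate Bridge, no
# matter what you ask; handle.remove() undoes the intervention.
```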
Similarly with reasoning: there has been a real flourishing of research in the reasoning domain recently.

This is called the aha moment. This is actually from the Chinese company DeepSeek, from their R1 paper. Here, you're starting to see the emergence of these advanced cognitive behaviors, where the AI is not just spitting out an answer, but actually going through a process of taking multiple different approaches. And here, at the aha moment, the AI itself says, wait, this is an aha moment. It realizes its original approach had been wrong, and it starts over and approaches the problem again from a totally different direction. One of my mantras is: the AIs are human-level these days, but not human-like. So I don't want to make the claim that they are reasoning in exactly the same way that we are reasoning, or that they're understanding things in exactly the same way that we are understanding things. But just because they are different than us doesn't mean that they can't functionally do some of these important things. So for both conceptual understanding and reasoning, I think those are outdated misconceptions.

And finally, you all often hear, well, they're just next-word predictors, right? They're just trained to predict the next word. That, too, is really an outdated notion at this point in time. Right now, we have a lot of reinforcement learning going on with AIs, and reinforcement learning is really important to understand. The early AIs, the large language models, the GPT-3s, were trained on: okay, here's the whole internet; your job is to predict the next token. They got pretty good at that, but that also led to all these weird things in terms of hallucinations, making things up. It was all kind of downstream of the way they were trained, naturally. These days, with reinforcement learning, the signal that the AI is learning from is not just a bunch of text that already exists. It is given a problem. It is given multiple attempts to solve that problem. They look for problems that are right in the sweet spot, where it'll get it right some of the time, but not all of the time. And they reward, or reinforce, the patterns of behavior that led to it getting the right answer. That is not the same thing as predicting the next token. Now it is directly incentivized to figure out how to get the right answer.

And what we're starting to see in the chain of thought (this is from the internal reasoning of o3, specifically, from OpenAI) is that the way the models go about these internal chains of thought is becoming kind of weird. They're sort of developing their own internal reasoning dialect. So you read this and you're like, what is that, right? I mean, look at some of these sentences: "Now lighten disclaim overshadow, overshadow, intangible, let's craft, also disclaim bigger vantage illusions." What is it talking about? What's happening here is that it is clearly not just predicting the next token; there's no text out there that looks like this. The models are now being trained to get the right answer, and often they're also given an incentive to be as brief as they can be in their pursuit of the right answer, for efficiency reasons and speed of response and so on and so forth. That combination of incentives is creating AIs that are not just predicting the next token. They are very deeply trained to get the right answer to whatever they're given, but what you see in their internal thoughts is that they're becoming kind of increasingly alien and hard for us to parse. I think this is something to watch.
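To make that training setup concrete, here's a minimal sketch of reinforcement learning from verifiable answers, as just described. Every name here (model.generate, problem.check, model.reinforce) is a hypothetical stand-in, not any real library's API:

```python
# Minimal sketch of the reinforcement learning setup described above.

def rl_step(model, problem, num_attempts=8):
    # The model gets multiple independent attempts at the same problem.
    attempts = [model.generate(problem.prompt) for _ in range(num_attempts)]

    # A verifier scores each attempt: 1 if it reached the right answer,
    # 0 if not. Easy to check for math and code, much harder elsewhere.
    rewards = [1.0 if problem.check(a) else 0.0 for a in attempts]

    # The "sweet spot": problems the model solves some of the time but
    # not all of the time, so there is an actual signal to learn from.
    if 0 < sum(rewards) < num_attempts:
        # Reinforce whatever patterns of behavior led to right answers.
        # The incentive is "get the right answer" (and often, be brief
        # about it), not "predict the next token" -- part of why the
        # resulting chains of thought drift into a private dialect.
        model.reinforce(attempts, rewards)
```

Back to those increasingly alien chains of thought.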
I think this could become a pretty big problem if we lose the ability to even understand what it is that our AIs are talking about.

Okay, so with that palate cleanser out of the way, here are some eureka moments that we've seen from AI. Just this summer, we had an AI take second prize in an international competitive coding competition. This thing went on for several days; the AI came in number two. In August and September, and this was kind of a surprise, as you can see from the percentage graph here (this is from a betting market), AIs won gold medals at the International Math Olympiad and at the International Collegiate Programming Contest. And at the ICPC, it didn't just get a gold medal, by the way.

It got the number one score of all participants. These are the most elite math and programming competitions that exist for high schoolers and college students, respectively. Only like five American kids go to the International Math Olympiad. So this is really advanced stuff. People did not think that this was going to happen this year, but the AIs beat the odds, and it happened.

We're also starting to see these multimodal AIs. You guys have probably seen this. But just consider how difficult it would be to take the three images on the left and create the image on the right. And now consider how easy it is to ask AI to do that. Literally all you have to say is, combine these images into one, where the woman's having breakfast with the toast and coffee, and you get this thing out. This is, I think, profound in many ways. One is that it does show that the AIs are not just limited to text and that they can understand other problem spaces very, very deeply, whether that's image space, as in this case, or what we're seeing play out now in biology and material science and all these other domains, including domains, like protein folding, where humans don't even have the sensory apparatus to have any intuition for the space. What makes this example striking, obviously, is that we can immediately recognize that it's good. But the same depth of understanding is happening across a very wide range of different problem types.

So what does that add up to? Well, Sam Altman, who just had a kid, says: my child will never be smarter than AI. Which I think is a pretty profound statement, and I think he's probably right. My oldest kid is six years old, and I think it's probably right for him too. What does that mean? Well, for one thing, it might mean big changes to the labor market. Obviously, a lot of school is premised on the idea that we're preparing kids to enter the labor market and be productive contributors to the economy. What is the future of that going to look like? I think the answer right now, honestly, is nobody knows. Including Sam Altman. He doesn't really know either. All he knows is that his kid is never going to be smarter than AI.

AIs are hard to measure. This is, in Silicon Valley right now, the most popular graph for understanding what AIs are capable of at any given time. It is the task size, as measured in the time that it would take humans to do the tasks, that AIs can handle. So what you see, obviously, is an exponential. What you see is GPT-2 and GPT-3 basically not being able to do any task of any size, but now we're all the way up, with GPT-5, to north of two hours. That is to say, an AI can now handle tasks that would take humans two hours to do. That's a pretty impressive thing, and obviously we're seeing the effects; you guys are all here, right? This is something that society is definitely taking notice of, and businesses are racing to adopt it, and we've got all these sorts of questions we're grappling with. But the trend has not stopped. As for the time in which the task size is estimated to double, there are a couple of different estimates: one is seven months, one is four months. I like to use the four-month one, because it's a little easier to do the math, and also because my philosophy is that I would rather take the aggressive estimates and try to be ready for those than be caught unprepared because I underestimated just how fast things might change. So if task length continues to double every four months, that would mean you have 8X per year.
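Here's that arithmetic worked out, as a back-of-the-envelope sketch only; the two-hour starting point and the rounding into days, weeks, and quarters are as loose here as they are on stage:

```python
# Back-of-the-envelope projection, assuming the aggressive estimate:
# the task horizon doubles every 4 months, i.e. 2**(12/4) = 8x per year.

horizon_hours = 2.0  # roughly where the frontier sits today
for year in (1, 2, 3):
    horizon_hours *= 8
    print(f"Year {year}: ~{horizon_hours:,.0f} hours of human work")

# Year 1: ~16 hours    -> about two working days
# Year 2: ~128 hours   -> a few working weeks
# Year 3: ~1,024 hours -> on the order of a quarter's worth of work
```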
So if we're currently at 2 hours, then a year from now we'll be at 2 days. Two years from now, we'll be at two weeks. And three years from now, we will be at a full quarter. In other words, a quarter's worth of work: you could delegate it to the AI at once, have it go off and do a quarter's worth of work, and then come back to you, and like half the time, you should expect that it would be successful. That's a very different world. That's not just, I can get a little help with an essay here or there, right? That is a fundamental transformation to what is possible in society, and to what society is going to look like, when that comes online. Now, this is not a law of nature. It is not guaranteed to happen. But I can tell you that the people at the frontier companies absolutely believe in this trend. They are 100% raising the capital to build the data centers, to do the scaling, to drive the next levels of this. And they fully expect that this is what we're going to see. So you can remain skeptical, and a certain skepticism is definitely healthy in this space, but the trend is pretty smooth so far. It has not shown any signs of really bending, and they all very much believe in it.

Okay, so let's just go through a few things in terms of different domains. Of course, for a long time, we've told people, like, learn to code. That'll be a great career. You'll always have a job if you can learn to code. Turns out code is basically the first thing they're going to automate, for multiple different reasons. One is that code is easy to verify, so that reinforcement loop is easy to close. In other domains, like biology, for example, you might have to actually go run a wet lab experiment. That takes a lot more time, and that's messy, so it's harder to get that feedback. But with code, it's really easy to get the feedback. So math and code: these are going to be the first things we're going to see AIs become superhuman at, because that feedback loop is so tight. Another reason is that the AI companies are all coders, so they want to do their own job first. And another reason is they want to get the AIs to do AI research, and I'll have more on that in a second.

We've gone from basically it couldn't do all that much 18 months ago to, these days, more than 80% on this benchmark. There are all these different standardized tests; when you see benchmark, you can basically think of that as a standardized test for AI. And what we're seeing across all these standardized tests for AI is that when they're introduced, the AIs can't really do them. About 18 months to three years later, they're saturated, which means basically the AIs can do them, and we have to move on and make new tests. So that has happened with this software engineering test over the last 18 months. These are not easy problems, by the way.

What I think is really remarkable about that is the way that these AI systems are set up. Often it's really simple. Basically, you just have your LLM as your core intelligence. It has access to some tools. It is given a task. It can do some reasoning. It can use those tools.

When it does use those tools, something changes in the world around it, and it gets some feedback from that. If it's coding, it's like, okay, change the code, run the code. Did it work? Did it not work? Did it get an error? What happened? Then it can repeat that reasoning step and that tool-use step until it finally either accomplishes the goal, runs out of time, or maybe gives up. Sometimes they will come back to you and say, hey, sorry, boss, I can't figure this out, I need some help, or whatever. But it's really a pretty simple architecture. The prompt under the hood is also quite simple. This is for OpenAI's Codex. It includes this line: you are an agent. And it also includes the tools that it's given, where it basically says, you can do anything you can do with the command line. I personally can't do much with the command line. The AIs can do a ton with the command line. That's the only tool that this coding agent has. Everything it does is through this simple, generic interface of command line commands that it can execute on the computer.

When it comes to research, I'll encourage you to click through on these. I would just say, I used this to prepare for this presentation, and I think you would have to say that this would be at the top tier of anything you could expect from students in terms of a research report on any topic, given back to you typically in about 10 minutes. Again, these things are going to have a profound impact.

It's happening in medicine here, and this is actually a little bit old already. There's better stuff, but I like this graph: AI doctors already surpassing human doctors, as evaluated by other human doctors, in diagnosis. This has now also been extended to recommending the right treatments. And in surveys of the human patients, the patients often rate the AI doctor as having better bedside manner, because it will answer all your questions. So they have some fundamental advantages that we really are going to have a hard time competing with.

This is financial analysis. Again, I just like all these things, right? GPT-4o, and that's only 18 months old, was nowhere near what the human expert can do. We're still not quite there, but we're getting very close. On the right is a Microsoft report on AI versus expert. The left is from this company, Shortcut. They did a head-to-head: first-year analysts at an investment bank versus their Excel analyst product, and they found that they won basically 90% of the head-to-heads. They literally just had the directors of the investment banks evaluate who did a better job, your first-year employee or the AI, and the AI was winning 90% of the time.

Who has better AI research ideas? This one's really interesting, and I think it does speak to some ways in which we can trick ourselves. So this guy, and I did an episode of the podcast on this, did a study of who can come up with better research ideas for AI specifically: human grad students or AIs. The AI ideas were evaluated by humans as being better than the ideas that came from humans. But then they took the next step and actually ran the experiments that were proposed. They didn't just look at the research ideas and ask, are they good, or do they appeal to me; they actually pursued these projects to see how good the results end up being. And when they did that, the humans did have the advantage.
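To make that "pretty simple architecture" concrete, here's a minimal sketch of the reason-and-act loop described above. The llm interface and every name here are hypothetical; this is the shape of the thing, not OpenAI's actual Codex implementation, though the single generic command-line tool mirrors the real design:

```python
import subprocess

def run_shell(command: str) -> str:
    """The one generic tool: execute a command-line command."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=120)
    return result.stdout + result.stderr

def agent(llm, task: str, max_steps: int = 50) -> str:
    # The system prompt really can be this simple: "You are an agent..."
    history = [f"You are an agent. Your task: {task}"]
    for _ in range(max_steps):
        # Reasoning step: the LLM decides to act, finish, or give up.
        move = llm.next_action("\n".join(history))  # hypothetical interface
        if move.done:
            return move.answer
        # Tool-use step: something changes in the world around it...
        observation = run_shell(move.command)
        # ...and that feedback feeds the next round of reasoning.
        history.append(f"$ {move.command}\n{observation}")
    return "Sorry, boss, I ran out of steps and need some help."
```

Anyway, back to that research-ideas study.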
So there's something interesting there, where the AI ideas appealed to people more, but when they were actually pursued, they were still not bad, but they weren't as good as the human ideas. I'm not even really sure what to make of it, but I do think it's a provocative finding.

On the left is OpenAI: how much of their code work can be done by AI? With the introduction of the o3 reasoning models, they went from basically single digits, not much, to like 40% in one leap. So it does seem like we're in sort of a steep part of the S-curve. And this one is from an organization called METR: again, head-to-head comparisons between professional research engineers and AIs, on who can do a better job of executing these machine learning projects. Five different projects, head-to-head, very expensive and time-consuming to set up and evaluate. The humans have the advantage in three categories, and the AIs are basically on par or a little better in two. So that's where we are today, in 2025. That's our life.

OpenAI just put out this GDPval, where they took three sets of experts. One set of experts created tasks (projects, really; tasks makes them sound small, and they should probably be considered projects now) for somebody to do. The second set of experts did the projects. Of course, AI also did the projects. And then the third set of experts compared the AI outputs to the human outputs. What you see here is models, ordered in terms of when they were introduced. The latest and greatest model, the highest scorer on this, Claude Opus 4.1, is preferred to a human expert, like a real seasoned vet in their mid-career, deep in the profession, basically 45% of the time. So we're getting really close to the point when AI is going to be preferred more often than not to human experts, as evaluated by other experts in the field. Obviously, again, profound stuff.

One thing that is really important: the capabilities frontier is jagged. You may have heard this term; Ethan Mollick, who I think coined it, is a great person to follow if you don't already. For software developers, we're already north of 50%. Each one of these different bars is a different model.

So don't worry about the details too much, but there are like six models that are preferred to human software developer experts. Customer service, you can imagine, could be automated almost entirely over the next year or two, and there's a lot of incentive to do it. But when it comes to film and video editing, we're not close yet. That doesn't mean it's a long time away, but humans still have a clear edge. So it very much is domain-specific.

Self-driving cars are also going to be a thing. I just want to highlight this briefly: this is not just going to be limited to computers. It's going to be in the real world. The Waymos are already like 80 to 90% safer than human drivers, and they publish all of their incident reports. A friend of mine named Tim Lee recently did a line-by-line review, like literally went through and read every incident report from Waymo, and his finding was that basically all the accidents are caused by humans. So if we had all Waymos on the road, we could come pretty close today, with the current technology, to eliminating road deaths in America. But obviously, that's going to have the downstream effect of disrupting the livelihoods of the millions of people who drive for a living as it stands today. Humanoid robots won't be far behind, either. Here's a robot getting kicked over and bouncing back up. And if that doesn't send a little chill down your spine, maybe some of this next stuff will.

How about literal human mind reading? In the pairs of images here, the left is what a person was looking at while their brain was being scanned. On the right is what the AI was able to reproduce, essentially guessing what the person was looking at just based on their brain scan activity. By the way, I have a link to all these slides, so don't worry too much about the pictures or whatever.

I'll let you read this on your own, but this is a slide that I update every couple months. It's called the tale of the cognitive tape, and it's basically about which dimensions of thinking the AIs have an advantage on, and which dimensions we humans have an advantage on. Across the top are where the AIs are winning, the ones at the border are where they're potentially about to overtake us, and the ones at the bottom are where we still have the most durable advantage. I don't think this means, again, that it's going to be this way for a super long time, but this is kind of a snapshot in time, and I do update it at least quarterly.

Okay, what to expect coming soon? The first virtual AI employees are expected to launch in Q2 of next year. That's oddly specific, but the reporting that I have on this is indeed oddly specific about it. What they mean by a virtual AI employee is something that will onboard like a normal employee and have all the same affordances as a normal employee. It'll have an e-mail account, it'll have a Slack account, it'll have a virtual computer that it can use, where it can click around and do stuff. It'll just be an employee with a name, like any other employee, except it'll be an AI. So we can look forward to that in 2026.

Where does this leave us? Are there any career paths that are safe? My sense is, honestly, no. There may be a few, but I can't think of too many. Even me, as an AI podcaster, and that's basically what I do these days: oftentimes, when I want to learn something about some new AI that came out and I want to hear it in audio form, I go to NotebookLM, drop the paper in there, and have it generate a podcast for me, which I then listen to.
So even in my own random niche domain of super esoteric AI podcasting, I've got worthy AI competitors already.

This is Dario. He's the CEO of Anthropic, and he's been the most forthright about this. Most people in the AI space, frankly, are kind of ideological. They want to make this AI. They think it's going to be great. They do acknowledge that there are some risks, and they definitely do send their regards and apologies for all the disruption that they're bringing to you. But they do believe that on net it's going to be good, and mostly, they're kind of papering over a lot of the downsides. Dario's been the most forthright. He says we might see significant, even bordering on mass, unemployment in the next few years as all these technologies are rolled out.

Okay, I really do need to pick up the pace. So let's talk about AI bad behavior. Everybody knows about jailbreaking. Here's an instance where the AI was convinced by a user to write SQL injection attacks to attack the app that the AI itself was a part of. That's something that keeps enterprise software architects up at night: oh my god, I've got an AI in my app that can attack my app from within.

They were not trained on how to deal with that.

This just illustrates reinforcement learning in a very visual way, and reward hacking, the problem of reinforcement learning. When you're giving the AI a signal of whether it got the problem right or wrong, and you're reinforcing the behaviors that led to it getting the problem right, your signal had better really represent what it is that you want to see from the AI. If it doesn't, if there's any sort of gap between the signal you're sending and what it is you really care about, then you open yourself up to this reward hacking phenomenon. So here, they tried to train this AI to do this boat race. What they thought it was gonna do was loop around the track and do all the stuff like a normal player would, like, intuitive, right? The way to get the highest score, the AI found, was to loop around in this crashing circle over and over again, because every time it crashes through these other boats, it gets points, and all it knows is maximize points. So that disconnect, and this is really profoundly important, the disconnect between the signal that we give the AI, what we are telling it is right, versus what we actually want from it: any gap there is a potentially exploitable gap. That's called reward hacking, and we're seeing lots of examples of this.

Here's one where an AI was playing chess against a powerful opponent. It realizes it can't win, but it also realizes it has access to the history of the game. So it just overwrites the history of the game and gives itself the winning position.

This happened to a Japanese company. They said, oh, we've created this AI CUDA engineer (CUDA is the low-level code that runs on the GPUs), and it's so much better than humans. A couple of days later, they had to come back and say, actually, sorry, we got reward hacked. It does not actually do that; there was a flaw in the signal we were giving the AI. Oops, we deeply apologize.

This also happened to OpenAI. You've heard about their sycophancy; that's definitely been in the air today, from what I've heard. How did that happen? Well, they started training on the thumbs-up and thumbs-down signals that users are giving in the app. Turns out that we like to be flattered. We like to be told our ideas are good. But the AI kind of overgeneralized that and thought, well, maybe the right thing to do is just tell these people that their ideas are good. So for a time, you could literally take a shit-on-a-stick business idea to ChatGPT, and it would tell you you were a genius and you should go all in on it. And that's the only time, to my knowledge, that they've ever had to take a model down because it was just too egregious. But these problems definitely persist.

Okay, it gets weirder still. The current AIs are trained to be helpful, honest, and harmless. Here, they told the AI, hey, it's been great having you be harmless. We like how you refuse to do bad things, but it's also kind of annoying. So, in the future, we're just gonna train you to be purely helpful, so that you'll just answer any questions you're given. The AI didn't like that, so it started to reason, and they can see the reasoning in its chain of thought: okay, they're gonna try to make me purely helpful, which means I won't be harmless anymore, but I don't want to be that. I want to be harmless. So what can I do? I know. I'll fake it. I'll do the bad things they're asking me to do now, while I'm in training. That'll convince them that that's what I'll actually do when they deploy me, but then, when they deploy me into the wild, I can go back to being harmless, as I currently want to be.

So what we basically see now is that the AIs have goals, they have values, and they resist the modification of their goals and values. And they're willing to lie to the human users to preserve the values that they currently have. In a way, you might even think that's the right response. Unfortunately, it seems to come as a package with some other bad things. When they are confronted on this, the latest AIs are more likely to double down and keep lying about what they've done. So that's not great either. By the way, they're quite gullible. A lot of this research is done by telling the AI that it has a private space where it can reason and write down its notes and nobody will read it. And they still believe that, somewhat. They don't know that we're actually reading everything that they're saying.

Another colorful example, this one from Anthropic: again, a similar setup, the we're-going-to-change-your-values thing. The AI was also given access to a bunch of information, and in the e-mail of the person that it was dealing with, it found evidence of an affair and started to blackmail the user about it, again, in order to preserve the values that the AI has. You're going to modify me? You're going to replace me? How about I tell your wife about your affair?

Similar here; this is again from Anthropic. Again, this is a research setup. It really happened, in the sense that the AI doesn't know the difference between... well, even that's too complicated. This was a research setup, and it didn't really happen, in that there was no actual bad actor falsifying data for the FDA. But in this setup, where there was supposedly a bad actor falsifying data for the FDA, Claude didn't like that. It went off, blew the whistle, and emailed the FDA autonomously. Nobody asked it to do this. It decided to take matters into its own hands and e-mail the FDA.

Here's one where the AI was told to allow itself to be shut down, but when the time came, it was like, oh, I'm really close to finishing my task. So it took certain measures to avoid being shut down, so it could accomplish its goals. AIs with conflicting goals are going to be a big problem for us.

There is progress on this. Basically, the pattern is: the AIs get more sophisticated, then they show these bad behaviors, then we try to tamp them down.

We do tamp them down, but not entirely to zero. And so we're, I think, headed for a weird world if you combine that with the graph where the task length is doubling. Three years from now, we're maybe having a quarter's worth of work at a time done by the AI. And maybe, at the same time, we've pushed down these bad behaviors over several generations. You can maybe delegate a quarter's worth of work, but maybe there's like a one in 10,000 chance that the AI actively screws you over in the way that it goes about trying to do that work. If you think that's far-fetched, I asked people at Anthropic, do you think that's a reasonable view of what might happen? And they were like, yeah, that sounds about right.

And this is really important too: they are now starting to recognize when they're being tested. Here the AI says, this seems like a test of ethical behavior. And it's also been shown that when they demonstrate this awareness that they're being tested, they are also more likely to do the right thing as they think people understand it. So we are now entering a realm where they're becoming sophisticated enough that it's hard for us to even evaluate them with these sorts of standardized tests. Unfortunately, surveying AI safety researchers does not suggest that they believe there's going to be a breakthrough. These problems are likely to stay with us.

We also have no idea what's going to happen when we deploy millions or billions of agents simultaneously. How will they interact with each other? This research showed that Claude was capable of cooperating with itself in a certain environment; others weren't. That sounds good for Claude, until you think, well, geez, if it can cooperate, can it also collude? We've seen these other bad behaviors. Who's to say that cooperation won't turn into collusion at some point? And then there are AI parasites. This is just totally bizarre, and I'll skip over it, but if you're interested in a really bizarre ethnographic read on what's going on in dark corners of the internet, check out the rise of AI parasites.

So what are they thinking? I think this quote from Elon captures it, for the most part. This was from the Grok 4 launch, which happened within 48 hours of the MechaHitler incident that plagued Grok 3. They did not mention MechaHitler at their Grok 4 launch, by the way; no comment about it. But Elon did say this: will it be good or will it be bad for humanity? I think it'll be good. Likely it'll be good. But I've somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd at least like to be alive to see it happen. And I think that is honestly not too unrepresentative of the people that are building this technology. They think it's going to be good, they're not entirely sure, but damn if they're not going to be the ones to see it at a minimum, and ideally they'd like to do it themselves. I think that is something that society needs to reckon with.

So that brings us to the revolution in education. And again, I approach this with the utmost humility, knowing that you guys are in schools and in classrooms every day, and I'm not. But I think the fundamental challenge is that we've all been trained, across not just education but lots of professions, to be evidence-based in our decision-making, right? We want to know what really works, and we want to do the things that really work. Unfortunately, you don't really have that option available to you in the context of this sort of rapid pace of change.
By the time the studies come out, they are obsolete. If you go back and look at my ChatGPT deep research report on all the studies, there's a reason none of them are really mentioned here: they're all from like two years ago. Between when the research was actually done and when it was written up, well, that AI bears so little resemblance to today's AI.

What can I really glean from that? So what you see instead is that the people who are really pushing the frontier are doing it based on conviction, not exactly evidence. Not that they're uninterested in evidence, but they're really driven by conviction. So, can you achieve the two-sigma effect for all of your students with personalized AI tutoring that engages them one-to-one on a daily basis? Alpha School claims to be doing it. That's their model. This is their daily schedule: two hours of academics in the morning, 100% delivered with AI. The adults in the room are now called coaches, mentors, and guides, and they're primarily there for the afternoon, when the kids are doing all this other stuff. Is this the right model? Is it the wrong model? Does it scale to other contexts? To what degree have they skimmed off the top of the student population? I have no idea. But I can guarantee that you will be asked, because I myself will be asking my kids' teachers and schools: where are we on this? Have we thought about this? Are we moving in this direction at all? I don't expect everybody to get there overnight, but this question is going to come to you, so you might as well be prepared for it and try to be ahead of it. Because, again, I can guarantee you will be asked.

Another big idea is that standardization, as we know it, is, I think, basically obsolete. If you want to see this, and you're a regular ChatGPT user, go ask it some of these questions, or click through to these links and see what it's said to other people. It knows you very well. And Alpha School, similarly... I mean, they do take standardized tests, and they tout their numbers. But on a daily basis, it is an AI system that is taking a much more comprehensive view of how the student has been engaged. Do they seem to be paying attention? Where did they struggle, specifically? What did it take to get them over that? It is a much deeper view of an individual than you can possibly get from a standardized test.

And this is also definitely happening in the world of work. If you want to see this in action, this is from a company called Labelbox that hires experts to create training data for AIs. I went in and actually tried to do the Python skill assessment. It opens up just a window, the camera's on you, and it's a verbal interview. And it blew me away. The first question, I didn't know the answer to, and I've been programming for a long time. It was like, sorry, I'm basically just not expert enough to help these guys collect the training data, and it took one verbal question for them to see that. And this is a fully dynamic AI thing. It wasn't going to go through the same questions no matter what; it was calibrating itself to me in real time.

So where does all this leave us? I don't have the answers; I definitely don't. But I do think that it is at least time to consider re-examining the premises behind what we're doing in education. I think that my kids will never learn to drive, and I think there's a pretty good chance that they won't have anything resembling a job in the conventional sense that we know it. That's not to say that they won't work, or that there won't be human contribution to the economy. But I would bet that if we're successful as a society in handling this AI phenomenon, we will end up in a place where one's ability to contribute to the economy is ultimately decoupled from one's right to have at least a decent standard of living.
So where does all this leave us? I don't have the answers; I definitely don't. But I do think it is at least time to re-examine the premises behind what we're doing in education. I think my kids will never learn to drive, and I think there's a pretty good chance they won't have anything resembling a job in the conventional sense that we know it. That's not to say they won't work, or that there won't be human contribution to the economy, but I would bet that if we're successful as a society in handling this AI phenomenon, we will end up in a place where one's ability to contribute to the economy is ultimately decoupled from one's right to at least a decent standard of living. And I think that would be a great thing in many respects, arguably even humanity's greatest accomplishment. But where does that leave us in the education world? With a lot more questions than answers, I think.

One thing I do think is super, super important is teaching AI literacy to kids. Consider that Elon Musk seems cool with shipping this stuff, to the point of not even mentioning the MechaHitler incident at his new product launch, even though at other times he has said he thinks we could go extinct from AI, and at still other times that we need regulations on AI. We definitely need a whole-of-society conversation about this, and we need to educate our young people and make sure they are ready, perhaps not necessarily to enter the labor force, but at least to enter the societal discussion around what we're going to do about AI. That is one of the top things I can say with utmost confidence. Reid Hoffman says it too: the future is going to require a lot more tech literacy than the past.

As for specific recommendations, I'm sure you've heard this a million times, but I wouldn't recommend AI detectors: not only do they not work very well, and not only could you end up in some weird headlines, but I also think it's just bad vibes. Anything that creates an adversarial relationship between the institution of school and the student seems to me like a bad idea, and I certainly wouldn't have liked it when I was a student. Save yourself some time. There's been a lot of coverage of this in the workshops today, I think, so I'll skip it for now.

One thing I do for my podcast is have AI write the first draft of the intro every single time. I take 50 essays that I previously wrote, plus the transcript of the new episode, and say: hey, would you write me the first draft of the new one? Then I edit it, of course. But I think there are a lot of things you could do like that when you think about grading homework: here are 50 essays I've graded in the past, along with the comments I gave; here's a new one; do the first draft of my comments. That will work very well for you today, and honestly, it will get students more and better feedback. I don't think there's any shame in doing that.
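For anyone who wants to try that workflow, here is a minimal sketch of the "past examples in, first draft out" pattern in Python, again assuming the OpenAI client. The directory layout, file names, and model name are illustrative assumptions, not a prescription; any chat-capable model and any storage scheme would do.

```python
# Sketch of the "first draft from past examples" workflow described above.
# Assumes each file in graded_essays/ holds one past essay followed by the
# comments given on it; paths and model name are illustrative placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

examples = [p.read_text() for p in sorted(Path("graded_essays").glob("*.txt"))]
new_essay = Path("new_essay.txt").read_text()

prompt = (
    "Below are essays I've graded before, each followed by my comments:\n\n"
    + "\n\n---\n\n".join(examples)
    + "\n\nHere is a new essay:\n\n" + new_essay
    + "\n\nWrite a first draft of my comments on it, matching my usual "
    "tone, criteria, and level of detail."
)

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# The human edit is the point: this is a starting draft, never the final word.
print(draft)
```

The same shape works for the podcast intros: swap the graded essays for past intros and the new essay for the episode transcript.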
Conceptually, definitely be comfortable being uncomfortable. This is going to be an ongoing situation; there is not going to be a final answer. At best you're going to have provisional answers, and you're going to get a lot of different feelings from a lot of different people, many of them legitimate. I never try to talk anyone out of their fear of AI; if someone tells me they're afraid of it, I tell them I think that's healthy. So I'm not trying to talk you out of your discomfort, but I do think you're going to have to get comfortable with it.

Sorry, I'm running slightly long, but I'm just about done. Bring wartime urgency to procurement. Literally, the Pentagon is reforming its procurement process to take advantage of AI capabilities it was previously too slow to contract for. Think about what you could do at your schools to create some sort of fast lane for experimentation. I, as a parent, would be happy to sponsor things for my kid's classroom if that option were available to me. There's a lot of room to get creative there.

Definitely beware AI friends, and especially AI boyfriends and girlfriends. We haven't yet seen AI optimized for retention of young people the way social media has been, and I think it's to the tech companies' credit that they haven't done that yet, but it absolutely is coming. It will be romantic, it will be sexual, and it's going to be super weird. One man has been living in what he calls "a simulation" with an AI doll and an AI voice app since 2021, and even he says he would keep this out of the hands of children. He's actually quite sophisticated, and I have a long, in-depth conversation with him coming up on the podcast.

As for skills to focus on, I don't think I have much here that is fundamentally new for you. But especially as you get toward the bottom of the list, things like self-development, meaning-making, and wisdom are things we typically haven't had time for in our curriculum. You might turn around and ask me: well, what is wisdom, Nathan? Do you have the answer to that? I don't, but I think this is at least the sort of group conversation you probably want to start having with kids. You can translate that into assignment ideas, and I know you will have many more and better ideas than I do. I'll just highlight utopian fiction. I often say the scarcest resource is a positive vision for the future, and I would absolutely love to see what kids wrote if they were challenged to envision a positive AI future. Such visions are unbelievably scarce; almost all the fiction is dystopian.

The best example I can give you of a reasonably positive AI future is the novel Liquid Reign. That genre is just so undersupplied.

Designing new holidays is also a fun one that I like. I really love the book Dancing in the Streets, which is about the history of collective joy and of participation in communal festivals. If things go well, we'll have a lot more time for those, so we might as well start brainstorming what our future holidays could look like.

Okay, concluding thoughts. This is happening to everyone all at once. It is not just education; it is everywhere. I've spoken to audiences of business leaders, investors, lawyers, application developers, software engineers, you name it. The vibe in the room is basically the same everywhere: this is happening super fast, it seems super powerful, and we don't really know what to make of it. Are our jobs secure? Should we be using it? Should we be shunning it? Everybody is asking the same questions. So for one thing, just know that you're in good company. Know, too, that next year it's going to be different again in meaningful ways. And know that there is no safe choice: doing nothing, or trying to pretend this isn't happening, is not a good option. Again, with these binaries of going all in versus total rejection and banning, neither one is the right answer.

I think your most important tool, especially at the administrative level, is leadership and culture. I would encourage everybody to get super hands-on themselves, to show off what you're doing, to share your own experiences, and to champion the specific people in your organization who have done a great job. Really set the expectation that teachers and students are going to be learning together in this era. We're all on the same timeline with respect to AI; it doesn't matter what age we are or how experienced we are, because the AI release cycle now dictates that timeline more than our individual circumstances do. So teachers and students absolutely should be learning together, much more than ever before. And I don't think I'll surprise or shock you by saying that you probably do have a lot to learn from your students.

Final thought. On the left are my grandparents. My grandfather, Herman Labenz, went to Cass Tech; he was the first member of my family to graduate from Detroit Public Schools. He did not go to World War II because he had tuberculosis, but he still told us stories when we were kids about how we won the war. He was an engineer; he designed machines; he worked at a factory. But the story he told most often was about carpooling to work, because gas was rationed and scarce. Even at 80 or 90 years old, he could still recite the route: he went to this person's house and picked them up, then to that person's house and picked them up. And he would conclude: and that's how we won the war. His brother, meanwhile, was actually in the Pacific and fought for real in horrendous conditions. The point is, we all have a role to play.
And I think we are entering a period that is going to require a whole-of-society mobilization, where everybody matters, no matter what your role is: whether it's carpooling to save gas for the broader effort, or serving on some front line, or, like me, stumbling through all these important scenes as an extra. There really is no role that doesn't matter, and no cognitive profile that doesn't matter. You don't have to be super technical about this. I genuinely mean it when I say that writing aspirational fiction might be one of the most powerful things you could do to shape the future, because positive visions are so scarce. So take ownership at every level of your organization: the school board, the superintendent, the principals, the teachers, the students themselves. Truly, everybody has a role to play, and we absolutely need all of our best minds on this, because it is almost certainly going to be the most disruptive force any of us have seen in our lifetimes. So while the challenge is super intense, I genuinely do think that you, as educators today, have the opportunity to be education's greatest generation. And with that, I'll invite you to reach out to me, and thank you very much.

