It's Crunch Time: Ajeya Cotra on RSI & AI-Powered AI Safety Work, from the 80,000 Hours Podcast

Hello, and welcome back to the Cognitive Revolution!

Today's episode is a crosspost from the 80,000 Hours podcast, hosted by Rob Wiblin, featuring a conversation with Ajeya Cotra, who previously led technical AI safety grantmaking at Open Philanthropy (now Coefficient Giving) and currently works on Risk Assessment at METR.

For years, AI insiders have recognized Ajeya as one of the most rigorous thinkers about the AI future, and she recently validated that judgment by coming in at #3 out of more than 400 participants in AI Digest's 2025 AI Forecasting Survey.

For comparison, I was proud to land in the top 5% at #23.

In this conversation, Ajeya takes Rob through her expectations for the next few years as AI crosses critical thresholds, recursive self-improvement intensifies, and we enter what she describes as "crunch time" – a potentially short window in which AI is powerful enough to dramatically accelerate AI R&D, but not yet totally beyond human control.

As a preview, I'll warn you that even the accelerationists may suffer some future shock from this conversation: Ajeya thinks it's quite plausible that we'll find no insurmountable bottlenecks to widespread and compounding automation, and that if so, the world of 2050 could look as different to us as our world would look to hunter-gatherers of 10,000 years ago.

So, what's the plan to make sure such a mind-boggling transformation goes well for humans?

Ajeya advocates for transparency measures and early warning systems designed to make sure that superintelligence doesn't happen in secret, but aside from that … she reports that all frontier developers are gradually converging on a strategy of using each generation of AIs to attempt to align, understand, and control their successors.  

Now, as regular listeners know, I signed the Future of Life Institute's October 2025 petition calling for a ban on superintelligence, not because I think this approach is forever destined to fail, but simply because I worry that we don't yet understand AIs well enough to bet on a good outcome from an intelligence explosion powered by recursive self-improvement.

And yet, I do agree with Ajeya's advice: almost regardless of the kind of work you're doing, you should be adopting AI as aggressively as possible, both to maintain an accurate understanding of the situation at any given moment, and increasingly because you won't be able to keep up without it. It's a mad, mad world we'll soon be living in, but I would go so far as to say that even Pause AI campaigners ought to be using AI intensively.

If all that weren't enough for you to process, you should also know that the situation has recently accelerated yet again.  

On March 5, about two weeks after this episode was originally published, Ajeya posted an article on her Substack, Planned Obsolescence, called "I underestimated AI capabilities (again)", in which she reports that the predictions she made in January 2026, which were the backdrop for this conversation, were already being met in just the first couple of months of the year.

And more recently, we've of course learned of Anthropic's new Mythos model, which shows major gains on many benchmarks (even though Anthropic has never emphasized benchmark scores as much as other model developers) and has reportedly found zero-day exploits in every major operating system and web browser, among many other software projects.

The bottom line is that crunch time is arguably here now, so if you've been watching and waiting for AI to get serious before deciding what to do about it, I would suggest getting off the sidelines sooner rather than later.  If you need help figuring out what to do, you might consider applying for free 1:1 career advising from 80,000 Hours. 

As always, I want to thank Rob and the 80,000 Hours team for allowing me to cross-post this episode.  They've been delivering incredible alpha for years, and the nearer the singularity becomes, the more prescient they look.  

With that, I hope you enjoy this essential conversation about AI timelines and crunch time strategy, with Ajeya Cotra and host Rob Wiblin, from the 80,000 Hours podcast.

Watch now!

Thank you for being part of The Cognitive Revolution,
Nathan Labenz
