Approaching the AI Event Horizon? Part 2, w/ Abhi Mahajan, Helen Toner, Jeremie Harris, @8teAPi
Abhi Mahajan discusses AI's emerging role in biology and cancer treatment modeling. Helen Toner and Jeremie Harris examine automated AI research, superhuman systems, and US–China coordination challenges for maintaining global AI oversight.
Watch Episode Here
Listen to Episode Here
Show Notes
Abhi Mahajan (@owlposting) explains how AI is reshaping biology and medicine, including foundation models to predict cancer treatment response and why he’s both skeptical and optimistic about current results. Helen Toner unpacks CSET’s “When AI Builds AI” report and why automated AI R&D is a major source of strategic surprise. Jeremie Harris then explores our lack of control over superhuman AI systems, fragile US–China coordination, and how to maintain situational awareness in a rapidly shifting landscape.
Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions: https://recipes.granola.ai/r/4c1a6b10-5ac5-4920-884c-4fd606aa4f53
LINKS:
Sponsors:
GovAI:
GovAI was founded ten years ago on the belief that AI would end up transforming our world. Ten years later, the organization is at the forefront of trying to help decision-makers in government and industry navigate the transition to advanced AI. GovAI is now hiring Research Scholars (one-year positions for those transitioning into AI policy) and Research Fellows (longer-term roles for experienced researchers). Both roles offer significant freedom to pursue policy research, advise decision-makers, or launch new initiatives. Applications close 15 February 2026. Apply at: https://www.governance.ai/opportunities
Blitzy:
Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com
Tasklet:
Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Serval:
Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
CHAPTERS:
(00:00) About the Episode
(03:54) Introducing Abhi and pipeline
(06:40) Biology's messy ground truth
(13:53) Noetik's tumor foundation model (Part 1)
(13:59) Sponsors: GovAI | Blitzy
(17:05) Noetik's tumor foundation model (Part 2)
(24:42) Calibrating AI biology impact
(30:53) China's biotech rise (Part 1)
(34:23) Sponsors: Tasklet | Serval
(37:12) China's biotech rise (Part 2)
(38:28) Reading biology ML critically
(46:00) Automated AI R&D workshop
(52:29) Software-only singularity debates
(01:03:34) Labs, policy, and oversight
(01:18:50) Infrastructure and Taiwan risk
(01:26:53) Export controls and DeepSeek
(01:30:33) Labs arms race dynamics
(01:47:26) Security and blind spots
(01:59:15) AI productivity and markets
(02:09:44) Closing reflections and outlook
(02:24:50) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Transcript
This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.
Introduction
Hello, and welcome back to the Cognitive Revolution.
Coming up, you'll hear Part 2 of a marathon live show that I co-hosted with my friend Prakash, also known as @8teAPi on Twitter, in which we explore AI for Science, Recursive Self-Improvement, and Geopolitical Competition.
I love doing deep dive episodes, but I can only cover so many topics that way, and so I'm experimenting with higher-intensity live shows as a way to deliver what I hope is the same high-quality analysis in a denser format.
In the first half, which hit the feed yesterday, we talked to:
- Professor James Zou of Stanford about his work on AI for Science;
- Sam Hammond about how well the Trump administration is managing international AI competition;
- and Shoshannah Tekofsky about AI Agent behavior in the wild.
In this second half, we talk to:
- Abhi Mahajan, also known as @owlposting, about AI for Biology and Medicine, including the foundation models he's building at Noetik AI to better predict which patients will respond to which cancer treatments, and why, though he's skeptical of many AI for biology results published to date, he still expects current trends to continue to the point where AI is ultimately transformative for the field;
- Then we talk to Helen Toner about a report that CSET just put out, called "When AI Builds AI", which summarizes conversations from a closed-door workshop in which participants tried but failed to establish any consensus expectation about the impact of automated AI R&D, ultimately leading to the conclusion that automated AI R&D is a major source of potential strategic surprise;
- And then finally we have Jeremie Harris, talking about the very challenging position we find ourselves in, where we lack both the technical means to reliably control superhuman AI systems, and the trust and coordination mechanisms needed for the US and China to address this problem collaboratively – plus a bit of discussion of how he maintains situational awareness, and how our respective personal productivity stacks are evolving.
As you'll hear, the challenges of making sense of massive disagreement among leading experts, and of simply keeping up to date with AI developments broadly, come up repeatedly in these conversations, and to be honest, nobody has great solutions. One that I can recommend, though, is using LLMs to help identify blind spots, and for that purpose I'm really enjoying the blind spot finder Recipe that I recently created in Granola. Granola works at the operating system level, so it can capture all the audio into and out of your computer, including, if you wish, the contents of this video. And its Recipe feature can work across sessions to identify trends, opportunities, or blind spots that only become apparent with that zoomed-out view. Obviously this is a tool that grows in value over time, but if you want to try it, I suggest downloading the app, starting a session while you play this episode, and then asking it to identify blind spots based on this conversation. What's so cool about this feature, for active Granola users, is that the blind spots it identifies for you will be different from the ones it identifies for me.
As I said last time, this was fun for me, but especially because it's a new format, I would love your feedback. Do you feel you got as much value from this more time-efficient approach as you usually do from our full deep-dive episodes? Or did we miss the mark in some way? Please let me know in the comments, or, if you prefer, by reaching out privately via our website, cognitiverevolution.ai, or by DM'ing me on the social media platform of your choice.
With that, I hope you enjoy The Cognitive Revolution, LIVE, from February 11, co-hosted with @8teAPi.