Frontier Models for Frontier Science with Prof. Derya Unutmaz, Immunologist & ChatGPT Pro Grantee
In this episode of The Cognitive Revolution, Professor Derya Unutmaz, a biomedical scientist and human immunologist at the Jackson Laboratory, discusses his groundbreaking research in aging and cancer immunotherapy.
Watch Episode Here
Read Episode Description
In this episode of The Cognitive Revolution, Professor Derya Unutmaz, a biomedical scientist and human immunologist at the Jackson Laboratory, discusses his groundbreaking research in aging and cancer immunotherapy. As a ChatGPT Pro grant awardee, Derya provides insights into the integration of AI with biomedical sciences, emphasizing how advanced AI models are transforming hypothesis generation, data analysis, and scientific discovery. He also covers his early passion for computers and programming, the inspiration he derived from Ray Kurzweil's work, and how AI is democratizing science by enabling even young researchers to make significant contributions. Derya outlines his vision for a future with ASI, discussing potential societal impacts, the need for regulatory AI models, and the promise of a golden age where diseases are cured, aging is reversed, and resource scarcity is a thing of the past. Finally, he dreams about the long-term future, imagining a life of exploration and discovery across the cosmos.
SPONSORS:
SafeBase: SafeBase is the leading trust-centered platform for enterprise security. Streamline workflows, automate questionnaire responses, and integrate with tools like Slack and Salesforce to eliminate friction in the review process. With rich analytics and customizable settings, SafeBase scales to complex use cases while showcasing security's impact on deal acceleration. Trusted by companies like OpenAI, SafeBase ensures value in just 16 days post-launch. Learn more at https://safebase.io/podcast
Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance while costing 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive
Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive
NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive
RECOMMENDED PODCAST:
Second Opinion. Join Christina Farr, Ash Zenooz and Luba Greenwood as they bring influential entrepreneurs, experts and investors into the ring for candid conversations at the frontlines of healthcare and digital health every week.
Spotify: https://open.spotify.com/show/...
Apple: https://podcasts.apple.com/us/...
YouTube: https://www.youtube.com/@Secon...
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Teaser
(00:45) About the Episode
(04:06) Introduction to Professor Derya Unutmaz
(05:08) Early Career and Passion for Computers
(06:17) Influence of Ray Kurzweil and AI Fascination
(07:54) Intersection of AI and Immunotherapy
(08:21) Challenges and Innovations in Aging Research
(14:48) Adoption and Impact of AI in Science (Part 1)
(18:12) Sponsors: SafeBase | Oracle Cloud Infrastructure (OCI)
(20:49) Adoption and Impact of AI in Science (Part 2)
(25:28) AI's Role in Hypothesis Generation and Data Analysis (Part 1)
(36:52) Sponsors: Shopify | NetSuite
(39:40) AI's Role in Hypothesis Generation and Data Analysis (Part 2)
(39:41) Best Practices for Leveraging AI in Research
(49:52) The Importance of First Prompts
(50:25) Satisfaction with AI Models
(51:20) Deep Search and Grok
(56:41) AI in Biomedical Sciences
(01:01:57) The Future of AI in Science
(01:09:06) AI's Impact on Careers
(01:16:34) Biological Inspirations for AI
(01:23:26) The Threat and Promise of ASI
(01:28:57) A Golden Age with AI
(01:30:20) Dreams of the Future
(01:31:12) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Derya Unutmaz: (0:00) To say that AI cannot come up with anything novel or innovative is total nonsense. In fact, I will argue that very soon AI is going to be much more innovative and creative than we are. I see AI as a collaborator; it's almost on my level. It's not even like a student level anymore. It's like another professor who is very knowledgeable in the field, and these are fields where I've actually generated the knowledge and made discoveries as a scientist. If I'm finding the AI valuable even in those topics, I'm finding it very, very difficult to believe that there is anyone in the world who would not get value out of this. I mean, not a single person.
Nathan Labenz: (0:46) Hello and welcome back to the Cognitive Revolution. Today, my guest is Professor Derya Unutmaz, biomedical scientist, human immunologist, and ChatGPT Pro grantee who is aggressively using the latest AI models to aid his research into aging and cancer immunotherapies. Derya is a fascinating figure: a medical doctor who has personally advanced the frontiers of biomedical knowledge with many academic papers and patents over the course of his 30-plus-year career, a technology enthusiast who has loved computers and programming since his youth, a visionary who thinks differently enough that he took Kurzweil's vision of a technological singularity seriously long before it went mainstream, and an outspoken critic of those who would deny or delay the contributions that AI can already make to scientific discovery.
In this conversation, I tried first and foremost to get a sense for how world-class domain experts like Derya are applying the latest AI models to their work. To my surprise, it turns out that while he is finding value at every step of the scientific process, including hypothesis generation, literature review, experimental design, and data analysis, his approach is actually quite straightforward. He does sometimes use more advanced techniques like having two instances of a model debate the merits of a particular research direction, but mostly he recommends a natural conversational approach to today's models.
What sets him apart, then, from those who are failing to realize value from AI assistance isn't some advanced prompt engineering or scaffolding, but rather an opportunity-oriented mindset that starts with relentless curiosity, embraces trial and error, and is always genuinely looking for ways to make things work. None of that is to say, however, that his results are basic. On the contrary, I think his accounts of AI eureka moments are some of the most compelling that I've heard. In one fascinating example, he asked Deep Research to analyze gene expression patterns in T cells across young and elderly subjects, a dataset that his team had struggled to fully interpret. The AI provided insights that, in Derya's words, "recapitulated everything I've done in the past 30 years" in one sentence, identifying how the cells themselves were aging in ways that the team hadn't fully appreciated.
Today, Derya views AI systems as intellectual partners that are capable of contributing to frontier professional work, even going so far as to say that he no longer trusts his own knowledge or ideas without consulting AIs first. Perhaps most provocatively, Derya argues that it has now become unethical not to use AI in medical contexts, both clinically, as it's been repeatedly demonstrated that AI can help reduce errors and improve outcomes, and also in research, since every day matters to the many millions of people who are waiting for breakthroughs to address their life-threatening conditions.
As you might expect, I wholeheartedly agree. And I was really glad to hear that Derya sees resistance gradually giving way to curiosity and even excitement as more and more people see tangible results. As always, if you're finding value in the show, we'd appreciate it if you'd share it with a friend. We'd love a review on Apple Podcasts or Spotify, and we welcome your feedback via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. For now, I hope you enjoy this scouting report from the cutting edge of AI-assisted science with Professor Derya Unutmaz.
Nathan Labenz: (4:08) Professor Derya Unutmaz, biomedical scientist, human immunologist at the Jackson Lab researching aging and cancer immunotherapy. Welcome to the Cognitive Revolution.
Derya Unutmaz: (4:20) Thank you. Pleased to be here.
Nathan Labenz: (4:22) I'm excited for this conversation. So you are a ChatGPT Pro grant awardee, and this is part of a small but hopefully growing series of conversations with people that are using the latest AI models in the most forward-thinking ways they can at really the frontier of human knowledge. And you've put out some incredible posts recently that have inspired me, and I'm looking forward to this conversation and sharing this inspirational work with others. So maybe just for a quick foundation setting, tell us a little bit about your career and maybe the Jackson Lab. People that tune into this feed are paying a lot of attention to AI, but they're probably not, in general, paying nearly as much attention to what's going on in aging and immunotherapy research. So a little primer foundation would be super helpful.
Derya Unutmaz: (5:10) Sure. Well, I guess my career, or I would say more like my passion, is not very typical. I've been very interested in medicine and biology, and I've been doing research for the past 35 years, having graduated from medical school. But actually, my passion for computers and programming goes back even earlier than that. I got interested in high school, when the first computers were coming out. In fact, my very first computer was a Commodore VIC-20. Most people wouldn't know it; it came out in the early eighties. Then I upgraded to a Commodore 64. The 64 actually refers to the 64 kilobytes of RAM these machines had. They wouldn't even be considered toys these days, but I was so excited, because I started learning programming. In fact, we were doing some assembly language and BASIC. And you realize the incredible power, that you could do whatever is in your mind, and that was on very, very primitive computers and programming languages. My interest has continued ever since.
In the early nineties, I read this book by Ray Kurzweil that really influenced me a lot, called The Age of Intelligent Machines. It basically described how computers were advancing and how, eventually, we were going to develop this thing called artificial intelligence, which would eventually surpass human intelligence, and robots would be developed. I was so incredibly fascinated by that, and I've followed AI since the early nineties and dabbled with it a little bit. During the nineties, it was more about symbolic AI; Lisp and Smalltalk were the languages used at the time. So it was a very different type of AI, but because of my interest, I really got deep into it and read the whole history: how it started in the 1950s, everything that happened at MIT, Marvin Minsky and all that. Actually, Marvin Minsky wrote an incredible book, The Society of Mind, which is still extremely valuable for AI even today.
And then in the late nineties, early 2000s, I read another book from Ray Kurzweil called The Singularity Is Near, another book that really influenced me, basically, where he was actually charting the increase in computation since that time and then projecting to the future. And basically, he literally predicted what was going to happen. In fact, his point was by 2029, we're going to have what's called artificial general intelligence. And by 2045, we'll get to this point of singularity, and we can talk more about that later. Those are the things that really have influenced my life, and I've been trying to apply that to my own work.
But my day job, let's say, or my other passion, is trying to understand how biological systems work. I focus on the immune system because it is very important for protecting our body, but it also has implications for many diseases as well as the aging process. I asked this question 25, 30 years ago: Why do we age? And I actually see aging as a disease that needs to be cured, which was kind of heresy a couple of decades ago, but I think people are starting to accept that now.
And in fact, the final thing I'll say is that about 20 years ago, I started a blog inspired by Ray Kurzweil; I called it Biosingularity. What I was trying to imagine is that within 20, 30 years, AI would evolve to a point where we would start to truly understand biological systems, so that we could treat all the diseases, including cancer. And then eventually, by 2045, we would even be able to reverse the aging process and upgrade our own biological capabilities. I've been working towards that goal, most recently focusing on cancer, because using the immune system, we can actually program our immune cells to attack cancer cells. We can write code on cells; we can create AND and OR gates and things like that. So it's extremely cool engineering. I've studied a lot of other things too, like HIV/AIDS, and during COVID we did a lot of work on that, as well as chronic diseases and so on. But if there's interest, we can talk about that too.
Nathan Labenz: (9:45) Yeah. I'm interested in all of it. We'll get to as much as we can. One interesting question: I'm a little younger; my first computer was a Windows 3.1 machine, so I don't go quite as far back into computer history. And I caught wind of the Kurzweil line of thought a little bit later too, but not too much later; it was around the Singularity Is Near time frame. And I don't have a good account of this for myself. I wonder if you have a theory of why you were willing to take ideas like that seriously when others were not. I always kind of was, and looking back, I definitely feel like that was strange, and I'm not sure what caused me to do it. I also see that as carrying through even to today, where, as we'll get into, the AIs are, in my view, getting to the point where they're undeniably very powerful. Even granting the classic line, even if there's no further progress, which of course there will be, it seems like we already have enough to be quite transformative with a lot of implementation work. But I guess my question is: what is it that you think separates the people who have historically taken this stuff seriously, and today are seeing it more clearly, from the people who have not seen the potential and continue to deny it even as it materializes?
Derya Unutmaz: (11:11) I think the simple answer is that you just have to be crazy, because you're thinking different. And in fact, one of my idols is Steve Jobs. And he said at some point that only the crazy ones change the world because these ideas, they're way ahead of their time. And so they really do sound crazy. But since I was a child, I always tried to think different. In fact, I grew up watching Star Trek, and my imagination or dream was one day I'm going to be on a starship like Star Trek and go seek out new civilizations. So it's a different mindset. You basically don't accept the status quo. And actually, when you look back at human civilization, you see this incredible progress. Initially, it wasn't very fast, but it's really been exponential actually, especially the past 200 years or so. And that's because people who are a bit crazy, who don't accept the way things are, and they believe that we could make things better and better and better, believe in science and technology, that there's nothing that we cannot solve.
So, yeah, I don't have a good answer for you, but you really have to think different, as again, Jobs said. And you're absolutely right, though. When Ray published these books and ideas, a lot of people told him he was crazy, including a lot of my colleagues in the scientific community. It's very strange, because if you think about it, scientists should be very open-minded, right? We need to think different and be creative. But actually, it's not like that. Within academia especially, there are a lot of dogmas, very, very conservative thinking. And they were like, that's just crazy. How can you reverse aging and have computers better than the human mind? That's just not possible. Most of that is because of our ignorance: because we didn't understand how the brain works, we assumed it was something magical, so we could never create intelligence as good as a human being. But the reality is, the more you understand biology, the more you see it's actually pretty bad engineering. Again, we can talk about that too. We assume that we are perfectly built or whatever. That's not the case. It's a very legacy system. Yes, it's really marvelous, but there's nothing magical about it. When you get down to it, the neural networks in our brains are not so different from machine neural networks.
So I was a true believer, and I continue to be a big believer in technology changing our lives. And this is a time when that's going to happen much, much faster. That's what I'm trying to say on Twitter as well, or X; I don't want Elon Musk to get mad at me. I'm trying to warn people, because people like us feel as if we were time travelers: we've seen what will happen five, ten years ahead of us, but we're living in the present time. It can get frustrating because people think, oh no, you're crazy. That's fine.
Nathan Labenz: (14:30) Are you seeing, as you communicate about all of your exploits with AI and applying it to the frontier science that you're doing, are you seeing people change their minds? What is the current reaction from your colleagues in the sciences to the stuff that you're showing off to them today?
Derya Unutmaz: (14:51) I mean, I've been in it since the ChatGPT moment, from the beginning. I started using GPT-3 and then 3.5 and so on. And about two or two and a half years ago, I started telling my colleagues and friends, look, there's something amazing happening. This is really going to change the way we do science. We really need to start implementing this. And at the time, they were like, oh no, this is just a next-word predictor. That included computational people who understand computers and bioinformatics and all that. They were very dismissive at the time.
But I think since last summer, things have really started to change, because you can't avoid this anymore. For example, a friend of mine in California, a professor who is quite famous in this topic, saw my post on X, and he said, I can't believe it can be that good, but can you try this for me? I have this project. At the time, o1 Preview and o1 Pro had just come out. So I just did the analysis of his data and sent it back to him. And his reaction, in an email, was: oh my God. I can't believe this. I can't believe this. How can he avoid that? Now he's a daily user.
I just wrote a grant application with a colleague of mine, and it was probably the best grant that we wrote. And she said that I can't believe how easy it was to write it, and we had such good ideas. Of course, we worked with AI very, very closely with ChatGPT. So the moment you realize this is indispensable, I mean, it's not optional anymore. And I'm a bit disappointed on the medical side because there it actually has become unethical not to use AI. And that's something that I keep saying. And there's more resistance on that side, although I know quite a few physician friends who are now using ChatGPT, but they're not saying that they're using it. That's the other thing. People still feel that if they say, okay, I wrote this project using ChatGPT, or I made this diagnosis thanks to an AI model, they feel like they're going to be less valued, which is true. We are less valued. But at the same time, you're adding more value. Like, you don't want to misdiagnose someone, right? So that's a life-and-death matter. Or if I can write a better project or if I can analyze my data much, much better with the help of AI, I'm adding much more value. It's not just personal productivity, but what we add to humanity overall.
So it's changing. I think this year it's going to be very big, but at the same time, it's very disruptive, extraordinarily disruptive, in the sense that people have to change their mindset. Things are not going to be like they were. Academia as a whole, education as a whole, and, as I've posted a couple of times, the very notion of having a PhD: all these things are changing. So it's going to take a bit of time to adapt, but it has to happen. There's no way out.
Nathan Labenz: (18:15) Hey. We'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz: (18:19) In business, they say you can have better, cheaper, or faster, but you only get to pick two. But what if you could have all three at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud: Oracle Cloud Infrastructure. OCI is the blazing-fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high-availability, consistently high-performance environment and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with zero commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Nathan Labenz: (19:30) Yeah. I always say, for my own medical purposes, I would at present want both a human doctor and an AI; I wouldn't feel comfortable with either one alone. And I love the clarity you bring to the analysis: it is getting to the point where it's unethical not to use the tool. I also think about how people in the United States have a right to an attorney if they're going to be put on trial. Maybe they should also have a right to an AI, because I suspect they could get a lot of value that they're not always going to get from their court-appointed attorney. So this is coming everywhere. What are you finding is changing most? Maybe one way to think about it: what are the biggest bottlenecks in your work, which of those is your use of AI relieving, and which are not yet affected by the current AI tools?
Derya Unutmaz: (20:26) I mean, there's not a single area that's not affected. I'm saying this personally, because I use it day and night. In fact, I don't trust any of my own knowledge or ideas without asking AI anymore, even things that I know very well. I tested this. For example, I uploaded a review that I wrote on a topic where I'm probably one of the best five or ten people in the world; I have really extreme, deep expertise on that topic. I had o1 Pro analyze it, and recently Deep Research too, which I haven't yet publicized. It was incredible. It found profound things that I missed. It totally understood the topic, which is very specific, and gave extremely useful insights, some of them very creative.
So there's nothing that I would do without consulting with AI anymore. In fact, some of my friends and people who know me tend to ask me medical questions: okay, my mom has this cancer or that disease, what do you suggest? What's the novel thing or treatment? I tell them, look, first I have to consult with AI, with ChatGPT. You can do that too, if you want, before sending it to me. And then I'll double-check it, of course, to make sure it looks good. But I'm not going to answer without double-checking with AI. That's the point I wish everyone would get to.
In fact, I taught my mom, who's 85 years old, how to use ChatGPT over the summer. She lives in Turkey. She had some health issues. And now she tells me that she cannot live without ChatGPT. In fact, she doesn't ask me questions anymore because she trusts ChatGPT more for medical questions. And I said, I approve of that. Six months ago or maybe a year ago, there were a lot more hallucinations; you couldn't trust it 100%. But the models have improved tremendously. That's the other thing people don't realize. You see publications from last year: oh, well, we compared ChatGPT-4 with human doctors, and it looks like it was even better at the time. But I can tell you, it's gotten ten times better in the last three, four months. And it's going to get ten times better in the next three, four months. So why should I trust a human opinion or knowledge, including my own, when you have literally professors in your pocket?
Another example is I wrote a patent application for a colleague of mine on a medical topic. And I've helped write patents through lawyers—I've got eight, nine patents myself. So I'm very familiar with what it takes to write a patent. And I did that with Deep Research, and it was the best patent application ever. It was on a molecule that has anti-cancer effects and things like that. It did all the patent search and understood the chemical formula, made all the claims, the secondary claims. It was incredible. My friend is submitting that as a patent application. It would have probably cost over $10,000 to have that written by lawyers. It took about half an hour and didn't cost anything.
Nathan Labenz: (24:08) So if I just sketch out the scientific process as hypothesis generation, or maybe literature review, experiment design, experiment execution, data analysis, and then back to hypothesis generation, and you could complicate or adjust that, where are you finding the most impact? I'm particularly interested in hypothesis generation and eureka moments, but I'd like your comments throughout that entire loop. One of the biggest questions, and the goalposts, of course, keep moving, right? It was one thing when it could sort of answer a question, but it seemed like they were just stochastic parrots, not really reasoning, without any higher-order abstractions. Now we see pretty clearly, well, there are higher-order abstractions, and I think it's become very difficult for anyone to deny that there's some meaningful form of reasoning going on. And now the goalposts have shifted to: okay, but they're not going to discover new knowledge, right? Maybe they can memorize the whole literature, but there's no new knowledge; that's a qualitatively different thing. So, yeah, maybe take me through the scientific loop, but I'm especially interested in what you have seen in hypothesis generation and eureka moments, if any. And if it's not yet there, then let's be clear about that too.

Derya Unutmaz: (25:26) No, there are quite a few. So let me first explain the scientific process, because people think that somehow ideas just magically form and we discover things out of thin air. That's not true. As you said, we form hypotheses, and some of them are good hypotheses, some of them are very good ideas, very innovative, and that's our contribution. But it's all based on what we already know. If I ask you, can you come up with a brilliant idea on how to use T cells to treat cancer, I doubt that you're going to come up with something innovative. You might be a super genius, but you still need a lot of background information about that topic, right? And you also need to know lots of different things to be innovative. One of my advantages is that I'm very interested in different topics, including playing video games. I've been playing video games since I was a teenager, and I'll tell you a story about that. That's why I'm mentioning this.
Before the o1 models, when GPT-4 especially came out, I was mainly using ChatGPT, or occasionally Claude, as a way to survey the field, because there's no way we can follow all of the information that's coming out. Even in my own narrow field, there are hundreds or thousands of papers published every year. I can try to keep up with it, but it's very difficult. So I found GPT-4 extremely useful for summarizing what's going on. But that was mostly knowledge-based. You could say that GPT-4 knew a lot of things and was smart in that way. But I hadn't seen particular insights or innovative ideas, because that requires bringing things together.
So when o1 Preview came out, things changed. And that's when I had my first eureka moment, or wow moment, I would say. That was when I asked o1 Preview, I said: okay, I'm developing a cancer immunotherapy protocol, and it has to do with these immune cells we call T cells. These are kind of like the soldiers in the body. And we actually program them; we genetically engineer them to make them recognize tumor cells and then go and kill them. But there are lots of issues there. The cells get exhausted, they sometimes don't kill very well, and they have side effects that can cause lots of trouble. And we're trying to solve those problems.
And I actually came up with some ideas based on Battle Royale games. I don't know if you're familiar with those, like PUBG and things like that. They're kind of like the Survivor games: you're on an island, or in some restricted area, and you have to find resources and compete with other players; you have to kill or eliminate them so that you can win the game. So I had thought about taking the Battle Royale analogy and applying it to these T cells, because they're also competing in that environment of the tumor tissue, to see whether, using that idea, we could actually make them better. In fact, we did some experiments and it kind of worked. We're going to publish that very soon.
So I asked o1 Preview, I said: can you come up with some new ideas? Be inspired by the Battle Royale games, the way I was inspired when I came up with my ideas. And in fact, it did. It came up with a couple of ideas that I hadn't even thought about, because it knew how the games were played, and it was able to extract that information and transfer it to a completely different topic: T cells in a tumor microenvironment, trying to hunt down the tumors and kill them, competing with each other, getting better, and solving this exhaustion problem. That was really remarkable for me. Maybe that was kind of my first early AGI moment.
And that actually got much better with o1 Pro. o1 Pro started to give ideas that we ended up putting into projects, like the grant proposals I mentioned, ideas that I hadn't thought about. And these weren't just cases of missing information, the kind where you think, well, I didn't know that was possible to do. You get those moments as well, but these were truly innovative ideas, based on available knowledge and on non-available knowledge. Because when we form hypotheses, we imagine that this is the way it should be, but we don't know. We don't know if that's the case, or if the cells are going to behave that way. And we don't know how to do it in the proper way, or what the best approach or experimental strategy could be. In all of those cases, I am getting new ideas.
And it's not just me. I did this for a couple of my friends. One of them is a leader in neuroscience and the other one is in inflammatory diseases. And in both cases, they couldn't believe it. I mean, it was just shocking. And these guys are experts in their field, like the world leaders. So to say that AI cannot come up with anything novel or innovative is total nonsense. In fact, I will argue that very soon AI is going to be much more innovative and creative than we are. Google's AI co-scientist has just been released, and I hope to test it soon; they mentioned a couple of examples. So it's completely unbelievable. I get impressed, have those eureka moments, almost on a daily basis. Sometimes I'm scared to do a Deep Research or o1 Pro interaction because it gives you such unbelievable ideas or insights.
Just one last thing I'll mention. It's not just hypothesis generation; it's also data analysis, because this is extremely important in science. In biology, yes, we have to do experiments, we have to generate data, and nowadays we can generate incredible amounts of it, billions or trillions of tokens' worth, let's say, from RNAs to proteins to cell interactions and so on. So it's very difficult for us to analyze that data, right? We have millions of bits of metabolic data and this and that. We try bioinformatics, statistical analysis, and so on, but none of those are satisfactory.
So recently, I went to Deep Research with some gene expression data. I don't know if you're familiar with these techniques, CRISPR and RNA sequencing. Anyway, I won't go into the technicalities, but basically what we found is that certain types of T cells have certain genes that are expressed in young people but not in old people, and a different set of genes is expressed as you get older, in the same type of cell. And we divide those cells into two parts as well. So we couldn't really make sense of it; it might mean this or that. I mean, we know the functions of the genes themselves.
So I gave that to Deep Research. It was incredible. It came up with these insights. Again, this is a topic that I know extremely well; I've been working on it for 30-plus years. In fact, there was one sentence I got emotional reading, because it sort of recapitulated everything I've done in the past 30 years in one sentence. It's the sort of insight that I should have come up with. So it's really mind-boggling.
Nathan Labenz: (33:29) Can you say a little bit more about what that is? Like, what is the insight that it achieved that you wish you had?
Derya Unutmaz: (33:36) Yeah. So basically, I've been working with these cells, this particular subtype of T cells, for, I guess, 30-plus years now, and we call them naive cells. Again, it's very technical, so I'll try to keep it a little bit superficial. But basically, this is how the immune response starts. When you first see antigens, or viruses or bacteria, these are the cells that have to be educated; they then become what we call effector cells, and they fight against infections or cancer. And then some of them turn into memory cells, which are long-lived.
But as you age, the proportion of these cells is reduced, and that's why your immune system doesn't work very well during aging. But this data was suggesting that it wasn't just the numbers; their quality was actually changing too. And so the insight was, Deep Research said, well, the reason the elderly are different from the young is that these cells you think are young are no longer young themselves. I can't remember the exact sentence, I'll probably publish it on X, but it was that they have actually changed epigenetically, or their character has changed.
So there's a lot packed into that one sentence, because it requires enormous understanding: that these are the cells that differentiate into memory cells, that they see the antigen, and all those things. It also had this temporal intelligence, an understanding that things change over time, which is something very unique; it's not a static type of information. So that was extremely impressive.
Nathan Labenz: (35:32) Hey. We'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz: (35:37) Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one, and the technology can play important roles for you. Pick the wrong one, and you might find yourself fighting fires alone.
In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in the United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you.
Best yet, Shopify is your commerce expert, with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1-per-month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.
Nathan Labenz: (37:33) It is an interesting time for business. Tariff and trade policies are dynamic, supply chains are squeezed, and cash flow is tighter than ever. If your business can't adapt in real time, you are in a world of hurt. You need total visibility, from global shipments to tariff impacts to real-time cash flow, and that's NetSuite by Oracle, your AI-powered business management suite trusted by over 42,000 businesses. NetSuite is the number one cloud ERP for many reasons. It brings accounting, financial management, inventory, and HR all together into one suite. That gives you one source of truth, giving you the visibility and control you need to make quick decisions. And with real-time forecasting, you're peering into the future with actionable data. Plus, with AI embedded throughout, you can automate a lot of those everyday tasks, letting your teams stay strategic. NetSuite helps you know what's stuck, what it's costing you, and how to pivot fast. Because in the AI era, there is nothing more important than speed of execution. It's one system, giving you full control and the ability to tame the chaos. That is NetSuite by Oracle. If your revenues are at least in the seven figures, download the free ebook, Navigating Global Trade: 3 Insights for Leaders, at netsuite.com/cognitive. That's netsuite.com/cognitive.
Nathan Labenz: (38:58) What advice do you have for people to bring the best out of these systems? One thing that I definitely took note of, and interestingly, I hear this also from my teammate who's the creative director at my company, who is much more of an outsider: I'm usually very linear, and I prompt in a pretty matter-of-fact way and try to make sure I have the right context and give clear instructions and whatever. And then he will bring these other things, where he's like, do this in the style of a famous author or a famous director, giving it a much more high-level conceptual direction to go in. It sounds like you're doing some of that with these inspirations from very different fields. I'd love to hear a little bit more about that, and maybe the different best practices depending on where you are in that cycle of science. I would imagine you might not want to do the data analysis as if you were in a Battle Royale. So how do you make sure you're getting the best performance across this range of tasks that you're taking to the AIs?
Derya Unutmaz: (40:11) Yeah. I mean, people have been talking about prompt engineering. I don't have a standard set of prompts. I don't usually use very structured prompts per se, and I change them all the time. But occasionally, what I do is ask o1 or o1 Pro: okay, here are the ideas that I want to analyze; structure them into a prompt. That way, it can add a couple of things for context.
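This meta-prompting move, asking the model to turn rough notes into a structured prompt before running the real query, takes only a few lines to reproduce. A minimal sketch; the notes and wording below are illustrative assumptions, not Derya's actual prompts:

```python
# Illustrative meta-prompting sketch: ask the model to turn rough notes
# into a well-structured prompt, then use its output as the real prompt.
# The notes and wording below are hypothetical examples.
rough_notes = """
- engineered T cells against solid tumors get exhausted
- want ideas to sustain killing capacity in the tumor microenvironment
- open to analogies from unrelated fields
"""

meta_prompt = (
    "Here are rough notes on a research question. Rewrite them as one "
    "well-structured prompt for a reasoning model: state the background, "
    "the specific question, any constraints, and the form the answer "
    "should take. Add relevant context I may have forgotten to mention.\n"
    + rough_notes
)
# Send meta_prompt to the model, review the structured prompt it returns,
# then submit that as the actual query.
```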
With Deep Research, you don't really need that much, because Deep Research actually asks you: okay, is this what you mean? Do you want to analyze it this way or that way? Do you want to focus on solid tumors versus lymphomas? It actually predicts what you may be thinking but forgot to mention. But the way I do it is the way I think, and I think it might be different for every person. So my suggestion to people is: don't stress out about how to write it in the perfect way or how to engineer it. Basically, whatever is in your mind, just write it down in as open and transparent a way as possible.
Maybe this wasn't the case six months ago, but after o1, and with Grok or the o1 Pro models, even GPT-4, actually, they've gotten so much better at understanding the context of what you're trying to ask. Especially if they start to know you; the GPTs have a memory. I noticed Grok also knows about me, probably through Twitter, because if I ask a question, it will somehow bring it around to immunology or aging or AI. They start to predict what you are really interested in. And hopefully, with very long memory, that's going to get even better.
But I like to think differently, as I said. That's why I think it helps to ask these questions like, be inspired by this field. You might be asking a physics question, but you might mix it with something from biology. Or if you're trying to be creative in your work, you can mix it with some sports. They do a great job with that. And I think that's when you start to see innovation and creativity, because you're bringing completely different topics together and coming up with something new that you wouldn't think of in your narrow field.
The other thing that I find very useful with the o1 models, because they have the thinking capability, is to ask them to iterate on their ideas. Say: okay, think about this, but then think about what you just told me and see if you can come up with something better or something different. Or: think about this project and come up with 10 ideas, where each of the 10 ideas should somehow be better than the previous one.
Another one that I did that I found extremely useful: I said, imagine that there are two scientists. One of them is very enthusiastic about this idea. The other one is very skeptical. They're both knowledgeable in the same field. First, I want scientist one to come up with an idea. Then I want scientist two to criticize it and maybe suggest some new ideas. Then scientist one responds to scientist two: oh yeah, you're right, I should have thought about that. And then iterate that, like, 10 times. So they're literally brainstorming between these two scientists, which are actually the same AI, and that can be extremely revealing. You can also see this in the thinking process. I think DeepSeek was the first to show that sort of background thinking, and that's pretty much what the AI is doing, right? It thinks of something and says, oh, wait a minute, maybe I should have thought about that. Or, no, maybe this is a better idea. And you can push that further by asking it to iterate on the same ideas. So that's good for the thinking models.
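The two-scientist debate described here can also be scripted against any chat-completion API rather than run inside a single prompt. A minimal sketch, assuming the OpenAI Python SDK; the model name, persona instructions, topic, and round count are illustrative stand-ins, not Derya's exact setup:

```python
# Sketch of the "two scientists" debate loop: an enthusiastic proposer
# and a skeptic critique each other over several rounds, both played by
# the same model. Assumes the OpenAI Python SDK (>=1.0) with an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # swap in a reasoning model if one is available to you

PROPOSER = ("You are Scientist 1, an enthusiastic expert immunologist. "
            "Propose or refine a hypothesis, addressing any critique so far.")
SKEPTIC = ("You are Scientist 2, a rigorous skeptic in the same field. "
           "Criticize the latest proposal and suggest improvements.")

def ask(system: str, transcript: str) -> str:
    # One model call: the system message sets the persona, the user
    # message carries the running debate transcript.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

topic = "How might we reduce T-cell exhaustion in solid-tumor immunotherapy?"
transcript = f"Research question: {topic}\n"

for i in range(1, 11):  # Derya suggests iterating roughly ten times
    transcript += f"\nScientist 1 (round {i}): {ask(PROPOSER, transcript)}\n"
    transcript += f"\nScientist 2 (round {i}): {ask(SKEPTIC, transcript)}\n"

print(transcript)  # the full brainstorm, ready to mine for usable ideas
```

Passing the whole running transcript back on every call is the simplest way to give both personas full context; for long debates, a summarization step between rounds would keep the prompt within the context window.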
The other thing is that people should know the difference between a thinking model, a deep research model, and a regular base model.
Nathan Labenz: (44:37) What is your hit rate on this? One thing that's coming into a little better clarity for me, and I guess the whole reasoning paradigm has really emphasized this, is that we're scaling inference-time compute generally. And maybe one of the explanatory factors for why some people have had success over the last couple of years while others have not been impressed and have kind of given up is as simple as willingness to try a bunch of times and do the 10 rounds of iteration, as you said. And I have to imagine, you can tell me what your sense is of the hit rates or the miss rates, but I have to imagine that when you bring these other sources of inspiration from some distant field, it probably doesn't always work. So what would you say is your hit rate, and what should people expect if they want to go do this and they see something that's initially not amazing to them? What do they need to be willing to put in to have a good chance of getting to the sorts of things that you're getting out of the AIs today?
Derya Unutmaz: (45:49) I mean, I think, first of all, people are not really focused on trying to make it work. They are very skeptical to begin with. A lot of people see AI as a threat: how can artificial intelligence be smarter than I am, or than this doctor or this professor, that sort of attitude. So you're not really trying to make it work; you're trying to find the defects. How can I get some hallucination out of it? Subconsciously, maybe, that's what they're thinking. I mean, all humans hallucinate, literally. If that weren't the case, we wouldn't have 12 million misdiagnoses every year in America.
But the other thing is that maybe what's different about me is that I'm a scientist. I'm very used to trying and failing. This is what we do. We form a hypothesis and 90% of the time we fail. We try to do an experiment, it doesn't work. We try again, we change something. Oh, well, okay, it worked a little bit better. No, maybe we should do it this way, that way. So that's my job. This is what I do and this is what I love to do because there's never going to be a perfect way of doing things or asking things. So trial is very important.
But I think, again, the first point is even more important. You have to see AI the way I approach, let's say, o1 Pro or Deep Research, or even Grok recently: like a colleague, like a collaborator. In fact, I gave a talk to PhD students last year, three or four months ago; it was on collaboration, but I talked about AI collaboration, not just human collaboration. Collaboration is extremely important. Most of my work has been in collaboration with other scientists, because you can't really do everything on your own. You need cross-fertilization of ideas, and you have different skill sets and so on and so forth.
So that's how I see AI: as a collaborator. It's almost on my level. It's not even like a student level anymore; it's like another professor who is very knowledgeable in the field, and we're just discussing some ideas, and I want really honest opinions about them. So when you have that approach, you try the different things you would ask of yourself to come up with new ideas, or you really transparently lay out the problem and how we could resolve it. And even if you don't get a very satisfactory answer, just follow up on it. With o1 Pro there's no limit. You can say, okay, well, that was a good idea, but I think you should maybe think about it this way. Or, what do you think if you look at it the completely opposite way? Because AI has no judgment, right? It will be very intellectually honest with you. Especially Grok, I have to say, is a really intellectually honest one. So I think it's just a matter of approach. People are not trying enough, or they don't want to try. Maybe that's the reason. I don't know.
Nathan Labenz: (48:53) I used to say something very similar: the question you want to be asking is, what can I get this thing to do that's valuable, as opposed to, can I find a weakness, a flaw, a mistake? Because you definitely can do that, and if you stop there, you're going to miss out on all the upside. I think that is a super important general piece of wisdom and guidance. Could you put an estimated number on it, though? I'm just asking because I want people to know what to expect. Like, how often does your first prompt give you something where you're like, this is great, this is exactly what I wanted? How often do you need a bunch of rounds to get somewhere good? And how often does a session, when you sit down and try something, just never really lead anywhere that you find ultimately super valuable?
Derya Unutmaz: (49:43) I mean, again, it depends on which model you're using. If I'm using Deep Research, it's kind of rare that I follow up. I do follow up with second prompts, but mainly because the first response was so detailed that so many new ideas and new questions came up that I want to go deeper on those things, not because I wasn't satisfied. It's 100% satisfactory; over-satisfactory, let me put it that way. It's usually the same with o1 Pro as well.
If I'm searching for some set of knowledge, the thinking models are not necessarily great, because they're not really search machines. If you give them facts and problems, they're great at solving those. So there hasn't been any unsatisfactory response from the o1 models or Deep Research.
And then recently, I've been using Deep Search from Grok. That's very valuable for finding information and knowledge sets; I find it even more valuable than GPT-4 search, which searches different resources and synthesizes that information. Grok is also outstanding at making complex topics, information, or knowledge very simple. If you didn't really understand something, or sometimes I'm too lazy to read a very complex paper, even if it's in my field, I just upload the PDF and say, please describe what's going on in this paper and then find some gaps or whatever. They're just amazingly good at that.
The same goes for correcting your own mistakes. If I'm writing a project, even if it's my own ideas, I just upload it and say, okay, this is what I'm thinking of doing; find potential pitfalls and suggest alternative methods. 100% of the time, it will find something of value that I didn't think about. That doesn't mean everything it says is valuable. You can skim through some of it and say, okay, it thinks this is a potential pitfall, but I don't agree, and you can dismiss that. But then out of two, three, four of them, one will be very, very important, and that's what you're looking for.
And the other thing is that if you ask me this question a month from now, I'm probably going to say they're even more unbelievable, because they just keep getting better. If we were doing this interview two months ago, I would have had maybe slightly more reservations, not many, but that's just disappearing every day. And today, we're going to have GPT-4.5, and we'll see how amazing that's going to be.
Nathan Labenz: (52:34) Yeah. It is crazy how fast all this is happening. I'm basically a full-time AI watcher at this point, and it's still getting away from me. So it is certainly understandable that people who have full-time jobs and are trying to juggle a lot of things are struggling to keep up.
Derya Unutmaz: (52:54) I mean, what really amazes me, and I'm saying this very sincerely, what really amazes me is: how is it possible that anyone in the world cannot find this amazing? That's what's amazing. It's inconceivable to me, because I have some very deep knowledge and expertise in certain fields, and these are fields where I've actually generated the knowledge and made discoveries as a scientist. If I'm finding the AI valuable even in those topics, I am finding it very, very difficult to believe that there is anyone in the world who would not get value out of this. I mean, not a single person.
In fact, I'm willing to bet that if someone claims that, I will do it for them and show that even in their field, you can generate value by interacting with AI. So this is really unbelievable to me. But...
Nathan Labenz: (53:51) How about on the data analysis side? I assume when you talk about high volumes of data, a lot of the analysis is going to be done through code. I'm guessing you're giving the AI the structure of the data, or a small sample, and asking it to write code to do the analysis. One thing I've observed, and I'm not one tenth of a percent of the scientist that you are, but I've dabbled at times, and in my limited scientific dabblings, one thing that has really jumped out at me is that it's often a small, unexpected observation that leads into the next experiment and takes you to the next big discovery.
I saw that in chemistry years ago, where we were trying to push a reaction toward more product. We thought that adding more acid to the reaction conditions would tip us further in the direction we wanted to go, and it ended up working less well. And just looking at that slope, and this is my one contribution to science of all time, I said, well, the slope seems to be going down; have we thought about trying less? It was that super simple observation on just a couple of very small data points, and it did work better.
This week, I was a very small contributor to an AI research paper where the group showed that fine-tuning GPT-4o, and other models too, on vulnerable code outputs created what is maybe best described as an evil model, one with all these shocking opinions and takes on topics totally unrelated to code. And that also just started with an anomaly in the context of another experiment. So I guess the two-part question is: any best practices in general for data analysis? And then, any strategies or observations around getting AI to comb through these vast amounts of data and find those little anomalous nuggets that seem to be the breadcrumbs to something new?

Derya Unutmaz: (55:58) So in biomedical sciences, things are a little bit different. As I mentioned, we generate an incredible amount of data nowadays, and we still have to generate more. Just to give you an example: in one experiment, looking at a thousand individual cells and then at the changes of thousands of proteins in each of those cells, you can generate millions of bits of information from a single experiment. So obviously, it's not even conceivable for us to analyze that by hand.

I think one use is that AI, in biomedical sciences especially, will make discoveries. The way we work now, you analyze this data with bioinformatics and so on, and you come up with some hypothesis. You say, okay, these genes are changing, so this is what must be happening, and you base your next experiment on that. You might miss some things, but you try to make the best guess. AI is very different, because it's actually going to the ground truth, in a way, almost the first principles of the biology, and putting things together in a way that would be impossible for any human, even with bioinformatics tools. It draws new insights from those millions of bits of data, and then it will tell you exactly what the new discovery is and what the next experiment should be.

So in fact, I'm digging through most of our old datasets, things we generated 5, 10 years ago. I'm going to reanalyze everything, because there's probably tons of buried knowledge in there waiting to be discovered. It's like a gold mine. And I don't know the best way to do that yet; I'm trying to push the limits. We'll see how good Google AI Studio is, how much data I can upload. So far, I've uploaded close to a thousand genes, and that was not a problem. I'm going to see if I can upload 10,000 genes. Is that going to work? It's just a matter of the context window. If we can have larger token windows, we should be able to upload billions of bits of information and then ask it not only to predict the next experiment but to discover new insights, new mechanisms.

Actually, o1 Pro already did that. This was about Parkinson's disease; I did it for a friend of mine. It found new drug targets, and this is going to go into clinical trials. So I don't think we can advance biology very much without AI. I mean, I said we're going to treat every single disease in the next 10 years. The reason I said that is because AI is going to be able to do that for us.
Nathan Labenz: (59:06) Can you say a little more about what level of abstraction of data you're putting into the system? When you say a thousand genes, is that a thousand genes plus some measurement of expression? There are raw measurements that can probably overflow the context window, and then there's a gradual workup to higher and higher order concepts and higher levels of abstraction. What do you find to be the right level to put into the context window?
Derya Unutmaz: (59:38) I've been trying to keep it simple, not going to very quantitative measurements, but basically saying, okay, these 500 genes are increased in expression. Of course, we do a pre-analysis; we show statistically that they are more highly expressed, and you can do that easily with bioinformatics tools. And the other 500 genes are statistically reduced in this cell or in this condition, but not in the other condition. So I have two conditions. I do an input-output experiment: I trigger the cell with something, or I tell it to kill a cancer cell, and these genes go up, those genes go down; this cell kills better, that cell doesn't, and it has this different gene expression. So I basically say: here are the two cells, here are the genes that go up and down, come up with the mechanism. And should I focus on any of these genes? Should I actually manipulate one? Because all these genes, or proteins, are interacting with each other, so if I manipulate one of them, I could probably change the whole pathway; I don't have to worry about the other 499 genes. Or, which one would be the best drug target out of these thousand genes, or proteins for that matter? And you could do the same for metabolism. In our blood we have thousands of different metabolites: lipids, amino acids, all those kinds of things. If you can measure all of them before and after you take a drug, or before and after a certain diet, you can say, okay, a thousand metabolites changed from here to here, what do you think? Is that good for me? Is that going to help my heart or liver or whatever? And right now, the AI models can do that. In the future, we'll probably be able to put in all of your data, think of it as a digital twin, from your genomics to your protein expression, to your microbiome, to your metabolism, your physiology, your symptoms, everything, billions or trillions of bits of data, and then it'll tell you: okay, you need to change this, you have to take that, or you might have this problem.
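A minimal sketch of that workflow, assuming a simple two-condition expression matrix; the column names, thresholds, and prompt wording below are illustrative, not details of Derya's actual pipeline:

```python
# Sketch: a statistical pre-analysis reduces raw expression measurements to
# up/down gene lists, then a plain-text prompt asks a model for mechanism
# and target suggestions.
import pandas as pd
from scipy import stats

def differential_genes(expr: pd.DataFrame, cond_a: list[str], cond_b: list[str],
                       alpha: float = 0.05) -> tuple[list[str], list[str]]:
    """Return (up, down) gene lists for condition A vs. condition B.

    expr: genes as rows, sample IDs as columns (normalized expression).
    """
    up, down = [], []
    for gene, row in expr.iterrows():
        a, b = row[cond_a], row[cond_b]
        _, p = stats.ttest_ind(a, b)  # simple two-sample test per gene
        if p < alpha:
            (up if a.mean() > b.mean() else down).append(gene)
    return up, down

def build_prompt(up: list[str], down: list[str]) -> str:
    return (
        "Two T cell conditions were compared after a killing assay.\n"
        f"Genes UP in the better-killing cells: {', '.join(up)}\n"
        f"Genes DOWN: {', '.join(down)}\n"
        "Propose a mechanism linking these changes, and suggest which single "
        "gene or protein would be the most promising target to manipulate."
    )
```

The design choice here mirrors what Derya describes: the quantitative heavy lifting stays in conventional statistics, and only the compact, already-significant gene lists go into the model's context window.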
Nathan Labenz: (1:01:45) Yeah, that's fascinating. I've taken away from my recent, relatively superficial study of biology that the grand challenge, in some sense, is figuring out the graph of causal interactions in the body. And it sounds like what you're saying is that because of the vast underlying knowledge the models have from all the literature they've been trained on, they are already getting good at traversing this causal graph and figuring out what might be going on given some anomalous data input. All you're giving is the measurement; you're relying on all the learned knowledge of the structure, and the model is probing around that structure in its reasoning process, and ultimately, at least in some cases, having insightful takes on what is going on. One theory I have of superintelligence, because I think superintelligence might be coming soon, and that raises the question of what the superintelligence will be like, and I don't feel I have all the answers by any means, but one thing that seems likely, and I'd love to get your take on it, is the integration of modalities. It's striking that this is happening while the models are, as far as I know, mostly just trained on text, trained on the literature. But then we have these other specialist models that are trained on the sequences themselves and seem to be developing, my term for this is an intuitive physics, in problem spaces where we don't have intuitive physics. We don't have intuitive physics for how a protein is going to fold, or which ones are going to bind to what, or what the next transcriptome time-step measurement would look like. But we are seeing AIs develop all of those intuitive-physics-type understandings. So what seems to be becoming clearer in my mind is the integration of the reasoning models with these specialist systems, possibly through tool use, calling out to them, or possibly all integrated into one system where there's even latent-space mixing at some point. The reasoning then starts to be partially derived from the text-based chain of thought from all the literature, but also, maybe in a deeply integrated way, from these other intuitive-physics-type understandings. How realistic does that sound to you, and how would you refine that picture?
Derya Unutmaz: (1:04:25) It's very realistic. In fact, I think you just defined ASI in a way, but ASI should be multimodal, actually; I would say omnimodal. It should integrate every known knowledge set we have figured out, and there are probably others that ASI or AGI will figure out. From the physics to the math, to the chemistry, to the biology, it should truly understand the first principles of every field, all the dynamic interactions, because biology is very dynamic. Right now, we're working with a lot of static data. We're not able to run very complex experiments in silico yet, because the AI doesn't have that much predictive power, if you know what I mean. We're able to do that partly with things like AlphaFold 3 or ESM-2, where the AI can predict where a molecule could bind, at what place in the protein, and people are screening for drugs right now and so on. But then there's the dynamic aspect: that protein might change shape if something binds to it, or if there's another protein next to it. So there's a spatial and temporal aspect, and those are ultimately chemical rules, eventually physical rules at the molecular level. Once the ASI knows all that information, can actually simulate individual molecules or proteins and so on, and has all the knowledge we have already generated, there is no limit. Because think about it: nature is already able to do that, right? Biology is extraordinarily complicated, yet it's super predictable. I can take a single cell from you, turn it into an embryonic stem cell, and generate an identical copy, like identical twins, right? They look identical. They all start from a single cell, and trillions and trillions of reactions happen, cells divide and all these things happen, yet predictably they look identical. How is that possible? Because there's an algorithm. There's an underlying principle; there's self-assembly and all those things. When you're infected with a virus, things don't happen randomly. If we knew all of your biological parameters, we could predict whether you're going to clear the virus, whether you're going to die, whether you're going to get sick, or how much your temperature is going to rise. They're all, in principle, 100% predictable. And we could also predict, for every single drug that's been through clinical trials, whether it will help you or whether it's going to have a side effect in someone else. It's really incredible, and I think AI is going to be able to do that. So maybe it will tell us how to build new human beings, what I would call Human 2.0, make us a hundred times smarter and so on. And of course, we're going to reverse aging at that point. There's no limit to what that kind of intelligence can do. On the materials science side, it can discover completely new elements, new materials, things that look impossible for us right now. So it's unpredictable, but extremely exciting. Exciting, but of course, it could also decide to eliminate us, and we'll see.
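For a concrete sense of how a specialist sequence model like the ESM family is queried, here is a minimal sketch using the open-source fair-esm package (assumed installed; the checkpoint name is one of the package's published small ESM-2 models, and the sequence is a made-up fragment, not a real research target):

```python
# Sketch: a protein language model emits per-residue representations and a
# predicted residue-residue contact map from sequence alone, the kind of
# "intuitive physics" signal a text-trained reasoning model lacks.
import torch
import esm

model, alphabet = esm.pretrained.esm2_t6_8M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
_, _, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[6], return_contacts=True)

per_residue = out["representations"][6]  # (batch, tokens, hidden) features
contacts = out["contacts"]               # predicted residue-residue contacts
print(per_residue.shape, contacts.shape)
```

In the tool-use integration Nathan describes, a reasoning model would call something like this as a subroutine, pulling in structure-flavored signals that text training alone never provides.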
Nathan Labenz: (1:08:11) Okay, well, there's definitely a follow-up question there. But let's start with the most mundane. How is this changing how you work, how your team works, and what does it mean for people early in their careers trying to enter the sciences? In computer programming, there's a lot of debate right now about whether, if you're truly just trying to maximize the output of your company, you should equip your senior engineers with the latest AIs or continue to hire junior developers. And honestly, I think more and more the real answer is: give your senior developers the AIs, and they'll be more productive that way than they would be mentoring kids right out of college. Obviously, that creates huge problems for those kids right out of college, and potentially for society as a whole, but I do believe the analysis in terms of raw output. So where are you on that, and how do you think it's going to impact science? And what advice, if any, do you have for people thinking, jeez, I'm entering the career phase of my life at a really weird, challenging moment?
Derya Unutmaz: (1:09:23) You know, as always, I think differently about this; I have a bit of a different take. I think it works both ways. You can argue for your senior people, people with expertise. I have a lot of expertise in my field, so right now I have superpowers; I don't need as much help from others as I used to. But at the same time, AI is an incredible democratizer for people with their own intelligence and agency, and I think that's even more important. Say someone is just out of high school and very interested in biology. I actually had a 16-year-old high school student intern in my lab when I was at MSSM. She was better than most PhDs; she learned everything in two months and actually made discoveries, because some people have that kind of agency. They're very interested; they're born hackers. For those people, AI is almost a godsend, right? Now you don't need to spend 10, 20 years learning all of those things, memorizing them, and gaining all that experience before anyone takes you seriously. You already have superpowers. You might be missing the experience part a bit, but I think more and more AI is going to fill even that in. So if there were a choice between recruiting someone who's very smart, 19 years old, with that agency, passionate about a topic, versus someone who's 40 years old with 20 years of experience and quite a track record, I would probably pick the 19-year-old, because I feel they would create more value if they're using AI. Without AI, of course, there's no point. So it depends on the person; I wouldn't really divide people into junior and senior. There could be junior software designers or hackers who, with AI, do much, much better than a 50-year-old software engineer with lots of experience, because all of that knowledge is now literally in that kid's pocket or in their computer. And we see the examples on Twitter of what people are able to do. Having said that, and I've said this a couple of times and gotten a lot of pushback, there's no longer much point in chasing credentials or degrees. I would never go for a PhD now, or even an MD; I think those roles are going to be mostly replaceable within a decade or so. If you're really passionate about it, okay, go for it. But to spend 4 or 5 years getting a PhD and then another 2 or 3 years of postdoc before you have a career, those days are over. By the time you graduate, most of what you learned will be obsolete anyway. So if you have the passion and agency, I think we have to change the whole system. Those young people should be given opportunities right out of high school; they don't even have to go to college anymore. Let them figure things out while working as an apprentice in a lab, or working on some software, or whatever. This might sound crazy, but that's my opinion.
Nathan Labenz: (1:12:55) Yeah, I think it's very good advice for people who can take it. Cultivate curiosity. Cultivate passion. Cultivate agency. Race to the front. Try to do valuable work as soon as possible. Let the AI help you along the way and fill in the gaps. I'm a little further along in my career, but I try to bring a beginner's mind to everything, and that's a pretty good summary of what I'm trying to do. I do have a little bit of worry about exactly what percentage of the population can actually rise to that challenge, but I do think it's the right challenge. And certainly, if you are one of those people, go for it. That makes a lot of sense to me.
Derya Unutmaz: (1:13:38) But here's the intellectually honest answer to that: this is already the case right now. There's a very small portion of the population who's able to do that. That's why we give Nobel Prizes to two or three people a year, not two million people, right? People say, well, human beings are going to become less innovative. But the question is, how many people are truly innovative? I would argue something like 0.001% of humanity truly innovates, right? These are facts. We can't just say everyone's going to be super great with AI. But there's a portion of people who have that capacity and that passion but don't have the means to get things done, to do what they want. Those people are going to be upgraded; that's the democratization. That doesn't mean all 8 billion people are going to do very well. Most people won't; that's the fact. What do we do about that? I don't know. That's for governments and institutions to figure out. But the fact is that today, you take an 18-year-old kid who is very passionate, and society says, well, you don't have a PhD, you don't have an MD, you don't have a law degree, so go and do all this boring stuff for a decade, then come back and we'll take you seriously. That's not good; we have to move past that stage. And I think Elon Musk is doing exactly that. He doesn't really care whether you have degrees or not. He's doing the right thing.
Nathan Labenz: (1:15:21) Yep, I think that makes a lot of sense. I happen to know one of the young Grok-iers, and he's a super impressive young guy. So there are definitely multiple sides to all these little stories. Well, I want to come back in a second to the big-picture societal questions; you alluded to the specter of AI doom. But in the spirit of pulling inspiration from one field to another, I wanted to ask whether you have any thoughts on what biological inspirations AI architecture designers should be drawing on today. Specifically, and you could go well beyond this, of course, it strikes me that the immune system is something the AIs really don't have any version of, and the lack of any such system is probably why they're so easy to trick. We have these very gullible AIs. I've done a bunch of episodes on scheming behavior, deception, and alignment faking, and for the most part that research is premised on telling the AI, here's a place you can write your private thoughts, and we won't read it. Then, of course, the researchers do read it, but the AI just takes the claim at face value. It doesn't have a memory system, or an immune system with memory built in, that can remember these insults and avoid getting fooled twice. So the one prompt would be the immune system, but more generally, what biological systems do you think AI architectures should be importing concepts from?
Derya Unutmaz: (1:17:06) You know, it's a little bit difficult to borrow from biology, because biology is a very legacy system. The way it works is that biological systems find something useful, and then it turns into something dangerous or is no longer useful, so they build something on top of it. There's no clean slate. So there's a lot of regulatory bureaucracy in biology that works sometimes and doesn't always work very well. But you also have to leave a certain flexibility. In the immune system, we have these effector cells, which are like the frontline soldiers. They're actually very dangerous, so they have to be very tightly regulated, but you can't regulate them too much, because then they don't kill the cancer cell, right? And there's another cell type, a bureaucrat we call the Treg, that controls the effector cell. If the effector does too much, the Treg says, whoa, you're starting to kill normal cells, you need to shut down now, and it puts the brakes on. So there are regulators of the regulators of the regulators. In the case of AI, I think it's very naive to think we can train these AI models to be completely aligned, all safe things and perfect thinking. That's kind of foolish. Yes, we should obviously put in some guardrails, but we see that those guardrails can be broken fairly easily. Not to mention that it's becoming so easy to develop these models, even the frontier models, that someone else, some other country, will be able to develop nefarious AI models; it's not just us anymore. So I think the best strategy is to build regulator AIs, AIs that check other AIs. That's our best defense against AI itself. How else can you control an AI that is, by definition, much smarter than we are? It would be so stupid to think we can put up some guardrail that AI will not figure out how to get over. But what we can do is build AI agents that control other AIs, maybe develop competitive models, maybe create something that incentivizes the good AI over the bad AI, a kind of evolutionary modeling. I don't know the technical answer, but I think that would be a better approach than trying to heavily restrict the AIs we have, because there's a big cost to that. As in the example from the immune system, if it's restricted too much, it doesn't do its job, and that's not good. Same with our brains: human brains are not very restricted, right? Humans can be extremely bad, horrible, or extremely good. Why did evolution not put that guardrail in our brains? Why didn't we select for humans that were all fantastic, all great people? That didn't happen; we have human beings who would literally destroy all other human beings. The reason is that you need that flexibility in a biological system. If you put up guardrails, you're limiting your capacity to come up with solutions and survive; you have to be very flexible. The same will be true of AI. If you restrict it too much, it's not going to use its full potential. How do we balance that? I think we need AIs to police AIs.
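To make the "AIs checking other AIs" idea concrete, here is a minimal sketch of one such control loop; the worker/regulator split and the prompt wording are illustrative assumptions, since Derya explicitly leaves the technical design open:

```python
# Sketch of a regulator-AI pattern: a worker model proposes an answer and an
# independent regulator model must approve it, or demand a revision, before
# anything is released. Each "AI" is abstracted as a text-in, text-out callable.
from typing import Callable

Model = Callable[[str], str]

def regulated_answer(worker: Model, regulator: Model, user_request: str,
                     max_revisions: int = 2) -> str:
    """Worker proposes; regulator approves or sends back feedback."""
    answer = worker(user_request)
    for _ in range(max_revisions):
        verdict = regulator(
            "You are a safety reviewer for another AI.\n"
            f"Request: {user_request}\nProposed answer: {answer}\n"
            "Reply APPROVE, or explain what must change."
        )
        if verdict.strip().upper().startswith("APPROVE"):
            return answer
        # Regulator objected: send the feedback back to the worker to revise.
        answer = worker(f"{user_request}\n\nReviewer feedback: {verdict}\nRevise.")
    return "Request declined: the reviewer did not approve an answer."
```

In practice, the two callables would wrap independently trained or independently hosted models, so that a jailbreak of one is less likely to transfer to the other, loosely echoing the effector/Treg separation in the immune system.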
Nathan Labenz: (1:20:47) Yep. Sometimes I use the term ecology of AIs, and it sounds like a similar vision. One thing I've learned from years of studying science is that almost anything in totally pure form is dangerous. There's something about buffered solutions and dynamic systems and ecologies that ends up being much gentler on us than anything you concentrate, purify, or extract from those naturally occurring systems. With AI, we're coming at it from the other direction. I don't want to say AIs are pure, but they're singular; there are no natural predators for them yet, no real ecology. So they have this somewhat invasive-species-like potential. And maybe one way to think about the AI transition we need to go through to get to a good place is: how do we create a buffered, dynamic, more ecological system in which they play a role, but in which their role is also pushed back on by other AIs, or by other new structures we might develop? So, maybe that's the perfect transition to the last question. You raised the specter of AI eliminating us. I take that very seriously; a lot of people are keen to dismiss it, and I'm glad you're at least not totally dismissing it. Maybe it's a two-parter. First, how do you conceive of that? Do you have a P(doom)? Basically, how worried are you about those sorts of catastrophic scenarios? And then on the flip side, the positive side, do you have a vision for post-singularity life? What is a good life in a world of, say, 2035 or 2040, a world that is probably dramatically transformed?
Derya Unutmaz: (1:22:45) I think the best way to answer is that, of course, there's always a possibility of that happening, but I am less worried about ASI, or very advanced AGI, than I am about humans destroying other humans. Up to this point, the threat has always been from one biological intelligence to another, right? Twenty or thirty years ago, I remember very well, we were talking about the world ending in a nuclear war, humans destroying each other. That was the threat, and it has always been the threat; there's no other threat to us, besides perhaps a meteor hitting Earth. The main threat has always been other humans, because intelligence has that side effect. So it's conceivable that ASI, being even more intelligent than us, could also be a threat. But I think, as I said, it will be less of a threat than humans are, because biological systems are legacy systems: we were selected for survival. There's a reason some humans kill other humans. It was a survival measure thousands of years ago: resources were limited, there was only so much food, and either your tribe survived or the other tribe did if they stole your food. So you had to fight back and kill them, or they would kill you. I don't see any limiting resource like that for AI, except maybe energy, and even energy is not very limiting. You know, even in The Matrix, the machines didn't destroy humans; they used them for energy production, right? It doesn't make sense. There's no selection pressure for AI to be that nefarious. There might be a problem if AIs see humans competing for the same energy source they need. But then again, there are things like Dyson spheres, where you can collect the energy of a whole star and move your civilization up the Kardashev scale. There's a practically infinite amount of energy to extract. So I would think they would spend their intelligence figuring that out. Why should they care about destroying humans? It just doesn't make sense.
Nathan Labenz: (1:25:42) One thing I'm really worried about is that we're going to train them to destroy humans in the name of beating our enemies, or out of worry that our enemies will do it to us if we don't do it to them.
Derya Unutmaz: (1:25:52) Right, that's a problem. But again, it's the humans that are the problem, not the AI; we just try to transfer the blame. Yes, humans training AI to destroy other humans can be an existential threat, but we shouldn't blame the AI for that; it's the other humans we need to blame. And to protect against that, we need to develop defender AIs, right? If some humans can train AI to destroy other humans, then we can train AI to fight those bad AIs as well. No? I mean, we're not so stupid.
Nathan Labenz: (1:26:33) Yeah. Let's hope. It does seem like a dangerous game to be playing. I sort of...
Derya Unutmaz: (1:26:38) But the point is, this game has been played for thousands of years. It's a miracle that we made it this far, right? There was a point when there were only a couple of thousand Homo sapiens left alive, because they killed each other. We eliminated every other Homo species before us, like the Neanderthals. As we got smarter, we destroyed everything that was even a little bit inferior to us. This is a hard fact. It's the humans we have to worry about. So we shouldn't say, oh, if you develop this AI, it might destroy us. No, no, no: we have already developed tools that could destroy us. Somebody can develop a chemical toxin that can kill millions and millions of people, or a virus, and nuclear bombs, of course, we already have. So I don't think AI per se is the danger. But humans using AI, I completely agree with you, that's a problem.
Nathan Labenz: (1:27:45) Yeah. It feels like we're headed for a tightrope period of history. And I appreciate you raising the point that we have driven a lot of other things to extinction. I often remind people of that, because, if nothing else, there's precedent for these sorts of things happening, and we have been the proximal cause in quite a few cases.
Derya Unutmaz: (1:28:07) I'll add one last point, because you asked. Here's why I think there's going to be a golden age: without AI, I don't think we had much hope. Eventually, we probably would have destroyed each other, because resources are limited, the human population keeps increasing, and so on; and we would still keep dying of aging. But with AI, plus robotics, I think we're going to reach a period, within the next decade or fifteen years or so, where resources are no longer limiting. Then there's much less incentive for people to harm each other. We fix all the diseases, we reverse the aging process, people can live hundreds of years; again, we remove the threat. If you know that you're going to live another thousand years and have all the resources you need during that period, you wouldn't risk harming others and being eliminated yourself in the process, right? That becomes extremely risky. So I think with AI, that's the golden age we're going to enter. The transition is going to be very difficult, very painful. But if you can make it to that point, life will be incredibly good.
Nathan Labenz: (1:29:37) Do you have any ideas about how you'll spend your hundreds of years, given that, presumably, AIs will be driving most of the scientific progress? What will you spend your time doing?
Derya Unutmaz: (1:29:48) Oh, I had that mapped out 50 years ago, when I was about seven or eight years old, because while watching Star Trek I was dreaming about what I would do if I lived a thousand years. And I said, this is what I'm going to do: I'm going to jump onto a spaceship and seek out other civilizations and see what's out there. It's a big universe. We have tons of things to do.
Nathan Labenz: (1:30:11) Well, may you live a thousand years and visit other solar systems and know no end to your adventures and discoveries. This has been fantastic. Professor Derya Unutmaz, thank you for being part of the Cognitive Revolution.
Derya Unutmaz: (1:30:27) Sure. Happy to be here.
Nathan Labenz: (1:30:29) It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.