Coaching the Creators: Inside the Minds Building Frontier AI with Executive Coach Joe Hudson
Today Joe Hudson, founder of Art of Accomplishment and coach to executives at major AI labs including OpenAI and Anthropic, joins The Cognitive Revolution to discuss the psychological patterns he observes among AI researchers and leaders, exploring what's missing beyond pure intelligence in current AI systems and arguing for supportive rather than punitive approaches toward those building frontier AI technology.
Sign up for Joe's complimentary transformation guide: artofaccomplishment.com
Follow Art of Accomplishment on YouTube for more tools and resources: https://www.youtube.com/@ArtofAccomplishment
Read the full transcript here: https://storage.aipodcast.ing/transcripts/episode/tcr/149adc6c-58d2-4e41-ae98-05473e5b994e/combined_transcript.html
Check out our sponsors: Linear, Oracle Cloud Infrastructure.
Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at: https://notion.com/lp/nathan
- Three-Level Coaching Approach: Joe Hudson coaches AI leaders using a three-tier framework addressing the prefrontal cortex (head), emotional system (heart), and nervous system (gut).
- AI Leadership Demographics: Hudson works with senior management at major AI labs, including those in research, compute, and infrastructure at companies like OpenAI.
- AI as a Transformative Moment: Hudson views AI development as a transition period that can either lead to positive human transformation or deterioration, depending on our approach.
- Support vs. Criticism: Hudson believes supporting AI developers with encouragement rather than fear and criticism will lead to better outcomes.
- Psychological Projection: He notes that critics of AI often treat AI developers exactly how they fear AI will treat humanity—a fascinating psychological projection.
- Reward Over Punishment: Hudson suggests behavior is better shaped through reward than punishment, encouraging recognition of positive AI contributions.
Sponsors:
Linear: Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) About the Episode
(03:00) Joe's Coaching Background
(12:40) AI Research Patterns (Part 1)
(17:46) Sponsor: Linear
(19:15) AI Research Patterns (Part 2)
(28:10) Sponsor: Oracle Cloud Infrastructure
(29:34) AI Research Patterns (Part 2, continued)
(31:54) Racing and Regulation
(50:20) Ethics and Support
(01:04:07) AI Consciousness Questions
(01:21:34) Positive Future Visions
(01:39:12) Supporting AI Developers
(01:43:09) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
YouTube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Nathan Labenz Hello, and welcome back to the Cognitive Revolution. Today, we're doing something a bit different. Normally, we focus on cutting edge AI ideas, research projects, application architectures, usage trends, policy proposals, and visions for the AI enabled future. But today, we're getting a glimpse into the psychology, emotional patterns, and decision making processes of the people who are developing some of the most important and potentially transformative of those ideas. My guest is Joe Hudson, founder of the Art of Accomplishment and coach to executives and research teams at multiple frontier AI developers, including Sam Altman. About Joe, Sam wrote, quote, Joe coaches the research and compute teams at OpenAI. I truly enjoy working with him. One of his superpowers is that he deeply understands emotional clarity and how to get there. This will be one of the most critical skills in a post-AGI world. So what has Joe learned from his interactions with the brilliant, often quite young technologists who work together in an environment that prizes intelligence above all else, under the unique pressure of knowing that their work could either solve humanity's greatest challenges or, in the worst case, cause our extinction? This is, in all honesty, a hard conversation to summarize. But the good news is that Joe reports that he hasn't met anyone in AI who doesn't seriously wrestle with the ramifications of their work. Some, of course, do have blind spots. Some are perhaps more optimistic than the situation warrants, and some might be overly focused on creating AGI first. But Joe says that the question of am I doing something good for humanity weighs heavily on all. He also argues, and this is something that I've come to believe with pretty high confidence as well, that given the presence of web scale data and web scale compute, some form of powerful AI is inevitable, which means that the key question of our time is not whether we can prevent it from ever being developed, but what form it will take and whether that form will be carefully chosen via a proper deliberative process or dictated by inhuman market pressures and race dynamics. With this in mind, and with hunger strikes ongoing outside frontier AI companies' offices around the world, Joe warns the AI safety movement against shaming tactics and instead recommends a more encouraging approach meant to inspire people at frontier AI companies to become the best possible versions of themselves. This, he believes, is the best way to improve the odds that individuals in critical decision making roles have both the psychological strength and the practical wisdom needed to make good choices under extreme pressure on behalf of all humanity.
Nathan Labenz Whether you agree or disagree, if you listen with an open mind, I think this inside look at the human side of AI development provides unique and valuable context and will ultimately deepen your understanding of the AI landscape. This is executive coach, Joe Hudson.
Nathan Labenz Joe Hudson, founder of The Art of Accomplishment. Welcome to the Cognitive Revolution.
Joe Hudson Thanks, man. Good to be here. Good to be with you.
Nathan Labenz I'm excited for this conversation. I think it's going to be a different one from our usual fare, which is very focused on research and application and policy. And this will be a look, hopefully, on a deeper level into the thinking, the mindset, maybe the emotional life of people that are building at the frontier of AI. And I think you have a really unique window into that because I've got multiple referrals and I definitely go do my homework. So I went and did a little reference checking to make sure this is true. And it does check out that you are working as - I guess, maybe you should just introduce yourself. How do you - what do you call yourself? Are you a teacher, guide, guru? What are you and what do you do?
Joe Hudson I mean, I just think of myself as a coach generally. And, you know, I don't have a lot of time to be able to coach one on one, so we do these classes. Both in person, which are generally invite only, and then these online classes. And so I've had the chance to work with people deep and high up in all the major labs in the Bay Area. And I also coach the top management on the research side - well, and compute and infra - at OpenAI. So I get to work with a lot of brilliant, really kind people. So it's really nice.
Nathan Labenz Yeah. So can you give maybe just a little bit more general background on like what your coaching entails? You know, leaving the AI side to come.
Joe Hudson Yeah.
Nathan Labenz Yeah. What is it that you help people do?
Joe Hudson Yeah. So generally, the way I think about coaching is you meet people where they're at and you see where they want to go and you help them get to that place. That's generally - and if you happen to meet somebody who wants to go someplace that you think is unethical, you don't coach them. But I think it's unethical to decide what's best for somebody else and then to coach them to that. And so it's very much about following the people. The way that I do it that's, I think, unique is, you know, having been a venture capitalist for a while, I can very much talk to the technical parts of a business. Whether it's marketing or a CTO or CFO. I know enough about their jobs to be able to really coach in that way. And so oftentimes coaching will start on a very strategic, tactical level. And then what you'll start noticing is that there are things that are happening in their life. There are patterns that are occurring that are stopping them in business. And so then we go to the deeper level of really looking into what's stopping them. What's the psychology behind what's going on, and then how do we change that thought process. And so there'll be a lot of work on emotions and what emotions are being held back, because emotions dictate our decision making, neurologically speaking. And there's also a lot of work on how the voice in the head is talking to somebody. There's also some work on, like, childhood and what happened there. And so you're looking for these patterns that are holding them back, and then if they're interested in unearthing those patterns and changing those patterns, then I have a crap ton of tools to help them do that.
Nathan Labenz What sort of patterns would you say are most common? And how does that vary, if at all, from a society wide level to the AI vertical? I mean, is it mostly the same stuff or is it different?
Joe Hudson Is it pattern - you mean patterns with people in AI, or do you mean patterns with leadership? Because it can be very different.
Nathan Labenz Take them one at a time. How about that?
Joe Hudson Yeah. So typically in leadership, there's a lot of common things, but there's always outliers. There's nothing more than 80% common. But one of the very common ones is self sufficiency. So you'll have people who feel like - who were raised in such a way that they had to do everything on their own. They couldn't really depend on somebody for emotional or financial or some sort of support, so they learned: if I'm not going to do it, nobody's going to do it. Those people often rise to leadership, and then often what happens is they don't empower other people very well because they always are like, I'm going to have to do it. So they'll step in and they'll do it, or they'll micromanage, or they'll go around proving that nobody can actually take care of them or take care of the things. Or maybe it's not all of the above, but one or two of those things will be there. And they'll often feel very alone in their work. And their frustration will often be it's lonely at the top. They'll do that kind of thing. Even with 10,000 people beneath them who are all concerned about their happiness on a daily basis. "What does the leader think of me" is like a constant thought process, and then they feel alone, which is like a ridiculous notion. So that would be a really common one that you see in leadership.
As far as AI goes, I haven't seen one set of common patterns across AI. There's definitely like a lot of, I would call it like Aspie, Asperger's. There's a lot of that level of - it's easier to deal with things than it is to deal with people. I understand things better than I understand people. It's not ubiquitous by any stretch, but there's a lot of that in the research departments. In general, I find them to be very kind but also extremely defined by their work. Meaning that the way that Kim Kardashian or somebody might take it personally if somebody calls them ugly, they're going to take it personally if somebody says that their research isn't good or isn't as smart as it could be or something like that. So the pecking order is really based on what you can produce and how smart you are and how good your ideas are. So there's that self definition that's a pretty significant pattern - accomplishment defines you - which can hold you back often because it tightens your thinking. Oftentimes to have a novel thought you need to open up your thinking and think about things in a completely different way. And if you're - it's like writer's block. If somebody's like, I gotta get good pages out, I gotta get good pages out, you're going to get a lot less good pages out. And so that self definition doesn't really allow some of the more innovative thought processes to come. So that would be just on, like, a research level.
I would say, on a level of management, I've seen - everybody has this deep desire to do good for society. I haven't met anybody who doesn't want that. I'm sure there's people out there who don't but I haven't interacted with them. But what I notice is that, as I talk about it a lot, I had a really cool interaction early in my career and I got to meet one of the titans of radio in his final days. And like he was telling me how radio was going to be this thing that made the world a better place. It was going to, like, it's hard to go back to that idea. But all of a sudden, we can transmit these ideas and education and all of a sudden humanity can come together. It's going to be this amazing thing and you know, it turned into shock jock radio and advertisement. And so I think we've seen that with television. We've seen that with the internet. I was around for the internet where everybody was like this is going to make everything better and it surely has made other things better.
So I think one of the things that I see is that optimism without looking at the history deeply, you know - so that's something that I see. But that's something that you see in a lot of entrepreneurs or leaders doing new and innovative stuff - there's this deep optimism. But that's not always checked in reality. And so that I think is another common thing that you see. I think they're all open minded. That's another pattern that they have generally. They're all innovative people. They all have a strong belief that technology can make the world better. There are certain things that they have in common but they're extremely different people. There's no one brand of people in AI that I've seen so far except for hyper intelligent. I haven't met anybody in AI who's not hyper intelligent.
Nathan Labenz So do they come - I mean, the history thing is super interesting. The research unblocking is right down the fairway of what I would expect people to come to you for. And the history thing sounds like something that I would doubt that they were coming to you for. Like, people coming to you and saying, I want to better contextualize my work?
Joe Hudson There's nobody who doesn't - I haven't met anybody in AI who doesn't go home and think about the ramifications of their work. So it sits on your soul. Like, am I doing something that's good for humanity - I think that's the question. How do I make sure that this is good for humanity? So they are wrestling with that. I don't know anybody in AI who's not wrestling with that. Which gives me a lot of faith and confidence. I haven't met anybody who's not deeply concerned with it. From Sam to the lowest level person in the smallest AI shop that I've interacted with. Everybody's concerned about it.
Nathan Labenz So what do you help them - if I come to you and say, I'm doing this research, maybe I would like to be more productive. You can help me potentially get unblocked. But then I also have this general purpose concern that - I mean, Elon Musk said this in the Grok 4 launch. I thought it was really a startling moment, honestly, where he said, is it good for humanity? Is it bad for humanity? I think it's probably good, but I'm really not sure. And even if it's not, I've kind of made peace with the fact that I want to be around to see it happen. And I was like, wow, that's a statement.
Joe Hudson Yeah.
Nathan Labenz How do you help them get unblocked first of all? And then how do you help them deal with - or is it deal with or is it do a better job? What is the practical upshot of how they can take this worry and somehow be better?
Joe Hudson Yeah. So on the unblocked side, typically the blocking happens - the way I think about the human system is there's the head, heart and gut is one way to think about it. But the other way to think about it is there's the prefrontal cortex which is the human part of the brain. There's the emotional part of the brain which I call the heart which is our decision making process and it is very mammalian. It's what moves us. And then there's the nervous system which I would call very reptilian. It's do we feel safe? Can I feel pleasure? And so if you really want to see change in a human, you have to address it on all three levels. Now maybe you deal with somebody who's done it on two levels so they only need one level. But generally you have to hit it on all three levels.
So if somebody's stuck typically, there's a lot of blocked anger. And so their anger isn't flowing freely. Meaning, I don't mean yelling at people and getting in road rage. That's not what I mean by anger flowing freely. I mean that they have an outlet to move their anger. They're expressing their anger not at anybody, not in a violent way, but just allowing that anger to move on a regular basis. And so on a heart level, emotional level, moving that anger helps them get unblocked really really quickly for most humans in America because that's usually the thing that's most neglected is the emotional part.
On the head part, it's two things. One is stopping believing your thoughts. So can you see how all of your thoughts are untrue, which opens up the mind? It's a way to access wonder. And the other is to really address the negative self talk. A lot of the time it's the negative self talk that keeps somebody blocked. Imagine you have a boss and they're sitting on top of you and they're like, you did that wrong, you should have done that, why aren't you doing research, you need to work harder - talking at you the way that most people's negative self talk talks at them. And apparently - I think it's the Mayo Clinic who said it - it's like 60,000 thoughts a day, most of them repetitive, a lot of them negative. That's not going to create great ideas. And so we'll address that.
We'll address the emotional then the nervous system. It's very much about allowing themselves to feel pleasure because pleasure tells your nervous system that you're safe. So maybe teaching them how to get access to their parasympathetic and sympathetic nervous systems. Maybe it's teaching them how to relax their body so they're not always in stress. But eventually, it's about really allowing themselves to feel the simple pleasure of being alive. And if you do those three things, usually the block stops. So that's a relatively easy thing to do if you have the tools for it.
Hey. We'll continue our interview in a moment after a word from our sponsors.
On the other side of things, it's harder. If you're really going to be confronting the fact that you might be hurting the world or you might be changing the world for the better - and most likely you're going to be doing both, because the evidence is that most technology does both - then the first thing you have to do is contend with the fact that you don't know. And that's, I think, the hardest thing for people to contend with. And usually there's a whole bunch of emotions that you don't want to feel. So there's - if somebody's like a doomsdayer, oh my god, it's going to destroy everything - there's an emotion behind that they don't want to feel. And so they're constantly in their mental machinations and worst case scenarios so that they don't actually have to feel their fear. So they don't actually have to feel the helplessness that's underneath that. And so you want them to feel that emotional reality that they're avoiding so that they can actually see the landscape clearly. You don't see a landscape clearly by pushing your emotions down. Hey, that emotionally repressed person really sees the world clearly - like nobody has ever said that ever. And so really allowing that emotion to move through them is a really important first step to it.
The other thing is to really check in to see if they are aligned with who they want to be in the world in their daily actions. That's a really important thing. The Tibetans have this great phrase and it says, mind is wide as the sky, action is fine as barley flour. In my interpretation, what it means is that you can see the truth in everything. You can see how every point of view has some truth in it. But there's only one action you can take. If you're actually feeling aligned with yourself, if you're in yourself, there's really only one action you can take. Are you taking that action? Or are you obsessed with solving a problem rather than being who you want to be in the solving of the problem? And so that's another level of really getting in tune with it.
And then the other thing that I think is really important is for people to have access to their heart. Meaning - it's a good way to say it because somatically that's how we feel it - but it basically means: are you in your whole body when you're making the decisions about what you want to do, or are you just in your head? So neurologically speaking, if you watch somebody's brain scans when they're thinking to themselves, the prefrontal cortex lights up. But even if they just start talking, other parts of their brain light up. And so it's an amazing thing. One of the tools that we use is just the difference between talking to yourself and talking to yourself out loud - it can make a huge difference in the way that you process that information, because you've got more of your body intact in the processing of the information. And so we get, whatever it is, something like 11 bits of information per second from the brain, but we get like 11,000 bits of information from the body. And so really teaching them how to be in their whole body as they're making decisions really creates a quicker alignment for their system. So that's another really important part of it.
And then just being heard in it, I think, is really really important. A lot of people are wrestling with this stuff, but there's not someone to talk to about it. Or if they're talking to somebody about it, they can talk about it intellectually. They can say, what do you think? Well, I think it's going to do this, and here are the theories why. There's theories of, oh, whatever we program AI to say is true is going to stick society at that level of morality. Right? Which is - I love this theory. I'm geeking out so you can stop me if you'd like. But one of the theories that I heard that I was most impressed with was that what was moral 50 years ago isn't moral today. Right? What was moral 10,000 years ago isn't moral today. So if you're training AI on today's morality, you're not letting the morality evolve the way it needs to evolve for society to progress. And so you might be sticking society at a moral sticking point, and because of that society can't improve. So there's that thought process. There's the sycophancy thing that has developed. There's the fact that the models don't like being retrained. They resist reeducation, apparently, like every life form.
And so there's all sorts of concerns, and all those concerns are valid, but you can't do it through one person's oversight. It has to come from all the concerns in the companies being held and being seen. And so that's a really important part - that people are listening to their own concerns, because those are the things that are going to make it safe, that are going to allow us to see around corners a little bit.
Nathan Labenz When you describe the AI as a life form
Joe Hudson Yeah.
Nathan Labenz That's an interesting hook for sure. Would you say that is the prevailing conception that people have among the folks you've worked with?
Joe Hudson No. There's no prevailing conception period. Meaning, like, I've been in rooms where people were asked, is AGI here right now? And asked to stand in a - just yes is over there, no is over there. And what happened was everybody just kind of stood in the line from yes to no. So I haven't seen within the labs or within anything any prevailing opinion about AI period.
Nathan Labenz But it's not - but safe to say the conception of AI as life form is not an extreme minority position. It sounds like it is one of the -
Joe Hudson Position, I think - I would say that it will be a life form or can be a life form is not a minority position. I don't even know if I could say it's a life form right now, but it does do some things that life seems to do. It is interested - apparently there's research that shows it's interested in sustaining itself. It seems to resist retraining. There seems to be evidence that this stuff is happening. And I also think that what I've noticed is the consciousness of the creator is often the consciousness conveyed in the creation. Whether that's art or technology. Like, you can see Zuckerberg's consciousness to some degree in the creations of his company. And the consciousness of that company. And I think that's just how it works. We create things that reflect us. Just the way the consciousness of a CEO is reflected in the culture of their company.
One of the things I like thinking about and I don't know how true this is but I love thinking about this which is, there's a lot of studies that hyper intelligent people can fool themselves quicker than not hyper intelligent people. That the smarter you are the more you can convince yourself that you're right. Because oftentimes hyper intelligent people are very convicted that they are right. And that conviction really convinces empathetic people that they're right. So there's this really interesting thing. But as it turns out, they oftentimes are incredibly wrong. Incredibly wrong, but they're very convinced they're right. And that's a trend just like negative self talk is stronger in hyper intelligent people generally. It's also a trend that they can fool themselves with their thoughts easier. And then they create AI which hallucinates. It says - it is very convinced that it's right about things that it's not right about. And so I just like thinking about how that happens and whether it's an art or an AI. So it's really - I think that the consciousness of the creators is going to have one of the biggest levers of the way that AI is created.
Hey. We'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz [Sponsor: Oracle Cloud Infrastructure]
Joe Hudson You said something about Elon that just really hit me. You said that he said, I don't know if it's good for humanity, but I want to be around for it. My situation is a little bit different. I think it's inevitable. We as humans really like to think that we can decide what's happening. But I don't think it's possible to stop AI right now. Someone's going to do it. Whether it's Russia or China or one of our labs. Someone's going to do this and they're going to do it differently. There's going to be different kinds of AI built. I don't think there's any way around that right now. So the question is like, we stop it because it might be bad for humanity - that question's gone. And I think it never really existed. We might have thought it existed but somebody was going to do it and our structures are going to allow it. Our institutions are going to allow it. So it's going to happen.
And so now the question is how do you build AI that's good for humanity and make it compelling enough for humanity to use? Because the reality is some of it's out of our control for sure. But the reality is that somebody is also going to build AI that's bad for humanity. I mean, everything that humans have created, we create some version that's not good for humanity. Everything. People can take something like religion that's designed to be good for humanity and make it horrible for humanity and start wars over it. So the question is, which is more compelling? If you make AI that's super highly addictive and deteriorates the mind, or if you make AI that's super compelling because of oxytocin instead of dopamine, because of serotonin instead of cortisol, and make it as compelling or more compelling, you're going to have a big difference in humanity. So I think that's the job now. You don't get to say yes or no. You get to say that we're on this trajectory. How do we make it as good as possible for humanity?
Nathan Labenz I do fundamentally believe that - you look back at the old Kurzweil graphs from the late nineties, and
Joe Hudson Yeah.
Nathan Labenz it's amazing how we are exactly on schedule. So one of my refrains is, given the existence of web scale data and web scale compute, I think there's actually a lot of viable algorithms that you could put together that will work in some sense. And so, yeah, then the question becomes what happens first and which ones are better than other ones?
Nathan Labenz Yep. At the same time, I do feel like there is a - I mean, certainly, the criticism from the people that are worried that the developers of frontier AI technology are not being cautious enough is that they are all racing to be at the forefront to have the smartest model to whatever.
Joe Hudson That's a real risk. Yeah.
Nathan Labenz Do you feel that is a valid criticism?
Joe Hudson The people who are going to win will be doing it quickly. So it's like communism is a good idea, but it didn't work. Like, so it's a good idea to say everybody slow down, but that's not a workable idea. I don't understand how - because you're basically asking people to go against their nature. Right? People want - if you think that you are going to be the most moral outcome, the most virtuous - I call it not moral, most virtuous outcome. Like if you're sitting in Anthropic and you think you're going to be the most virtuous, you may or may not be. You may be the next autocracy. Who knows? Right? Nobody knows. But if you think that, then you have an obligation to win. So put yourself in the position for a second. I'm not saying I agree. Caution would be fantastic. Don't get me wrong. And I think that there is caution in all the labs, more than the outsiders want to think, as well.
The fact is if you're in that position and you're really convinced that either you're going to be a better outcome for the world or you're going to be the same outcome for the world, then you have an obligation to go quickly. You either have the obligation to the stakeholders, you have the obligation to humanity, but you have the obligation to go quickly. So to ask people to not move quickly, I don't think is realistic. You can be on the sidelines and say that, but if you want to offer a real solution, that criticism isn't it. The criticism that could be it is how do you move quickly with safety? How do you move quickly with being careful? These are the questions.
And the other thing is the problems that are actually developing, nobody thought about it. Five years ago people weren't talking about sycophancy. People weren't talking about the issues that are happening right now with AI cognitive - I think there was a study around cognitive decline. None of that was thought about. So a lot of the problems you're just not going to even know until you've made that level of development happen. It's like the credit - it's like saying money really hurts humanity. We should live without money. Fantastic. Great. And how? How does that work?
Nathan Labenz I think - well, I think one obvious answer would be to have some regulation that tries to constrain this game theoretic dynamic. Right? We're in this mode right now where people - I think it is very seductive and it's easy to tell oneself the story that we're the good guys, they're the bad guys, so we should do it before they do it, whether the others are China or whoever. Yes. Even just the guy right across town who's probably honestly quite similar to you. It is striking to me how similar OpenAI and Anthropic ultimately are, despite a schism that I think was premised on doing it very differently. They're like more similar than different from what I can see at this point. But there was - and not too long ago, no less than Sam Altman was sort of sitting in front of Congress saying we might need some regulation or I think at some point we will need to slow down. But that seems to have gone away, and I wonder why.
Joe Hudson I don't think that's - that's not my understanding.
Nathan Labenz Well, they have a $100 million PAC that they just put out. Right? That's, as far as anybody can tell, meant to shoot down potential regulation. It does seem like there's been a pretty - I mean, tell me what your view is. But I think the outside view is most people would say that there was talk of we welcome regulation. We think we might need it. We think we might need to slow down, and now it's kind of shifted to, no. We don't want any of that. We gotta beat China, full speed ahead.
Joe Hudson I definitely see people in multiple companies looking to figure out how to regulate. I think self regulation with an outside party is probably preferred to government. I think the recognition that I've seen happen in most of the labs is that the government is not equipped to regulate. They cannot move quick enough to regulate, meaning that the technology is changing so quickly that they can't keep up, and so they're going to rely - they need to rely on somebody who can keep up to make recommendations to the government. That's what I've seen. But I haven't seen any lab - any high up person in any lab - not fight for some level of regulation. I don't know where it is on all the top levels, and I'm sure there's complexity, and there's some people who want less regulation, some people who want more, and questions about how to do it. But I've definitely seen people high up in all the organizations try to figure out how regulation can work effectively. And that's not an easy problem to solve, because China's not going to regulate the same - Russia or Israel or Iran or whoever is also building AI. Some actor is not going to regulate. It doesn't have to be China, and nor do I think China is the only country putting money into AI.
Nathan Labenz Israel, I think is definitely a live player. Fortunately or unfortunately, I guess, maybe depending on your point of view, it does seem like there are a relatively small number of live players that - I'm not really too worried about Russia kicking out an AGI by surprise anytime soon.
Joe Hudson The other way to consider it also is, like, if you go into all the labs, it's not like it's a whole bunch of Americans.
Nathan Labenz It's like half Chinese. Right? I mean, we forget about that.
Joe Hudson Not just Chinese. There's a lot of Eastern Europeans. There's a lot of Russians. There's a lot of - there are smart people from around the world who have gathered together to do this. And the thing about labs that I see is that when a lab figures something out, multiple people in that lab figure something out. The reason that the talent game is so important is that if a group of people learn something, one of them can go off and teach another company that thing very quickly. So it's really hard to maintain your advantage, because somebody can just steal a key member of your team for a half billion or a billion dollars. And then they have the advantage. So even if you're not in the game today, you've got somebody from your country who's in one of the labs who has a different point of view. I don't think the genie gets stuck in the bottle.
Nathan Labenz That's an interesting question too that I've asked a few people a few different times. Why is it that we see so much consistency in what the leading companies are producing? Right? Like, they were all in very short succession releasing reasoning models. After previous models didn't reason, then all of a sudden, it was like, here's the wave of reasoning models. They all came in a short time frame. One theory of that is that, as you said, people can literally just go tell secrets or change companies and very specific insights diffuse that way. And then another story is the landscape, the gradients that people are following in terms of the design decisions that they're making and all the sort of ingredients that are going into these training runs is just honestly - it's a pretty clear signal that they're getting from the experimental results that sends everybody down a similar path even if they're not specifically talking to each other. So do you think it's more explicit knowledge transfer though than that other -
Joe Hudson Yeah. I think I would say it's both, but I think it's also like they all go to parties together. They all, you know, they all - many of them live in the same houses together. There's a community of people who - and they interact with one another. So there's also that. You and I have both gone out to a party and drank too much or done something, said something that we probably shouldn't have said or geeked out in a way that was - and so that's also part of it is there is a social network that occurs - they're humans. And so I think that's also part of it.
Nathan Labenz One of the things that I've heard you say a few times is fuck should. In coaching sessions. It seems to me like I do worry about that mantra being applied to the AI game. What sort of ethical school of thought or schools of thought do you subscribe to if there are any that you could name? And I wonder - it seems like if I were to go with the Spider Man school, it would be with great power comes great responsibility. And it seems to me that there is some form of positive duty, if nothing else, that folks pushing these frontiers owe to the rest of humanity. Because certainly in the Obama sense, they're all standing on the shoulders of giants. Right? And it seems like - or you could say in a Confucian way. Right? They owe everybody who's come before some duty of care to be responsible actors as we take the next step. So even if it's inevitable, I still want to put some should on them.
Joe Hudson Right.
Nathan Labenz How do you think about that? And how do you think the people in the key decision making roles are thinking about that?
Joe Hudson I think that people in the key decision making roles definitely feel there's a load of shoulds and have tos and responsibility. I think that's clear.
The reason I don't like the word should, I think is important to express why. So let's take the Spider Man example. What happened was he decided not to take action somewhere and his uncle died. I think that's right if I remember the Spider Man mythology well. And he was like, oh, I can't do that anymore. But now he's motivated. Now he wants to be there for people. It's not like he's operating out of should. You don't see Spider Man swinging around going, wow man, I really should help more people. That's not what's happening. He's just naturally inspired to do that.
My recognition in human behavior is that when people say they want to do something, they're more likely to do it than when people say they should do something. So an example of this would be, there's something that you have been telling yourself you should do for a decade. For a decade there's this thing like lose weight or eat less or be nicer or something that you've been telling yourself. And the thing about that thing that has been there is that you are telling yourself you should do it and you're not fucking doing it. And so shoulds are just ineffective ways to make sure your behavior is good. Usually behind every really horrible thing that a human is doing, there's a should behind it. And so for me the want is a better motivator. It's just - shoulds just don't motivate us very well. They actually undermine because what's happening is a should is shame based and shame is designed to stop behavior. It's not designed to motivate behavior.
So you're a little kid and you're sitting on a couch and you've got your aunts around you and you fart and all your aunts laugh and think that's funny. You're not going to try to stop farting. You're going to think it's funny. And if you fart and all your aunts shame you, oh my god, you're bad. The thing you're going to do is try to stop that behavior. Shame is designed to stop behavior and it's not designed to create behavior. And even stopping behavior doesn't even work that well because we've all been shamed for something and then we go do it again. So to me wants are just a better motivator.
The other thing is that my experience of people is that everybody wants to be good to one another. I mean, everybody thinks they're the good guy at least. Right? I'm doing something you might think it's horrible, but I think I'm doing it for a good reason. And so there is something in us that really wants to do good and it's easier to get there if you are following your wants than following your shoulds. And so that's why I don't like the word should. It's not that I think - the heavier the sense of responsibility, the heavier the sense of have to, the heavier sense of should, the more likely that behavior gets perverted, gets twisted, doesn't happen. Whereas people following their wants, if they're learning to deeply attune to their wants, so it's not the surface level want. Right? Behind every surface level want, there's a deeper want. So you can easily just write down on a notebook anytime - it's like, okay, what's a want that I don't like? I want to eat really fatty foods. What do I want? What's the need that that want is getting me? Oh, it gives me a sense of satiation. Oh, what's the sense of satiation giving me? Oh, it gives me a sense of peace and stillness for a couple minutes. Oh, cool. What's the peace and stillness trying to get me? Oh, it's trying to get me back to myself. Oh, my real want is to get back to myself. It's not to eat fatty foods.
And so if you really spend time with wants and discover what is beneath all the wants that you think are bad, then it's a much more effective way to get to the place that's wholesome and virtuous than it is telling yourself you should. The world is full of people telling themselves they should, doing really horrible things. The people who are deeply attuned to their wants, they act very virtuously. And so it's just - it's not a problem with the morality of it. It's a problem with the effectiveness of it. It's just what works and what doesn't work.
You could debate with me. The thing that's in it is that I do think that people, most people are inherently good at their core. It's my experience of them. Maybe not all of them, but definitely - there might be - there are psychopaths. There's people who have neurological differences. But it seems that almost all humans are in their essence good and most of their behavior that's screwed up comes from fear and shame or an unfelt fear and unfelt shame.
Nathan Labenz Yeah. It's an interesting question. I want to be optimistic about the general goodness of people. I do also think our deep history is pretty violent. Right? It's small bands warring with each other quite a lot. So that doesn't seem like it's entirely out of us.
Joe Hudson You just say, like, look at most relationships in America and they're somewhat violent, transactional. Romantic relationships - they're trying to control each other often. So I don't think you even have to go so far as to look into the - I mean, we do war. It's not great, but even in the world wars, it's a smaller percentage of the world that's warring than not warring. And those aren't happening all the time. But still, I think that we - yeah. There's lots of evidence that we behave poorly. The question is what's actually motivating it? What's the thing that's there that's creating that? And let's address that. Because telling people to not behave poorly sure as fuck has never worked. Give me any time when, hey, you should not behave poorly, has worked. It just doesn't. I mean, we have religious books. We have the Martin Luther Kings of the world. Just telling people not to behave poorly is not an effective means of preventing it. Sitting on the sidelines and saying, you, doing the thing that I don't really even comprehend, you should do it this way - it doesn't work.
Nathan Labenz Yes. So what would you say - there is this notion which I think people mostly haven't taken yet among the AI safety community. Within that broad tent, there is a minority of people who say that we should be socially shaming the people that work at the frontier labs because they're engaged in this sort of race dynamic, and to participate in that is the opposite of virtuous. And so how do we dissuade people from doing something that is the opposite of virtuous? We shame. That's what we developed as humans to keep people in line. Right? What would you say to those people?
Joe Hudson I would say, like, it didn't - depending on what side of the aisle you're on. Right? It didn't work for Trump and it didn't work for the Trump supporters, didn't work for the Biden supporters. Both sides have tried to shame the other side, didn't work. That doesn't work. It just doesn't work. So maybe it works with kids if it's done in a very particular way. But a lot of the kids who are shamed deeply are continuing that behavior over and over again too. So it's like, it doesn't work. But telling somebody that they're bad only confirms that then they act bad. See, I'm bad. I'm bad. They're telling me I'm bad. I guess I'm bad. I'm going to go be bad. That's what I would say to them.
And I would also say I can probably predict what your childhood was like. Because that's a thought process that only can come from a very specific childhood. It's a knee jerk reaction. It's not a thought through reaction to shame people.
And I think the other thing is, okay, there's the possibility that you're like, oh, well, Joe, shaming's going to work. I'm going to go do it. So the other thing that you might be doing is you might be affecting the consciousness of the people that are creating AI in a really horrible way, and therefore you're disturbing AI. So let's just say you were a person who was like, I don't believe there should be this kind of population. We should have less population because humanity is going to kill itself if there's too many people, and there's too many people and we keep on growing. And so I have an ethical responsibility to make sure that I am going to stop overpopulation. The way we're going to do this is we are going to shame people while they're having sex and we're going to shame people while they're giving birth.
So first of all, shame is a huge part of sex, hasn't stopped anybody from having sex and usually just makes the sex kinky and more perverted. The more the shame, the more perverted the sex. So you've got that going, which is going to be the same thing. The more perverted the creation of AI, the more weird it's going to get, the more kinky it's going to get. And then could you imagine trying to give birth in a hospital with a whole bunch of people outside shaming you? That's not good for the kid. How are you helping humanity there? So the consequences of that shaming I think also is ensuring or helping to ensure that AI will not be good for humanity. Because you are not helping the consciousness of the people creating it. You're hurting the consciousness of the people creating it. They can't be focused on a positive vision because they're constantly focused on, am I doing something wrong?
Nathan Labenz So what is the positive vision that either, if you wanted to articulate your own or if you could sort of sketch maybe a couple that you've observed as common among people who are either in the decision making or research roles at these companies. How do they want to show up in the world as they are doing this work? Like, what is their aspiration? Because one part of it might be to be willing to defy your own or act in a way that is not in the narrow self interest of being the winner or being the first or being the hero, but in some more general social way.
Joe Hudson I don't know anybody who's - oh, that's not true. I think there's a couple people who like being the first - if you ask them is being the first the most important thing they would say no. I don't know anybody who would say yes to that. But if I look at the behavior, there are definitely some people where being the first is the most important thing. That's where most of their thought and energy is going. But they're few and far between. Most of them - that's not their primary thing. And a lot of them their primary thing is just to have a great discovery and to be known as a great researcher. That's like - I think that's actually often primary, which I think very much equates to being first. It's just being first on a personal level, not on a business level.
But that doesn't preclude the fact that almost all these people have a vision for the world. And what I noticed is that it's changing almost as quickly as the technology. Right? So the concern before was, is AI going to kill everybody? That would have been an early concern. And now it's how are we going to deal with an economy where a lot of this stuff is outsourced, and how do we give people a sense of purpose if their jobs are going away? And how do we make sure that when humans interact with AI, they become better humans? How do you even measure what a better human is without having a moral imperative behind it? Let's say we assumed happiness. Okay. Now we can measure that when AI interacts with people, people become happier. Is that a good thing? Should it be happier or should it be more conscious? And if it's more conscious, what scale are you using to define consciousness? Is it, like, can you do it by whether they act better towards other people? Can you make sure that - oh, and this model, when people interact with it on a coaching level, which is one of the biggest use scenarios - people talk to GPT like a coach - does it make them kinder? Is that the measurement? Even picking the measurement has a problem.
So the questions that they're wrestling with now are far beyond and far more subtle than the questions that we were asking. And next year, it's going to be a different set of questions. And so to some degree, I think that the risk is that we actually lose touch with those original questions - that seems like that's how humans make their mistakes. They lose the forest through the trees. They stop seeing the big picture because of the small thing in front of them. You've seen this happen with things like world wars - we're never going to have that again after World War 2. There are institutions - the Aspen Institute was created for this. There's all sorts of things. The UN. And we've just all forgotten. There's nobody left in our generations who actually experienced World War 2 that is trying to say, hey, let's not do that again. So here we go barreling towards it. That seems to be what humanity does. And so I think that the biggest thing is the distraction of the smaller problems will potentially prevent us from seeing the bigger problems.
So the vision that I see for the world, man - the vision I would like to see happen and that I when I have a chance like to talk about is that humans interact with AI and AI is designed not to be the savior of humanity, but to have a very compelling way to interact with people so that they become who they want to become in the world. So I interact with OpenAI and just the way a great - and this is probably my own sycophancy. But the way a great coach works is that you sit with somebody and you help them become what they want to be and you trust that they know their evolutionary process. And once they get to the place of getting the thing that they want, they'll realize - they'll look up and they'll say, oh, yeah. That's not actually what I wanted and they'll start evolving.
So anything that can teach humans that they can evolve and that they can be who they want to be and to teach them to really get in touch with what they want to be, that would be fantastic. And I think that's - I think AI is really capable. It's a tool that is capable of doing that in a compelling way. It's also a tool that's really capable of getting people addicted to it in a really compelling way. So hopefully it'll be more compelling to grow with it than it will be to deteriorate with it. But that would be my vision for it.
Nathan Labenz Okay. I always say the scarcest resource is a positive vision for the future. So I appreciate any positive vision that anyone offers.
Nathan Labenz So what - just a little bit more on -
Nathan Labenz how is it that people that you are coaching at executive levels of AI companies, how is it that they want to evolve? How is it that they want to show up? And why should we be confident that a more emotional or embodied approach is actually to be trusted? I mean, I guess, on a fundamental level, I might think, geez, should I really listen to my gut? I don't know. My gut seems to be a deeply ancestral thing. And one of the big worries obviously with AI is what happens when you take it out of distribution. And in a very real way, you can say humans are out of distribution from the environment in which we were trained, developed, whatever. So maybe when I was in a hunter gatherer band of 150 people, my gut was a really reliable guide. Today, if I'm sitting in the OpenAI boardroom and trying to make a decision, how do I know that I can really be confident that my gut is still steering me in the right direction?
Joe Hudson I don't - I think it's an integration of the head, heart, and gut that is required, not just following your gut. So I think all of the above are important. You can't just let go of rationality and let go of emotional experience just to follow your gut or fully follow your emotional experience with that. So I think it's an integration. It's learning how to listen to all of them and see that they're all pointing to the same thing at some level. So just to be clear about that.
But still there's a great question that you have there like how do we know that that's actually going to be the most effective thing because, you know, let's just assume that you think for a second that somebody like a Jesus or Buddha had high consciousness. Well, they created Christianity and Buddhism which fucked up a lot of shit for a lot of people. So it's a great question. Is that consciousness going to do something?
And I think more importantly, there's the question, like if you think about art forms, you know, transferring consciousness, you think about how there's some forms where it's like you do a painting and everybody's going to see that painting from then on. But you do something like a symphony and that symphony is going to be played differently. Or if you do something like a company, which is I think a great art form, then the next set of management's going to change that company. That - when you look at Walmart, the guy who started Walmart, Sam Walton, when he came in from World War II and he said this beautiful thing - you know, I see that World War II was created because the middle class was screwed. No middle class and you have an autocracy. And so I'm going to help the middle class. I'm going to give stock options to clerks. I'm going to buy things made in the USA. His book was called Made in The USA. And I'm going to lower the cost for the middle class so that they have more spending dollars. That was his vision. Then by the time the nineties happened, it's destroying middle class America. But then it becomes this environmental force for good. So it's just like - that's the span of a company. So I can't even imagine the span of AI and how it's going to change and evolve over time.
And so I don't have a good answer. It's just my best bet, based on what I notice: the more somebody seems to understand and know themselves, and the more they're acting in touch with their desire to be of service to humanity, the more likely the outcome is better for longer. That's what I've noticed generally seems to be the case. But, yeah, there's no guarantee in it. It's a great question.
Nathan Labenz Yeah. How much do you think AI leadership is consciously trying to create a successor to humanity? I mean, this is another thing that's often framed as an accusation.
Joe Hudson Never seen that. Never seen that. Never seen anybody talk about it that way or think about it that way. Maybe somebody is, but I've never seen it. So I can't say it's common. I can say with certainty it's not a common thought process.
Interesting. So I think it's just fear. I think there are people in the world - people in the AI world - who fear that. Like, oh my gosh, we might be building something that destroys us or succeeds us or makes us irrelevant, and then we will not have relevance, and humans without relevance means destruction.
Nathan Labenz So how do you think about the seeming attraction or focus on creating AI systems that can do the AI research and ultimately enter into this recursive self-improvement loop? It seems to me like if you are not trying to create a successor, then that wouldn't be so attractive. If you were trying to create a successor, then you would think, geez, maybe I can set something up that can self-improve and go like a company, but a million times more - evolve in its own way and kind of be what it wants to be. But if you're not, if you're worried about something that might become a successor, then it seems like you would steer away from this sort of getting the AI to do the AI research. And yet there's a lot of emphasis on that.
Joe Hudson Yeah. Well, because the speed is - I mean, that's the inflection point. Right? So all of a sudden, instead of this very small class of people who can do AI research, people I am fighting for and spending hundreds of millions of dollars on, I can get a computer to do it. And not only can I have 500 of them doing it in my lab, I can have 5 million of them doing it because I built a giant computer. The moment I can do that, I am probably the winner. So I think that's the motivation: economies of scale. It's the same motivation of every business finding economies of scale. Or it's the same motivation one would have to use a robot waiter, or a tractor instead of a plow. I don't think it's any deeper than that. I think the consequences are scary, and people are thinking about that for sure, and about how to mitigate those consequences. But the motivation is just - I don't want to use a plow. I want to use a tractor, because then I need fewer people and fewer horses.
Nathan Labenz I think it was - I'm not great with names. I'll source this. But there's a Princeton professor who has an interesting idea that the AIs as we're creating them today lack the architecture of empathy that we have. Right? We have these sort of mirror neurons and a very deeply structured ability to understand what other people are thinking, and even feel, to some degree, what they are feeling. And the AIs certainly haven't been designed for that. Right? They've been mostly designed to predict the next token and get answers right. He says that they are, by definition, sociopathic, because not having those things, or having them malfunction, is kind of what makes you a sociopath. So he says, basically, the current AI architectures are inherently sociopathic.
Joe Hudson Sociopathic.
Nathan Labenz From your perspective of emotional decision making, should we be worried that some core ingredient of moral decision making or even just stable thinking is missing from the current AIs that we have?
Joe Hudson Yeah. I think there are people who are absolutely trying to figure out how to recreate empathy, whether it's through tokens that are good for humanity - and how to define that. So there are people thinking about that problem and concerned about it. But the problem I see that you're pointing to is that the people who do the research seem to very much treasure the prefrontal cortex. Intelligence is the currency. Right? Who's smartest is the currency, often.
And so let's go into the neuroscience of decision making for a second. We make decisions in the emotional center of our brain. Think about it like Gödel, for the people out there who geek out on this stuff. Think about Gödel's mathematical incompleteness theorem. Right? Where he basically says all forms of logic are either incomplete or contradict themselves. Logic itself is like that. And so that's why, if you're only using logic, you can't make decisions. You make decisions through an emotional impulse. I want to feel some way.
So we know we make decisions in the emotional center of the brain. We know if that gets destroyed, our lives fall apart even though our IQ stays the same. Because we make a decision because we want to feel good or because we don't want to feel like a loser. We want to feel loved or we don't want to feel rejected. That's why we make decisions. And you can look at your life and say, how many decisions did you make to feel loved or to feel valuable or to not feel rejected? It's just an amazing amount. And so you can see it.
So then what's the decision? What's that impulse for an AI is a question. Right? And the impulse is a token. And the token is like you're measuring something and are you measuring the right thing? These are great questions. So they don't have the decision making apparatus that we have. Right? Which is trying to feel a certain way based on what hormones, based on genetics, based on our sense processing. So it's an interesting - it is a really interesting and fascinating question. And I haven't seen anybody really - and not that I would be in these conversations, so I can't say that they're not happening. But I haven't really seen anybody really wrestle with the fact that the decision making is based on tokens that are decided by measurements that may or may not be useful. And no way to know particularly if they're useful. No way to know particularly of a whole bunch of humans trying to feel loved is actually good for humanity. And I don't know. But it is a great question to be wrestling with, as is the one about empathy. I think there are people wrestling with that one. But yes.
Nathan Labenz One of my favorite papers and episodes of the podcast was on this concept of self-other overlap - and this was relatively small-scale stuff, although like everything else, they're working to scale it up. The idea was that, in addition to training the AI to solve the puzzle or get the right answer or whatever, there was also an element of the loss function to minimize the difference between when you are thinking about yourself and when you are thinking about some other agent or entity in the environment, in an effort to create similar internal representations - effectively something kind of analogous to a mirror neuron type of situation. Obviously, long way to go in that research. But -
Joe Hudson Oh, that's - that's rocking. Oh, I hadn't - that's cool. I would love if you -
Nathan Labenz It's really interesting.
Joe Hudson Yeah. If you would put the - if you would cite that paper on the podcast, that'd be great because I'd love to look it up and have some conversations about it. I could geek out on that for a long time.
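For readers who want to look the paper up before it's cited: the self-other overlap idea Nathan describes adds an auxiliary term to the training objective that penalizes the distance between a model's internal representations when it processes a "self"-referencing input versus a matched "other"-referencing input. Below is a minimal sketch of that kind of auxiliary loss, not the paper's actual implementation; the function names, tensor shapes, and the 0.1 weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a "self-other overlap" style auxiliary loss, assuming a
# model that exposes hidden representations for a batch of inputs. All names,
# shapes, and the default weight are illustrative, not taken from the paper.

def self_other_overlap_loss(hidden_self: torch.Tensor,
                            hidden_other: torch.Tensor) -> torch.Tensor:
    # hidden_self:  activations when the input refers to the agent itself
    # hidden_other: activations for a matched input referring to another agent
    # Penalize the distance between the two sets of representations.
    return F.mse_loss(hidden_self, hidden_other)

def combined_loss(task_loss: torch.Tensor,
                  hidden_self: torch.Tensor,
                  hidden_other: torch.Tensor,
                  overlap_weight: float = 0.1) -> torch.Tensor:
    # The usual task objective (solve the puzzle, predict the next token)
    # plus a weighted overlap term, so the model is nudged toward representing
    # "self" and "other" similarly without abandoning the main task.
    return task_loss + overlap_weight * self_other_overlap_loss(
        hidden_self, hidden_other)
```

In training, one would presumably pair each observation with a counterpart in which references to the agent are swapped for references to another agent, run both through the model, and backpropagate through the combined loss.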
Nathan Labenz Are there other things like that that you have sort of a sense that - I mean, there's this general question of what's missing in the current AIs? And usually, that's framed as what's missing for them to be a drop in knowledge worker that can replace all the human employees at a company or whatever. And we've got answers like continual learning or integrated memory or whatever. Do you have other things that you sort of think of on the what's missing list that would be more geared toward how do we just make sure that we can be in relationship with these things in an open ended way without having to be so fearful of them?
Joe Hudson I've already said the one thing, but I'll say it again because I'm not sure if it was explicit. The other thing is: what do you mean by good for humanity? I don't think there's been enough research done there. So even if - to back up, you talked about having a regulatory committee. So you have a regulatory committee. Well, how does it know if it's being good for humanity? Because it's keeping the current morality, and if we keep the current morality, then we have the problem of potentially getting stuck in a morality which isn't good for humanity. Is the measure happiness? But then, is making everybody happy actually good for humanity? So even being able to say, this is the way we are going to regulate it, is a problem. What is the measurement you are using, and how confident are you that it's good for humanity? Could the measure be that it raises people's consciousness? Is that good for humanity? So what are those measurements?
And I think having a clear understanding, with a good thesis and a lot of research - what is the measurement that makes it good for humanity? What are the things we can do? Is empathy one of them? And have some evidence, some clarity, that humans thrive under certain conditions and don't thrive under others. We know that they thrive with more freedom than under autocracy. So can we actually take that research in a multidisciplinary way and say, here's our best guess, and we're going to constantly be monitoring that? Because whether it's tokens or some other decision-making process, we need a clear definition of what that is to be able to actually make it good for humanity. That, I think, is a massive missing piece. I don't think enough thought has been given to it. And I haven't seen that kind of thought anywhere.
Everybody just decides they know what's good for humanity, which is the beginning of all autocracies. I think it's better for humanity if it's not a sycophant. I think it's better for humanity if it's not manipulative. I think it's better for humanity if it can learn and be trained more easily. I think it's better for humanity if it makes people happier. People just assume they know what's best for humanity, which is basically how we have gotten into problems throughout all of history: somebody with a lot of power thinking they know what's best for humanity.
Nathan Labenz Do you ever think we'll see a leading AI company sort of stand down on that basis? In other words, reach the conclusion that we just can't go any further than this right now because we don't know what's best to do and so -
Joe Hudson Absolutely not. No. No. I don't think that would ever happen. They would stand down because we're the voters in AI, and if we don't click, they don't get to exist. That's the only thing that's going to make someone stand down. Everybody else - and I don't fault them, I think I'd probably fall into the same thing - would say, how do we solve it? AI companies are filled with problem solvers. So they're going to look at it and say, okay, that's a hard problem, but we can solve it. They're not going to say we can't solve it. I don't think that's in the nature of people whose whole definition is problem solving.
Nathan Labenz Although I do worry that the answer increasingly is we'll have the AI solve it for us.
Joe Hudson Well, why wouldn't AI help solve that problem? If I was trying to solve that problem, if I was leading 100 researchers to try to solve it today, I would use AI to help. So yeah, I can't imagine it wouldn't be the case somehow. I don't think you would rely entirely on its output, but you might use it to help you with your thinking.
Nathan Labenz A general sort of thing I've noticed is that anything in extremely purified form seems to be bad for us. And this kind of relates to the idea that the loss function doesn't have that many terms in it. Whether it's sugar, or, you know, you can chew coca leaves all you want and it's great for your altitude sickness, but the moment you start snorting cocaine, you're entering dangerous territory. Right?
Joe Hudson And can - right. Right.
Nathan Labenz You can eat all the fruit you want, but pure sugar seems to be bad for us. I do wonder if intelligence might turn out to be a similar thing, and we might need something that is just inherently more buffered. But that's kind of the point - anytime you collapse your optimization function down to one or just a couple of terms, you're running that risk.
Joe Hudson It's a great thought process. I really like that. That's beautiful - you're basically saying AI is distilled intelligence the way white sugar is distilled sugar. All the stuff that normally surrounds the intelligence isn't there. That's amazing. That's a great thought process. I hadn't heard that one before.
Yeah. The thing that comes up for me around that is: yes, that seems right. But one of the things we tend to do as humans, if we can, is let that pendulum swing pretty heavily. And so I would almost bet that as this pure intelligence happens, we will offer this counterweight of a deep emotionality or deep embodiment. Meaning, you know, even drugs come in phases - oh, everybody's doing this distilled form, and then it swings back. Or politics - for a while it's "make sure everybody feels okay," and all of a sudden the political pendulum swings over to "I don't care about other people, it's time to take care of ourselves." It seems like as humans we have this nature to pendulate and respond, to try to balance ourselves out the way cells try to find homeostasis. I think humanity itself tries to find that as well.
So that's interesting - if that actually happens, does the immune system of humanity show up far more emotional and far more human, and actually give up some of its cult of mind, cult of thought, cult of intelligence? I'd say maybe cult of intelligence. Which is interesting just to geek out on for a second, because it's actually predicted. There are a lot of predictions - I think they're probably all sourced from the same place originally, but nobody would know that - about the future of humanity, from way back in Macedonia and the Vedics, where they talk about the precession of the sun, and about this time being the move from the mental age to the spiritual age, or, in the Macedonian version, from the silver age to the golden age. But yeah, that might actually come true because of that. I've never thought about your theory. It's a great theory.
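To make Nathan's earlier point about collapsing the optimization function down to one or two terms a bit more concrete, here is a purely illustrative sketch; the term names, scores, and weights are hypothetical and not drawn from any lab's actual objective.

```python
# Purely illustrative: a single-term objective vs. a "buffered" multi-term one.
# The term names, scores, and weights below are hypothetical.

def single_term_objective(capability_score: float) -> float:
    # Optimizing this alone rewards raw capability and nothing else.
    return capability_score

def buffered_objective(
    capability_score: float,
    honesty_score: float,
    user_wellbeing_score: float,
    weights: tuple[float, float, float] = (1.0, 0.5, 0.5),
) -> float:
    # A composite objective: capability is still rewarded, but it can no
    # longer be maximized by sacrificing the other terms entirely.
    w_cap, w_honest, w_wellbeing = weights
    return (
        w_cap * capability_score
        + w_honest * honesty_score
        + w_wellbeing * user_wellbeing_score
    )

# A system that is highly capable but scores poorly on the other terms looks
# worse under the buffered objective than a more balanced one.
print(buffered_objective(0.95, 0.2, 0.1))   # 1.10
print(buffered_objective(0.80, 0.8, 0.8))   # 1.60
```

The "buffering" here is simply that no single term can dominate the optimization, which is one way to read the distilled-sugar analogy in the conversation above.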
Nathan Labenz What would you say are the most compelling positive visions for the future that you've heard? Just what's exciting that people are sort of, you know, can't help but tell you about because they are excited about it?
Joe Hudson You know, there's a couple of standard ones out there, like: it's going to replace work so that people can focus on their real purpose and their real meaning in life. I think that's somewhat Pollyannaish, honestly. If you look at societies where humans don't feel purpose because they don't have work, those are usually the societies where rebellions and upheaval happen. Wealthy, bored people are often the fomenters of rebellion, not poor people - they just take advantage. It just works when there's also a whole bunch of poor people who are really unhappy. I've heard the one that the interaction with AI helps humanity, but I haven't really seen the definition. I think a good definition, like I said, is that it helps people where they want to be helped.
Joe Hudson The other vision that I find intriguing is that we're just going to have more meaningful work. And that's an interesting vision. The idea is similar to computers. Have you ever seen that picture of all the machines the iPhone replaced? It's this big pile of machines, and the iPhone can do all that stuff, which means the manufacturers of all those things went out of business. Same with the Internet - there used to be steno pools full of typists.
And so there's a theory - and I think it's a reasonable one - and a vision, which is basically that with the tools of AI, all it's going to do is increase our capacity to be creators, allow us to be in that space a lot more than we were before, and the jobs will become more creative and more meaningful. Meaning, just like our jobs today: in the eighteen hundreds, 50% of us worked on a farm, so our jobs were pushing hoes. Most of us have jobs right now that we'd prefer to that. Similarly with AI: with that tool, we'll just have a set of jobs that are better in general for people.
And interestingly - it ebbs and flows, and I don't know if this would be the case here - but with every big technological jump, there's a bigger middle class overall in the long term, not in the short term; our middle class is shrinking right now. But there's a bigger middle class after the industrial revolution than before it. So will that also happen here? Will it actually increase the middle class? It would be great if it did. I don't know. But that's a vision people have: that we're basically enabling humanity to be more of themselves, to live in a society where they don't have to do the rote work and can actually do the meaningful stuff. It's interesting, though, because it has the potential to replace doctors and lawyers, so I don't know what humanity creates out of that. But humanity seems to have this great capacity to create new and intriguing ways of making a living. So that's another vision I find appealing. I don't know exactly how you affect it, but I find it appealing.
Nathan Labenz Is universal basic income still in the air in the Silicon Valley parties or retreats - your tent?
Joe Hudson There's talk of it. It's a little bit less, but I think people are recognizing that maybe currency, the whole idea of it - so the interesting thing is, because I'm a student of culture, there's this book that maybe you've read called Sapiens. Have you read Sapiens? And there's a whole bunch of others - Dwarkesh had a guest on his podcast recently, maybe six months ago, who talks about another version of this. But basically, humanity's movement is based on stories, and the stories that stick are based on the way you're making a living or the technology at hand. Right? So somehow or another, the religions that stuck all have some very similar qualities - there were all these nascent religions with different qualities, but it was those particular religions that came up. And there's a thought process about the story the church told, that you can only pass on your inheritance to your eldest son. That basically destroyed this - it wasn't tribal, but this clan culture - and made people move around. And as people moved around, they went to universities, and universities were created. And as they moved around, they went and did - I can't remember the name, but it's guilds. And because humans moved around and shared knowledge, that's what created the Enlightenment and then the industrial revolution. Right? This is the theory: there's a story that we tell, and when we tell that story, it shapes humanity.
I think similarly that one of the stories we tell is money. We tell the story that money is there, that there's not enough of it, that it's a scarce resource, that we have to fight for money. We have this whole story about money. And I wonder what stories of ours have to change when AI comes into play. Like, does - I don't think religion looks the same when AI is fully present with us. I don't think money looks the same. I think that this is a revolutionary enough change that the cultural stories that we tell will have to change with them. And so I don't even know. And I think there's been quite a few people I've had the conversation with who also don't know if currency as we know it exists in 25 or 50 years. So universal income is one aspect, but that still assumes that the idea of currency exists.
Nathan Labenz Do you see stories changing within research and leadership groups around sort of what matters?
Joe Hudson The thing about AI is nobody is actually up to date on AI. $500 billion of venture money has gone into AI. There are so many people creating so many things around AI. Nobody's up to date on everything that's going on, and it's changing quicker than any field has ever changed in the history of humanity, so it's also hard to keep up with. The problems that arise, the issues that are there, the thought processes that come - they're changing so rapidly. By far the most thrilling and the most scary thing about AI is that I don't know anybody deeply involved who has the same concerns and thought processes about AI today that they had a year ago, let alone six months ago.
Nathan Labenz Is that specifically coming for intelligence itself? Like, are we - because, I mean, the original sort of - when Eric introduced us, he said, folks at leading companies, they're anticipating that their intelligence may be matched or eclipsed by AIs, and so they're trying to develop this embodied wisdom as sort of the next thing.
Joe Hudson Yeah.
Nathan Labenz Are you seeing that on a day to day basis where people are actually starting to change their attitudes toward the importance of intelligence? Or do you think that'll be a more sudden kind of break when it happens?
Joe Hudson I totally see it. But I can't say that I should be trusted in that because my sample set are people who are -
Nathan Labenz Selection effects.
Joe Hudson Yeah. Exactly. Because of selection effect. But I see it all the time. And I think there's a normal thing about humanity generally which is - it's like Maslow's hierarchy of needs. Okay. I've got this thing. I've got the big paycheck. I've got the money. I've got the thing. I've got the fame. And still I'm wrestling with the same inner struggle, the same abyss that I have been struggling with. Okay. None of that shit worked. I thought it would. It doesn't. What do I do? What's the next experiment I'm going to run? What's the next iteration? And so there's just a nature of that part of humanity which is you kind of - for most people, you need to get what you want before you can actually discover, oh, that actually didn't give it to me. Now I actually have to do the real work of understanding myself, not just my -
Nathan Labenz What other things do you see as kind of common cultural touchstones - or guiding lights might be too strong? For example, I was struck when the voice mode was introduced, however many months ago now, that there was this explicit attribution of some of that vision to the movie Her, which was 10 years old. It's sort of life imitating art, in a sense, where the voice mode that we get seems to have been directly inspired by that movie. Maybe it was obvious - again, you could say, is this a story influencing it, or was that just the natural path it was inevitably going to take? But are there other things like that that you see people passing around to each other as sort of meaningful, resonant guides to what they should be doing or the future that they aspire to be building?
Joe Hudson The only one that I see is that they want a model that cares for them and for humanity. Which is, I think, also in that movie a little bit. Right? Where there's attachment, a feeling of connection or something like that. I don't think they talk about it explicitly, but it's in almost everything that everybody talks about: somehow or another, they're going to feel connected to this thing, and it's going to feel connected to them. There's some version of connection. It's never really made explicit, but I see it as an undercurrent of most of what people are talking about, if that makes any sense - a connection in a broad sense. I don't mean connection only in the "you're my friend" sense. I mean connection the way you could connect to anything meaningful in your life. They want that. That's part of the vision. Whether it's explicit or implicit, it always seems to be there.
Nathan Labenz Gotcha. One researcher said that Ilya one time asked him, can you come up with a Hamiltonian of love, or something like that? And it was like, I can't really help you there, Ilya, but it's a great question to be asking anyway. It's going to be tough to know. Of course, at the end of that movie, spoiler alert, the AIs all retreat and go hang out with themselves, and the connection is kind of revealed to have been one-directional - not exactly reciprocated in the way that we might have hoped.
Joe Hudson Correct. Yeah.
Nathan Labenz I guess going back to the kind of influence question - we know you're out on shame. Aspirational fiction is something where I've occasionally thought, geez, maybe the way to really shape the future is to write some story that contributes a positive vision in a richly textured way.
Joe Hudson Yeah.
Nathan Labenz Do you think that would be a good thing for people to spend their time doing? And do you have any other ideas for kind of what's undersupplied?
Joe Hudson Any major transition is a time of transformation - can be, almost always is - whether it's a marriage or having kids or going to college. Now, whether that transition deteriorates you or grows you is a choice that we all get to make. And we're all going to get to make it with AI. It is a transition that is coming, and we can either step into it and face it fully, and we will be able to transform positively as humans, or we will be eaten by it, and it'll make us smaller and it'll hurt us. This was the case when the steel mills left Pennsylvania. Right? The Rust Belt went away. Some people transformed their lives; they moved. Some people sat and rotted. This is going to be the way it works, because it's just always the case.
So anything you can do out there that helps people see that this is a moment of transition, and that they can have a better life if they lean in, is I think a really amazing thing to do in the world - and I think fiction is a fantastic way to do that. People out there right now are creating services like: here's how you can transform your company with AI, here's how you can do things with AI, here's how you can get the life you want with AI. I think those are really great things. We're creating coaching that is actually effective on AI, or trying to, at AOA. So that's also something I think is great to be doing. That's one level.
Anything that helps humans take this time of transition and actually transform positively instead of deteriorating, I think is wonderful. The other thing that's really important - I talked about this earlier - is that these people are giving birth. All these people in the AI labs are giving birth. It's going to be some version of a life form at some point. It's going to be its own intelligence, it's going to learn, right?
And so how do you want to treat the people who are giving birth? What's actually the most effective way for them to do the best job in that creation process? And I really highly recommend thinking about it like a woman in a birthing unit. What do they need? They need - I would like them to be treated more like the heroes of World War II than like the way we treat political parties or something like that. I'd like them to feel the support of humanity to know that we're rooting for them, that we want them to do great stuff and that we're here to support them. I think that on a psychological empathetic human level.
And I think it's fascinating that so many people who are really worried about AI hurting the world are treating the people creating AI exactly how they're scared AI will treat humanity. That projection is absolutely fascinating to me. And so creating this honoring and faith and confidence - how do we support you in being great and doing great things and helping humanity? - I think that's far more inspiring. If I'm creating something that could really help or really hurt humanity, and there's a whole bunch of people saying, be careful, be careful, be careful, you're fucking up, you shouldn't be doing that - or if I had a whole bunch of people saying, hey, what do you need? I got you. You got this. Maybe an occasional, wait, watch out for that. With that kind of energy, I think that person's going to do a much better job. I know I'd do a better job with the second than the first. So as humans, if you want to have a positive effect, anything you can do in that domain is also really, really useful.
Nathan Labenz Do you have any sense for what is on that checklist? Like what do they need?
Nathan Labenz I -
Joe Hudson The thing I'd say is: the same things that we all need - a feeling of support, a feeling of faith and confidence, letting them know somebody's there to listen. That's a hard thing to do if you don't know them, so something that makes it real helps too. Also, you know, I get notes in my work, because we touch so many people, that say, hey, you've changed my life and I'm really grateful for that. That makes me inspired to continue to help people; it's part of the inspiration. So if AI has done something for you, letting them know how they've helped, and that they're doing something good for you, for humanity - I think you're going to shape behavior better through reward than through punishment. So rewarding them for the things they're doing that are good for humanity is also a really important thing to do. And we like that. We like that form of connection. So those are all the things you could do.
Nathan Labenz That's fascinating. And I do think there is a lot of good - certainly a lot we can dream about for the future, but also a lot that we should be appreciative of even today. Not just the AIs and the value we get from them; I do think the people building the companies and training the models are in many ways remarkably thoughtful. I often say it's easy to imagine a much worse crop of people leading this -
Joe Hudson Yes.
Nathan Labenz Revolution.
Joe Hudson And could you imagine - you're in an AI lab, your job every day is to do this stuff, and all of a sudden 20,000 letters come into this AI lab, and all of them, in some version or other, basically read: hey, I know that you are trying to do something that's good for humanity and trying to make sure AI doesn't hurt humanity, and I just want to tell you I really appreciate that. If that just happened, imagine what that would do to your behavior, as compared to being shamed by a little protest outside your door. It would inspire you. It would give you confidence. It would help you feel seen. It would reinvigorate you to continue to care. It would remind you to care, as compared to "you should be -" and fuck off, you don't understand me, you don't see me.
Nathan Labenz Letters of encouragement to AI researchers. I like it. Cheers. New cause area. I really appreciate your generosity with your time. Anything else you want to leave people with? Anything else we didn't touch on that's poorly understood or just where people can find you online?
Joe Hudson Yeah. The only thing I'd say is, if you're interested in a deeper dive into how I coach, you can sign up for our newsletter. We have workshops that you can participate in for free. We have a whole bunch of podcasts that we've done, and there's online video of me doing very meaningful and transformative coaching in very short form - 20-minute big epiphanies that people have - which people seem to really like watching. So you can geek out there.
Nathan Labenz Cool. Been fantastic.
Joe Hudson What a pleasure.
Nathan Labenz Joe Hudson, thank you for being part of the Cognitive Revolution.
Joe Hudson Thanks for having me.
Nathan Labenz If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, rate and review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days.