Cutting-edge Neurotechnology with Nita Farahany in Conversation with Luisa Rodriguez on The 80,000 Hours Podcast
Listen to Episode Here
Show Notes
We're sharing a few of Nathan's favorite AI scouting episodes from other shows. Today: Nita Farahany shares her insights with 80,000 Hours' Luisa Rodriguez on the current state of neurotechnology and its potential impacts on various fields, including work, healthcare, privacy, and even human cognition. The discussion includes Farahany's assessment of devices like SmartCap and Neuralink, the concept of cognitive liberty, the risks of the technology, and the need for proper regulation and ethical considerations.
If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period.
Nita Farahany is a professor of law and philosophy at Duke Law School and discusses the applications of cutting-edge neurotechnology.
You can subscribe to The 80,000 Hours Podcast here: https://80000hours.org/podcast/
---
SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and millions of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com
NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.
X/SOCIAL:
@labenz (Nathan)
@80000Hours (The 80,000 Hours Podcast)
@CogRev_Podcast (Cognitive Revolution)
TIMESTAMPS:
(00:00) Nathan intros this episode with Nita Farahany
(03:26) The Future of Neurotechnology: From Neuralink to Meta
(05:40) The Potential of Brain-Computer Interfaces in Military Applications
(11:29) The Role of Neurotechnology in National Security
(15:20) SPONSORS: SHOPIFY | NETSUITE
(18:14) The Potential of Brain-Computer Interfaces in Communication and Data Transfer
(21:19) The Future of Super Soldiers and Brain-Controlled Drones
(32:20) SPONSORS: OMNEKY
(41:05) The Controversial Realm of Neurotechnology Weapons
(45:16) Ideal International Agreements for Weapon Regulation
(47:01) The Threat of Hacking in Neurotechnology
(50:36) Brain Signatures for Identification
(01:11:28) The Future of Neurotechnology in the Workplace
(01:24:59) The Intrusion of Brain Decoding into Personal Relationships
(01:30:00) Regulating Brain Decoding: A Call for a Change in Worldview
(01:38:03) The Impact of Neurotechnology on Identity and Self-Perception
(01:41:47) The Future of Cognitive Enhancement and Transhumanism
(01:46:21) The Role of Neurotechnology in Addressing Neurological Diseases
(01:51:04) The Dark Side of Neurotechnology: Hacking and Manipulation
(01:54:50) The Importance of Cognitive Liberty in Neurotechnology
(01:55:15) The Future of Neurotechnology: Best and Worst Case Scenarios
This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We’re launching new shows every week, and we’re looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co.
Full Transcript
Transcript
Luisa Rodriguez: There was a patient who was suffering from really severe depression to the point where she described herself as being terminally ill. Every different kind of treatment had failed for her. And finally, she agreed with her physicians to have electrodes implanted into her brain. And those electrodes were able to trace the specific neuronal firing patterns in her brain when she was experiencing the most severe symptoms of depression. And then they were able to, after tracing those, every time you would have activation of those signals, basically interrupt those signals. So think of it like a pacemaker, but for the brain, where when a signal goes wrong, it would override it and put that new signal in. And that meant that she now has a typical range of emotions. She has been able to overcome depression. She now lives a life worth living. That's a great story. But that means we're down to the point where you could trace specific, at least with implanted electrodes, neuronal firing patterns, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts?
Nathan Labenz: Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz joined by my cohost Erik Torenberg.
Hello. Happy New Year and welcome back to the Cognitive Revolution. Today, we're continuing our short series of holiday bonus content with an outstanding conversation about recent advances in neurotechnology from the 80,000 Hours podcast.
In a world of so much AI hype, I periodically try to discipline my own thinking by taking a step back and asking myself, is there any way that this is all just way overblown? Is there any way that life continues to look pretty normal for decades to come? Invariably, no matter how much I question, I end up reaching the same conclusion. Even if large language models hit a plateau at the level of GPT-4—and if you heard my recent emergency episode on state space models and the new Mamba architecture, you know that that is not at all my expectation—but let's just imagine for a moment that such a leveling off were to happen. It still seems clear to me that the parallel revolution in biotechnology itself enabled by AI advances sets the stage for a radically different future.
So today, I'm really excited to feature this interview by 80,000 Hours podcast host, Luisa Rodriguez, and guest, Nita Farahany. Nita is a professor of law and philosophy at Duke Law School and a leading scholar on the ethical, legal, and social implications of emerging technologies. Or to use my own vernacular, a neurotechnology scout. As you'll hear, Luisa repeatedly expresses her own astonishment that such mind-blowing technologies have remained relatively under the radar. And for what it's worth, I have often felt the same way over the last year. Going back to our episode with Tanishq Mathew Abraham, I recall telling my wife Amy that we're collectively so overwhelmed with AI advances that literal mind-reading technology, which presumably at one point would have been front page news, is now somehow barely noticed or commented upon.
Yet, to the best of my understanding, everything you'll hear checks out. Neuralink, which Elon Musk said he founded so that humans can "go along for the ride" with AI, is now in clinical trials. And in the months since our episode with Tanishq, Meta has published work showing that they can now decode brain signals in real time with much less invasive equipment. There is a ton more to come over the next two hours, but for me it all adds up to one inescapable conclusion. For better or worse, we will live in extremely interesting times.
So now I hope you enjoy this paradigm-shifting conversation on emerging neurotech, courtesy of the 80,000 Hours podcast with host Luisa Rodriguez and professor Nita Farahany.
Luisa Rodriguez: Hi, listeners. This is Luisa Rodriguez, one of the hosts of the 80,000 Hours podcast. I was so excited to talk to Nita Farahany because while I was reading her book about cutting-edge neurotechnology, I kept thinking over and over again, wait, how does this crazy technology already exist without me knowing about it? Does anybody know about it? And so I wanted to share it with all of you.
Some of the wilder things we discuss are: how close we are to actual mind reading, for example, a study showing 80% plus accuracy on decoding whole paragraphs of what a person was thinking; how hacking neural interfaces could cure depression; how companies might use neural data in the workplace, like tracking how productive you are or using your emotional states against you in negotiations; how close we are to being able to unlock our phones by singing a song in our heads; how neurodata has been used for interrogations and even criminal prosecutions; the possibility of linking brains to the point where you could experience exactly the same thing as another person; military applications of this tech, including the possibility of one soldier controlling whole swarms of drones with their mind; and plenty more.
Without further ado, Nita Farahany.
Luisa Rodriguez: Today, I'm speaking with Nita Farahany. Nita is a professor of law and philosophy at Duke Law School and the author of The Battle for Your Brain: Defending Your Right to Think Freely in the Age of Neurotechnology. Her resume is incredibly long and impressive, too long to go into too much detail here. But to note just a few highlights, Nita was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues, where she served for seven years. And she's part of the expert network for the World Economic Forum. So we're very lucky to have her on as a guest. Thanks so much for coming on the podcast, Nita.
Nita Farahany: Thanks for having me.
Luisa Rodriguez: So I hope to talk about where neurotechnology is going, what that means for our rights to privacy, self-determination, and inequality. But first, my sense is that you're both excited and worried about the impacts of neurotechnology on society. What's your basic pitch for exactly what's at stake here?
Nita Farahany: So I think that's exactly right, which is I'm both excited and very worried about it. And my basic pitch is that I think that this is transformational technology that could be the most empowering technology that we've ever introduced into society or the most oppressive that we've ever introduced. And it really depends on what actions we take to direct the technology and its applications and the ways in which we handle the data that comes off of it in ways that are either aligned with human interest and human flourishing or misaligned.
Luisa Rodriguez: Cool. Okay. I think we'll get into a bunch of the reasons why you're excited and worried in more detail. But I want to spend some time going through a few different applications of new neurotechnologies and talk about where they are now and where they're going, and then get to some of those implications. So first, I wanted to ask you about applications of new neurotechnology in security and surveillance, starting with what already exists. What's one important neurotechnology that exists today with applications to national security and surveillance?
Nita Farahany: So the hard thing is that there's not a lot of information out there about the ways in which neurotechnology is being used for surveillance from a national security perspective. So every military around the world has programs that have invested in technologies either for purposes of enhancement, like to make super soldiers, for example. And this is from soft neurotechnology in the sense of things like drugs. Right? So for a very long time, many of the cognitive enhancers that have been developed really were first developed with national security or with military applications. You know, air force pilots were some of the earliest test cases for modafinil, right, the drug for wakefulness. So the neurotech from a national security perspective really has started with investments in enhancements rather than in surveillance.
Then you look at the ways in which there are lots of investments in the military to try to see whether technology that decodes the brain, or technology that stimulates the brain, could be used to monitor soldiers' brains, to enhance soldiers' brains, or to enable brain-to-brain communication. And so if we break those down, we have the enhancement applications. You know, there is transcranial magnetic stimulation, and there is transcranial direct current stimulation. These are different ways of stimulating the brain with external neurotechnology. And with transcranial direct current stimulation, there have been devices used with militaries to try to improve training: for example, enhancing learning during target practice and target identification. Or there have been attempts to embed EEG sensors into helmets that people are wearing on the field to try to detect things like whether they automatically sense a target. Like, it turns out our brains can pick up information well before we consciously process it. And so the idea is to detect that information and then, you know, send it back to a command center.
Or there have been attempts at brain-to-brain communication where the military has invested a lot of money in trying to figure out if it's possible to have silent transmission of communication between people on the field. None of this, as far as I'm aware, is at scale in any military across the world. They're all significant research development programs. There's a lot of military investment in trying to decode what are called evoked response potentials in the brain. So this is how our brains automatically react to information, and that could be how we automatically react to a picture that's shown to us to see if we recognize an image or recognize a co-conspirator, for example.
Or it's something like an N400 signal, which looks to see if you show congruence of recognition of different statements. So DARPA, for example, has a program underway in the United States to look at whether you can show a veteran two pieces of information to try to see if they're suicidal or suffering from depression. So you might say two statements: "I" and then "am suicidal." Right? And do those two things go together? Are they congruent? Or does the brain show incongruence—those two things don't go together? And then use that as a way of trying to probe the brain for information. That's an early stage research program. Right? So it's not at scale yet.
Luisa Rodriguez: Right. Wow. Well, I want to come back to that in a bit. But first, are there any broader applications to national security?
Nita Farahany: I guess there's two other categories in the national security side. One is to look and see whether or not it's possible to develop brain biometrics. And by this, I mean, so each person's brain seems to process information a little bit differently. And people are used to at this point, even if they're not happy about it, they're used to different biometrics being collected about them. So for example, a face print used to unlock a mobile phone. And a brain biometric is a functional biometric. It's how your brain responds to something to unlock a device. And so if I were to sing my favorite song in my head and you were to sing the same song in your head, the neural signatures would look different.
And so we could use that as a functional biometric where to unlock a phone instead of needing a password, I would sing that little song in my head, and then the specific pattern of neuronal activity could be used to unlock the phone. And a lot of militaries around the world are interested in these functional biometrics, especially brain-based biometrics because they may be unique, much harder to replicate, and much harder to therefore hack in the ways that passwords are really easy to hack or things like that. It's also really easy to change it. So if somehow the functional biometrics were unlocked and somebody got access to them, I could just change it to my next favorite song or how I do a math calculation or anything else. And each time, it'd be a unique neural signature that is unlikely to be replicated by other people.
One last category is this area that people are worried about, which is cognitive warfare. And that's the possibility that militaries are developing brain-based weapons to try to target people's brains and disable or disorient them. And this has come to light especially around claims of Havana syndrome from a number of US diplomats. So it started with US diplomats who were in Havana, Cuba, who started to experience a common set of symptoms, like ringing in their ears and dizziness and different neurological symptoms. And there was such consistency in their complaints that the military started to look into it. And then the same kinds of claims started to come from US diplomats in other places around the world. And because of where it first started, it came to be described as Havana syndrome.
And the National Academies of Sciences did a big research look into the question, and they concluded that there was likely some kind of microwave weapon, or some kind of weapon with electromagnetic frequency, that was being used to actually target and disrupt people's brains. The US national intelligence agencies came out in a joint statement last year saying they didn't think a foreign adversary was likely behind it, and that there were still a couple dozen cases they couldn't explain, so they didn't have any sort of answer as to what that was. But at the same time, the Biden administration has sanctioned four Chinese-based companies for the development of purported brain control weaponry.
And if you look at other areas, like the use of TikTok in the United States versus in China, informational warfare seems to be growing as a concept of cognitive warfare. And a lot of different militaries around the world and NATO have started to hold convenings and conferences and conversations around this concept of cognitive warfare and whether this might be a new domain of warfare that's really underway.
Luisa Rodriguez: We'll continue our interview in a moment after a word from our sponsors.
Luisa Rodriguez: That's a bunch of technologies that I think many people just have absolutely no idea exist now, or at least some of them are being tested, some of them may be more at scale than others. But I think there's a ton of mind-blowing and just pretty new stuff for many people there. So I do want to go through a couple of those one-by-one. So one thing you mentioned and a thing that I read about in your book that really blew my mind is the potential to use brain-computer interfaces to create so-called super soldiers that can control swarms of drones with their minds, communicate and upload data brain-to-brain, and identify targets subconsciously. And you already alluded to a few of those things. But can you basically just describe what that would look like in a bit more detail, starting with controlling swarms of drones with your mind?
Nita Farahany: Sure. So I mean, one thing I think people maybe don't fully appreciate is the possibility of so much more that we could control with our brains than our bodies. Right? And that's what a lot of neurotechnology is looking at, because you take signals from the brain and use those in really different ways. Probably the best way to understand this use of drones comes from a presentation I attended in 2018 by a company that was later acquired by Meta called CTRL-labs. The guy started the presentation by saying, why are we such clumsy output devices? Like, our brains are incredible input devices. We have so much information stored in our brains, but we're limited by our bodies. And wouldn't it be great if, instead of using these sledgehammer-like devices at the end of our arms—as he was waving his hands—we could operate octopus-like tentacles instead?
And the idea was you could really use the output of the brain, if you trained it, to control everything from octopus-like tentacles to an entire swarm of drones. And a swarm of drones responds directionally as a group. Right? So it's, you know, go left, go right, but instead of directing one drone at a time, it's organizing all of them to act in collective, swarm-like behavior. And so what the military has tested out is taking the brain activity you would use to control one drone—right, using your mind to think up, down, left, right—and instead having an entire swarm that is responsive to that activity. Right? And so it's not necessarily that your brain is connected to all of the swarm. Right? There's a lot of programming that's happening to connect the drones to each other, and they respond in a swarm-like way to your brain activity, which is serving as the interface, the neural interface, for how it actually operates.
And it's kind of incredible to think about, which is, you know, we're so used to our brains operating our bodies to instead think about our brains operating a lot more and a lot more in collective action and behavior that's animal-like and swarms rather than human animal-like, right, that we're more used to.
Luisa Rodriguez: Yeah. Right. So I guess if you're thinking about the controls or the operations for one individual drone, and it's something like up, down, right, left, is the thing that you're gaining the time saved from having to use your finger to click on a keyboard: up, down, right, left? Is there more that you gain?
Nita Farahany: Yeah. You don't even have to think up, down, left, right. I mean, and right now, we have become so used to the interfaces we use, like a keyboard or a mouse, that we're used to not thinking, like, I'm going to move my finger left and I'm going to move my finger right. But really what we've done is we've added friction between us and operating a device. Right? Which is there's some intermediary that you have to use brainpower to operate. You have to use your brain to have your finger move left and right. And just think about the time that you lose there too. But it's also unnatural. Right? I mean, we've learned it, so it's become more natural, but it's not the way—like, think about right now, you know, whoever's listening can't see me, but I'm moving my hands in the way that I normally do when I'm expressing a thought. I'm not thinking move my hand up or down or left or right. It's just part of how I express myself.
And similarly, the idea of being able to operate a drone is you're not thinking, okay, now go left or now go right. You're actually, if you're looking at a screen that is the navigation, you're just navigating. Right? Just like you're just intentionally navigating, and then the drones are an extension of your body. They're an extension of your mind that are navigating based on how you are naturally navigating through space. And that's the difference with neural interfaces. It's meant to be a much more natural and seamless way of interacting with the world and with other objects that become extensions of our minds rather than the more direct connection that we have right now with our body. It's forging a connection with external technology without the kinds of intermediaries that we're used to, which if you kind of step back and look at them, they're a little weird. Like, it's a little weird that you move your hand around to actually try to navigate through a space. Or if you're in virtual reality, it's weird that you have to be using a joystick to move. Right? You should just be able to think about moving naturally.
Luisa Rodriguez: Totally. Yeah. That really, really helped me. I don't know if this works, but another analogy I'm thinking of is, yeah, like, I've now got muscle memory for my keyboard. I know that the L is on the right and the A is on the left. And not only will it remove the fact that I had to learn to type, but it, in theory, could also remove something like the fact that I'm used to having to translate whatever kinds of thoughts I have that are both verbal and visual into linear sentences created on a Word doc where I edit in a certain way. And I don't know. I can't backspace as quickly as I want to. I have to switch to my mouse. It's yeah. I guess a mix of physical hand-eye coordination and also just the way of thinking.
Nita Farahany: Yeah. So we've learned a way of expressing ourselves through chokeholds.
Luisa Rodriguez: Right? Right.
Nita Farahany: But we have become accustomed to those chokeholds. And so it's as if it's natural. And in many ways it is for us because that's what we've learned. That's how we've wired our brains. Neural interface imagines a new world where rather than having the chokehold, you know, you are operating more like one with the devices that you're operating, and you're operating without the chokeholds in between.
You know, there's still going to be limitations on being able to have a full throttle thought expressed through another medium. Right? I mean, we have limitations of language right now of how we communicate. And so you can hear my words, but you can't also see the visual images in my mind that go with those words. You can't feel the feelings that I am feeling along with the words that I'm saying. You can pick some of that up from the tenor of my voice or pieces like that, but you're not getting all of it. And even when you're interacting with a swarm of drones or, you know, there's still these limitations. But I think people dream of a world in which brain-to-brain communication might enable sending across to another person a more full throttle thought than we currently have. I don't know of any technology that does that yet. Right? I don't know of anything that actually captures it. And part of it is I don't think anybody has figured out how to decode those multiple layers of thought from cognition to metacognition to the full embodiment of thought. But, you know, I think it's neat to think about that. Right? Which is the possibility of actually getting to that level of communication with one another.
Luisa Rodriguez: Yeah. Cool. Okay. So the idea of neurotechnology removing these chokeholds, I think, is going to be a theme. So in this case, we're talking about removing that chokehold in interacting with drones. You also just mentioned communicating and uploading data brain-to-brain. Can you say more about what that might look like in the military context?
Nita Farahany: Yeah. So, you know, one thing people worry about in the field is interception of communication. Right? And they worry about, you know, enemy combatants overhearing or intercepting or decrypting whatever they're sending to each other. And also that the speed and the complexity of what you're sending back and forth between people may be limited by existing technology. So brain-to-brain communication imagines a world in which you could send signals from your brain directly to another person's brain.
The closest that we've really come to some of that brain-to-brain communication has been, you know, there was a neat study that was done, I think, at the University of Washington where there were three different people in three different rooms. And they were playing a collaborative game of something like Tetris where two people had on electroencephalography headsets. And the third person also had on, I think, an electroencephalography headset, but also something like a stimulation device, a neurostimulation device. And two people were considered senders. One was a receiver. The senders could see the entire game board. So they could see the piece falling from the top of the board, and they could see the bottom of the board. And so they knew whether or not you needed to rotate the piece in order to satisfy the row. The receiver could only see the falling piece. They couldn't see the bottom of the board and had to use the brain signals that were being sent from the senders, like, yes, rotate or no, don't rotate. And so they would think, yes, rotate or no, don't rotate. That would be translated into a signal that would be received by the receiver, and that person would see it as a flash of light in their brain for yes or no flash of light for no.
And they played this game with different groups of these three-person teams getting above an 80% accuracy rate of solving the rotation of the piece. So it's not like a full thought. Right? It's not like sending words to another person's brain, but using modes of communication like a flash of light. Right? So in advance, you would set up some kind of, you know, yes, fire. No, don't fire. You're going to see a flash of light if you're supposed to fire. You're not going to see a flash of light if you're not supposed to fire. And then using that, right, silent brain-to-brain communication mediated through neurotechnology as a way to communicate with another person, which is pretty mind-blowing.
Luisa Rodriguez: It's really mind-blowing. Like as I was reading your book and reading about studies like this, I just had this feeling of, how does this exist and I didn't know? I don't feel like anybody really knows. I mean, I'm sure that's not true.
Nita Farahany: No. I mean, honestly, I think that's probably one of the things that I hear most from people is, how is this so advanced and I had no idea about it? Why are people not talking about it? Even a lot of the neuroscientists have said to me in reading the book, you know, putting it all together, like all the different pieces that I put together in one book and showing it sector by sector, both where the technology is, but also the ways it's being applied in all of these different contexts, I think for a lot of people has been very startling.
That was a really intentional move that I decided to use in the book, which is there's not a lot of futurism in the book. Right? It's mostly describing existing technology. And that was so that people would read it and understand I was talking about something that is here and now, just not fully at scale across society. And, hopefully, to help serve as a wake-up call, right, to say we're sitting at a moment before technology that will truly transform humanity is about to go to scale across society, and this is what is already happening in this space, what can already be achieved. And you can bet with all of the advances in generative AI and all of the rapid ways in which the technology is going, we're going to be able to do a lot more five and ten years from now. That doesn't change. Like, right now, it's already here, and we need to do something about it.
Luisa Rodriguez: Totally. Yeah. Yeah. Yeah. I had so many moments like this where, yeah, I find it definitely interesting to think about where the technology might go, but the specific things that are already happening—and again, we'll get into a bunch of them—truly just blew my mind. So, yeah, I guess pulling it back in a bit, this kind of technology that was used in this Tetris game, yeah, I just want to understand how it works a bit better.

Nita Farahany: (28:09) So everything we think, everything we feel, when that's happening, neurons are firing in our brain. And when you have any particular thought, like relaxation, or you have a particular thought like rotate or no, don't rotate, hundreds of thousands of neurons are firing in your brain at the same time in characteristic patterns that can be picked up. Those are called brain waves. They can be picked up, these patterns together, by electroencephalography. So these are just sensors that are placed on the scalp. It picks up the electrical activity that's happening in the brain. And then those patterns can be decoded with AI. It's like any other kind of pattern where it can be translated and trained over time. And so it happens with training, where lots and lots of prior research has been done where you'll say, okay, this is what it looks like when a person's brain is relaxing, and this is what it looks like when they're stressed, or this is what it looks like when they're saying yes, or this is what it looks like when they're saying no. Each device is also slightly calibrated to the person's own brain activity when they put it on as well. So that's EEG. There's lots of different brain signals that could be picked up, but one of the dominant ones for these more widespread headsets is EEG activity.
And people may have heard of EEG, and maybe what they're thinking of right now is a big medical cap that has a bunch of wires coming off of it and a bunch of gel that's applied and 64 or 128 of these weird looking things. One of the big innovations has been dry sensors, so you don't have to apply them with gel, and just a few of them. So a few worn across the forehead or some inside of the ear, worn inside of earbuds or headphones. Like I have on headphones right now, you have on headphones right now, the soft cups around them can be packed with EEG sensors that can pick up that brain activity. And so in the Tetris example that we were just talking about, they're not wearing big medical-grade caps. They're wearing something that could be worn in the form of a baseball cap or a stiff headset or a headband worn across the forehead that has these sensors. So the devices are getting smaller and smaller, and the capability of decoding it is getting higher and higher.
Luisa Rodriguez: (30:38) Hey, we'll continue our interview in a moment after a word from our sponsors.
Luisa Rodriguez: (30:42) Okay. So it's picking up these brain waves, and it's smart enough now to decode them reasonably well. And where exactly is the limit on how well we can decode? Can you give some examples of things that we can do and things that we can't do yet?
Nita Farahany: (30:57) So I think there have been bigger advances made in decoding the brain than in brain-to-brain communication so far. And in decoding the brain, there's lots of different signals, and those different signals have different value. So the best studies, the ones that people may have heard about in the news or something, are oftentimes with something called functional magnetic resonance imaging. This is like a giant MRI people go into. And the benefit of this is it can look really deeply into the brain, so you get spatial resolution. But it's a very, very slow signal. It's picking up what's called blood oxygenation levels. So as you're thinking, blood goes into one area. It's oxygenated. You use up the oxygen. It leaves. It's deoxygenated. That signal can be picked up on fMRI. And so the really powerful studies that have done things like decoding whole paragraphs that a person is thinking about or visual images, like when they're dreaming or where they're imagining it, those have primarily been through fMRI, not these portable devices. So EEG that we were just talking about is better at picking up bigger brain states because it doesn't—think about your head. You've got a big thick skull. A lot of brain waves don't make it through that skull. And then it's a very noisy signal because you're moving, you're blinking, muscle twitching. And so that's a noisier signal, so you don't get as much. And you pick up more things like, are you happy? Are you sad? It's easier to pick up like yes, no, left, right, rather than whole paragraphs.
Luisa Rodriguez: (32:34) Does it get harder when you're trying to pass it to someone else? I guess when I imagine—yeah, I guess I can imagine it being a pretty different technology, going from what is this person thinking to how do you infuse that into someone else's brain such that it manifests as a flash of light. That's just pretty wild.
Nita Farahany: (32:54) That's harder. So what's easier to do is brain-to-text. So I can have something decoded, created as a text, and then send it to you. And then that is brain-to-brain in a way. It's just not directly into your brain. You have to read your text message to get it.
Luisa Rodriguez: (33:10) Okay. So you're reading a physical text message, like on a phone.
Nita Farahany: (33:15) So that is one of the brain-to-brain things that people have talked about, but it's not really brain-to-brain. It's brain-to-brain mediated through a text message or something else.
Luisa Rodriguez: (33:23) Sure. It's kind of like voice control, but with your brain, and then someone reads it.
Nita Farahany: (33:27) Yeah. Exactly. Brain-to-brain, there are some signals that people have started to figure out. I was at a conference at the Royal Society recently, and this guy was following me around. And he was like, I want to give you a demo of my neurotech. I was like, I don't want a demo of your neurotech. Finally, I was like, fine. I'm about to leave. I'll do a demo of your neurotech. And he put these headphones on me, and he's like, how much time do you have? And I was like, like five seconds because I'm going back to the airport. And he was like, this demo is six seconds, and I'm going to—you can choose this one, and it induces a feeling of drunkenness or vertigo. And so he pushes it, and oh my god, I had to hold on to something because suddenly I experienced vertigo. And that was like, okay. I'm impressed. And I had to leave. But, and happily, the vertigo went away, and I was able to go to the flight. But so think about that. Which is, suppose you and I agreed in advance. Every time you experience vertigo, that means yes. And when you experience nothing, that means no. And so you see the piece falling from the top of the screen, and suddenly you have vertigo, and you're like, okay, yes. And then you see a piece falling, and you don't experience vertigo, and you're like, okay, good, no. I think that's kind of how to think about brain-to-brain right now is there's—it's almost like Morse code. You agree in advance on what the signal means. And so that same idea, which is inducing a flash of light, is stimulating the visual cortex. And so there are specific signals and specific stimulation that people have figured out can do things like appear in the visual cortex or give you nausea or give you vertigo or give you a shot of dopamine and pleasure. So it's kind of hacking into the brain's basic functions like that and then agreeing in advance what that means for communication purposes.
Luisa Rodriguez: (35:16) Cool. Okay. So that's the current state of things. And when you imagine this being applied in the military context, I guess eventually, we can imagine it being used by soldiers to communicate brain-to-brain, and then I guess also to upload data. But do you have specific applications in mind?
Nita Farahany: (35:36) Yeah. I mean, I think I imagine that this is sort of a seamless way of communicating on the battlefield without risk of interception. It's primarily about secure communication. I—that may just be because I'm limited in my military thinking. I'm not a national security expert, but think about the ways in which brains are both used for enhancements, but also used to create super soldiers and then used to try to have secure ways of communication or brain biometrics to have a much higher way of being able to access secure information. But I'm sure that there's a lot more there that I just don't know about that is all classified that I don't get to know about.
Luisa Rodriguez: (36:17) Yep. Another one you mentioned is identifying targets subconsciously. How would that work?
Nita Farahany: (36:24) Well, so this gets back to the idea that there's a lot happening subconsciously in our brain before we consciously process information. And target identification turns out to be one of them, which is that the brain may automatically recognize features of a target. Like, if you're looking at surveillance images, for example, the brain may detect and recognize a target through one of these evoked response potentials before you're consciously aware of it, and this can be milliseconds to seconds later. Or maybe it never reaches your conscious awareness, but your unconscious, kind of subconscious processing and visual scanning is able to pick it up. And so software systems are being trained on this, where you have somebody who's very good at target identification who maybe can't articulate what it is about that target that made them identify it. Some of the best people at target identification are not very good at training other people, because they can't explain and verbalize the characteristics. This is sort of the same idea as I can't fully convey a full thought to you, but your brain is able to do a lot more than we otherwise think. So with people who are really good at target identification, usually, training happens by another person watching them, rather than them explaining to the person how to identify a target, kind of repeated learning by watching. So with target identification using EEG, if you could figure out what that signal is and identify it every time they recognize the target, you could both use that to potentially train future people, but also use it as an early detection system. If this person who's really good at target identification lit up, then you could have AI look at it: what is it that makes this a target every time the person is able to identify it? So there's been some really interesting studies done around that kind of automatic recognition of features like targets. 
And what is it that makes some people so good at it? And can we use that as an early warning system or use that to send that signal back to command so that they get automatic threat detection much faster than somebody could verbalize it?
Luisa Rodriguez: (38:33) Wild. Okay. So that's the kind of way we can imagine so-called super soldiers going forward. And it sounds like most of this technology is early-ish?
Nita Farahany: (38:46) Yeah. I mean, I think so. But again, from a national security perspective, I couldn't really tell you. So what I can tell you is that, at least from looking at all of the research studies that are published and from anything I know from conversations with people in the military, it seems like a lot of this stuff is early with a big question mark around stuff like Havana syndrome, where there are a lot of declassified documents that have come out of China that suggest that they're investing a lot in purported brain-disrupting technology. And if you could have something like a microwave or have something like radio frequencies that you're able to kind of pinpoint and target, certainly that could disrupt brains. But we're still, I'd say, in kind of primitive days of being able to have full robust brain-to-brain communication between people or thinking that the most efficient way to operate a swarm of drones is by having somebody wear an EEG headset to do so.
Luisa Rodriguez: (39:50) Right. Sure. Sure. And it feels both like we don't have the super soldiers yet, but also all of the things you've just described to me when I first heard about them completely shocked me. I had no idea that we were anywhere near there.
Nita Farahany: (40:03) Yeah. Probably I'm normalized to it. So I'd say we are so much further ahead than 99.9% of people realize. And yet, from a neuroscientific perspective, there's still a wide gulf to cross before we ever get to full brain-to-brain communication, but there's still a lot that we can already do.
Luisa Rodriguez: (40:23) Going back to a category you just mentioned, which is research being done on different kinds of weapons that would use kind of neurotechnology to basically damage brain tissue. I think this includes things like acoustic weapons, laser weapons, and electromagnetic weapons. What do we know about these?
Nita Farahany: (40:41) Not a lot, to be honest. So the National Academies report that looked at the possibility of microwave weapons that could be used to disrupt brain activity sort of posited what that looks like scientifically. And there were a whole bunch of people in the scientific community that looked at it and said, that's not possible. You'd have to have a very large microwave, and that would be detected on satellite. It's not the kind of thing that would just happen. So I'd say it's really disputed in the scientific community as to where we are with any of those technologies and how they actually interact with the brain. The best thinking on this, or the best, most public discussions about it, all center around the scientific discussions of Havana syndrome. There's also a whole lot of people who believe that they suffer from the effects of these kinds of technologies and kinds of weapons. I don't think that they're deployed on any kind of scale that would lead to ordinary people and ordinary civilians experiencing the effects of them.
Luisa Rodriguez: (41:46) Are these kinds of technologies even accepted under international law? Are there even any laws that would apply to them?
Nita Farahany: (41:54) Really good question. And unclear. So they don't fall clearly under bioweapons or chemical weapons or other kinds of treaties that we have. I've argued that the use of them would clearly violate different provisions of the Universal Declaration of Human Rights. But it's not as if there's ever been a case brought where those provisions have been interpreted to apply to the destruction of the capacities for thinking, or for experiencing self, that these weapons would cause. So there's certainly a lot of discussion internationally around neurotechnologies and the regulation of them. Not a lot has been happening around what that means for the development, use, and deployment of weapons for cognitive warfare.
Luisa Rodriguez: (42:43) Right. Right. Do you have a sense of what you'd like to see in an ideal world, in terms of the kinds of international agreements that might regulate these kinds of weapons?
Nita Farahany: (42:53) Yeah. I mean, I go into this in the book, and I think I called that chapter Big Brother is Listening or something like that. But where I think that there are both provisions of the Universal Declaration of Human Rights that should really guide us to say the use of these kinds of weapons to disable, disorient, or destroy in any way the human capacity for thinking and for decision-making, and for just operating as a self, really should be among the most fundamentally regulated, most fundamentally prohibited kinds of things that are out there. I mean, they get at our capacity for even being, and destroying that capacity for being seems like it would violate the core of all human rights. In addition to that, I look at different provisions around torture, and in particular psychological torture, and think that some of the treaties and some of the regulations that exist there should be applied in this context as well. Many times torture has really focused on physical pain that a person experiences, and even psychological torture has really looked more at physical pain that a person is suffering. Whereas to me, it seems like the basic idea of stripping a person of their dignity and their ability or capacity for thought would also constitute psychological torture, and we ought to interpret it that way.
Luisa Rodriguez: (44:17) That makes a ton of sense to me. I guess thinking about just other risks and things we should be worried about for these kinds of military applications of this technology, one risk that comes to mind is the potential for hacking. I guess to the extent that you'd be kind of uploading a bunch of data from your brain, well, sending it out brain-to-brain or just uploading it to physical machines. Does that make brains in general more vulnerable to some kind of hacking by another state or a non-state actor?
Nita Farahany: (44:54) Maybe. So we've talked a little bit about how we're not as far along yet in writing to the brain as we are in reading the brain. But we are somewhat there in writing to the brain. And I'll answer this a little bit by analogy. So there was a patient who was suffering from really severe depression, to the point where she described herself as being terminally ill, like she was at the end of her life. And every different kind of treatment had failed for her. And finally, she agreed with her physicians to have electrodes implanted into her brain. And those electrodes were able to trace the specific neuronal firing patterns in her brain when she was experiencing the most severe symptoms of depression. And then, after tracing those, every time you would have activation of those signals, they could basically interrupt those signals. So think of it like a pacemaker, but for the brain, where, when a signal goes wrong, it would override it and put a new signal in. And that meant that she now actually has a typical range of emotions. She has been able to overcome depression. She now lives a life worth living. That's a great story. That's a happy story and a happy application of this technology. But it means we're down to the point where you could trace specific neuronal firing patterns, at least with implanted electrodes, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts? Could it be that one day we get to the point where, if you're wearing an EEG headset that also has the capacity for neurostimulation, someone could pick up specific patterns of thoughts and disrupt those specific patterns of thoughts? If your device is hacked, for example. Maybe. I mean, we're now sort of imagining a science fiction world where this is happening.
Luisa Rodriguez: (46:44) But sure.
Nita Farahany: (46:45) But that's sort of how I would imagine it would first happen: you could have very general stimulation, like I experienced at the Royal Society meeting, where suddenly I'm experiencing vertigo. And that's something somebody could do by hacking your device. Like, I'm wearing this headset for meditation, but it's hacked, and suddenly I'm experiencing vertigo, and I'm disabled. Devices get hacked. And we could imagine devices getting hacked, especially ones that have neurostimulation capacity, either in really specific patterns or in ways that could just generally take a person out.
Luisa Rodriguez: (47:19) Okay. Well, that is incredibly horrifying.
Nita Farahany: (47:22) So do I worry about that? Yes. I worry about it. I mean, I've been talking with a lot of neurotech companies about the fact that there's not a lot of investment in cybersecurity happening in this space. Start to imagine a world in which the information, like what you're thinking and feeling, could be hacked; that's a privacy concern. But if you're wearing a neurostimulation device, can the device be hacked to create kinds of stimulation that would be harmful to the person? Maybe so. It seems like a really basic and fundamental requirement for these technologies should be to have really good cybersecurity measures implemented.
Luisa Rodriguez: (47:59) Nice. Yep. I completely buy that. That sounds really important to me. Moving to another technology that you mentioned already, and that seems relevant here. It sounds like different governments are getting excited about the possibility of using brain signatures for identification. Can you explain what that looks like?
Nita Farahany: (48:18) Yeah. So neural signatures may be unique across everyone. We don't know yet, because not everybody has had their neural signature quantified or registered yet. So we can start with something called authentication, which is when you have a baseline that you record of something and then you match it; that's authentication. Identification would be a world in which I can pick you out and identify you uniquely, rather than authenticate you. So brain biometrics right now are primarily being looked at from an authentication perspective, because we don't know if they're unique across billions of people. And what that means is, if I record myself, like, I read a sentence, whatever that sentence is. Like, Nita had a KIND bar for breakfast this morning. And you think that same sentence, Nita had a KIND bar for breakfast this morning. And we both record our neural signature when we're thinking that. Mine will look different than yours, even though it's exactly the same sentence. And that, whether it's a little song I sing in my head or a sentence that I think, is something that's called a functional biometric rather than a static biometric. A functional biometric means you're doing something. It's sort of like the patterns that people unlock a phone with.
Luisa Rodriguez: (49:42) Sure. The shapes, a star, whatever, that you do with your finger.
Nita Farahany: (49:46) Yeah. And I think how you do it is more telling than the numbers or something. It's a functional biometric.
Luisa Rodriguez: (49:52) Right. Right.
Nita Farahany: (49:53) So that's what brain biometrics are: they're functional biometrics rather than just the resting state of your brain. It's you doing something, and then using whatever that doing-something is to unlock it. So every time I think, Nita had a KIND bar for breakfast, I can use that. I can record my brainwave activity while thinking, Nita had a KIND bar for breakfast. And then I can unlock whatever it is, get into the secure facility that I'm trying to get into, by thinking the same thing. And a lot of governments are investing in research into brain biometrics because they're looking for secure ways to authenticate people. And this would be a very secure and silent way of authenticating somebody. I don't have to say my password out loud. You don't ever see it. It's different between us. You can change it really easily. Today, I think Nita had a KIND bar for breakfast. Tomorrow, I change it to Nita had oatmeal for breakfast. And just go down the path each day.
Luisa Rodriguez: (50:51) And why? What do we know, neuroscientifically, about why they're different?
Nita Farahany: (50:56) Well, I mean, I have a theory. I'm not a neuroscientist, but I think part of it is that our brains and how we think are shaped by the uniqueness of every experience we've ever had, and by the structure of our brains. And so when I learned what a KIND bar was, the association of KIND bar in my brain may be different than the association of KIND bar in your brain. And so when you first developed the neuronal pattern for KIND bar in your brain, it's imbued with all kinds of context-specific information and everything that ever came before it for how your brain processes information, which is going to look a little bit different. Now, it's not so different. I mean, you and I both have a visual cortex, and we have a sensory cortex. Like, the brain structures are preserved across brains, but the very specific neuronal firing is going to be a little bit different between each one, which also means that each brain has to be calibrated when they put a device on. The little differences are important enough that you have to calibrate the device to your brain.
Luisa Rodriguez: (52:02) Right. And is the way you do that basically like playing different songs to a bunch of people when they first use the thing? Like, we're going to play you the same three songs and then look at the kind of very specific individual differences that we see in your kind of reactions to the songs. Is that the kind of calibration, or is it something else?
Nita Farahany: (52:22) I think it depends on the context for what it's being used for. So, like, if I'm using a device that has been calibrated for gaming, where left, right, up, down means something for the device, then you're going to calibrate it around left, right, up, down. And so you'll do a set of exercises that you'll be like, okay. Now push the box to the right and push the box to the left with your mind.
Luisa Rodriguez: (52:45) I see.
Nita Farahany: (52:46) And it starts to kind of learn that. So you calibrate it to figure out what that looks like for you. And the same goes when you start to get to devices that are around typing, for example, or more complex kinds of decoding. So the calibration is kind of use-case-specific: whatever the use is, that's what you have to get the baseline for.
Luisa Rodriguez: (53:02) Sure. Okay. So then, because of all this context and these very, very microscale individual differences between people, your reaction to a song, or the way you think, Nita had a KIND bar for breakfast, is different enough that you can use it to distinguish between people. That alone is just incredible, and it's incredible that we're relatively close to this being a technology that governments would actually use to identify people, is my impression. Is that basically true?
Nita Farahany: (53:37) Yeah. It's basically true. And it is incredible, for sure. And I used that example in the book because there were a few things I was trying to do in the book. One of them was to help people understand that this is technology that is really here. But it's also to kind of build the case to explain why it's going to go to scale across society. Like, how people are going to end up integrating it into their everyday lives, and how, without even realizing it, our last fortress of privacy will fall. And one of them, on the government side, is brain biometrics. So people have given up their thumbprints and their face prints really without even thinking about what the implications are, in order to unlock their devices. Like, oh, sure. Face ID? Yes. I will give it to every single application on my phone and every single company that's out there in order to make it easier for me to not have to type in my password. And the same thing I think is going to happen with the brain, where if brain biometrics become the gateway for you to be able to access other information, you're like, oh, sure. Here's me singing a little song, without realizing you're giving away how your brain works. And you're uploading information and raw brainwave activity and sort of handing that over on a silver platter. So it was one of the many examples that I put into the book to help people both understand here's what's already happening, and here is why it's going to end up becoming part of your everyday life.
Luisa Rodriguez: (55:00) Yeah. Yeah. I think when I was reading about the example, I was like, this just sounds pretty good. It sounds like it'll increase the safety of my device and my stuff because it's way harder to—well, I was thinking at the time that it might be much harder to recreate my unique snowflake brainwaves than it would be to hack into, I don't know, my password manager.
Nita Farahany: (55:24) Well, I mean, I think that's right. Let me just give everybody a moment, which is to say it's not just, "Oh, isn't that creepy? We weren't even thinking about it." There may be really big benefits to actually adopting brain biometrics. It will be more secure, and it will be easier. And a functional biometric is probably a lot better than a lot of the passwords that are out there. People suffer from identity theft, and systems get hacked into, all the time. And so there are really good reasons to invest in functional biometrics, including brain biometrics. I just don't want people to stop with that thought. So you're about to go on. Now go on. But then I want you to be like, "And then, but..."
Luisa Rodriguez: (56:06) Then I was like, "Oh, but maybe this is..." I'm imagining I'm wearing this headband. I'm using it for all of my devices. And then you point out that it's not totally clear exactly how much of the brain data will be accessible to whoever's collecting it. Can they sell it? Are they looking specifically at my reaction to that song? Or do they... Kind of like location data on my phone, where I've left that on because that has some benefits to me. Will there be a feature where I leave my brain data scanning on, and then they not only have how I react when I listen to a song, but they also have, as I move through the world, whatever data they can get from my brain waves?
Nita Farahany: (56:50) Let's animate that just so people understand what that means. It means multifunctional devices. The primary devices that are coming are earbuds, headphones, watches that pick up brain activity, but also let you take conference calls, listen to music, do a podcast—all of those things. And so passively, it's collecting brainwave activity while you use it in every other way. People are used to multifunctional watches. They're used to rings. They're used to all of these devices. It is another form of quantification of brain activity.
And then what does it mean? So you do it to unlock your app on your phone. Now you're interacting with an app on your phone. And how you react to the advertisement that just popped up—are you engaged? Is your mind wandering? Did you experience pleasure, interest, curiosity? What is your actual reaction to everything? A political message ad pops up on your phone. Did you react in disgust? Did you react in curiosity and interest? I mean, these are all the kinds of things that can start to be picked up, and it's your reaction to both explicit content and also subliminally primed or unconsciously primed content, all of which can be captured.
Luisa Rodriguez: (58:17) Right. I find myself drawn to the benefits, but also I'm not the kind of person who's super privacy-oriented. And I can easily see myself being like, "Who cares if they know my reaction to a song? I feel fine about that." But then I can really easily imagine the slippery slope where the technology keeps getting better and better, and it picks up more complex thoughts. And also, I'm not even correctly thinking about all the ways this data could be used. I'm probably imagining these kinds of benign cases. But actually, there are probably a hundred different uses that I'm not even thinking of, and some of them might actually bother me.
Nita Farahany: (58:58) Some of them might be totally fine.
Luisa Rodriguez: (59:00) Sure.
Nita Farahany: (59:01) And you're right—a lot of people are not that worried about their privacy in general. And so they may react to this and say, "Oh, that's fine. Maybe I'm just going to get much better advertisements." And that's okay if people choose that. If they're okay with giving up their mental privacy, that's fine. I'm fine with people making choices that are informed choices and deciding to do whatever they will do.
I would guess there is a lot more going on in your mind than you think that you want other people to know. I would just ask you: Do you ever tell a little white lie? Do you ever tell a friend that you like their couch when you walk in?
Luisa Rodriguez: (59:35) Yes.
Nita Farahany: (59:36) Right? Or if you have a partner, do you ever tell them that their new shirt looks great? Or, "No, you can't even tell there's a giant zit on your forehead. You look terrific." I mean, there are a lot of things like that. Or your instant reaction to something is disgust, but you have a higher-order way of thinking about it.
Or less benignly, you harbor some biases that you're trying to work on. You realize you grew up with some ingrained societal and structural biases, and you're working on that. And so your instant reaction to somebody with a different skin color, or a different hairstyle, or a different—pick your bias—is one that you're not proud of. You recognize it. You sense it in yourself because that's something you're working on, and your higher-order cognitive processing kicks in, and you think, "No. That is not me. That is not who I want to be." But your brain would reveal it.
Or you're figuring out your sexual orientation. You're figuring out your gender identity when you're much younger, and your reaction to advertisements or your reaction to stimuli around you gives you away well before you're ready to share that with the world. There's a lot of that. And maybe you don't have it in your life, but you might.
Nita Farahany: (1:01:01) Yeah. I'm sure I do.
Luisa Rodriguez: (1:01:02) It's hard to imagine that world is just what I would say. It's hard to—because we're so used to all of the rest of our private information that we in some ways intentionally express. "Yeah, I drove there, so you picked it up on my GPS" or "I typed that, but I intentionally typed it." There's a lot of filtering that you're doing that you're just not even fully aware of. And just imagine the filter being gone. Filter's gone. All of it can be picked up and decoded by other people. And we haven't even gotten to manipulating your brain based on what people learn about you. This is just the passive decoding of information.
Luisa Rodriguez: (1:01:37) Right. Maybe putting a pin in that. So one example from the work that I actually found compelling, and that feels like it fits in here, is you talk about data from a Fitbit being used in a criminal case, where I think there was a man accused of killing his partner, but his Fitbit data actually revealed that his alibi—which is that he was sleeping, checked on a baby, and then went back to sleep—the data seemed to support that.
Nita Farahany: (1:02:05) Yeah. There have been a few of these cases.
Luisa Rodriguez: (1:02:07) Yeah. So one, that's pretty crazy to me. But two, then you talk about how not only is it possible to use neurodata in the same way, it's actually happened. I guess one case that really stuck out to me was in the United Arab Emirates. Do you want to talk about what happened there?
Nita Farahany: (1:02:21) So the Fitbit cases are passive collection of data. Meaning you have your Fitbit on, and it's tracking your movements and activities. And you're not consciously creating the information, and then later, the police subpoena that information and use it to confirm or to try to show that you weren't doing what you said you were doing at the time.
With brain data, it's a little bit different for the context in the UAE, which is it's been used as a tool of interrogation. So instead of passive creation of data, a person's hauled into law enforcement, into the police station, and then they are required to put on a headset—an EEG headset. So again, these headsets can be like earbuds or headphones, but just imagine a cap that has dry electrodes that are picking up people's brainwave activities.
And then they're shown a series of images or read a series of prompts. And law enforcement are looking for what are called event-related potentials. So they're looking for automatic reactions in the brain. And here, what they're looking for is recognition. So you say a terrorist name that the person shouldn't know—there's no context in which they should know it—and they recognize it. Their brain shows recognition memory. Or you show them crime scene details, and their brain shows recognition memory. And in the UAE, it's been used apparently to obtain murder convictions by doing this.
Similar technology has been used for years in India, and there's been a really interesting set of legal challenges to the constitutionality of doing that in India. But in countries around the world, this technology apparently has already been used in a number of cases to obtain criminal convictions.
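The recognition test Farahany describes can be sketched in outline. This is purely illustrative—the sampling rate, window, array shapes, and decision rule are all invented for demonstration; real protocols use proper statistics rather than a fixed ratio:

```python
# Toy sketch of ERP-based "recognition" detection. All numbers and names
# here are illustrative assumptions, not a real forensic protocol.
import numpy as np

FS = 250                  # sampling rate in Hz (assumed)
P300_WINDOW = (0.3, 0.6)  # seconds after stimulus where a P300-like peak is expected

def mean_p300_amplitude(epochs: np.ndarray) -> float:
    """Average amplitude in the P300 window across trials.

    epochs: array of shape (n_trials, n_samples), one EEG epoch per
    stimulus presentation, time-locked to stimulus onset.
    """
    start = int(P300_WINDOW[0] * FS)
    stop = int(P300_WINDOW[1] * FS)
    erp = epochs.mean(axis=0)  # averaging trials cancels unrelated activity
    return float(erp[start:stop].mean())

def shows_recognition(probe_epochs, irrelevant_epochs, ratio=1.5):
    """Crude decision rule: a markedly larger response to the probe item
    (e.g. a crime-scene detail) than to irrelevant items is read as
    recognition memory. The ratio threshold is invented."""
    return mean_p300_amplitude(probe_epochs) > ratio * mean_p300_amplitude(irrelevant_epochs)
```

The point of the sketch is only the shape of the inference: the system never reads a "thought," it compares averaged involuntary responses across stimulus categories.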
So I have not gotten verification of this other case yet, but MIT Tech Review reported on it, and I reached out to the woman who made the comment about it at a conference. So apparently, a patient who suffers from epilepsy had electrodes implanted in their brain—and this is not uncommon with conditions like this; the electrodes can either be used to control the epileptic seizures or to detect them earlier, something like that. So this person had implanted electrodes, and I say that just because the data is being captured regularly, all the time. If you have implanted electrodes, they're passively collecting brain data at all times.
Luisa Rodriguez: (1:04:56) Right.
Nita Farahany: (1:04:57) And I think the person was accused of a crime, and they sought their brain data from the company—so it was the defendant themselves, rather than the government, in this case—to try to show that they were having an epileptic seizure at the time, not that they were violently assaulting somebody. And that would be the first case of its kind if that turns out to be true.
And really, it's just like the Fitbit data, where people would say, "Okay, Google, provide my Fitbit data, because I want to show I was actually asleep at the time—not moving around. I couldn't have killed somebody, because I was asleep at the time, or my pattern and alibi fits with what the data shows." The brain data is going to be a lot more compelling than the Fitbit data in those instances.
Luisa Rodriguez: (1:05:44) Right.
Nita Farahany: (1:05:44) And just like the person can ask for the data, so too can the government then subpoena that data from a third party—the company that actually operates the device.
Luisa Rodriguez: (1:05:55) Yeah. I mean, is that ethical? Should I feel good about that on the one hand?
Nita Farahany: (1:06:01) Convictions? No. You shouldn't feel good about that.
Luisa Rodriguez: (1:06:04) Okay. Okay. Yeah. Convince me.
Nita Farahany: (1:06:08) Yeah. So, I mean, first, before we're done, I'm going to convince you that there is a need for a right to mental privacy. And mental privacy is not absolute. Sometimes it will yield. And the question is: when are we going to say it yields? And are we going to say interrogating a person's brain to figure out if they know about a bomb that's about to go off—is that better than the other kinds of methods that we're using, and will that justify it?
But in general, if you are using implanted electrodes to control your seizures, should you be worrying about the risk that the government's going to subpoena your brain data to learn whatever it wants to learn about you—whether you were having an epileptic seizure at the time, what you were thinking on X date or X time? They wiretap and surveil people all the time. There's backdoors into phones to listen to what people are doing. Do we really want the government to have a backdoor into our brains to be able to listen to what we're thinking and feeling at all times? I don't think so. I mean, that's the ultimate Orwellian nightmare.
So should we feel good about it because we might be able to solve more crimes by hacking into people's brains? I'm going to give that one a big no.
Luisa Rodriguez: (1:07:20) Okay. Fair enough. My next question is whether this will actually take off. But I feel like we've just already got some evidence that it will. And to the extent that these technologies are going to be compelling and useful to people, we'll be giving away more and more of this kind of mental privacy.
Nita Farahany: (1:07:40) I'll say this, which is I write about not just the little neurotech companies, but big tech. And the reason I think this is going to take off is because every big tech company has a huge investment right now in neurotech, and they're all looking at ways to integrate it into their multifunctional devices.
So Meta acquired CTRL-labs, and they have talked openly a lot about their watch that will have EMG, electromyography, which picks up motor signals as they travel from your brain down your arm to your wrist, to pick up your intention to move or to type or to swipe. Apple has a patent on putting EEG sensors into their AirPods, and they have already announced that they're using eye tracking in their Apple Vision Pro to make inferences about brains and mental states and intentions.
And before I published my book, I had not heard from really any of these companies. And suddenly, since I've published my book, Apple, Meta, Microsoft, IBM, Google—all of them have invited me out to give talks and have conversations with me. I mean, I don't think it's just because they found my book interesting. They're all circling around what's happening in these spaces, and that's what's going to make this go widespread. Multifunctional devices that put brain sensors into our everyday technology.
Luisa Rodriguez: (1:08:55) Wild. Yeah. A related area that you write about in the book is thinking about how some of these neurotechnologies are going to affect the workplace. And I was really shocked by some of the ways neurotechnology is already being used in work settings. I guess first, can you talk about how EEGs are being used to track fatigue and focus?
Nita Farahany: (1:09:16) Yeah. I'd say that chapter has probably startled people the most. One of the kind of entry points into the chapter is I talk about a company called SmartCap, which for more than a decade has been selling an EEG (electroencephalography) headset—basically a headband that can be put into a hard hat or a baseball cap or anything wearable on the head—that tracks fatigue levels of employees.
And this is by looking at their brain waves, scoring them on a scale of 1 to 5 from hyper-alert to asleep, and then giving real-time readings that can be tracked by both the employee and their manager, showing what their brain metrics say about whether they're asleep at the wheel or not. And this has been used in long-haul trucking and in mining and aviation. And more than 5,000 companies worldwide have already used SmartCap technologies. That alone, I think, surprises a lot of people—that this is already something that's been around for a decade.
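Fatigue scoring of this kind is often described in the research literature in terms of EEG band power—more slow-wave (theta/alpha) activity relative to fast-wave (beta) activity as alertness drops. SmartCap's actual algorithm is proprietary; the bands, ratio, and thresholds below are assumptions for a minimal sketch:

```python
# Illustrative sketch only: score "fatigue" from the ratio of slow-band to
# fast-band EEG power. All thresholds and parameters are invented.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, low, high):
    """Power of the signal within [low, high) Hz via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

def fatigue_score(signal):
    """Map a (theta+alpha)/beta power ratio onto a 1-5 scale,
    1 = hyper-alert, 5 = asleep. Thresholds are invented."""
    slow = band_power(signal, 4, 13)   # theta (4-8 Hz) + alpha (8-13 Hz)
    fast = band_power(signal, 13, 30)  # beta
    ratio = slow / max(fast, 1e-12)
    for score, threshold in [(1, 0.5), (2, 1.0), (3, 2.0), (4, 4.0)]:
        if ratio < threshold:
            return score
    return 5

# A pure 10 Hz (alpha) oscillation scores as drowsy, a pure 20 Hz one as alert.
t = np.arange(FS * 4) / FS
print(fatigue_score(np.sin(2 * np.pi * 10 * t)))  # → 5
print(fatigue_score(np.sin(2 * np.pi * 20 * t)))  # → 1
```

Note the contrast with the earlier discussion of mental privacy: a score like this summarizes arousal level only, which is part of why Farahany treats the trucking use case as potentially defensible.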
Luisa Rodriguez: (1:10:21) Yeah. Yeah. It really did. I had no idea that truckers had this technology already to check on whether they're too tired to drive, for example.
Nita Farahany: (1:10:29) Right. Yeah. And it is—I give that example both to show it's already happening, but maybe that's an application where, if done right, it might be okay. And I say maybe it's okay from a mental privacy perspective because if the only thing you were measuring from a long-haul trucker was whether they were wide awake or falling asleep at the wheel, and you weren't using the brain data to discover anything else about what they were thinking or feeling—is their right to mental privacy really stronger than society's interest in not having them barrel down the highway while they're asleep? Probably not.
And so then I go from there to talk about, okay, well, what about productivity scoring? And here, it's I think a little bit harder to swallow. There's all kinds of productivity tracking software on people's computers at this point from their workplace. During the pandemic, employers even started doing things like turning on webcams to see if people were at their desks at home. And more than 80% of companies admit that they're using some form of surveillance of their employees—whether they're white-collar workers or factory workers—to try to track their productivity.
And if you hire somebody to go shopping for you, they're on a clock, and it leads to all kinds of really problematic incentives and really bad workplace conditions. But then let's look at brain devices: there are companies that are selling productivity tracking of employees using these devices. And they're enterprise solutions—"We'll give your employees a multifunctional device like a pair of earbuds that track their attention, their focus, whether they're bored or engaged at work, whether their mind is wandering—and they can take their conference calls and everything else, so they forget that you're tracking their brainwave activity."
So those products are already being sold, and I presented just this chapter at Davos, and I had a company CEO come up to me afterwards to say, "We would be a great use case for you because we've already partnered with one of these companies. We've tried out this technology on more than 1,000 of our employees, and we've tracked far more than if they're paying attention or their mind is wandering. But are they bored? Are they engaged? Do they work better at home or work better in the office? We've made managerial-level decisions, hiring and firing decisions." So that kind of blew my mind. So that's application two.
Luisa Rodriguez: (1:12:57) Yep. I both want to ask more questions about that, but I also want to make sure I understand how the technology works, because I'm always just very interested in the science bit. So I guess there's tracking fatigue, there's tracking focus, there's tracking productivity. Are all of these kind of doing the same thing? It's tracking brain waves, and we basically have done enough analysis about what brain waves correlate with what brain states that we can say, "Oh, that's the tired brainwave" or whatever?
Nita Farahany: (1:13:26) Yeah. No. That's a really good question. And the answer is no. It is still somewhat of a mess. And if you talk to a lot of neuroscientists, what they'll tell you is, "What are you measuring? You're measuring muscle twitches or eye blinks, and how can you possibly be making decisions based on such crappy data?" The data has gotten better, but there's still a lot of noise. And it is still unclear that people are getting exactly what they think they're getting when they're measuring this information and making really serious decisions about a person's livelihood based on it.
How it worked was this: using medical-grade EEG, a lot of these brain states were measured—"Here's what it looks like when you're bored, or here's what it looks like when you're engaged, or when you're paying attention, when your mind is wandering, when you're happy or sad," or whatever the brain state is. And then, using far fewer electrodes, they measured the same behaviors and brain states and applied pattern classification: okay, can you still see it with the far fewer electrodes, and can you correlate it to get the different metrics that people are trying to measure? And so that's the basis for it.
But there's still real questions about how good is the data that you're capturing to begin with. Maybe the software is great. Maybe the algorithms are terrific. But if the data quality is terrible, then you basically are taking a bunch of noise and trying to make meaning out of it.
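The train-then-transfer approach Farahany describes can be sketched with synthetic data. Everything below is invented—simulated "band power" features, and a deliberately simple nearest-centroid rule standing in for real pattern classification—but it shows the question being asked: does a pattern learned across many electrodes survive when you only have a few?

```python
# Minimal sketch (synthetic data, assumed setup): learn a brain-state
# "pattern" from many electrodes, then test separability with only a few.
import numpy as np

rng = np.random.default_rng(0)

def make_sessions(n, state, n_channels=64):
    """Fake feature vectors for one labeled brain state. State 0 ("alert")
    and state 1 ("fatigued") differ only on a handful of channels --
    a stand-in for a real spatial pattern."""
    x = rng.normal(0.0, 1.0, size=(n, n_channels))
    if state == 1:
        x[:, :8] += 1.5  # the discriminative pattern lives on 8 channels
    return x

def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Classify each test sample by the closer class centroid."""
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    pred = (np.linalg.norm(test_x - c1, axis=1)
            < np.linalg.norm(test_x - c0, axis=1)).astype(int)
    return float((pred == test_y).mean())

# Build a dataset: 200 samples per state for training, 100 for testing.
train_x = np.vstack([make_sessions(200, 0), make_sessions(200, 1)])
train_y = np.array([0] * 200 + [1] * 200)
test_x = np.vstack([make_sessions(100, 0), make_sessions(100, 1)])
test_y = np.array([0] * 100 + [1] * 100)

full = nearest_centroid_accuracy(train_x, train_y, test_x, test_y)
few = nearest_centroid_accuracy(train_x[:, :4], train_y, test_x[:, :4], test_y)
print(f"64-channel accuracy: {full:.2f}, 4-channel accuracy: {few:.2f}")
```

In this toy setup the few-electrode classifier still works because the simulated pattern happens to fall on the retained channels—which is exactly the gamble consumer devices make, and why the data-quality worry in the conversation matters.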
Luisa Rodriguez: (1:14:55) Right. Yeah. Okay. That makes sense. So I understand how it works, and also why you'd be worried about it being accurate enough.
Nita Farahany: (1:15:04) I worry about it for far more than accuracy, just to be clear. I mean, these uses in the workplace—to me, with the power differential between employers and employees, and the broad surveillance state that has emerged both within society and within workplaces—make me think that the use of this technology by individuals could be really good. The use of this technology by companies to surveil their employees would be super creepy and problematic.
Luisa Rodriguez: (1:15:32) Yeah. Yeah. I'd love to make that even more concrete. I want to picture it. What are you imagining when you're imagining companies using the technology for bad?
Nita Farahany: (1:15:45) So I wrote a couple of scenarios in the book. Most of it is grounded in, "Here's exactly what's happening today." But I wanted to help people understand, no matter what their frame of reference is, why it would be problematic. And so I wanted to try to help people who really strongly believe in freedom of contract in the workplace—the staunchest libertarian who thinks, "Okay. But the market will take care of itself"—understand why, in a context like this, the market can't just take care of itself.
And the kind of scenario that I painted in the book for that was imagine this. You've got your employee who's wearing these earbuds to take their conference calls, do everything else. And there's asymmetry in information. That is, the employer can see what the person's brain is doing at any given time. But of course, the employee can't see what the employer's brain is doing at any given time.
And so the employer calls the employee up and says, "Hey. Wanted to let you know that you did great last quarter, and so you're going to get a raise. And I'm delighted to let you know that you're going to get a 2% raise in salary." And the employee, their brain data shows that they are just thrilled. They're just so happy. "Hooray. I'm getting a 2% raise." But they know better than to say, "Oh, hooray." And they know that that would give away their negotiating position right away. So they say, "Oh, you know, thanks so much. I was actually hoping for a bigger raise. I was really hoping for 10%."
And while that's happening, they're afraid—and that registers in the brainwave activity. And the employer says, "Okay. Well, I'm going to think about it. I'll get back to you." And then they go and they look at the brain data, and they see the person was overjoyed when they got the 2%, and super fearful when they countered with the 10%. And they have this additional asymmetry of knowledge, which really frustrates freedom of contract.
And so it turns out the employer can easily handle the 10%. They've got the funds; their revenue really went up last quarter. They could have easily done it. They have this information. They come back the next day, and they say, "So sorry. We can only afford 2%." And the person feels relieved, but still shortchanged. And the employer walks away having gained a significant advantage from what the brain data revealed.
And that is to just help people understand that every conversation—your reaction to every piece of information—can suddenly be gleaned. It's not just whether you're paying attention or your mind is wandering. It is your reaction to company-level policy as it's flashed up and how you actually feel about it. It is working with other people in the company where your brain starts to synchronize with theirs. Because when people are working together, you start to see brainwave synchrony between them.
And maybe you guys are planning for collective action to unionize against the company, but you see a bunch of brain waves that are synchronizing in ways that they shouldn't, and you're able to triangulate that with all of the other information that you're surveilling them on and you prevent them from doing so. So these are some of the dystopian things that my brain goes to.
Luisa Rodriguez: (1:18:49) Yeah. Yeah. Yeah. My brain is trying to console itself by being like, "Yeah, but it's not specific thoughts. It's emotions." And then you describe that scenario, and I'm like, "Wow, emotions give away a lot."
Nita Farahany: (1:19:03) They give away a lot. Yeah. And if you artfully probe a person's emotional response, you can get a lot. And I'm describing brain states because that's where we are right now. But there was a really powerful study that came out just a couple of months after my book was published using much more sophisticated neurotechnology—functional magnetic resonance imaging—that can look much more deeply into the brain.
But the researchers had people listen to stories, podcasts actually—not this one, but other podcasts. And then they had them imagine stories, and they trained their classifier, which is, "Here's the brain image data, and here's what the person was listening to." And then they took just brain image data and said, "Okay. Translate what this person is listening to or what they're imagining."
And at a really high rate of accuracy—like 80-plus percent—they were able to decode whole paragraphs of what a person was imagining or thinking. And that's mind reading. I mean, that's pretty serious mind reading.
Luisa Rodriguez: (1:20:09) That is actually mind reading.
Nita Farahany: (1:20:11) Yes. And then they decided, "Okay, well—we have synthetic data on functional near-infrared spectroscopy, which is a more portable system than fMRI. Can we make it work for that too?" And they found that they were able to get a really high degree of accuracy with that.
And they haven't done it with EEG yet, but they have a bunch of EEG data. It would require building a new classifier because EEG is electrical activity rather than blood flow in the brain, so it's a new model. But I think they can do it.
And when we start to see that data, one of the things that made that study so powerful was they were using generative AIs. They were using GPT-1. And the leap in what AI can do means a leap in what we can do for decoding the brain. And so I'm describing a scenario in the workplace where the employer is just looking at broad emotional brain states. I would not be surprised if in a few years or even less, really, what we're talking about is decoding more complex thought.
Luisa Rodriguez: (1:21:11) Yeah. I'm a bit speechless.
Nita Farahany: (1:21:13) Yeah. That blew my mind. And I mean, as soon as ChatGPT was released last fall, I reached out to some of the leading researchers in the field—the people who are doing speech synthesis and speech decoding—and I was like, "Okay. Obviously, this is going to change a lot. I'm trying to understand exactly how it changes your models." And they were like, "Yeah. It's going to change a lot."
It's going to rapidly allow the customization of decoding per person. It's going to be much easier to do a lot of this work. Imagine this: as you're decoding, you ask, what naturally comes next? If you're trying to predict the next word, decoding becomes much faster. So using generative AI—the way it actually generates the next token—to work out what you can take from brain activity really is like putting brain decoding on steroids.
And they were right. It didn't take long before it changed everything, and we're less than a year out. And give it time as people start to use these large language models for their classifiers. Brain data is just patterns of activity, and it can be decoded. And the more powerful the AI, the more powerful the decoding.
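The role of next-token prediction in decoding can be illustrated with a toy model. Everything here is invented—a tiny bigram table stands in for a large language model, and a simple lookup stands in for learned neural evidence—but it shows the mechanism being described: a strong prior over the next word means the decoder only has to rerank a handful of candidates against noisy brain data, instead of searching the whole vocabulary.

```python
# Toy illustration (all data invented) of language-model-guided decoding.
import math

# A stand-in "language model": bigram counts from a tiny pretend corpus.
BIGRAMS = {
    "i": {"want": 3, "think": 2},
    "want": {"coffee": 4, "sleep": 1},
    "think": {"so": 3, "not": 1},
}

def lm_candidates(prev_word):
    """Next-word distribution under the toy bigram model."""
    counts = BIGRAMS.get(prev_word, {})
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def decode_next(prev_word, neural_score):
    """Combine the LM prior with a (noisy) neural-evidence score.

    neural_score(word) stands in for how well the measured brain activity
    matches that word -- the hard part a real system has to learn."""
    candidates = lm_candidates(prev_word)
    return max(candidates,
               key=lambda w: math.log(candidates[w]) + neural_score(w))

# Example: evidence weakly favors "sleep", but the prior pulls toward
# "coffee" unless the evidence is strong enough.
evidence = {"coffee": 0.2, "sleep": 0.5}.get
print(decode_next("want", lambda w: evidence(w, 0.0)))  # → coffee
```

Swapping the bigram table for a modern language model is, in caricature, the leap the researchers describe: the better the prior, the less the noisy neural signal has to carry.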
Luisa Rodriguez: (1:22:26) Yeah. I'm aware that we started talking about this in the context of the workplace, but this just is mind reading. And presumably, even though there'll be incentives for employers to get these kinds of wearable technologies to their employees, maybe early, this will also just permeate into other aspects of life, I'd guess.
Nita Farahany: (1:22:48) Yeah. I mean, I think it will just be in every aspect of life.
Luisa Rodriguez: (1:22:52) Yeah. I mean, I can imagine it affecting my relationships.
Nita Farahany: (1:22:57) Yeah, I have a slightly harder time imagining that. And here's why: maybe in the same way that some people are like, "You have to give me the password to your phone, because I want to surveil you in weird and creepy ways. I don't trust you, and so I need to actually go through all your text messages"—maybe they're like, "I need to see your brain data. Do you really love me? You claim you love me, but maybe you're just in lust with me, or maybe you actually don't have any of the feelings that you claim that you have." You know, that's already a deeply unhealthy relationship.
Luisa Rodriguez: (1:23:31) Right? And so to the extent that people are saying, "I need the brain data"—
Nita Farahany: (1:23:35) "I need the brain data"—that's even creepier. I mean, once you're there, you probably need to reevaluate whether you're in that relationship to begin with.
Luisa Rodriguez: (1:23:43) Yeah. There are already issues.
Nita Farahany: (1:23:45) But I don't know. Tell me how you see it changing your relationships.
Luisa Rodriguez: (1:23:48) Yeah. I mean, okay. I guess—
Nita Farahany: (1:23:51) Now that I've already said that, anything you say is going to make me deeply question your relationship. No—but go ahead.
Luisa Rodriguez: (1:23:55) So yeah. No. It's true that when I imagine this being available to me and my partner, I do not see us using it as a truth serum to question each other. And so maybe there'd be some types of relationships that actually wouldn't end up drawing on it, because we've got such strong social norms against that kind of demanding truth of people. So maybe it's other contexts, where there aren't as strong norms of giving people the right to some degree of privacy. Although maybe it becomes a way of intimacy. Right?
Nita Farahany: (1:24:32) I mean, we've talked about brain-to-brain communication. Yeah. I was just trying to think—people sometimes talk about, "Oh yeah, we're on the same wavelength."
Luisa Rodriguez: (1:24:40) Yeah. Right.
Nita Farahany: (1:24:41) What if you really wanted to find out if you're actually on the same wavelength? Right? And start to figure it out through compatibility testing—is there brain compatibility testing? Are we actually truly in sync with each other, or do we just think we're in sync with each other?
Luisa Rodriguez: (1:24:57) Yeah. Yeah. Yeah. So I don't really have that much specific on the relationship side, and maybe it is stuff like that. I guess I have this general feeling of: you just told me that mind reading exists, and that it's going to get better and better. And that feels like it's going to have implications all over, and I'm freaking out a bit.
Nita Farahany: (1:25:20) Yeah. No. I mean, you should freak out. Right? I mean, what I'm describing is a world of brain transparency that people just don't even realize is coming or is happening. And I wrote this book as a big wake-up call, right, for people to understand descriptively what's happening today and normatively what we need to do about it. Because it will change everything. It will change our workplaces. It will change our interactions with the government. It will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times. Because at every moment in time when you're interacting on any platform that has also issued you a multifunctional device where they're looking at your brainwave activity, they are marketing to you. They're cognitively shaping you. Right? I mean, we need to recognize that this has revealed a new set of rights that we need as humans. And that set of rights really centers on the right to cognitive liberty. And so I wrote the book as both a wake-up call, but also as agenda setting, to say: what do we need to do given that this is coming? And there's a lot of hope—we should be able to reap the benefits of the technology.
Luisa Rodriguez: (1:26:27) Right.
Nita Farahany: (1:26:28) But how do we do that without actually ending up in this world of, "Oh my god, mind reading is here—now what?"
Luisa Rodriguez: (1:26:34) Yeah. Now what? We didn't prepare. Your employer can read your mind. If you've got any amount of imposter syndrome, you will not be able to negotiate a raise in your salary. If you are, I don't know, more distractible than your other coworkers, your employer will now know that. If you sometimes have a bone to pick with your manager, they might have a lot of color on that.
Nita Farahany: (1:27:00) Yeah. If you hate your boss, they're going to know you hate your boss. Right? No. There's no poker face that's going to fix that for you.
Luisa Rodriguez: (1:27:06) Yep. Yeah. I guess going back to the relationship one, maybe a dating site springs up that uses this and crushes the competition. So it feels like there are loads of implications. And maybe you can talk about how you want to see regulation put in place to make sure we reap those benefits and don't fall into some of these crazy dystopian futures.
Nita Farahany: (1:27:28) I would say it's not even just about regulation—it's about changing our worldview. Regulation is one piece of that, just because I think the only way that you really shift behavior in society is through a set of sticks and carrots. So I think we need constraints, which is: in order for anybody to truly exercise freedom in the digital era, I think we have to recognize cognitive freedom. And cognitive freedom for me is about cognitive liberty—the right to self-determination over your brain and mental experiences. And in the book, I lay out what the human rights framework for that should be, because I think it's a global issue. And I say we need to recognize this right as the organizing principle. And that means we have to update our right to self-determination to be an individual right, our right to privacy to include a right to mental privacy, and our right to freedom of thought to secure us against interference, manipulation, and punishment of our thoughts. I think that's one category that's really important, and it will then translate down into national laws and context-specific laws—like employment law should include a right to mental privacy, et cetera.

But then at the other end, I think we have to really start to realign incentives, to enable the incentives for tech companies to align with cognitive liberty. Because if the primary incentive is to maximize attention and engagement—which right now it is, because the primary business models are ad revenue—then those incentives mean that the outcome is going to be: commodify as much brain data as possible, use people's brain data to keep them on devices, to keep them addicted, to keep them unhealthy, rather than to enable people to have self-determination. There's no liability scheme that is sufficient to actually shift business models for legacy companies.
And so legacy tech companies need massive investments to shift to align cognitive liberty with what their bottom line is. And so for me, it's yes, we need the rights. We also need the incentives. We need the carrots and sticks to actually start to enable human flourishing in the digital age.
Luisa Rodriguez: (1:29:34) Yeah. Nice. Let's move on to another topic. So people like Elon Musk are investing a lot in neurotechnology geared at cognitive enhancement—things like Neuralink, which is a kind of brain-computer interface like we talked about earlier. Can you explain how Neuralink differs from what we've already talked about?
Nita Farahany: (1:29:50) Sure. So first, I'm not positive that I would characterize Neuralink as being focused on cognitive enhancement, at least not in the short run. I say that because what we're going to talk about now is implanted brain-computer interface, or implanted neurotechnology. Most of that is really geared at therapy, at least for probably the next decade, and that's because there's a lot of risk.

Okay, so let me back up. What is implanted neurotechnology? Instead of what we've been talking about—a baseball cap or earbuds or headphones—it is literally putting a device inside the brain. So drilling a hole in the skull, putting electrodes underneath the skull or deeper into the brain. And what's innovative about what Neuralink is doing is threefold. One is there are only about 400 surgeons in the entire world who can do that kind of implanted neurotechnology surgery right now, so that's a big bottleneck to having this go widespread. And so they've been developing a robot to do the surgery, and that would really enable clinics much more broadly to be able to do this kind of surgery. The second is the implanted arrays: imagine a disc that has a bunch of electrodes on it. They've been developing something that has these hair-like structures with little tiny electrodes on the end that could maybe embed themselves better into the brain tissue, and they're just packed with electrodes—many, many more that could pick up more signal from the brain. And the third is that it communicates wirelessly with whatever is external. Most of the brain-computer interface devices that have been in clinical studies up until now have required the person to be in the lab, with something attached to their head—like a big cable—that then gets the signal out.
And so, basically, imagine getting it in much more easily, having it be much smaller with densely packed arrays of electrodes, and having it communicate wirelessly. And so we can talk about the enhancement side of that, but the benefit of that from a therapeutic perspective would be you can wear that, you know, technology everywhere. It allows mobility well beyond the lab, but it's right now primarily being designed for people who are paraplegics to be able to potentially walk again or to be able to communicate from their brain. So we've talked about brain-to-text communication, you know, to be able to operate a cursor on your screen or be able to type from your mind. That's the kind of stuff that Neuralink right now is focused on.
Luisa Rodriguez: (1:32:35) Yeah. Okay. Let's focus on that then for a bit. So the kinds of things it sounds like it can help with are cases where, for whatever reason, a person's ability to get their brain to communicate with a part of their body, or to form speech, is impaired in some way. And how exactly does Neuralink or this kind of technology bypass that?
Nita Farahany: (1:32:56) So we've talked a little bit about how noisy EEG signal is. Right? So if you're wearing a brain sensor in an everyday device, the signal has to go from deep within your brain all the way through your skull to get to the surface, where it's interacting with muscle twitches and eye blinks and a bunch of other what we call noise. If you have electrodes deep inside the brain, you can pick up signals at much greater resolution without all of that noise. And so for somebody whose primary way of communicating is getting signal from their brain to their computer, you want the highest resolution that's possible. And when what you're talking about is much more complex than yes or no or left or right, when you're literally trying to use the brain to communicate with the rest of the body, to move an arm or to potentially reconnect with the spinal cord or, you know, to type thoughts at a rate that is much faster than one letter at a time, then the depth and the increased number of electrodes, right, 1,000 electrodes instead of the four electrodes that you might have on the surface, all of that allows much, much greater signal. And much, much greater signal means that you're picking up a lot more of what's happening in the brain and translating it to the rest of the world.
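The electrode-count point above can be made concrete with a toy numerical sketch. The model below is an editor's illustration under simplified assumptions, not how any real BCI decoder works: it assumes every electrode records the same underlying rhythm plus its own independent Gaussian noise (real channels are correlated), in which case averaging across N channels cuts the residual noise power roughly in proportion to 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_snr(n_electrodes: int, n_samples: int = 10_000, noise_sd: float = 1.0) -> float:
    """Signal-to-noise ratio after averaging n_electrodes noisy recordings.

    Toy model: each electrode sees the same 10 Hz rhythm plus its own
    independent Gaussian noise (an idealization chosen for illustration).
    """
    t = np.linspace(0.0, 1.0, n_samples)
    signal = np.sin(2 * np.pi * 10 * t)                # the shared "neural" signal
    noise = rng.normal(0.0, noise_sd, (n_electrodes, n_samples))
    averaged = (signal + noise).mean(axis=0)           # combine all channels
    noise_power = np.mean((averaged - signal) ** 2)    # residual noise after averaging
    return float(np.mean(signal ** 2) / noise_power)

# Noise power shrinks roughly as 1/N, so SNR grows roughly linearly with N.
print(f"4 electrodes:    SNR ~ {mean_snr(4):.1f}")
print(f"1000 electrodes: SNR ~ {mean_snr(1000):.1f}")
```

With the seeded generator above, the 1,000-electrode case comes out orders of magnitude cleaner than the 4-electrode one, which echoes, in a very stylized way, why dense implanted arrays matter for decoding anything richer than yes-or-no signals.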
Luisa Rodriguez: (1:34:14) Right. Right. Okay. So imagine someone isn't able to speak for whatever reason. In that case, I think I understand that you've got these implants and that they can communicate wirelessly with a computer that can then write that out as text. How does it work if you're sending signals to the spine? Do you have to surgically implant something to receive those signals in the spine?
Nita Farahany: (1:34:41) Yeah, you need a receiver.
Luisa Rodriguez: (1:34:42) Yeah, okay, okay.
Nita Farahany: (1:34:43) Yeah, you need a receiver. And for some people, they may have had some disruption in the ability of their central nervous system, their brain, to communicate with their peripheral nervous system or, you know, other parts of their body. And so, yeah, you need the signal to be picked up from the brain and then transmitted. But you could do that with a receiver at the other end. And, you know, just recently, this wasn't Neuralink, but just recently there was a report of, I think, the first person who has, through brain-computer interface, been able to walk again, by having that kind of device pick up the signal from the brain and communicate it to the spine when the brain and the spine were no longer communicating with each other.
Luisa Rodriguez: (1:35:23) Wow. That is pretty incredible. And again, not something I had any idea we'd achieved.
Nita Farahany: (1:35:29) Yeah. I mean, it really is remarkable. Right? These kinds of injuries are so life-altering that, you know, the possibility that these kinds of technologies could help people reclaim self-determination and independence in ways that have been lost, I think, is really exciting.
Luisa Rodriguez: (1:35:48) Yeah. Really exciting. I am curious, though. Why is it that, at least for now, the focus is on therapeutic applications as opposed to augmentation?
Nita Farahany: (1:35:58) Well, I mean, you know, I'll say this: Elon Musk has been very vocal about the fact that he sees this as a potential for enhancement. Right? So I'm not going to say he doesn't have those kinds of intentions. And I actually just recently was at a conference where I was talking with the founder and CEO of a different neurotech company that already has brain-computer interface inside people's bodies. That's Synchron, and the founder is Tom Oxley. And we were having a little bit of a debate about the fact that there's a ton of electrodes in the Neuralink device. And there was a reporter who was writing a story, and she really wanted to run with the take that the only reason you would have that many electrodes is for enhancement purposes, in order to kind of merge us with machines and enable us to win in the race against AI, and wanted to know if I agreed with that. And I was like, I don't know. I'm not going to go on the record with that.
Luisa Rodriguez: (1:36:51) And okay.
Nita Farahany: (1:36:52) I asked Tom about it, and he was like, yeah, that's totally why. He's doing it because the goal is enhancement. So I say all that to say: why is it not enhancement today? Why do I think it's not enhancement, despite what Tom thinks, despite the huge number of arrays? For now, it's regulated by regulatory bodies that treat it as a medical device, not as an enhancement device. Right? And if they go to the FDA and say this is really for cognitive enhancement, or for enhancement of the body rather than therapeutic reasons, they're never going to get regulatory approval. And they have to get regulatory approval to go through each step of the clinical studies. And from a risk-benefit perspective, I mean, think about putting something inside your brain, right, into your brain tissue. And yeah, the monkeys that they have done this on have not fared that well. Right? They've had infections. They've had other problems. I mean, some of them have done fine. Right? But some of them have had serious side effects. And so when you're doing a risk-benefit analysis, when the benefit is clear because a person has lost their ability to communicate or walk or, you know, something else, then the risk may be worth it. But when what you're talking about is something that expands beyond human capabilities, most regulatory bodies are just not even equipped to approve drugs that are enhancement drugs, let alone a brain-computer interface device that is an enhancement device. And so his ambitions may be there, and maybe this creates the proof of concept in a therapeutic way that enables that kind of enhancement in the future. And maybe that enhancement looks like what we were talking about earlier, which is the possibility of communicating brain-to-brain with each other, picking up a full-resolution thought.
The content of how you feel, and the visual images, and the metacognition that goes along with the cognition. Maybe one day that'll be possible, and maybe one day people who are healthy will decide that the benefits of being able to go from brain to other technology, or brain to other human brain, are worth the risk. But that's going to be a while from now.
Luisa Rodriguez: (1:38:59) Yeah. Okay. That makes sense to me. Do you mind actually painting even more of a picture of what Elon Musk and people like him have in mind? Like, what this could look like?
Nita Farahany: (1:39:10) Yeah. I mean, so have you ever seen The Matrix?
Luisa Rodriguez: (1:39:14) Sure.
Nita Farahany: (1:39:14) I don't think that's going to happen. Okay. Like, I mean, if the thought that just popped into your mind is the brain jack, where suddenly you're uploading into your brain the ability to do martial arts, and then you're like, okay, got that, and then you could do it. We're nowhere close to that, and maybe that is the vision that somebody like Elon Musk has, that we can brain jack you. You've got all these electrodes in the brain, and we can just fuse a whole bunch of information into your brain. By the way, just on that note, I've always wondered, you need a lot of muscle memory to do that kind of stuff too. It's not just that you need to know how it works. And so suddenly your body is perfectly fit and has all the muscle memory
Luisa Rodriguez: (1:39:54) anyway. The strength?
Nita Farahany: (1:39:55) And the strength, yeah. I mean, I feel like you could put all of that in my brain, and I would still not be able to do martial arts. I would have this big disconnect, which is my brain would know how to do it and my body would not cooperate, and it would be a huge problem.
Luisa Rodriguez: (1:40:08) Yeah. It'd be like if you ever played the piano as a kid and then you try as an adult and you're like, oh, I used to know how to do this.
Nita Farahany: (1:40:15) I've been doing that recently.
Luisa Rodriguez: (1:40:16) Yeah. Yeah.
Nita Farahany: (1:40:17) It's been really bad. My eight-year-old is taking piano lessons and, you know, I sit down to practice with her and she's like, oh, play this piece that you used to play. And I sit down and try to play it and it's horrible. Okay, it'd be like that, but worse. Right? Because your body just would not cooperate in any way, shape, or form. Anyway, what are they trying to do? I think part of the idea is to try to enable capabilities that we don't have yet, like brain-to-brain communication. Or, we talked a while ago about part of what inspired me to write this book, The Battle for Your Brain, which was seeing a presentation where somebody was talking about: what if we could operate octopus-like tentacles with our minds, instead of using our hands as the way that we navigate the world? And that's within the realm of possibility. Right? I mean, you can operate a swarm of drones. You could operate, you know, octopus-like tentacles. And so when you start to think that the human brain is in some ways limited by our physical bodies, and by our ability to get the output from our brains out, to connect with each other, to work collaboratively with each other to solve some of the biggest problems in the world, in a way that we can't as efficiently or effectively do right now because brain-to-words communication with another person is so much more limited than brain-to-brain, I think those are some of the ways that they're imagining it. They're imagining a transhuman future, which is being able to go beyond human limitations and merge humans with technology much more seamlessly. And, in many ways, to use the power of the human brain. Because I think most people, even looking at the advances in generative AI right now, recognize that human brainpower is much, much richer and more complex than anywhere we're reaching with current iterations of generative AI.
But unless we can get all of that out, right, and have some way of actually being able to realize the full potential of the human brain and how it works, maybe that benefit or that advantage is something we can't really fully realize. And so for people who are investing in this transhumanist future, they believe that the best hope for humanity is being able to expand our capabilities and our output.
Luisa Rodriguez: (1:42:39) Yeah. We actually regularly have guests on the show to talk about the risks and the promise that AI brings. And some of those people are really worried about AI basically becoming more powerful than people. And I guess, yeah, I do have some sense that...
Nita Farahany: (1:42:57) I think that's a real fear. I mean, I think it's a real fear and, yeah.
Luisa Rodriguez: (1:43:01) I agree.
Nita Farahany: (1:43:02) It's not an unfounded one. I mean, you know, I think if AI develops intentionality, we've given it the keys to everything in the world. And, you know, it doesn't even have to be intentionality. It can be accidental; it can be bad actors. There are different categories of risk that can emerge from this. And then it's figuring out, you know, what are we going to do as human beings? And one solution that people have put forward is this possibility of brain-computer interface as a way to augment human thinking and human capacity.
Luisa Rodriguez: (1:43:33) So it sounds like you think some of the cognitive enhancement stuff is pretty far away. What do you see as the medium-term ways that things like Neuralink might change people's experiences in society?
Nita Farahany: (1:43:49) Well, I think it'll enable people to regain self-determination. Right? So for the people who've lost it, who are unable to communicate their thoughts, who are unable to move and to act as independently as they would like to, their freedom of action has been constrained in many ways. And I think Neuralink, and devices like it, can help with that. I mean, there are a number of these companies out there that have really promising implanted neurotechnology. It's just that it's a very small population of people they can reach so far. Deep brain stimulation for people who are suffering from, you know, intractable depression or from Parkinson's disease. There's a lot of neurological disease and suffering. In fact, neurological disease and suffering worldwide is getting worse, while overall physical health is otherwise improving. And so what I'd say Neuralink is offering is a way to start to reset that balance, to start to try to actually get a handle on the large toll of suffering, the unmet needs, that people are experiencing worldwide.
Luisa Rodriguez: (1:44:59) Cool. Yeah. I just find that really moving. Does that come with risks, besides the health risks that obviously come from getting really deep into someone's brain?
Nita Farahany: (1:45:11) Yeah, there are. So the more people who have brain-computer interface technologies, implanted neurotechnology, the more they need to have a better sense of: where am I, and where do I end, and where does the technology begin, and how do I understand the interrelationship between me and the technology? I was talking to a research scientist recently who does a lot of work in deep brain stimulation, and she was talking with me about her hearing loss, and how she has started to wear hearing aids, and how that's required her to sort of reestablish her sense of self in the world, because her concept of hearing has fundamentally changed. And so even just trying to understand: what circumstances can she be in? What is she going to hear? How is she going to react? It's required an updating of self, and the sounds and input she's getting are different from the ordinary hearing she had in the past. And we were talking about that in relationship to deep brain stimulation, where she sees patients who are suffering from, you know, intractable depression. They then have an implanted device, and it takes about a year before they start to develop a sense of: this is me, and that's the technology, and here's where I end, and here's where the technology begins. And here's me plus technology, this new concept of self. And I think we have to get to this place, whether it's with implanted neurotechnology or wearable neurotechnology or just me and my mobile device, of starting to update human thinking about us in relationship to our technology, and our concept of self as a relational self.
Luisa Rodriguez: (1:46:50) Right. Right. I can imagine it really hitting on questions of identity. Yeah. I guess the examples you're giving are of kind of regaining some types of function, or having access to some kinds of emotions.
Nita Farahany: (1:47:05) But it changes self. Right? And, I mean, we talked earlier about hacking. Right? I mean, we could get into the dark side of all this, but before we even do that, the rest of it is: how do people understand themselves? And, you know, one thing people have worried about a lot with these technologies is a discontinuity of self. There's you, and then there's you after the implant, and maybe you after the implant is a fundamentally different person. Or maybe accidentally, you know, in the surgery, parts of the empathetic you got damaged, and suddenly you are, you know, a violent killer or something like that. I mean, there are all those kinds of things that might emerge. But I think probably the most fundamental one, the one people have really grappled with, is how you get truly informed consent. What does it mean to be a different person in relation to a technology that is implanted in your brain, before and after? And how do you think about that future self and make decisions that are truly informed, when you can't have any idea of what that actually is like?
Luisa Rodriguez: (1:48:09) Right. What that future self will experience, what their life will be like. How do you know if you want to become them?
Nita Farahany: (1:48:15) I mean, and then there are all kinds of risks of hacking and, you know, Manchurian candidates and all kinds of things like that. But I think the more ordinary, everyday challenges are the broader conceptions around self.
Luisa Rodriguez: (1:48:28) Yeah. Out of curiosity, can you take me into the dark side? What are some of those less likely, but maybe scarier, risks?
Nita Farahany: Yeah. I'm happy to go there. Although I'll say this, which is, again, I do a lot on the ethics of neurotechnology, and I am far more concerned from an ethical perspective about wide-scale consumer-based neurotechnology than I am about implanted neurotechnology. And the reasons that's true are, one, a very different risk-benefit calculus for the people who are currently part of the population who would receive implanted neurotechnology, and two, that it's happening in a really tightly regulated space, as opposed to consumer technology, where there's almost no regulation and it's just the Wild West. But in the dystopian world, and with all of those caveats, which I think are really important, I think it's still possible, without really good cybersecurity measures, that there's a backdoor into the chips, that some bad actor could gain access to implanted electrodes in a person's brain. And if they're both read and write devices, that's not just interrupting a person's mental privacy; they have the capacity to stimulate the brain and change how a person behaves, and there's no way we would really even know that's happening. When something is invisibly happening in a person's brain that changes their behavior, how do you have any idea whether that's happening because somebody has hacked into their device, versus coming from their own will or intentionality? We have to understand people's relationship to their technology, and we have to be able to somehow observe that something has happened to this person, which would lead us to investigate whether something has happened to their device, whether somebody has gained access to it or interfered with it or something like that. We're dealing with such small, tiny patient populations.
It's not like the president of the United States has implanted neurotechnology, where some foreign actor is going to say it's worth it to hack into their device and turn them into the Manchurian candidate. But in the imagined sci-fi world of what could go wrong, what could go wrong if this goes to scale, and if Elon Musk really does get a brain-computer interface device into every one of our brains, is that we'd have almost no idea that the device had been hacked, that the person had been hacked, and that their behavior is not their own.
Luisa Rodriguez: Do you have thoughts on cognitive enhancement neurotechnology that doesn't relate to things like Neuralink?
Nita Farahany: It's interesting. My book was recently reviewed in the New York Review of Books, and the reviewer really took issue with my stance on enhancement, which was: I don't think it's cheating. And I think if people want to enhance themselves, that's actually part of human nature. And she really went after the science, saying none of them work scientifically. Maybe. But that's sort of beside the point. I mean, the point I was making was that it's not cheating. And saying that the science isn't there doesn't answer whether, if the science were there, it would be permissible. I kind of take issue with treating cognitive enhancement in school settings and in life as something that we should punish. I understand and appreciate the arguments about coercion and a race to the bottom, or race to the top, however you want to think about it. I just don't think the solution to that is saying you can't use enhancers, nor do I think life is a zero-sum game where me enhancing myself somehow prevents you from being able to do so, or trades off with your opportunities in life.
Luisa Rodriguez: What do you see as the kind of best case outcome for all of this technology? What does the world look like?
Nita Farahany: Best case would be I get to use the device to enhance, to meditate, to improve my focus and attention, to tell when notifications are actually causing distraction or causing stress, and to help me make adjustments, because I have user-level controls that let me adjust my interaction with other technology to optimize my brain health and wellness. That's the best case scenario. I use it one day, maybe, to have brain-to-brain communication with the people I want to have brain-to-brain communication with. I don't think I want to communicate with everybody inside their brains. But maybe there are some people where, for me, a new level of intimacy is sharing a full thought, and helping them to truly see how I see something, and cultivating empathy in a really brand new way, because somebody can actually get inside my head and I can get inside their head and we can truly understand each other.
Luisa Rodriguez: Yeah. That does sound pretty special.
Nita Farahany: I think it'd be neat. I imagine this world in which it's like, wow, you can actually feel everything I feel, sort of see everything I see.
Luisa Rodriguez: Yeah. True empathy.
Nita Farahany: Yeah. So I think the best case scenario is that it's used by us how we want to use it, without it being a creepy tool of surveillance, and where we get to choose with whom we share what. And that it isn't used by governments to engage in cognitive warfare. It isn't used by governments to interrogate our brains. We don't have to worry at all times that they're going to subpoena all of our brain data from companies, because companies allow the data to live on device and to be overwritten on device, rather than capturing it, commodifying it, and using it to instrument us. So that's the best case scenario.
Luisa Rodriguez: Cool. Okay. And I was going to ask about the worst case scenario, but you kind of slipped some of that in there.
Nita Farahany: Yeah. I mean, it's the opposite of all of that.
Luisa Rodriguez: Right. So yeah. Okay. Well, that feels incredibly important, and also just much closer in time than some of these other technologies felt to me. I guess it makes me curious to what extent people are thinking about this, and thinking about how people can change their worldviews to make sure that, as we're doing things like setting incentives and deciding how we want to use the technology, we bring about those best case outcomes.
Nita Farahany: Yeah. A lot of people are thinking about this, surprisingly, in a good way. I mean, not enough, but with a lot of emerging technologies, you don't see the conversations happening before they go to scale across society. And in a really refreshing way, I'd say with neurotechnologies, there are a lot of international conversations happening on this. So UNESCO had a huge meeting this summer, and they're launching a potentially multiyear effort on it. The OECD in 2019 issued a set of principles directed really at regulators, and is thinking hard about how to translate that from a commercial actor perspective. Across Europe, you see a lot of activity in this space. Chile has updated their constitution to include specific rights for people around neurotechnologies and mental integrity. Mexico and Spain and other countries in Latin America are starting to look into specific rights in this space, thanks to some advocates who are focusing in those areas. There's not as much conversation here in the United States so far. But the UK issued a big report on this recently, out of their information and technology offices. So you see, across the UK, Europe, and Latin America, a lot of activity, and in ways that I think are thoughtful and grounded and recognize some of the unique benefits and also harms, to try to enable the technology to progress in a way that makes sense. I'd say there have been a few concrete approaches, in countries like Chile. So far, in the rest of the world, it's primarily at the level of principles, so kind of soft law or recommendations or ethical guidance. And I think, for me, the concept of cognitive liberty is a way to help unify those efforts, and to say it's also not just neurotechnology. It's all of these other technologies that are affecting our brains and mental experiences, and we need to think about it in a more holistic way rather than in a tech-specific way.
Luisa Rodriguez: Yeah. Yeah. That makes sense to me. We've talked about neurotechnology specifically, but tons of other things fall into this bucket, and it's a bit artificial to separate them out this way.
Nita Farahany: Well, it's good to talk about each technology and the unique threat that it poses. But then, when I put my law professor hat on and think about it from a governance perspective, it's to say: and then let's find the commonalities, so that we govern in a way that's actually comprehensive.
Luisa Rodriguez: Yeah. Right. Okay. So Chile is doing well on this. The US is not doing very well on this. Is that just because there hasn't been much advocacy, and it's not a priority for whoever should be thinking about this in the US?
Nita Farahany: Yeah. I mean, I'd say the FDA has done a good job when it comes to implanted neurotechnology, of being kind of a thought leader from a regulatory perspective. It's not quite clear who would regulate it in the US, and how, and I think that's part of the challenge.
Luisa Rodriguez: Yep. Okay. So we've talked about a couple of ideas you'd like to see become more widespread, in general this framework that we should consider cognitive liberty a human right. Are there any other ideas that you think should be more widely spread that we haven't talked about yet?
Nita Farahany: So I think we have to build an ecosystem around cognitive liberty. And what I mean by that is, if you're thinking about investing, ideally you're investing in technologies that enhance cognitive liberty, not ones that diminish it. And that's investing from an educational perspective; it's investing from a technological perspective. If you have a portfolio and you're thinking about what smart investment that aligns with overall human flourishing in the long run looks like, then I think it's really thinking about what the impact of technologies is on human cognitive liberty. And if they are contrary to human cognitive liberty, it's choosing a different company to invest in. I think that's part of how we start to align incentives in ways that actually maximize human potential: by investing in those technologies that expand it. I think BCI technology can be technology that expands human cognitive liberty, especially the companies that have a commitment to not building their business model around commodifying brain data. So one company, OpenBCI, has asked me to come on as an adviser. And the reason they want me to come on as an adviser is because they want to figure out how to align their product around cognitive liberty. And to me, that's exciting, as an opportunity to work with a company that totally believes that the future of computing needs to be rethought, and that it needs to be about enabling the individual: keeping all of their data on device, not having it commodified and extracted, not instrumenting the person for attention and engagement or selling them advertisements, but trying to liberate them. So investing in technologies that liberate people's minds, that's a good thing. Investing more in technologies that narrow our focus and diminish us, that's kind of a surefire way to ensure that AI takes over.
Because the more humans are distracted, kept on device, their brains diminished, addicted, acting compulsively and not thinking critically, the worse off we are as a species.
Luisa Rodriguez: Let's end that topic there. I found that very compelling. We've got time for one final question. We like to end with something, I don't know, positive, and maybe more uplifting than some of the dystopian things we've talked about so far. What is something that you're excited about possibly happening over your lifetime? Maybe this is in the space of neurotechnology, maybe it's something totally unrelated.
Nita Farahany: Honestly, the thing that I'm most excited about is seeing my kids grow up. So I have a three-year-old and an eight-year-old. We lost a child in between. And so I'd say I probably have an even greater appreciation for our living children, and for getting to see them grow, and the privilege that it is to see them get bigger and take on interests and to see what makes them curious. I mean, I think one of the great privileges of being a parent is getting to see the world anew through the innocent and curious eyes of children. So the thing that gets me the most excited is getting the privilege of watching them grow and seeing the world through their eyes. It's just things you don't notice, things you've taken for granted. Everything is new to them.
Luisa Rodriguez: Do you have examples?
Nita Farahany: I don't have a specific one for you off the top of my head, but I mean, they catch you by surprise all the time. You'll be driving down the road and you'll have never noticed a road sign there or something, and they'll be like, oh, isn't that interesting? Why does it say that? And you'll read it, and it totally changes your perspective on that drive. Or, I mean, just anything. You take most things for granted and have filtered out a lot of things in your environment. Kids don't. And it forces you to really think about life and the world differently.
Luisa Rodriguez: That's really lovely. Thank you for sharing. My guest today has been Nita Farahany. Thank you so much for coming on.
Nita Farahany: Thanks for having me.
Nathan Labenz: It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.