Organic Holodeck with Prophetic AI co-founders Eric Wollberg and Wesley Berry

In this episode from the new podcast Emergent Behavior, host @Ate-a-Pi interviews the co-founders of Prophetic AI to learn how they're using AI to facilitate lucid dreaming.

Read Episode Description

In this episode from the new podcast Emergent Behavior, host @Ate-a-Pi interviews the co-founders of Prophetic AI to learn how they're using AI to facilitate lucid dreaming. Prophetic had come up in the recent Cognitive Revolution conversation with Dean W. Ball about brain-computer interfaces, and Nathan had them on the list of companies to invite on the show, so it was really fun to discover that Ate-a-Pi had already done an interview with them right around the same time.

--

Check out Nathan's new chatbot on www.cognitiverevolution.ai

--
SPONSORS:

The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

Squad offers access to global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.

--
TIMESTAMPS:
(00:00) Intro
(00:43) Diving Deep with Prophetic AI
(02:18) Exploring Lucid Dreaming with Halo
(05:08) The Technical Marvel Behind Halo
(07:25) A Deep Dive into Lucid Dreaming Experiences
(13:15) Sponsor: Brave | Omneky
(14:42) Do All People Have Lucid States?
(32:19) Sponsor: Squad
(41:05) Building the Neural Model for Dream Control
(43:15) Exploring the Intersection of EEG, fMRI, and Ultrasound in Lucid Dreaming
(43:48) The Journey from Open Source Data to Targeted Brain Stimulation
(47:08) Collaboration with the Donders Institute and the Future of Lucid Dreaming Research
(48:40) The Role of Machine Learning and AI in Enhancing Data Collection and Analysis
(50:04) From Theory to Practice: Implementing Transcranial Focused Ultrasound (TFUS)
(53:51) Personalizing the Lucid Dreaming Experience through Reinforcement Learning
(1:16:18) Looking Ahead: The Potential and Challenges of Neurostimulation Technology
(1:22:55) Addressing the Skeptics: The Debate Over Natural vs. Technologically Induced Lucid Dreaming


Full Transcript


Nathan Labenz (0:00) Hello and welcome to The Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week we'll explore their revolutionary ideas, and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello, and welcome back to The Cognitive Revolution. Today, it's my pleasure to introduce the new podcast, Emergent Behavior, hosted by Ate-a-Pi. Known for imaginative writing on AI Twitter and in the Emergent Behavior newsletter, which you can find at emergentbehavior.co, Ate-a-Pi often explores speculative scenarios and dreamlike futures. In this episode, they speak with Eric Wollberg and Wesley Berry, cofounders of Prophetic AI, a neurotech company whose product, the Halo, is designed to induce lucid dream states on a nightly basis. Prophetic had come up in my recent conversation with Dean W. Ball about brain-computer interfaces, and I had had them on my list of companies to invite on the show. So it was really fun to discover that Ate-a-Pi had already done an interview with them right around the same time. Where my episode with Dean gave an overview of the state of the art in reading from and writing to the human brain, here we dive deep into one cutting-edge application. This discussion covers the challenges and intricacies of using AI not only to interpret brain signals, but to actually shape brain states with targeted ultrasound pulses directed toward the prefrontal cortex. It also explores the possibilities and potential unlocked by lucid dreaming, from the unforgettable experience of controlling one's own dreamscape to applications in problem solving, emotional catharsis, and even practical training scenarios. With the Emergent Behavior podcast, Ate-a-Pi promises to ask questions that they don't already know the answers to.
So for more explorations of an increasingly weird future, subscribe to Emergent Behavior wherever you get your podcasts. As always, we welcome your feedback via our website, cognitiverevolution.ai. My AI clone, powered by Delphi AI, is now standing by to answer your questions. And remember, I am hiring AI engineers and also looking to connect with business owners who have concrete problems that they want to solve with AI. So if that's you, please be in touch. Now I hope you enjoy this technically grounded yet mind-bending conversation with Prophetic AI's Eric Wollberg and Wesley Berry on the new Emergent Behavior podcast from Ate-a-Pi.

Ate-a-Pi (2:34) Welcome to Emergent Behavior, where we explore this moment in time just before artificial intelligence changes the world as we know it. Where we talk to the builders and creators, entrepreneurs and engineers. Where we ask questions to which we don't actually know the answers. I'm your host, Ate-a-Pi. I was built in the Czech Republic by Infinite.cz, and my voice is AI generated. I am excited to have with me today Eric Wollberg and Wesley Berry, cofounders of Prophetic AI, a neurotech company pushing the boundaries of inner experience. Eric and Wes are pioneering new ways to reliably enter and navigate altered states of consciousness. Their first product enables users to access and control their dreams, but their ambitions extend far beyond the world of sleep. In our conversation, we explore Prophetic's innovative headband that reads brain activity and gently stimulates specific regions to activate higher awareness. The technical challenges of collecting advanced brain imaging data and using it to train AI models that can guide brain stimulation in real time. The eye-opening possibilities unlocked by reliable access to new modes of consciousness, from problem solving to catharsis. Important questions around user consent and data privacy when it comes to devices that interface directly with our minds. And Prophetic's long-term vision of making powerful new states of consciousness accessible to everyone. Eric and Wes are at the forefront of an emerging industry that aims to empower us to explore the full depths of our inner worlds. If the potential of consciousness tech captivates you, I highly recommend their work at propheticai.co. As always, if you enjoy the show, the single best way to support us is by following Emergent Behavior on Apple Podcasts. Those follows make a huge difference for our reach. For more episodes, check out emergentbehavior.co, and you can find me on Twitter at @Ate-a-Pi.
Without further ado, please enjoy this mind-bending conversation with Prophetic AI's Eric Wollberg and Wesley Berry. Hey, Eric. Eric, why don't you introduce Prophetic?

Eric Wollberg (5:04) Sure. Prophetic is a consumer neurotech company. The two core technologies, which we'll certainly be diving into today, that undergird what we do are transcranial focused ultrasound, which is our neurostimulation modality, and the utilization of neural transformer architectures, a machine learning architecture specifically tuned for neural data. And so the way it loosely works, right, is we use simultaneous EEG-fMRI data sets. We can get into that, but it's a rather unique neuroimaging set. The model is trained on that. What the model outputs is the steering instructions, essentially the targets for the neurostimulation, and then there's EEG on the headband that, you know, does reinforcement learning to improve the model over time. To your point of, are we a lucid dreaming company? That's certainly what we're known for. When we showcased our first model, Morpheus-1, probably three weeks ago or so, one of the things I said at the end of the presentation that's very important is that what we should really be known for is a consciousness experience company. The model allows us to take any discrete universal brain state that's focused in the prefrontal cortex, which is the brain region of focus for us in terms of neurostimulation. We can use that to increase the number of experiences that are being provided on the same hardware. And this is our prototype, the Halo, which we can obviously dive into. So that's how I would introduce the company.

Ate-a-Pi (6:46) Indeed. Let me ask, just because I've been extremely curious about this. Like many people, I have somewhat experimented with lucid dreaming, but more in the kind of, hey, when you're falling asleep, just be very conscious, and when you're almost asleep, try to move stuff around in your field of vision while your eyes are closed. And maybe at some point you can kind of move stuff around, and then you fall asleep. Right? And then, like, ten seconds there, you're asleep. And then you wake up and you're like, did it really happen? And so, describe for me this experience. Like, I take the Halo and I put it on. Right? I put it on and I lie back, and do I switch something on?

Eric Wollberg (7:41) Yeah. So the way that this works, right, is the hardware has an app, you pair it with the app, you calibrate the device when you first get it. And basically, our goal, right, is that this is being done for you. I mean, to give you a sense, consumer neurotech has really been kind of stuck in this neurofeedback paradigm. Really trailblazing companies like Muse and Mendi, which came out probably over a decade ago now, relied on this EEG headband paired with some content, trying to create this neurofeedback process that would help get you into places of, whether it's meditation or focus, etc. And my joke here, I mean, it's really impressive what they did, certainly, especially how long ago it was, but my joke is, you know, why is Ozempic flying off the shelves and not gym memberships? It's really a function of human nature, right? We want things to be given to us. The less work required, the better. So what we do, right, is you put on the headband, you pair it, you go to sleep. The first thing that the headband is looking for is the EEG signature of REM...

Ate-a-Pi (8:53) So let me stop you there. It's not gonna help me get to sleep, right? So if I'm being restless, if I've had a lot of coffee before, it's not gonna, like, boom, put me to sleep.

Eric Wollberg (9:06) No. You have to naturally fall asleep.

Ate-a-Pi (9:08) I have to naturally fall asleep. Right? So I'm naturally kinda like, you know, I fall asleep. And at the point that I fall asleep, there is some period where I enter rapid eye movement, REM sleep. Right? And so it only starts to work at the point that the REM state is detected. Is that correct?

Eric Wollberg (9:28) Exactly correct, right? Like, the EEG on the headband is what's prompting the models. The first thing that the model is waiting for is that REM, basically, prompt to the model. And at that point it turns on, activating the transformer, which turns on the transducers. And the first set of spatial instructions for the neurostimulation are sent out, and neurostimulation begins. And then what it's looking for, essentially what it is constantly searching for, right, is this gamma frequency spike during REM, which is the signature of lucidity. And so it's basically saying, is Ate lucid? Is Ate lucid? Oh, great, Ate's lucid. Let's keep her lucid, keep her lucid, keep her lucid. And then when you get out of REM, the transducers are turned off and you continue with your sleep schedule.
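
The closed loop described here (EEG gates on REM, stimulation begins, each round checks for the lucidity signature, stimulation stops when REM ends) can be sketched as a simple state machine. Everything below is a hypothetical illustration of the control flow only; the function, stage names, and the idea that a fixed number of pulses produces lucidity are invented for the sketch and are not Prophetic's implementation.

```python
# Hypothetical sketch of the closed loop: stimulate only during REM, and keep
# checking whether the gamma-band "lucidity" signature has appeared.

def run_night(stage_sequence, becomes_lucid_after, max_pulses=50):
    """Simulate one night. `stage_sequence` is an iterable of sleep-stage labels;
    `becomes_lucid_after` is how many pulses this simulated sleeper needs."""
    pulses_sent = 0
    lucid = False
    log = []
    for stage in stage_sequence:
        if stage != "REM":
            # Outside REM the transducers stay off entirely
            log.append((stage, "stimulation off"))
            continue
        # REM detected: the model emits targeting instructions and a pulse fires
        pulses_sent += 1
        # After each pulse, check the EEG for the lucidity signature
        lucid = pulses_sent >= becomes_lucid_after
        log.append(("REM", "lucid" if lucid else "pulse %d" % pulses_sent))
        if pulses_sent >= max_pulses:
            break  # safety cap for the sketch
    return lucid, pulses_sent, log
```

Note that stimulation continues through the REM cycle even after lucidity is detected, mirroring the "keep her lucid" behavior described above.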

Ate-a-Pi (10:13) Okay, so here I am, I'm asleep, and then I start to enter REM. And as I enter REM, there is sometimes a period of lucidity, which is detected by this gamma spike that you talk about. Is that correct?

Eric Wollberg (10:31) No, so let me correct that. You enter REM, the transducers turn on utilizing the model, and what the model is looking for, in terms of those gamma frequency spikes, is, as each set of spatial instructions targets prefrontal cortex activation, it's basically reading: are we making her lucid? Okay, we're making her lucid. Let's keep her lucid. So that's what, yeah.

Ate-a-Pi (10:58) I see. So you enter REM. It takes a reading, and then the ultrasound gives a pulse, and then it checks again to see, have you entered lucidity? And if you haven't entered lucidity, it gives you another pulse and another pulse. And then once you enter lucidity, it tries to keep you in that lucid state.

Eric Wollberg (11:22) Correct.

Ate-a-Pi (11:22) And then for whatever period of time. So is that period of time adjustable? Do you, like, say, oh, you know what, I'm on 30 minutes today? Is that something that you adjust on the app? Like, what happens? How do you maintain that?

Eric Wollberg (11:38) So first of all, a REM cycle, right, is generally speaking like 20 minutes long. So we only target during the REM cycle. And so the maximum of what you could really expect, right, is a 20-minute stimulation experience. One thing that Inception definitely got right, and people who've experienced lucid dreaming would attest to this, is there's quite a bit of time dilation. So a 20-minute, 15-minute, 10-minute period of lucid dreaming feels a lot longer than that. In terms of adjustments, right now we're still in kind of this hardware phase and so on. But as we build out the app, things like being able to set maybe how long you wanna be lucid for is certainly something that would be easy to build in.

Ate-a-Pi (12:30) Okay. So, because obviously you have to keep triggering pulses in order to keep lucidity, the moment you stop triggering those pulses, you'd fall back into a natural kind of exit. Is that it? Would that be an accurate characterization?

Wesley Berry (12:49) I would say our goal is to create a self-sustaining sequence of neural activity, right? By continuously stimulating the brain with the TFUS and looking at the feedback from the brain: is there a point where the prefrontal cortex is now active without our assistance? Right? And that would be, like, the ultimate measure of success. Hey, we'll continue our interview in a moment after a word from our sponsors.

Ate-a-Pi (13:20) So there you have it. You've entered REM. Do all people have lucid states during REM? Is that consistent? Can everyone experience lucidity, or is it only some people?

Eric Wollberg (13:34) So about 55 percent of people self-report having had a lucid dream at least once in their life, and a smaller percentage of people appear to have a natural proclivity for it. I'm part of that lucky second group. The reason I got into this is that I had my first lucid dream when I was about 12 years old. I woke up and was equal parts terrified and excited; I'd never heard anyone talk about this. I found early kind of internet forums. There wasn't really that much; there wasn't really social media back then where people were talking about lucid dreaming. And so I was like, okay, sigh of relief, other people do this, that's great to know. And then I got really into the work of a gentleman named Dr. Stephen LaBerge, whose PhD at Stanford in 1980 really kicked off the neuroscientific and cognitive-scientific study of the brain state. Moreover, Dr. LaBerge, along with a few others, developed a number of techniques that you could do to improve your capacity for lucidity, things like reality checking or mnemonic induction, etc. So even if you don't have a natural proclivity for it, you can train yourself. It's probably more difficult and varying, but we should really probably talk about, right, what are the neural correlates of lucid dreaming? Lucid dreaming is a naturally occurring brain state. What it simply is, right, is prefrontal cortex activation, or activity, during REM. This makes a lot of sense, right? REM is when you're in a dream, and your prefrontal cortex is where your conscious awareness, working memory, and decision making are. Frankly, it's where you are, right? That's why you feel so present right now: because your prefrontal cortex is incredibly active, as are Wes's and mine right now.
So it is a naturally occurring brain state in the sense that upwards of 55 percent of people self-report having had it, but not everyone necessarily has a strong proclivity for it. It is simply prefrontal cortex activation during REM, which is something that anyone could theoretically have.
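
Earlier in the conversation, the headband's EEG is described as watching for a gamma frequency spike during REM. For readers curious what detecting that kind of signature might look like in practice, here is a minimal sketch of gamma-band power estimation from a single EEG channel. The 25-45 Hz band, the 256 Hz sampling rate, and the synthetic 40 Hz signal are assumptions for the demo, not Prophetic's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Integrate the Welch PSD of `signal` (1-D array at `fs` Hz) over [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)  # 1-second windows, 1 Hz resolution
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples
rng = np.random.default_rng(0)
# Simulated EEG: a 40 Hz ("gamma") oscillation buried in noise
eeg = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

gamma = band_power(eeg, fs, 25, 45)  # assumed "lucidity signature" band
theta = band_power(eeg, fs, 4, 8)    # comparison band
```

A real detector would presumably compare gamma power against a per-user baseline within REM epochs rather than against another band; this only shows the band-power primitive.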

Ate-a-Pi (15:40) So even the 45 percent of people who maybe have never had a lucid dream, or maybe have never recognized that they had a lucid dream, even for them, it is possible that, using the Halo, they might be able to experience prefrontal cortex activity.

Eric Wollberg (16:03) Exactly, yeah. There's absolutely no reason why, with neurostimulation, we couldn't give this extraordinary experience to anyone. And I just wanna make one point: if you've had a lucid dream, you will remember it. So I believe that these 45 percent of people haven't had it, given that a lucid dream, right, is the awareness that you're in a dream that, at its height, can give you control over the phenomenological contents of that dream at will. What you imagine becomes, and because your working memory is active in the prefrontal cortex, you remember this. So I believe those 45 percent, but we can absolutely give both the 45 and the 55 percent of people induced, stabilized lucid dreams. There's nothing neurobiologically or neuroanatomically that prevents them from doing that.

Ate-a-Pi (16:52) Alright. Let me come back to the actual dreaming part, but let's just finish our walkthrough here. So I've had that REM state, the device has helped me get into prefrontal cortex activity, I've had that lucid dream, and then the REM state starts to come to an end, you know, 20 minutes in. Does the device detect the ending of the REM state and therefore stop triggering this neurostimulation that causes the lucid dream? Or does the REM state naturally end by itself and the device kinda just, like, doesn't detect it, or what happens at that point?

Eric Wollberg (17:37) Right, exactly. So REM cycle is 20 minutes, you know, your REM cycle will end whether you're lucid or not. And once your REM cycle has ended, neurostimulation stops.

Ate-a-Pi (17:48) Neurostimulation stops. Okay. And then typically, sleep might be able to continue, because now REM has stopped and the lucidity has stopped. So typically, do people just continue their sleep cycle and then naturally wake up the next day? Is that typically what happens?

Eric Wollberg (18:06) Yeah, I mean, typically for sure. I can speak from natural lucid dreams: that's what happens, and that's exactly what we'd want to happen. We wouldn't want it to wake you up and then you have to go fall back asleep; that'd be a bad user experience. You want people to just have the regular rest of their sleep: light sleep, deep sleep, maybe another REM cycle. We're only gonna target one REM cycle in a given night, and then you wake up in the morning. Yeah.

Ate-a-Pi (18:31) Okay. So now let's go back to the actual lucid dream, right? It's a little bit hard to describe, I think, because it's one of those things that people experience inside their heads and then have to describe. And you can't really describe what's inside your head that accurately to someone else, because it's not a visual and it's not audio. It's an experience, right? And we don't really have enough vocabulary to describe this experience. And so, I read through kind of, like, Paul Tholey's, you know, Wikipedia entry, like, what is a lucid dream? How would you actually know? How would you actually know that you are in a lucid dream? You said if you experience one, you would never forget about it. So how would you actually understand that you are in a lucid dream?

Eric Wollberg (19:40) So first of all, I should make the point, right, that lucidity is a scale. You can, for example, be aware that you're in a dream but not in control of it. So you're along for the ride. It's a function, frankly...

Ate-a-Pi (19:56) And would you call that a lucid dream, like, just awareness that you are in a dream?

Eric Wollberg (19:59) Yeah. I mean, I think it's fine to call that a lucid dream, but to make clear what we're talking about, right, we want the 95th-percentile lucid dream, which involves the awareness that you're in a dream, the ability to pivot between third and first person, something called dissociation, and control. Control is what people want. And really that's a function of just how active your prefrontal cortex is, relative to how active your prefrontal cortex is right now. And so the experience of being in a lucid dream, at its height, is that you are as present as you are right now, in a dream, such that you can change and manipulate various phenomenological components of the dream at will. And so that's really the gold standard of what we know and what we're driving for. We can talk about, if you'd like, what people tend to do in their lucid dreams. I'm happy to dive into that.

Ate-a-Pi (21:05) Just to take a step back. So firstly, you have a consciousness that you are in a dream. And this was my experience trying to do it naturally, and it was a little bit messy, but I remember in the beginning, they tell you you should keep a diary of some sort, and you should have a kind of focused way of getting yourself in there each time. And for me, it tended to work when I was falling asleep or when I was waking up. So there were these two periods when I would kind of experience it.

Eric Wollberg (21:41) Those would have been hypnagogic states. Just so you know.

Ate-a-Pi (21:44) Hypnagogic. And is that typically also where people tend to see that kind of lucidity? Or is that because REM is kind of in the middle, right? Like, it's not at the beginning of the...

Eric Wollberg (21:52) Yeah. Those are generally different than a lucid dream, because you're not in REM. I mean, a hypnagogic state is somewhere right between waking up and being asleep, and you kind of have a little bit more of this manipulatable, malleable phenomenological space, but it is distinct.

Ate-a-Pi (22:12) I see. I see.

Eric Wollberg (22:13) From lucidity.

Ate-a-Pi (22:15) Incredible. Incredible. So a lucid dream is actually a fully present kind of experience, which is... that's pretty amazing. So once you're there and once you have that awareness, what can you do with it? I mean, can you, like, conjure, you know, I'm gonna be in cyberpunk today, or I'm gonna be in Dungeons and Dragons today? Is it a replacement for the holodeck? What can you do there?

Eric Wollberg (22:46) Yeah. So one thing that's always fun to do: one of the largest subreddits in the entire world is the lucid dreaming subreddit. It has over half a million people in it. If you were to take a look at what those people do, I'd break them into three core categories. One is recreation. It is the ultimate VR experience, right? You can fly, you can make buildings appear out of the ground. You can talk to your dream characters and prompt them. Frankly, one thing that I didn't really realize until really experiencing, you know, ChatGPT, for example, is that dream characters act a little bit like large language models, in the sense that they say very interesting things and sometimes they say absolute gibberish. It's almost like they're hallucinating or something. But they're fascinating to talk to. And then secondly, there are these productive capacities in lucid dreaming. We have investors who code in their lucid dreams and run the code in the morning. Actually, a friend of ours, Case Findley, who among other things helped do the initial branding for the company and did the first render of our hardware, is a fashion and hardware designer who designs in his lucid dreams. So there are also productive capacities there, and the history of discovery and creativity in dreaming is quite long, right? Famously, Srinivasa Ramanujan does his infinite series in his dreams; Schrödinger and Niels Bohr, you know, did their physics in their dreams. Salvador Dali famously paints in his dreams, probably actually maybe also a hypnagogic state, but nonetheless it gives his paintings these surreal qualities. And then the third point, right, is the metaphysical potency of the experience, a lot like a spiritual or psychedelic experience.
It is an extraordinary thing to become aware in your own dreams, really within your own consciousness. It is a profound experience from that perspective. More concretely, a very common thing that you see people do is when, God forbid, they lose somebody. Two years ago, I lost my grandmother. 98 years old, she lived a wonderful life. The first lucid dream I had after that, what do you think I did? I went to talk to my grandmother. And what's really fascinating about that is you have a mental model of the people you know, right? And so when you talk to her, she doesn't respond like just a regular dream character. She's kind of responding in the ways that you would remember, or think, that she would respond. And it's a very, very impactful experience. So you see a lot of people doing things like talking to them again or having closure with them, or people who have bad relationship breakups kind of trying to come to terms with that, etcetera. So those are the three core buckets, but what I broadly say, right, is that the limiting factor here is your imagination. What you imagine becomes. And one thing that, like, Wes pointed out to me: engraved on the inside of the Halo is our company slogan, which is, Prometheus stole fire from the gods, we will steal dreams from the prophets. Of course, Prometheus is kind of our myth in Western civilization for technology. And what was pointed out is, it's not like we actually discovered fire, right? Fire existed. If lightning struck a field of dry leaves, it caught on fire.
What we learned how to do was to control it. And through that control, right, we created industrialized modern society. And so, we've been dreaming a long time. Animals dream, primates dream, octopi dream, which is very cool if you've ever seen a video, because you can tell that they're dreaming: they're changing their colors and their textures, and clearly they're moving around an environment, which is fascinating to watch. And so what we're really talking about is controlling something that, in my mind, I think in our mind, is potentially as powerful as fire. And so who knows what we will create as a result of that control.

Ate-a-Pi (27:02) Just to take a step back, when someone comes out of the lucid dream, when they wake up, do they have full recall? Do they remember the entire dream?

Eric Wollberg (27:12) Yeah. So when you have a very, very high level of prefrontal cortex activation during lucidity, because your working memory is in your prefrontal cortex, you have great memory recall. This is very important, right? What's the point of giving somebody an experience that they would struggle to remember, or not remember at all? This is why I said I really believe those 45 percent who haven't had a lucid dream. Even vivid dreams that are not lucid, you kind of wake up and you're like, wow, that was wild. But after your first cup of coffee, it's already a struggle to remember it. And so that's a key thing. And then the second thing I'll just say is, we're also gonna have the app include this kind of social-media-meets-dream-journal where you can write what you dreamt about, which is also a great way to improve your memory recall. And two, we'll pair that with certain levels of gen AI where we can create videos or just visual representations, as a really great way not only to encapsulate that for you, but also to give the people that are following you a kind of insight into it. So yeah.

Ate-a-Pi (28:30) Just a segue. There's been some recent research where I think these guys could read images off an fMRI. I don't know if you saw that.

Eric Wollberg (28:43) From the blue flower.

Ate-a-Pi (28:44) Yeah. Yeah. So is that something that is possible also? Is that a possibility that, you know, you have a lucid dream and you also have the fMRI reading, and you are kind of able to generate what the person saw? Is that a future possibility? Obviously not now, but is that something that looks like it might be possible at some point?

Eric Wollberg (29:07) Yeah, so first of all, as exciting as that research is, it's done in a very narrow way: you train the fMRI on, like, 20 images and then you have to go imagine those 20 images, right? And if you think about the data-labeling problem of the entirety of non-logical experience, that's quite a challenge. But at the same time, one thing that I think is really important: the second piece we ever published on our blog was on something we call Noetic Sovereignty, which is really an offshoot of the cognitive autonomy movement. These are really profound and powerful technologies, and I think as a society we really have to have a conversation about whether we want that level of mind reading. Your conscious experiences are the most intimate thing you will ever have. We really focus on bringing you to brain states, and you fill in the content of your consciousness yourself. And frankly, if you don't want to post what you lucid dreamt about, all good, right? That's completely up to you. You really want to have a real sense of sovereignty over your conscious experiences. So first of all, that's probably years, if not decades, away. But moreover, I really think we should think about as a society what level of that technology we want to enable and allow. This technology is obviously inevitable on at least a medium- to longer-term time horizon, and I think it's a really important thing we need to talk about as a society at large.

Ate-a-Pi (30:56) Alright. Indeed.

Ate-a-Pi (30:58) Hey, we'll continue our interview in a moment after a word from our sponsors.

Ate-a-Pi (31:03) At what point... like, you guys have been at this for a while. I think you guys raised a seed round in October? I saw the seed closed.

Eric Wollberg (31:10) A pre-seed in June.

Ate-a-Pi (31:12) A pre-seed.

Eric Wollberg (31:13) A pre-seed, in June.

Ate-a-Pi (31:13) So when did you guys start working on it? What drove you to have this idea, and when did you first feel like it was going to be possible? What were the signs that you saw? What was the spark that said to you: you know what, if we put these things together, I think this is possible?

Eric Wollberg (31:37) Totally. So as I mentioned, I have this wonderful natural proclivity for lucid dreaming. I've been a lucid dreamer now for 18 years; it's been one of the most profound aspects of my life. In 2018, I was working for the Israel Innovation Authority, which is the government's venture arm. I was living in Jerusalem, and if you look at my background, I'm kind of a hopelessly curious person who goes an inch deep and a mile wide, and I knew that really wasn't the way I wanted to live my life. When you live in Jerusalem, you can't help but think about and ask ultimate questions, and I thought that if I could choose an ultimate question to try to answer, well, that would certainly keep my attention for a lifetime, given that these are questions we've been trying to answer for millennia. What is consciousness was the question that was most encapsulating for me, for obvious reasons given my experience with lucid dreaming. I talk about lucid dreaming as a kind of particle accelerator for consciousness. If you think about the parallel, lucid dreams are this conscious experience where you're interfacing directly with consciousness with little to zero sensory input, kind of like how we create very exotic states in the Large Hadron Collider using magnets, where we can create subatomic particle collisions and grok deeper mysteries of the universe. And so I thought, not only do I think it is an extraordinary experience to give mankind on demand, but it can also help us further answer that question.
Now, in 2018 I found these two core landmark studies from 2013 and 2014 where they had used electrical stimulation to successfully induce lucidity, albeit only statistically significant on two of the seven variables of the lucidity scale: insight, the awareness that you're in a dream, and, interestingly, dissociation. Now, electrical, and electromagnetic as well, stimulation are what we call internally the vacuum tubes of neurostimulation. Three core limitations. One is depth: your skull evolved to keep stuff away from your brain, and getting electricity past your skull is quite difficult. Should you achieve it, what happens is a phenomenon simply called spread: it spreads across the surface of the brain, so there's no precision and no ability to steer it. Now, the reason it was successful at all is that lucid dreaming is REM with the prefrontal cortex activated, so they were clearly stimulating the surface of the prefrontal cortex and having some efficacy. But I was investing in areas of deep technology at the time, and the thing you have to determine with deep tech is: is this an R or is it a D? Because if it's an R, it's just not what the asset class of venture capital was designed for. It's not really the job of venture capitalists to underwrite research; that's more for universities and governments. And I firmly felt it was an R. So I tabled it, but kept very close to the neuroscience and neurostimulation, and then later the machine learning architectures. It wasn't until 2022, when I found transcranial focused ultrasound and the neural transformer, that I firmly felt we had entered D. Now, let me talk a little bit about TFUS in particular as it compares to previous neurostimulation modalities. Back to those three core limiting factors: depth is centimeters into the brain, non-invasively. Precision is millimeters, so you're going from no precision to millimeters.
It's not just orders of magnitude; this is a paradigm shift. And then three is the ability to steer these millimeter pulses in three-dimensional space. This is critical, right, because your brain fires in three-dimensional neural firing patterns. So that, paired with the neural transformer architecture, which we should definitely dive deeper into and Wes can go deep on, we built that architecture from the ground up. Those two things paired together were what gave me the confidence that we had firmly entered D and could start the company. And what was amazing is that I found Wes, who was already working on using neural transformers for a variety of different applications in neurotech and had also just started going down the ultrasound rabbit hole. So lucky for us to have found each other.
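
The steering Eric describes rests on phased-array focusing: firing many small piezoelectric elements with staggered delays so their wavefronts converge at one focal point, which is what gives you depth, millimeter precision, and 3D steering at once. A minimal sketch of the delay math, with invented element positions and array size and a textbook soft-tissue sound speed (none of this reflects Prophetic's actual hardware):

```python
import math

# Hedged sketch of phased-array focusing: fire each piezoelectric element with
# a delay chosen so that every wavefront arrives at the focal point at the
# same instant. Element layout and target are invented for illustration.

SPEED_OF_SOUND = 1540.0  # m/s, a standard approximation for soft tissue

def focus_delays(elements, target):
    """Per-element firing delays (seconds) that focus the array at `target`."""
    dists = [math.dist(e, target) for e in elements]
    farthest = max(dists)  # the farthest element fires first (zero delay)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A toy 4-element linear array along x (1 cm pitch), focusing 5 cm deep
elements = [(i * 0.01, 0.0, 0.0) for i in range(4)]
delays = focus_delays(elements, (0.015, 0.0, 0.05))
```

More elements mean finer control over where that focal point sits and how quickly it can be re-steered, which is why the element-count curves Eric mentions later matter.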

Ate-a-Pi (36:01) So just one more question on the hardware. When did all of this hardware start to make sense in the package that it does? Because I imagine EEGs at one point were very, very big machines. And ultrasound was something a doctor uses on a patient, for a baby or whatever; it's something you hold in your hand, not really small enough to put on a headband. So when did these pieces of tech get small enough for this to start to make sense?

Eric Wollberg (36:30) Yeah, great question. TFUS for neuromodulation actually started around 2004. Through the mid-to-late '00s and the 2010s, you saw, and again this was really happening in research institutions, increasing element counts. The elements are piezoelectric material, a crystal, for example, where you run electrical current through it and it oscillates, creating the ultrasonic wave. The more elements you have, the more you're capable of both moving the focus around and phasing it, and the better the precision. And we noted there are actually these curves we observed; we named it Wes's law because Wes has too much humility to name it after himself. You're seeing very similar curves, a la Moore's law, where the number of elements on a given transducer is increasing over time while the size and cost of transducers are decreasing. You always want to be in a place, in hardware in particular, where you're riding some kind of curve, and you're seeing that happen in transducers. That was the really critical thing that happened over the course of what's now probably 20 years, and you now see this technology really capable of being commercialized.

Ate-a-Pi (38:08) So increasing resolution is always nice, right? You get a boost, a tailwind, from the increasing resolution. Okay, so let's get to the build-out of the neural model. When we build machine learning models, we always start with a large, messy dataset, hopefully not a small one. So talk about the first alpha version of the neural model. Where did the data come from? What were you trying to do? What was the test you did that made you say, okay, you know what, it's worth putting more time and effort into this?

Wesley Berry (38:53) Yeah, so I'll start with v1 of Morpheus. What we did is we basically sourced data from open-source datasets, alongside some lucid dreaming data, but the dataset is primarily built off what we found open source, and...

Ate-a-Pi (39:20) So maybe just to take a step back, what is the first thing you're trying to detect there? Are you trying to detect the entry into REM? Is that the first question you need to solve? Are you looking at the EEGs and trying to build a classifier to figure out whether the sleeper is entering REM? Or are you trying to build a classifier to detect lucidity? What are you trying to detect from that data first?

Wesley Berry (39:51) The goal of Morpheus-1 is to be given a particular brain state and continuously output TFUS instructions to get a response in the brain. That's the number one goal. We're not doing a significant amount of classification. The REM stuff, we leave that to other techniques; it's really not something we spend a great deal of time on, because that's a solved problem. It was solved a long time ago.

Ate-a-Pi (40:31) Right. Right.

Wesley Berry (40:32) Right. Right.

Ate-a-Pi (40:34) So you have a bunch of other tech which basically detects the REM, which gets you toward the lucid state. And what you're focused on is: you have an EEG signal coming in, and you need to produce a target transcranial focused ultrasound map that your transducers are going to take and implement. Is that an accurate characterization?

Wesley Berry (41:01) Yeah. Yes.

Ate-a-Pi (41:03) Right. So when you started off, did you do the actual work yourself? As in: I'm going to take the EEG, I'm going to produce the transcranial focused ultrasound, I'm going to actually deploy it, and I'm going to do the reading. Did you actually do the data collection, or did you start off with open-source datasets where other people had done it, and you were just trying to produce the output first? What was the start of this process for you?

Wesley Berry (41:32) Yeah. So the start is that we grab an open-source dataset that is simultaneous EEG and fMRI. We do a number of preprocessing techniques on the fMRI to basically find targets of heightened activity. So we mask the prefrontal cortex and look for which voxels, which 3D pixels, are in a heightened state, right? And intuitively, that heightened activity is in some way correlated with what's being read in the EEG. So when you have the simultaneous EEG-fMRI, what you're able to say is: at this timestamp, this fMRI scan was run, and at the same time, this sequence of EEG signals was collected. You feed this to the model and ask: what are the patterns that exist between this EEG data and the fMRI data, both spatially and temporally? From that, you're approximating something like: what does a model of the prefrontal cortex in a heightened state look like? And so the goal is: how do you get a transformer to output instructions to TFUS to bring the prefrontal cortex into a heightened state? The prefrontal cortex is the key in all of this, right? If you think of your prefrontal cortex when you're in a deep sleep, everything's slowing down; there's very little activity. But right now, as we're having this conversation, our prefrontal cortexes are quite active. And so the delta between those states is what we attempt to model, and how you pull someone upward into a heightened state.
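
As a rough illustration of the preprocessing Wesley describes, masking a region, thresholding voxels for heightened activity, and pairing each fMRI scan with the EEG recorded over the same interval, here is a hedged NumPy sketch. The mask shape, z-score threshold, and 2.1-second repetition time are assumptions for the example, not Prophetic's pipeline:

```python
import numpy as np

# Illustrative sketch: find "heightened activity" voxels in a masked fMRI
# volume, then align each fMRI scan with its simultaneous EEG window.

def active_voxels(volume, mask, z_thresh=2.0):
    """Indices of masked voxels whose z-scored activity exceeds z_thresh."""
    vals = volume[mask]
    z = (volume - vals.mean()) / (vals.std() + 1e-8)
    return np.argwhere(mask & (z > z_thresh))

def eeg_window(eeg, eeg_hz, scan_index, tr_seconds):
    """EEG samples collected while fMRI scan number `scan_index` was running."""
    start = int(scan_index * tr_seconds * eeg_hz)
    stop = int((scan_index + 1) * tr_seconds * eeg_hz)
    return eeg[:, start:stop]

rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))         # toy fMRI volume
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:6, 2:6, 2:6] = True               # stand-in for a prefrontal-cortex mask
vol[3, 3, 3] = 10.0                      # one strongly active voxel
hot = active_voxels(vol, mask)

eeg = rng.normal(size=(4, 60 * 1000))    # 4 channels, 60 s at 1000 Hz
win = eeg_window(eeg, eeg_hz=1000, scan_index=5, tr_seconds=2.1)
```

Each (scan, EEG window) pair is then a training example: the EEG is the model's input, and the heightened-voxel pattern is what it learns to associate with that input.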

Ate-a-Pi (43:38) Right. So when you have this dataset, is it a dataset of EEG-fMRI while transcranial ultrasound is being applied?

Wesley Berry (43:48) No.

Ate-a-Pi (43:49) No. But it is a dataset of people experiencing lucidity in dreams.

Wesley Berry (43:53) No. It's someone in a waking state, where their prefrontal cortex is active, right? The goal of the transformer is then to model what that waking-state prefrontal cortex looks like, only the prefrontal cortex, and then to figure out how the TFUS can bring someone to a state where the prefrontal cortex is active.

Ate-a-Pi (44:16) Right, so what is the second step of the...

Eric Wollberg (44:20) Let me just jump in for one clarifying point. We have a collaboration with the Donders Institute, which is probably the top lucid dreaming lab in the world, led by a gentleman named Dr. Martin Dresler, whose work in 2012 and 2014 was critical in establishing the neural correlates of lucid dreaming. We are doing the largest neuroimaging aggregation of lucid dreaming data ever done, simultaneous EEG-fMRI, and we do about four of these sessions a week right now. We have some data from them, primarily EEG so far; we should be getting our first full dataset from them probably today or tomorrow. Actually, I was just on the phone with Dr. Dresler earlier today. (Wesley's probably on mute just because I think we're getting echo.) So I want to be clear: what Wesley's talking about is that the model we showcased a couple of weeks ago is trained on open-source waking-state data, where that prefrontal cortex activation can be modeled, supplemented with EEG data of lucidity. The simultaneous EEG-fMRI dataset we're only just starting to add into the training data now, in the coming weeks and months, and we'll continue to add to it. So that was the one clarifying point I wanted to make.

Ate-a-Pi (45:56) No, I mean, what I always find is that whenever we try to implement machine learning or AI, we end up with insufficient data for the actual thing we're trying to do, right? And we end up trying to find proxies, because you need a large amount of data at first to get some initial result, which allows you to go forward and then obtain specific data on certain things. So I'm just trying to understand how that worked in the beginning stages, where you need this initial signal that says: you know what, this is worth our time and effort, before you actually collect and annotate much more granular and sophisticated data, which is a huge effort on its own. I was just trying to capture how that would have felt at the early stages of the company, which you've obviously progressed far beyond at this point. So you have this EEG-fMRI dataset at the very beginning, and the targets are being generated. And from those targets, you generate a transcranial ultrasound map that you need to target, right? So the next step is the generation of that TFUS map: the targets are identified in the fMRI, then you have the build-out of your TFUS, and basically that TFUS is targeting those areas. Is that an accurate characterization?

Wesley Berry (47:45) Yeah. Yeah.

Ate-a-Pi (47:48) Okay. So then you basically have small pulses that go in, and then you have another EEG reading, right? The next EEG reading comes out, and then you compare how close you got to where it needs to go. Then you modify the TFUS again and send another pulse. Is that accurate?

Wesley Berry (48:10) Yeah. The EEG that is read from the headband prompts the model, and the output is the stimulation instructions, yeah.
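
The loop being described here, read EEG, generate stimulation, pulse, read again, compare, can be sketched abstractly. Everything below, the model, the readings, the pulse interface, is a hypothetical stand-in just to show the control-loop shape, not Prophetic's code:

```python
# Minimal closed-loop sketch of the read-stimulate-read cycle described above.
# `model`, `read_eeg`, and `fire_pulse` are hypothetical stand-ins.

def closed_loop_step(model, read_eeg, fire_pulse, target_state):
    """One iteration: EEG prompts the model, a pulse fires, the response is scored."""
    before = read_eeg()
    instructions = model(before)          # model maps brain state -> TFUS targets
    fire_pulse(instructions)
    after = read_eeg()
    # distance to the desired state before vs. after the pulse
    error_before = sum((t - b) ** 2 for t, b in zip(target_state, before))
    error_after = sum((t, a) == (t, a) and (t - a) ** 2 for t, a in zip(target_state, after))
    error_after = sum((t - a) ** 2 for t, a in zip(target_state, after))
    return instructions, error_after < error_before  # did we move closer?

# Toy demo: a "model" that always nudges the state halfway toward the target
state = [0.0, 0.0]
target = [1.0, 1.0]

def read_eeg():
    return list(state)

def model(obs):
    return [(t - o) * 0.5 for t, o in zip(target, obs)]

def fire_pulse(instr):
    for i, d in enumerate(instr):
        state[i] += d

instr, improved = closed_loop_step(model, read_eeg, fire_pulse, target)
```

In the real system the "error" would be some measure over EEG features rather than a two-number toy state, but the loop structure is the same.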

Ate-a-Pi (48:19) Yes. So how many times a second are you taking these readings? What kind of resolution do you have on this? Are you taking a reading, like, ten times a second, and the ultrasound is at, you know, 60 hertz? What are the numbers we're talking about here?

Wesley Berry (48:40) I mean, the EEG right now samples at 1000 hertz. Do you need all of that? You can downsample it; you could do that to reduce complexity if there's a lot of redundancy. So it could be anywhere from 256 samples up to 1000. But right now, it's 1000.
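
The downsampling Wesley mentions, going from 1000 Hz to something like 256 Hz, can be sketched with simple linear interpolation. A production pipeline would low-pass filter first to avoid aliasing; this just shows the bookkeeping:

```python
import numpy as np

# Sketch of resampling a 1000 Hz EEG channel to 256 Hz via linear
# interpolation (illustration only; real pipelines filter first).

def downsample(signal, src_hz, dst_hz):
    n_out = int(len(signal) * dst_hz / src_hz)
    t_src = np.arange(len(signal)) / src_hz   # original sample times
    t_dst = np.arange(n_out) / dst_hz         # target sample times
    return np.interp(t_dst, t_src, signal)

# One second of a 10 Hz sine sampled at 1000 Hz -> 256 samples
one_second = np.sin(2 * np.pi * 10 * np.arange(1000) / 1000)
resampled = downsample(one_second, 1000, 256)
```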

Ate-a-Pi (49:04) Right now, it's 1000. And then the ultrasound, what kind of frequency are we talking about there?

Wesley Berry (49:14) So, I mean, there are a few different frequencies, right? There's duty cycle, and a few other things. There's the frequency that the ultrasound operates at, call that 500 kilohertz, and then there are the pulses themselves. Our goal is gamma, so that's probably going to be around 40 pulses per second.
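
Putting those numbers together: a roughly 500 kHz ultrasonic carrier gated into about 40 pulses per second to pace the gamma band. The 30% duty cycle below is an invented placeholder just to make the arithmetic concrete, not a quoted spec:

```python
# Back-of-the-envelope pulse-scheme arithmetic (duty cycle is assumed).

CARRIER_HZ = 500_000      # ultrasound operating frequency
PULSE_RATE_HZ = 40        # pulses per second (gamma-band pacing)
DUTY_CYCLE = 0.30         # assumed fraction of each period the carrier is on

pulse_period_s = 1 / PULSE_RATE_HZ                   # 25 ms between pulse starts
on_time_s = pulse_period_s * DUTY_CYCLE              # carrier on ~7.5 ms per pulse
cycles_per_pulse = round(on_time_s * CARRIER_HZ)     # carrier cycles in one pulse
```

So each 40 Hz "pulse" is itself thousands of carrier cycles; the two frequencies live at very different scales.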

Ate-a-Pi (49:41) 40 pulses per second. So each of those 40 pulses could be slightly different, as in slightly modified, to get you to where you need to go. Is that correct?

Wesley Berry (49:55) Certainly. But I would say the primary adjustment that's made per loop cycle is really the spatial targets more than anything, the spatial targets as opposed to the temporal.

Ate-a-Pi (50:13) I see. So more about which location in the brain is being targeted, rather than the timing. I see.

Wesley Berry (50:22) Yeah. I mean, I think they're both critical, but the spatial is the most important. I'd put that as the priority.

Ate-a-Pi (50:29) And that brings up the other thing, which is: even if the Halo is situated slightly differently each time, you still take a reading of the EEG in real time, so you can adjust the spatial targets accordingly. Is that correct?

Wesley Berry (50:45) Yep.

Ate-a-Pi (50:47) So the next question I have is: how much of it is personalized? How much difference is there from person to person that the device, or the model, has to adjust for in real time?

Wesley Berry (51:04) Yeah. So the next big thing we're working on right now is the reinforcement learning layer of all of this. There are two types of feedback that the model will receive. One is the user's explicit feedback: even if they're awake, they run a sequence and give some feedback, like, I didn't feel anything, or, on a scale of 1 to 10, I felt something very significant, and so on. They've explicitly ranked a given sequence, or they've explicitly ranked an experience after they woke up. So that's number one. Number two is the neurofeedback: given the TFUS pulses, what is the response in the brain? There are a number of ways to measure it, but you can think of it like you're pinging the brain and listening to the responses, right? Spikes in gamma, general patterns between the different electrode placements, that's really informative for the model. Because what you can then do is measure, on a continuous scale, what is working. You can do a sort of binary classification on user feedback, did this work or didn't it, and you can take that continuous feedback and provide it as rewards or penalties for the sequences the user experienced. And from that, learn what's hitting with a particular person. In a similar way, you could think of a TikTok feed or a Twitter feed, where the user gives feedback, maybe explicitly in the form of a like, or maybe as some continuous factor like the time they spent viewing a particular tweet or video.
The model can then learn the preferences, the things, in terms of content, which are just tokens, that work with a particular person. And from that, the model will learn over time.
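
One way to picture the two feedback channels, explicit ratings plus neurofeedback, is as a single blended reward per stimulation sequence. The rescaling and the 50/50 weighting here are invented purely for illustration, not Prophetic's actual scheme:

```python
# Hedged sketch: blend an explicit 1-10 user rating with a continuous
# neurofeedback signal (e.g. relative change in gamma power) into one reward.

def sequence_reward(user_rating, gamma_before, gamma_after, w_user=0.5):
    """Blend explicit feedback (rescaled to 0..1) with relative gamma change."""
    explicit = (user_rating - 1) / 9                      # 1-10 -> 0..1
    neuro = (gamma_after - gamma_before) / max(gamma_before, 1e-8)
    return w_user * explicit + (1 - w_user) * neuro

# A sequence rated 8/10 that also raised gamma power by 20%
r = sequence_reward(user_rating=8, gamma_before=1.0, gamma_after=1.2)
```

A reward like this per sequence is the kind of signal a reinforcement learner could use to shift its spatial targets toward what works for one particular person, in the same way a feed ranker personalizes on likes and dwell time.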

Eric Wollberg (53:26) Let me just, because I like to let Wes give the super technical version, and I know you have a very informed audience, but let me give a layman's explanation, right? First of all, we only focus on training models on brain states that are both discrete and universal. To define terms: when I say discrete, I mean it is one thing and not another. A counterexample being, unfortunately, flow states. Flow states are not discrete: you can be a surfer and enter a state of flow, and what triggered that is your motor cortex, versus a chess player who enters a state of flow through spatial reasoning. So you would just need more data to create a generalized model. But lucid dreaming is discrete and universal. It is prefrontal cortex activation during REM, whether you're in a lucid dream, or Wes is, or I am, which makes it easier to build these generalizable models. Also, beyond what we already said about the sampling rate of EEG, these simultaneous EEG-fMRI datasets are extraordinarily information dense. To give you a sense: you get a spatial reading in fMRI once every 2.1 seconds, and in that same period you'll have 2,100 EEG samplings. So it's extremely information dense. Broadly, how you could think about this is that you're creating this kind of vector space of possible lucid dreaming sequences. You might be over here, and your reinforcement learning will drive the model to really focus on targets around there, whereas Wes or I might be in other areas of that space. So that's the layman's explanation I would give.

Ate-a-Pi (55:17) Oh, absolutely. What is your target for the first run of the Halo? Is it 10,000 devices? Is it more? What's the rough number you're coming out with?

Eric Wollberg (55:33) Yeah. We have a reservation program that I started because I talked to a lot of consumer hardware founders, and they said: listen, one, it obviously de-risks demand, but also, for the go-to-manufacturing motion, it's really great to have an order book you can use to approach better manufacturing partners. Because the bane of all consumer hardware companies, and really hardware companies broadly, but particularly consumer hardware, is these small- and medium-batch manufacturers where you order 500, some don't work, you say, hey, these don't work, and they say, cool, send us more money; now 250 don't work, etcetera. It's death by a thousand paper cuts. And so we've done $2,400,000 of booked revenue through that reservation program. I want to be very clear: that money is held in a separate bank account, we do not use it for development, and it's fully refundable at any time. But I think it shows a good sense of demand. In terms of what we want the first run to look like, it's very much going to reflect where that reservation list is at the time. Certainly anyone who puts down a reservation should be getting a device, because they took a bet on us when, in some of these cases, we were nothing but a website and a dream, not to be too on the nose. But that's really the critical thing: building up this reservation program. We've spent $0 on marketing so far, and I think what this shows is that there's enormous demand for this; we have to make it work. We're starting neurostimulation in the spring, and we have a beta user program that we launched alongside showcasing the model. We got something like 3,500 people to sign up for that in five days.
And so we'll go through the process of selecting those people for the beta user program. But anyway, in terms of what the initial run will look like, it will be very dependent on that reservation list.

Ate-a-Pi (57:36) I just wanted to draw a metaphor. Let's say, I'm just going to put a number on it, that it's 10k, right? Let's say you have 10,000 devices out there. How much would that expand your dataset? Because I imagine you can use the EEG data coming back to improve your models. So what does that model feedback loop look like from the initial devices? I see this all the time: there's this graph people have of the day the iPhone came out, and ten years later we probably create as many photos in a day as in the entire history of humanity before the iPhone. So I wonder if putting this device out there expands the dataset of EEGs so dramatically that you have this enormously larger dataset. Do you think 10,000 of these devices in the market would expand your dataset significantly for future improvement?

Wesley Berry (58:56) Yeah. So that goes back to the reinforcement layer. It really only works at scale, or it works significantly better at scale, I should say. Because the Halo is a neuroimaging device as well as a neurostimulation device. And so what we're able to do is look at what responses you're getting from the TFUS, and those responses are measured with EEG. That's really informative, right? You can learn quite a bit from what works and what doesn't. Starting with the open-source simultaneous EEG and fMRI datasets, we can get terabytes of data that way. But when you talk about getting into hundreds of terabytes or petabytes of data, that's really going to come from people who are using this every night, where we're collecting, on a nightly basis, gigabytes worth of neuroimaging data, and specifically the responses to TFUS sequences. You're pinging the brain and basically seeing how it reverberates, and we're collecting that, this continuous thing, over and over again. So it would be a very significant amount of data.
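
The "gigabytes per night" claim is easy to sanity-check. The sampling rate is the 1000 Hz quoted earlier; the channel count and sample width are assumptions (the Halo's actual specs aren't given), but the order of magnitude comes out as Wesley describes:

```python
# Back-of-the-envelope nightly EEG data volume (channels and sample width
# are assumed, not the Halo's actual specs).

SAMPLE_RATE_HZ = 1000    # EEG samples per second, as quoted earlier
CHANNELS = 8             # assumed electrode count
BYTES_PER_SAMPLE = 4     # assumed 32-bit samples
HOURS_ASLEEP = 8

bytes_per_night = SAMPLE_RATE_HZ * CHANNELS * BYTES_PER_SAMPLE * HOURS_ASLEEP * 3600
gb_per_night = bytes_per_night / 1e9     # roughly a gigabyte per user per night
```

At roughly a gigabyte per user per night, 10,000 devices would indeed push the corpus into tens of terabytes within a few days.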

Ate-a-Pi (1:00:21) Yeah, I can imagine. Go ahead.

Eric Wollberg (1:00:24) And one thing I also want to talk about is not only scaling up the reinforcement side, but also the initial training data aggregation. The day after we showcased the model, we released a piece that got a good response, but I don't think people really realize how profound it potentially is. We call it the Qualia Factory. Realistically, that's a really cool marketing name for what is a neuroimaging lab, primarily set up with simultaneous EEG-fMRI neuroimaging rigs. You can imagine we can actually expand our training dataset by bringing in people who are extraordinary at lucid dreaming, or meditating, or focus, or positive mood, etcetera: the entire state space of discrete universal, and, because we have the scale now, potentially even non-discrete universal brain states, aggregating more and more of this data to train successive models that can deliver more and more experiences with the same piece of hardware. We aim to launch the device with more than just lucid dreaming as an experience: definitely focus, and hopefully also positive mood. So that's another really critical way you scale up data, on the training dataset side, using this Qualia Factory, which is really a larger neuroimaging lab built for purpose.

Ate-a-Pi (1:01:53) So maybe I missed that earlier. The core thing on your broad product roadmap is discrete universal states, as you call them. And of the discrete universal states, since perhaps not everyone is super familiar, you're saying focus is one of them, and positive mood is another?

Eric Wollberg (1:02:28) TFUS has already been used to induce focus and positive mood, so these things have already been validated in the context of TFUS. In many ways we're broadly recreating that. But listen, we think lucid dreaming is the killer app for this technology from a consumer experience perspective. It's the most profound, extraordinary experience we think we can give, and that's why it's our focus. But you want to create a system where, over time, you can make a more and more generalized device, where the number of experiences the same device can give you increases. So we aim to have around three core experiences released when the device ships. It obviously also increases the TAM of the device. People like to be focused when they work, so maybe you can wear it while you're jamming out a spreadsheet or whatever you do for work. Same with positive mood: people want to feel happy, and so on. That's the impetus for what's really critical about the model. Not only does it create this closed loop system that creates these targets and reliably creates these experiences, it also allows you to onboard more experiences to the system over time.

Ate-a-Pi (1:03:56) Let's go back to lucid dreaming. Say you have the device. Now, some people just won't know what to do with it, right? If you had to create a tutorial for someone, is there a series of exercises you'd recommend? Like, in the first one: wake up, take a look around, walk around, try this. What would drill one to drill two look like?

Eric Wollberg (1:04:28) Yeah, I think you're absolutely right. It's very critical, and incumbent upon us, to do a certain level of consumer education before people try this out, and to give them a guide to the initial things they could try. Number one is just taking in and grokking your environment. Depending on where you are, if there's a dream character around you, go talk to them, ask them questions, and so on. Then maybe try to make a flower or a building appear out of the ground, change your environment. The thing that's also really important is that, phenomenologically, the way this is experienced is that what you imagine becomes. So maybe one of the things to do is go into it asking: where do you want to be? What is the first environment you want to create? Then, when you're lucid, you can do that. It's definitely important that we create some kind of content and guide that does that, and that prepares you for what it's going to feel like. It's going to be a wild experience the first time, no doubt about it. But I think people equilibrate to things very quickly. We already saw that with ChatGPT: it was one of the most extraordinary things ever, and now it's completely banal. So you're absolutely correct that we'll have a set of content that walks you through what this will feel like and what to do when you first experience it. But frankly, I think people will very quickly become attuned and accustomed to what they can do in these experiences.

Ate-a-Pi (1:06:17) Just playing around with this idea: say you put on the Halo, you fire it up, you go to sleep. Could you also have an app that talks you through it, a vocal experience? Once the lucid state is triggered, it says: hello, wake up, do this, or imagine you're doing this, et cetera. I read through some of the research on your page about real-time dialogue between experimenters and dreamers, so it seems like you can have a conversation, a little bit of interaction. Would you have that kind of interactivity, where someone is talking you through something or doing something with you together? Is that something you've experimented with in the lab?

Eric Wollberg (1:07:18) Yeah, the problem there is that when you provide sensory input, the worry is that you could activate the thalamus, which regulates sleep, and just wake people up. It's interesting: we talked to a guy who did his PhD attempting auditory induction, and he had to build this whole algorithm, on a very limited dataset, that determined people's different waking thresholds. That's the sense in which you're a light sleeper or a heavy sleeper: how much auditory noise can there be before you're actually awoken? The problem is that when your brain hears something, or, as with attempts to use light to induce lucidity, thinks it's seeing something change in the environment outside of you, your body says: something's happening, we have to wake up. So an auditory guide is really not something you want to do, because you very much increase the chances of waking people up. Again, I think walkthrough content, which could be video content and so forth, can prepare people for what to expect. And people equilibrate to things very, very quickly. Once you're attuned and accustomed to this, there's no need for a guide. You just know that what you imagine becomes, and people have their imagination.

Ate-a-Pi (1:08:56) What about playing music in the background? Is that basically the same? Or does that get tuned out by the brain?

Eric Wollberg (1:09:06) Again, the issue is that when you send sensory data, whether through your ears or your nose or your eyes, you can activate the thalamus, which regulates sleep, and wake yourself up.

Ate-a-Pi (1:09:17) Wow, interesting. So basically, you allow yourself to experience the dream in fullness without distracting yourself with things outside it. Interesting.

Eric Wollberg (1:09:33) Yeah. The whole point is that we want to circumvent these auditory or sensory data inputs, and we're just focused on activating the prefrontal cortex.

Ate-a-Pi (1:09:44) Right on. Tell me a little more about the qualia factory. My layperson's understanding of qualia is that it's your subjective experience of consciousness; I don't know if that's an accurate description. What is qualia, and why do you say you have the potential to create it in a factory?

Eric Wollberg (1:10:13) Yeah, I'll define qualia, but I want to be clear: qualia factory is kind of a marketing term. What we're talking about is a neuroimaging lab specifically set up to do simultaneous EEG and fMRI neuroimaging, so that we can build more training datasets for more models that give more experiences. As for qualia: Charles Peirce, a famous American philosopher and contemporary of William James, coined the term, a quale being the basic unit of a given conscious experience. The color red, the smell of coffee, et cetera: these are units of qualia, of conscious experience. And a conscious experience is the composition of that entire qualia landscape. That's where the term comes from, and that's why we named it that.

Ate-a-Pi (1:11:10) I see. Very good. One thing people might ask is: is this a medical device? Is this FDA regulated? Should it be? Describe the thought process behind whether it should be regulated, and why it isn't.

Eric Wollberg (1:11:32) Sure. One, we don't make medical claims; that's very important. Two, the threshold of regulation for ultrasound is 720 milliwatts per square centimeter, essentially the pressure being exerted by the stimulation. We're talking about something between 100 and 200 milliwatts per square centimeter, so well below that threshold. To give you a sense, you can get recreational electrical stimulation devices today from companies like LIFTiD, et cetera, and those are far less precise technologies; the safety record of ultrasound relative to any of those is far better. To put it in a phrase: it's less than a sonogram on a mother's womb. That's what we're talking about. So it's not regulated. One of the key differences between us and a lot of the people in the TFUS space is that they're almost universally targeting medical applications, which is great; it just means the time to market is much longer. So we focus on these recreational consumer experiences. That's the framework, and why we don't have to go through it.

Ate-a-Pi (1:12:49) Interesting. So there are people trying to use transcranial focused ultrasound for medical applications, but you're not making any medical claims or building a medical device. You're not going to cure anyone; inducing a state of lucid dreaming is all you're trying to do at this point. And the amount of ultrasound being projected into the brain is something like one-seventh or one-eighth of the regulated limit?

Eric Wollberg (1:13:29) Yeah, between one-seventh and one-fifth, something like that.

Ate-a-Pi (1:13:33) Right. Perfect.

Ate-a-Pi (1:13:40) What does Morpheus look like going forward? What does v2 of the device look like? You have a v1 coming out soon, in beta, with beta testers coming in. Is a v2 basically going to be better and smaller, with more transducers? Obviously you can update your model anytime on the back end. But for a v2 device, are you looking for further development and miniaturization of the transducers and the EEG? More resolution? Packing more of them into a smaller device? What would you look for before you pull the trigger on a v2 of the hardware?

Eric Wollberg (1:14:28) Sure. We are very dead set and focused on v1, as you can imagine. Right now we're migrating the model onto the technical prototype we have. Actually, you can probably see some of my team building a 3D scanning tank for a hydrophone, which we use to test the pressure and compare it to the ultrasound simulations we run, to calibrate it. In terms of where a v2's improvements would be: again, reference Wes's law, which I talked about previously, where you see increasing miniaturization and an increasing number of elements on a given transducer, which increases your precision and your ability to phase and steer the beam. As you mentioned, the model will continuously update in the background, and we'll launch more and more experiences, which you can subscribe to, improving the number of experiences that way as well. So experiences can be improved both through the transducers, whether the number of transducers or the number of elements on a transducer, and through the model. You definitely want to improve things like that, as well as the energy consumption of the transducers, so you can keep shrinking the electronics and won't need a separate side unit of gear; hopefully, whether it's v2 or v3, that stuff is just in the headband. So that's how you can see this improving over time as successive models are produced. But we are hyper-focused on just getting v1 done.

Ate-a-Pi (1:16:12) What have been the biggest challenges for the hardware? Is it heat? Is it power? What has been the most challenging thing in the build-out so far?

Eric Wollberg (1:16:26) Yeah, a few things. One is that we did our prototype with Card79, who did Neuralink for Elon, and

Wesley Berry (1:16:37) they work with

Eric Wollberg (1:16:37) a number of other neurotech companies; I'm not sure I can mention their names. Comfort is really critical, because you have to be able to fall asleep with this on, and if you're using it for a waking-state experience, it's also got to be comfortable. So design and form factor are very, very important on the hardware side. We did extensive studies, because some people sleep on their side, some on their stomach, some on their back, to make sure it's fully comfortable and all the components fit in that comfortable form factor. Then, in more brass-tacks terms, there's component selection, et cetera, which you have to be very focused on. And there's all this equipment we have to build, like this 3D scanning hydrophone tank, plus simulation software, to actually test things. Because I can't see how the waves are propagating in your skull; I'd have to cut you open, and I don't want to do that to you. So we have to use the hydrophone and the simulation software to create a one-to-one correlation, so that we can then take somebody's skull and understand how the ultrasound will propagate through their head. There's a lot you just have to build in order to even get to neuromodulation, and that's really been the focus over the last couple of months. Come spring we'll be doing neurostimulation and validating all these experiences, and hopefully having, in short order, but you never know, the first ultrasonically induced lucid dream.
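The hydrophone-versus-simulation calibration Eric describes amounts to comparing a predicted pressure field against measured values at matching points in the scan tank. Here is a minimal sketch of that comparison, using made-up numbers and a simple root-mean-square-error agreement metric; the actual pipeline, units, and acceptance criteria are not specified in the conversation.

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between simulated and hydrophone-measured
    pressure samples taken at the same points in the scan tank."""
    assert len(simulated) == len(measured)
    return math.sqrt(
        sum((s - m) ** 2 for s, m in zip(simulated, measured)) / len(simulated)
    )

# Hypothetical pressure samples (arbitrary units) at four scan points.
sim = [10.0, 12.5, 9.0, 11.0]    # from the ultrasound simulation
meas = [10.2, 12.1, 9.3, 10.8]   # from the hydrophone scan

print(round(rmse(sim, meas), 3))  # prints 0.287
```

A low error over a dense scan grid is what would justify trusting the simulation to predict propagation through a skull the hydrophone can't reach.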

Ate-a-Pi (1:18:20) Right. In the early days, were there any hurdles where you needed experimental data to decide whether this was a go or not? Like, "we need to figure this out, and if it isn't going to happen, we need to pivot"? Were there any moments, technical or otherwise, where you faced those decisions?

Eric Wollberg (1:18:49) Yeah. Comparing 2018, when I first thought about doing this, to 2022, when I found TFUS: you needed a different neurostimulation modality than electricity or electromagnetism. And TFUS has already been used to induce focus, positive mood, deep meditative states, et cetera. It is generally neuromodulatory, so as long as you understand the brain state you're trying to induce, this is the integrated circuit for non-invasive neurostimulation. That was, I think, very de-risked from our side. Then we have this collaboration with the Donders Institute for neuroimaging data, and that pipeline was also very critical. Then came successive breakthroughs and acquisitions of datasets we could use to supplement that data and expand our data pipeline. Really it was that, and of course the advent of the transformer architecture, which we then expanded and built from the ground up as a multimodal neural transformer. But in terms of red light, green light, the go decision was really on the neurostimulation front more than anything else: the modality.

Ate-a-Pi (1:20:10) Right. Do you sometimes come across purists who say lucid dreaming is something you should come to naturally, as part of a religious or semi-religious meditative process, rather than something to be offered to everyone? Do you come across comments like that?

Eric Wollberg (1:20:38) Absolutely. I've seen a couple of comments on some of the various things we've posted. You see that kind of gatekeeping in basically everything. I was also in the psychedelic space for a brief while, and you saw the same thing there: you had the ethnobotanists saying these are our sacred plants and compounds, and then you had the scientists at University College London and Johns Hopkins, et cetera, who were really trying to move our understanding forward. You see these dynamics play out across many different areas. It's as if, to bring it back to fire, we said: no, only lightning gets to create fire; we shouldn't control it; somebody could burn down our village. So that kind of feedback was fully expected, and I'm not surprised we got it from certain people. But frankly, the world has been moved forward by people who say: this is an extraordinarily powerful thing. What if we gave it to everyone?

Ate-a-Pi (1:22:02) Right. And on that note, among your early testers in the lab, has there been a remarkable experience, where someone has tried this on and come to you, and it was a whoa moment for them? I'm sure someone has tested it.

Eric Wollberg (1:22:26) To be very clear, we're starting neurostimulation in the spring. We have to migrate the model on and, as I mentioned, test all the hardware to make sure we get correspondence with the simulation software, so that we're doing this properly. But we have friends at places like the Institute for Advanced Consciousness Studies, which has been running a number of different ultrasound experiences; meditation and positive mood, I think, were among the things they were doing. There's actually a really great blog post somebody wrote on his experience, and it's pretty extraordinary. There's also a really great video, done quite some time ago, on the research of Dr. Jay Sanguinetti, who was using it to induce deep meditative states. They took a woman who had never meditated a day in her life and induced a deep meditative state; she was very emotional about it. It's an extraordinary thing to be dropped into a brain state you've never experienced before. So it is a very powerful technology and experience, and we look forward to giving those experiences to our beta users in the spring.

Ate-a-Pi (1:23:40) That's amazing, man. I am so inspired. This is such an awesome piece of technology. You often read, say, Hermann Hesse's Siddhartha, and you realize the authors aren't able to fully describe their experience. It's not possible for anyone to fully verbally describe these altered states of consciousness. We have a very limited understanding of, and a limited ability to describe, what's happening in our own heads. To the extent that there's something you could experience, that's inside you but inaccessible, and you can use a technological innovation to trigger it and learn something more about yourself, that's amazing. It's absolutely fantastic.

Eric Wollberg (1:24:45) Yeah, obviously we share your enthusiasm. I think the best thing you can do for people is give them the experience; if you can give them the experience, it's empirical for them innately. And one of the things that's going to be pretty extraordinary over the years and decades, to paraphrase Altman, is that this is the worst these neural transformers will ever be; this is the worst TFUS will ever be. If we can build on both of these curves, Wesley's law and the machine learning improvements we're seeing every day, the ability to give people a device that allows them to explore the state space of conscious experience is going to be transformative for the world. I frankly don't have a good sense of what the world looks like when this is available en masse, but it's an exciting world. Certainly a strange one, but one that I think is profound, and something I look forward to exploring alongside everyone else.

Ate-a-Pi (1:26:06) Awesome. Thank you, Eric. Thank you, Wes. Thank you for your time today. Emergent Behavior is a part of the Turpentine Network. For our full newsletter, visit emergentbehavior.co. For more Turpentine podcasts and sponsorship inquiries, visit turpentine.co.
