Welcome to AI in the AM: RL for EE, Oversight w/out Nationalization, & the first AI-Run Retail Store

Nathan and Prakash speak with Sergiy Nesterenko of Quilter on reinforcement learning for circuit board design, Andy Hall on AI governance, and Andon Labs on an AI-run retail store. They also discuss AI progress, existential risk, and civic action.


Watch Episode Here


Listen to Episode Here


Show Notes

This special AI in the AM episode features Sergiy Nesterenko of Quilter on using reinforcement learning for circuit board design, Andy Hall of Stanford on AI behavior in politics and new governance models, and Lukas Peterson and Axel Backlund of Andon Labs on their AI-run retail store in San Francisco. Nathan and Prakash also reflect on the pace of AI progress, the public reaction to existential risk, and why constructive civic action matters as AI systems grow more powerful and autonomous.

Sponsors:

Roboflow:

Roboflow's free 2026 Vision AI Trends report analyzes 200,000+ real-world projects to reveal how top companies are deploying Vision AI and turning proprietary data into an edge. Download it now at https://roboflow.com/trends

VCX:

VCX, by Fundrise, is the public ticker for private tech, giving everyday investors access to high-growth private companies in AI, space, defense tech, and more. Learn how to invest at https://getvcx.com

Tasklet:

Build your own Cognitive Revolution monitoring agent in one click.
Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:

(00:00) About the Episode

(07:57) Live stream kickoff

(09:52) Sam Altman attacks

(16:37) Quilter from SpaceX

(19:02) Why autorouters fail (Part 1)

(20:52) Sponsors: Roboflow | VCX

(23:09) Why autorouters fail (Part 2)

(28:14) Compute and odd layouts

(34:19) Simulations and safety margins (Part 1)

(39:22) Sponsor: Tasklet

(41:01) Simulations and safety margins (Part 2)

(41:01) Superintelligence meets hardware

(48:18) AI constitutions debate

(55:55) Deepfakes and persuasion

(01:02:24) Virtue and institutions

(01:11:05) Agent governance problems

(01:16:56) Andon store debut

(01:21:25) Luna's store choices

(01:28:21) Supply chains and spread

(01:36:23) AI boss behavior

(01:43:47) How retail scales

(01:53:54) Processing the future

(01:59:50) Markets need context

(02:26:42) Episode Outro

(02:30:37) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution, or … in this case I should say … Welcome to AI in the AM.

This is the third time that my friend Prakash Narayanan and I have done a live stream together, and this time we figured we should give it a name, and he also took the initiative to create a new look for the show, with real-time AI transcription and AI-powered comment moderation. Check out the video and I think you'll agree that he's done a really nice job with the look and feel.

As you'll see, in some ways we are very much still figuring out both what we want the show to be, and how best to organize and produce it. One thing we're going to look at doing after this episode is creating a mechanism where we can easily signal to one another when we'd like to ask a follow up question or move on to another topic.

But when it comes to the quality of guests and conversations, I think this episode is right where we want to be.

Our guests for this episode were:

Sergiy Nesterenko, CEO of Quilter, which is using Reinforcement Learning to train AI systems to perform circuit board design, a problem with an insanely high-dimensional search space, complicated physical constraints, and a relatively low volume of available training data.

Andy Hall, Professor of Political Economy at Stanford, who's doing a bunch of interesting work to characterize models' behavior in political contexts, and who is also working to design independent AI governing bodies that he hopes will allow the public to exercise some oversight over AI companies without requiring nationalization.

And then finally, Lukas Peterson and Axel Backlund from Andon Labs. You may know them from their autonomous vending machine work, but today we'll be talking about the new AI-operated retail store they've recently opened on Union Street in San Francisco. The store, which is managed entirely by an AI agent, including the hiring of human staff, currently has a 2.6-star rating, but I still can't wait to visit.

For me, the big takeaway from this series of conversations is once again that the future is coming at us much faster than we can process it. Assumptions that seem safe from one perspective become very questionable in the face of increasingly powerful & autonomous AI systems.

And with that in mind, I want to add just a bit to my answer to the very first question Prakash asked me in our opening discussion, namely: why are we now suddenly seeing violent outbursts directed at AI lab leaders?

First of all, while it’s certainly possible – and I would very much hope – that the recent attacks on Sam Altman’s home will ultimately prove to be a random blip, signifying nothing … my honest assessment is that by default we should expect to see more of this sort of thing - and not because of the super high p(doom) numbers coming from the AI opposition camp, but simply due to the fact that more and more people are now becoming aware of the extreme reality of the AI situation.

Sam, Dario, Demis, and Ilya all signed, alongside many other luminaries, a statement saying that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”

And though the record does show that each of the leading AI companies was founded with awareness of, and an intention to address, the hard problems of AI safety, and of course the upside is unquestionably immense, in practice today they are developing what they recognize to be destabilizing and likely dangerous technology, pretty much as fast as they possibly can, while repeatedly failing to live up to their own prior safety and social commitments, in significant part because – and here I am quoting Sam Altman's immediate reflections after the Molotov cocktail incident – "being the one to control AGI" has a "ring of power dynamic to it."

All while, by their own accounts, we still face anywhere from a 5-20% chance of something like, as Sam himself famously put it, “lights out for all of us”, and the US government’s main concern seems to be making sure that nobody can constrain their ability to use the technology for autonomous weapons, domestic surveillance, or anything else.

I can’t emphasize enough: objectively, this really is a crazy situation. Those of us who stumbled onto the idea that all this might happen years ago have had a lot of time to get accustomed to it and to position ourselves to do what we think we can about it, and many have reconciled ourselves to the idea that some version of it is inevitable, but that doesn't mean we should try to tell people who are only learning about this now that they are wrong for freaking out about it. A 1 in 20 chance of human extinction is not low, and absolutely is worth freaking out about.

I’m reminded of 2 movies that memorably illustrate how I think many people will respond to learning the facts about AI.

In the 1998 movie Armageddon, when an asteroid is found to be on course to destroy the earth, it’s just understood that it's heroic for individuals to risk - and in the end even sacrifice their own lives - to save the world. And that doesn’t hinge on whether there’s a 5% or 20% or 99.9% chance that the asteroid will really hit the earth - the heroes would be heroes in any case.

In contrast, in the more recent "Don't Look Up", the main characters are just continually frustrated that nobody can be bothered to recognize the crisis, which drives them to become crazier and more desperate until … spoiler: everyone does ultimately die.

I would submit that the difference between a hunger strike and an act of violence is not about how one understands the stakes or the odds, and neither is it about the impulse to martyrdom - rather, the difference is simply that one course of action attempts to call others to a higher ethical standard, while the other is condemned by every principled moral tradition, and even on purely consequentialist terms, seems almost certain to make everything harder and worse.

To the AI opposition movement - which fwiw I think is increasingly distinct from people focused on AI safety - I would say … absolutely continue to condemn violence, but at the same time be careful not to shy away from the fact that it’s the situation that’s crazy, not the people who are desperately searching for ways to make a difference. Your job, in addition to educating people about the reality, as you see it, and as the lab leaders themselves have described it, is to identify and create productive ways for people to act heroically in this moment. Those could include mobilizing voters to contact elected officials and advocate for regulation and international treaties, investing in citizen-level diplomacy with China, developing new governance models, pursuing experimental technical alignment strategies, and importantly, probably lots more that nobody has thought of yet. I personally always encourage people to pursue their own AI safety ideas, however eccentric they may seem, in the hope that some of them might actually pay off, and because I believe that in the absence of constructive ways to devote oneself to the cause, we will see more people simply going crazy.

As always, I will welcome your feedback – both on this analysis, and on the new show format.

Until further notice we do intend to run these conversations on The Cognitive Revolution feed, but if it goes well, we might spin it off into its own thing. If you'd like to see that happen, we definitely encourage you to follow the new show account on Twitter @AI_in_the_AM, and watch out for the next live stream, which is currently planned for April 20 at 11:45am ET / 8:45am PT.

Thanks for being a part of the Cognitive Revolution, and now, on with the show.


Main Episode

all right, and so we are live right now. we are live. welcome to AI in the AM. thanks prakash, thanks for setting this up. great to be here. i like the new look. yeah, this is our third live stream and we decided to add a little bit of pizzazz this time. so when i heard there was a hundred million dollars on offer for anyone who sets up a tech-focused live stream, i figured how can i miss it? yeah, incentives, the power of incentives, right? and i think also we decided to make this perhaps the first live stream of its kind where we are using a lot of AI tech. so we have live transcriptions, which i don't think any other show has ever done before, because accuracy has never been good enough to have live transcriptions. and this means that you can watch this in a meeting. you don't have to think, i see something on screen but i don't know what they're saying. so you can watch this in a meeting. yeah, if an AI note taker is attending your meeting for you, then you can be surreptitiously watching us on the other tab, with this version of the AI transcription helping you do it live. indeed, we also have live comments available if anyone wants to just mention @AI_in_the_AM, and the messages are being moderated, i think by grok 4.1 fast, which is fairly fast. so before we take off: we just had the second attack on sam altman's house, i don't know if you saw that. two people, it seems, fired shots at his house on russian hill, and it's pretty scary. his family's in there. at this point i wonder if you just have to move out, right? you can't be in SF anymore. you have to be in a defensible position.
and i feel like, i've heard from one VC actually that his house is in one of these suburbs, in menlo park or wherever else, and he has drone defenses set up. so there are drones circulating above the house and he's got drone defenses. i think it's very hard, because the level of security which is going to be required for an AI company head right now is very high, and i imagine it's the same for elon, i imagine it's the same for zuck. you know, they're not well liked. and why do you think that is? there's a lot of soul searching going on on the timeline right now. why do you think this is happening right now? well, i think it's getting very real. that's one thing. i mean, everybody can now see that, or maybe not everybody, i think there's a few holdouts, but increasingly it's hard to hold on to any sort of "they're just doing this for hype and to raise money, and there's nothing really there." so i think increasingly people have to reckon with the fact that AI is getting powerful. and my guess is, who knows? it's hard to put yourself in the mindset of somebody who would go throw a molotov cocktail or randomly do a drive-by shooting of someone's home. but mythos is really an example of at least a weakly powerful AI, right? i mean, it's something where no less than nicholas carlini, who is by all accounts one of the great cybersecurity researchers of all time, has said that he's found as many important vulnerabilities in just the last few weeks as he had in the entire rest of his career combined. so that is a huge indicator that we are entering a new regime. and i do think it's becoming real to a lot of people in a lot of different ways.
and it is a radicalizing reality, i think, and it's not to be forgotten that all of the AI lab leaders have been pretty candid, if not super recently then at least at various points in time, that they're really not sure how this is going to go, and at least have some non-trivial p(doom) percentage, even at the top of these organizations. so i think it's rational in some sense to say, what the hell are you guys doing? this needs to be stopped. i think that position is, in my mind, very defensible. and then, obviously, in addition to just being wrong, i don't think it's going to be effective tactics to try to intimidate these folks, because their resolve will probably just be hardened.

their ability to defend themselves, with the resources that they have, is going to be pretty good, if only by retreating to some large estate on a private island in hawaii or in new zealand or whatever the case may be. so i don't think this stuff is going to work. but it's not totally crazy to say desperate times call for desperate measures, and then it just becomes a question of what desperate measures are acceptable and/or likely to be effective. and i do want to be clear that i do not support these things. i think they're clearly crossing lines. but there is some sense to the idea that these guys are telling us that they're taking, by their own lights, something like a one in ten, one in five chance of the future going deeply off the rails. they don't really have a great account of how they're going to control things. alignment is obviously unsolved. governance is unsolved, as a preview of our upcoming conversation. and yet we race ahead, right? and i was also really struck, in sam altman's kind of off-the-cuff blog post that he wrote about this, by his, again, candor. i mean, obviously he's been accused of being inconsistently candid, but i found him quite candid in that moment, in his grappling with the idea that this whole dynamic has a sort of ring, and who's going to control the ring, and the sort of intoxication and potentially corrupting nature of power. that was really a striking response, right? because i think that is actually a lot of what the most radical voices in the AI discourse are responding to: he's acknowledging exactly what their sharpest critique is. i mean, i think if you listen to the AI safety people, again the most strident voices, they're saying these guys are trying to get the ring.
they want to be the one that gets all the power and wins in the end, and they don't really mind taking a totally irresponsible gamble with the rest of humanity's future to do that. and he kind of said the quiet part out loud in the blog post about it. so i would definitely like to see a more sane and productive response than these things. but unfortunately, i do think that barring any sort of government action that makes any sense, we're probably going to see more of this kind of stuff. and i do think that in their candor they've set the table for some extremism anyway. again, that's not to endorse doing any of these things, but the mindset is certainly one that i can understand how people get into, especially if they don't understand the technology, right? i mean, there's a lot of people who are coming to this much more suddenly as it becomes a bigger deal, and they're just having a sort of sudden awakening to the fact that this is all going on, and that it really is super powerful. and maybe what they heard about it being all hype before was actually not right. i think that could be destabilizing to a lot of people. dude, i remember red teaming GPT-4 was that to me, you know, in a minor way. so i can only imagine what people who are just now coming on to modern AI developments are feeling and thinking. on that note, let me introduce our first guest for this morning, sergiy nesterenko. he's the founder and CEO of quilter. they're a company that's really trying to speed up electronics design in general and PCB design in particular. they have a physics-driven AI, which i think uses reinforcement learning, in order to really speed up the entire PCB layout, a process which can take many weeks, and they've managed to compress it down. he came out of spacex.
he was in avionics, i think avionics and radiation. you know, i heard about avionics growing up and i always imagined it was very sophisticated stuff, and then i realized later on it's a lot of compute. it's actually a lot of compute; it's a lot about figuring out what numbers need to happen. and in the past a lot of that stuff was not electronic. there was a lot of gadgetry which was actually mechanical, and you had mechanical ways of calculating all of these things, which is why avionics used to be this entire segment of creating mechanical computers that could work on F-15s and F-16s. and later on it just became electronics. but the electronics have to be hardened to radiation and a bunch of other stuff that happens, and sergiy is an expert on that. sergiy, welcome to the show. thank you for having me, guys. i hope i didn't misstate all of those things. give us an example of how you transitioned from spacex to PCB design. what did you carry forward from spacex into your new role? yeah, i mean, there's a lot to carry forward from spacex, to be honest. the time i spent there was awesome. i spent about five years there.

there's a lot to learn about culture, a lot to learn about how to hire great people, a lot to learn about how to design systems, so on and so forth. but i think the most important thing is just speed, right? probably the thing spacex is most famous for is hardware-rich development, in a sense: just try it, just build it, go launch it, it'll blow up a couple times, that's fine, we'll learn. and that's way better than analysis paralysis, right? and i think that's the case for a lot of companies in a lot of places, that you can really overthink a design, but once you put it to the test you find out what not to worry about and what to worry about, and the physics is the real ultimate guide. and you can't find that out until you actually run it. so, speaking of physics: prior to quilter, people used to do this thing called autorouting, and then quilter came along. what is the difference between the prior paradigm and what you guys are doing now? yeah, totally. so autorouters have actually existed for more than sixty years, right? if you dig back through the literature of PCB design, all the way back in like nineteen sixty-one, nineteen sixty-two, you started to see publications on, hey, we're making these circuit board things, it's super laborious. at the time you're doing this, i mean, not quite with pen and paper, but just about, right? you're doing this without cad software; you're making masks out of tape by hand, that sort of thing. and mathematicians were already studying, how do we solve this, right? and initially this was thought to be a graph embedding problem; that didn't work. then people started doing basic pathfinding algorithms like lee's algorithm and a-star and that family of things; that didn't work. they went into topological routers; those didn't exactly work, and so on and so forth.
so i think to say that people have been using autorouters is frankly kind of an overstatement. if you go and talk to an average electrical engineer and ask them if they use an autorouter, the answer is plainly no, because it's just not good enough, right? it's just not helpful. and there are cases, don't get me wrong, there are some people who use them for certain things and in certain parts of the board and whatever. but it's nothing like the chip industry, where you have billions of transistors and you can actually genuinely place and route a vast majority of that, and it wouldn't be possible with humans; that just never happened in PCB. so the real challenge for quilter is: can we make the first set of placement and routing algorithms that people actually want to use, right? it's not really competing with the old autorouters; it's competing with the manual labour that still happens in every hardware company on earth. so one of the questions i had is, you guys use a lot of reinforcement learning, and what is your reward process for that, right? how do you reward the agent? do you run simulations? do you design the environments? how does that process work? because PCB routing is not a generalist task, right? it's a pretty specialist task. how do you figure out what the reward signals are? how do you create the data? yeah. and just to state the plainly obvious, but nevertheless: this isn't a problem that you can just prompt chatgpt to do and it does it for you, right? as you're alluding, this is not a generalist kind of problem. the large language models are not trained on these kinds of problems. and furthermore, it's arguable that language is not the right approach to a geometry and physics problem, right?
so this is why we've had to take our own path and construct our own environments, and we see it as a reinforcement learning problem. so we actually spend a lot of our time, if not most of our time, on constructing a good environment and constructing a good reward function, because that turns out to be really hard, right? naively you might think, let's take a naive RL algorithm like PPO or something, give it access to the keyboard and mouse of an open source cad tool like kicad, and go learn, right? and the reality is that i don't think reinforcement learning as a technology is ready for something like that; it's just very hard. it would have to get millions and millions of actions right in sequence with perfect precision, where traces are side by side with no extra margin, and there's just no way, right? at least i practically don't see a way today. so there's two things you want to do. one is you want to construct an environment that only gives kind of uniquely useful actions to the agent, right? so to make this very concrete, there's like maybe ten thousand different ways you can draw a trace through a board, right? and there's very minute details of exactly where every elbow goes and so on and so forth. but realistically, as a human you're not thinking about every single detail like that when you're planning the board; you're thinking kind of topologically, right? so you're thinking, am i going to go clockwise around this chip or am i going to go counterclockwise around this chip? and that sort of binary choice is more important at that stage than every minute detail of every single segment, right? and so that's an example of something that our environment does: how do we break this down into the key important high-level choices to present to the agent, rather than every explicit detail? now, the second part of that you asked about is the reward function, right?
the reward function has to be fast, right, for RL to have any hope. and so the way we think about this, and for humans too, is that there are like three tiers of physics approximations you might do, because no simulator is perfect, but generally you want to approach reality from the side of conservatism.
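To make the action-space reduction Sergiy describes concrete, here is an editor's toy sketch (not Quilter's actual code; all names and numbers are illustrative): instead of choosing among thousands of geometric variants per trace, the agent makes one coarse topological choice, clockwise or counterclockwise, per obstacle, so the per-trace search collapses to 2^k topological plans.

```python
# Toy illustration: routing as a sequence of coarse topological choices.
# Instead of choosing among ~10,000 geometric variants per trace (every
# elbow position, etc.), the agent picks one of two topologies per
# obstacle the trace must pass. All numbers are illustrative.

from itertools import product

TOPOLOGICAL_CHOICES = ("cw", "ccw")  # clockwise / counterclockwise

def topological_action_space(num_obstacles: int):
    """Enumerate all coarse topological routing plans for one trace."""
    return list(product(TOPOLOGICAL_CHOICES, repeat=num_obstacles))

plans = topological_action_space(3)
print(len(plans))   # 2**3 = 8 coarse plans, vs ~10,000 geometric variants
print(plans[0])     # ('cw', 'cw', 'cw')
```

The fine geometric details (exact elbow positions) would be filled in later, after the agent has committed to a topology.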

the first level that most humans do is they actually just compute pure geometry. so there are rules of thumb, right? like, if you're worried about two wires that might crosstalk, they might influence each other because they are effectively antennas and they contaminate each other, the basic rule that people will follow is: if i can make them five times as far apart as the width of either trace, i'm good, right? that's just geometry; that's very cheap to compute. the next level of that calculation would be called the quasi-static approximation, where you take the maxwell equations, ignore the time factor, and compute the parasitic capacitance and the mutual inductance between those. it's basically a physics simulation, right? you do a mesh, you do finite elements, but it's very fast. and that would be level two. in our opinion, level three would be full wave: run a full wave simulation, finite difference time domain or FEM, accounting for time, to get the most accurate answer. and so at quilter, the way we see it is, let's first nail what humans do, just pure geometry, and make sure it's conservative relative to reality. then we're starting to step into quasi-static and those kinds of fast approximations, you know, 2D cross sections and whatnot. and then eventually we'll come back to full wave where it's necessary, right? but full wave is very expensive, and i mean expensive in terms of wall clock time. so the first step is really heuristics, kind of learned rules; the second step is a fast calculation; and the last step is a much more detailed calculation. and if any of those hurdles doesn't pass, it fails. and that's how you give the RL agent its reward signal. and that's it. yeah. the only way i'd amend that is on the first step: ideally you don't want a heuristic that could have a false positive, right? so what you really want to do is be conservative.
so like the five-widths rule, for example: yeah, it's just geometry, it's very basic stuff, but it's overkill. and so what you're really doing is making your board too big, too expensive; you're leaving too much margin, right? you can eventually, as a human or with better calculations, delete that margin. so you don't want something that falters, because getting a board back from fab that doesn't work is really, really, really painful. you want something that is overly conservative, and then with more detailed simulations you buy down the conservatism with more accuracy. indeed. how much compute do you have to do? like, how many environments do you need to construct in order to get to where you are today? yeah, a lot. so we break the problem up into multiple stages, right? we treat the first problem before you even get to routing. routing, for those unfamiliar, is: i've got components on the board, the components have these little connection points called pins, and i'm going to draw wires between them that can't collide, can't overlap, and so on and so forth. before you even get there, you have to put the components on the board, right? so the first problem is actually, typically, well, even before that, you might choose: what is the shape of my board? what are the vertical layers of my board? where do i have ground planes? so on and so forth. that's problem one. problem two is where do i put the components, right? there's kind of a floor planning problem and then a detailed component placement problem. then you get into your initial routing and topology selection. then you get into your geometry fine-tuning, right? and so for now we actually split each one of those up and treat them independently. and it depends on the problem, right? like, it turns out that with placement you can get environments that run really, really fast, right?
you can sort of vectorize it and just throw a GPU at it and have very, very fast environments. if you guys are in the world of reinforcement learning, if you're familiar with pufferlib, a great library that's coming out that's doing really, really fast reinforcement learning, that's a good inspiration for that. in the routing stage that doesn't quite work as well, because the routing stage is so much more complex. you can't quite afford as many environments, right? so you have to explore subsets of a given environment rather than go wide and run, you know, a million totally different routings in parallel. have you seen outcomes which a human wouldn't do? like, you see a layout which a human would not do, but the optimization chooses to do it, and then it works. yes, generally yes. now, it's not always a good thing, right? for now, to be very plain, we're not at the point where we're beating humans. we're just at the point where we can take a task that takes a human two, three, four weeks, or ten weeks in an extreme case, and cut that down by a factor of ten, right? but we're not at the point where we say, human, don't worry about it, we got you, and we'll do it better than you. i think that's still a ways away, right? so sometimes when it does things that are surprising, it's not a good thing, it's a bad thing. sometimes it can be a good thing. examples of good things: i would say there's some intentional and some unintentional. there was an initial lesson that we had where, if you think about the way that the wires should be drawn on a board, you actually realize that since they are transmission lines for waves, they should be curved, right? so if you think about laminar flow of a wave, you should have smooth turns, like the amazon river, in how wires should go between places, because ultimately electromagnetic signals are waves, right?
but if you look at any circuit board today, you look at a motherboard or anything, you see what are called octilinear traces: left to right, forty five degrees, up and down, kind of thing. and that actually has a purely historical origin, right? it's because CAD software was slow in the eighties.

it was cheaper to compute intersections for octilinear segments, and so that's how CAD was built, and we got used to it. and we thought, well, it's 2025, 2026, let's get past that, let's make curvy traces. and when i first showed that to electrical engineers, let's just say, to put it mildly, the reaction was negative. very very negative. and i've tried to make the argument of, no, think about the physics; the most intense RF and high speed boards out there do this, and data center cards do this, and it kind of clicks. but people are so not used to it that they're not even sure if it can be manufactured, which of course it can, at no extra cost, but that's not obvious, right? and so that's something we intentionally did at first that turned out to be better, but much worse in users' eyes. and now we post process that out specifically to avoid that reaction. a more, i might say, emergent property is that humans really like symmetry, right? so as a human, when you place a chip, when you place the capacitors next to it, for example, you line them all up perfectly and they're very very neat. and it's very pretty, right? there's a reason that electrical engineers call this job artwork; that's literally what you call a layout. but if you think about it, if you're trying to minimize the parasitics to every capacitor, you should just minimize that distance, right? and that's not going to be symmetric. they're going to maybe form a little semicircle, and some things are going to be a little off, and whatever. and so you might actually get something that is better from a parasitic perspective that breaks that symmetry and feels worse, right? and it's an example of something a human wouldn't do. one thing that i recently learned, i think francois chollet put it out yesterday, is that we are highly tuned to symmetry because it's a form of compression.
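the capacitor example a moment ago can be made concrete with a toy calculation. the numbers, clearance, and geometry below are invented for illustration: a tidy symmetric row of capacitors versus caps hugging the minimum-clearance semicircle around the pin, comparing total pin-to-cap distance as a crude stand-in for parasitic loop length.

```python
import math

# toy comparison: 6 decoupling capacitors around a power pin at the origin.
# a "neat" symmetric row vs. an asymmetric semicircle at minimum clearance.

PIN = (0.0, 0.0)
CLEARANCE = 1.0      # closest a cap may sit to the pin (arbitrary units)
PITCH = 1.2          # spacing between caps in the symmetric row

def total_distance(placements):
    """sum of straight-line pin-to-cap distances (crude parasitic proxy)."""
    return sum(math.hypot(x - PIN[0], y - PIN[1]) for x, y in placements)

# symmetric: a tidy row below the chip, centered on the pin
row = [((i - 2.5) * PITCH, -2.0) for i in range(6)]

# asymmetric: hug the clearance circle on a semicircle around the pin
arc = [(CLEARANCE * math.cos(a), -CLEARANCE * math.sin(a))
       for a in (math.pi * (i + 0.5) / 6 for i in range(6))]
```

under these made-up numbers the pretty row has a much larger total distance than the semicircle, which is the optimizer's "ugly but better" placement in miniature.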
we get to compress a lot of information when we just assume it's symmetric. and i can imagine that might be useful in debugging, perhaps. you know, i think francois's point was about physics, right? and it turns out that by virtue of having symmetry in a physical system you get conservation laws, like through noether's theorem. and so there's some kind of deep truth to the physics of symmetry. in PCB design, though, i don't know that symmetry actually helps. you need some readability for sure: as you look at a board when you have it on your desk, you need to recognize what every component is. it needs to flow kind of left to right, from inputs to outputs. it certainly needs to have logic to it. but i don't actually know that symmetry really helps debugging. i'll give you a counter example where symmetry is very very helpful: if you have, for example, ten sensor channels that need to read an identical reading, like some very sensitive analog reading that needs to be identical across those ten, you want symmetry, because any imperfections you have, you want those imperfections to be identical in every channel, right? in that case i truly understand the need for symmetry from a physics and debuggability perspective, and so on and so forth. but around more basic functions of the board, i think it's a human's way of expressing the care they put into that board more than anything. how much feedback do you get from the real world? you have these simulation environments, but in some sense, i think some people who are working on, let's say, materials science using AI, using reinforcement learning, they have a loop which includes a physical, wet lab kind of loop, and then they test that.
and then they use that data to feed back and refine the model. how much of what you do has that kind of physical process, or data collection, that comes back to refine the model? yeah, we only do that indirectly. for what it's worth, i love that idea of having an AI generate a research plan, a wet lab automates it and gives you feedback, and you learn directly on the physics of the real world. that's so cool. and maybe there's some version of that that could work for PCBs, but probably not nearly as automated as the wet labs; that would be quite the feat. but i think that for us, what's important, and the way that we approach this, is that building in real life, which we do, validates whether or not the approximations and simulations we have are correct. we have the luxury that we can afford simulations that are known conservative, and the real question is just: how much margin do we have? is it way too much, or is it right on the border? and building in real life can validate that. so i don't think we're in a place where we can automate build feedback on thousands or tens of thousands or hundreds of thousands of boards and directly learn that signal, but we can use it to fine tune and make sure that our simulations are right, and then use those to feed into the learning process. right on. so you design with a margin of safety large enough that the boards will be producible, but then you can go back and recheck the actual physical board to refine the margin of safety that you've been using prior. yeah, exactly. and echoing back to the conversation about learning from spacex: my job, as you mentioned, was to make sure that falcon nine and falcon heavy could survive heavy radiation environments, so, you know, protons and electrons beating up electronics, and making sure that it would actually work well, right.
and in that job you don't just launch a falcon nine ten thousand times into the van allen belts to see what happens. i mean, maybe soon you will be able to. right.

but when falcon nine had flown only a handful of times, that wasn't an option. and so we did exactly that: you simulate, you understand the physics of what's happening, and you approach truth from the side of conservatism. and in general, a lot of my job was this: it was very easy to make a very conservative calculation about what it would take to survive a van allen belt blast, but then that forces the rest of spacex to do a lot of work, right? it forces part choices, it forces new parts, it forces sub circuits, it forces software interventions, it forces potentially shielding, which was a really really expensive option. and so what i then had to do is refine those calculations to take away the margin, to not make the rest of the team do too much work, and still approach truth from the side of conservatism. i view this very much the same way: you can approach reality from the side of conservatism, and what it costs you is boards that are a bit too big, a bit too expensive, which in the R&D process is perfectly OK. what would you say is the cost saving that a typical consumer product would have from using quilter versus the prior technologies? yeah, the absolute main thing that we focus on now with our customers is speed. so we are not at the point where you're gonna take an off the shelf consumer product and design the main board that's gonna be manufactured millions of times with quilter's help; we don't view that as a good application for ourselves at this point. but for every board that ships into production and you make a million of, you actually make hundreds of boards that preceded it. every part that goes into that board is gonna get its own little board for your team to test and double check and write software for and iterate on. every sub circuit is gonna get its own board to validate.
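the margin "buy-down" loop described a few exchanges back can be sketched as a tiny refinement rule. everything here, the function name, the headroom policy, the numbers, is invented for illustration: start from a known-conservative simulated limit, then use measurements from boards that were actually built to tighten it, without ever crossing measured reality.

```python
# hedged sketch of refining a conservative simulated limit using real builds.

def refined_limit(conservative_limit, measurements, min_headroom=0.1):
    """conservative_limit: worst-case value the simulation allows for.
    measurements: values observed on physical boards (all must fall below
    the limit, or the simulation was not actually conservative).
    Returns a tighter limit keeping min_headroom above the worst case seen."""
    worst_seen = max(measurements)
    assert worst_seen <= conservative_limit, "simulation was not conservative"
    # move toward measured reality, but never all the way, and never loosen
    return min(conservative_limit, worst_seen * (1.0 + min_headroom))

# e.g. simulation says 10.0 is the safe bound, four built boards measured lower
limit = refined_limit(conservative_limit=10.0,
                      measurements=[6.2, 5.9, 6.8, 6.5])
```

the point of the `min` and the assertion is the one-sidedness the conversation emphasizes: refinement only ever tightens the bound, and a measurement above the simulated limit means the simulation itself was wrong.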
you know, there are examples of even things like phones: before you make the final board that fits into the phone, you make a giant board that's like this big, and the reason you do that is that it has all the individual pieces of a phone broken out. you have your little camera, you have your microphone, you have your speaker, all those things, and then you can swap them and say, well, what happens if we go to this camera or that camera? so quilter helps with all of those, right? that's where we can step in and make that faster. and the thing is that whether it's a production board or one of those test boards, it's still going to take three weeks or four weeks or five weeks or ten weeks to make. and so as you iterate on ten different levels of going from initial idea to the production board, and each of those cycles takes five, six, seven, eight, nine, ten weeks, and they're sequential and you can't compress them, that's what makes it hard to build a hardware product, right? that's what makes it two, three years to get a new product out at all. and so what a consumer would maybe see from quilter's involvement is much faster iteration cycles, therefore giving engineers much more ability to test and much more ability to get to a good product really really fast. that's what's important for us now. amazing. got to do one mythos question, so let me sneak one in. i guess my working vision for how superintelligence comes together is sort of a convergent process, where there's a core reasoning engine, which i think the mythos, you know, not the release but the informing of the public, certainly suggests we're still in the steep part of the S curve there, and we see open math problems being solved and some minor but new results in physics being derived, so on and so forth. i imagine that kind of coming together with what i think of as native senses that AI, broadly defined, can develop in all these different domains.
and i kind of understand what you're doing as developing that sort of native sense of PCB understanding and design. but i wonder how you think about those things coming together. do you see, or are you designing for, a future where mythos or its successors becomes your user, and you still have this model that can do something in a native, intuitive way, not heuristic as in encoded heuristic but heuristic as in intuitive, that the reasoning models still won't be able to access? or do you have a different vision for how you interact long term with the reasoning line of work? sure, yeah, there's kind of two answers to that: a short term view and a long term view. in the short term, you have to realize that people who are building hardware and circuit boards are not in the same world as the software engineers. they're not in the world where every two days a new model drops that has some amazing agentic properties. they're not hooking up openclaw to whatever they're trying to do at the moment. maybe you're starting to see that in firmware to an extent, but not for designing schematics, not for designing boards, not for debugging boards, not for hooking your boards up to your oscilloscope, not for any of that stuff. very practically, as a startup we have to focus really really hard, and to focus really hard we have to listen to our customers and give them something useful today. and i just don't see a single one of our customers or prospects talking about, hey, we have a central reasoning thing and it's going to negotiate with a bunch of AIs, that kind of stuff. so practically speaking, today i'm spending zero time on that, right?

i'm giving them something that fits into the existing workflow of an electrical engineer, from the very practical perspective that they manually draw their schematics, they manually draw their boards, they manually send to fab, they spend two weeks on the phone with the fab arguing to go faster and discussing the errors and whatever else, and i want to give them something now to make a part of that easier. now, long term, i've thought about this to an extent, and what i imagine happening is a bit of what, frankly, happens between humans. imagine, taking spacex as an example, you have some mission, something you want to fly, something you want to build, and you get a whole bunch of different teams coming together to talk about that problem. you have your PCB designer making the board, great. you have your thermal analysis folks telling you how much it's going to heat up and how much it's going to dissipate and radiate. you've got people dealing with material properties: what's it going to do in vacuum? is anything going to outgas? you've got the mechanical folks dealing with the mass of the box you're building. you've got the flight control team saying we need this kind of sensor speed and this resolution. you've got the flight software team saying we need this fast of a processor. so you've got these ten, twenty, thirty teams of people who are injecting their requirements into a single thing that has to satisfy all of them. and inevitably there is a conflict. inevitably there's the, well, to give you this i have to give up that. and that's where everybody sharpens their pencils, tightens the margins, and tries to come to a compromise. and so i do see a world where every one of these teams has some sort of agentic representation, quilter being the PCB design one.
there's going to be something for schematics, there's going to be something for mechanical, something for thermal, something for software; there already is. and maybe in common language those systems can negotiate and then present to us humans the trades: here's what happens if we over optimize for this, here's what happens if we over optimize for that, where would you like us to go? i just don't see that happening in hardware in the next couple of years, to be honest. maybe one last question for me. hardware is especially double E, and i'm a double E too. double E people tend to be a little bit, not conservative, but they have very predefined ideas of what works, because there are a lot of things that work theoretically but don't work in practice. and people learn this stuff as they apprentice and in their working life, and a lot of it is tacit knowledge; it's not very well documented. you get an old dude coming and saying, hey, you know that's not going to work, you're going to get some crosstalk, you have to change it. how does that work when you have a product which is much more scientific in that sense, figuring these things out as they're actually supposed to happen? how do you deal with the old timers in the field when they have all of this resistance? yeah, it's an important question. and i'm sure that electrical engineering is not the only domain in which that happens, but there it is acutely true. look, at the end of the day that viewpoint on life comes from past experience of being burned, sometimes literally. the first board i ever made caught fire, and i learned the hard way not to make the mistake that i made in that case. and the old hardened graybeard double Es have thirty of those lessons, right? and so they're very conservative.
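the tacit "you're going to get crosstalk there" lore above is exactly the kind of rule a tool can make explicit and checkable. a toy sketch of such an explicit rule check; the metric names, thresholds, and net values are all invented for illustration, not quilter's real checklist.

```python
# toy explicit rule check: each net's measured/simulated metrics are compared
# against a stated worst-allowed limit, and every violation is reported.

LIMITS = {                       # worst allowed value per metric, in dB
    "insertion_loss_db": -3.0,   # a net may lose no more than 3 dB
    "crosstalk_db": -40.0,       # coupling to neighbors must stay below -40 dB
}

def check_board(nets):
    """nets: {net_name: {metric_name: value_in_db}}.
    Returns a list of (net, metric) violations; empty means every check passed."""
    violations = []
    for name, metrics in nets.items():
        if metrics["insertion_loss_db"] < LIMITS["insertion_loss_db"]:
            violations.append((name, "insertion_loss_db"))
        if metrics["crosstalk_db"] > LIMITS["crosstalk_db"]:
            violations.append((name, "crosstalk_db"))
    return violations

board = {
    "ddr_clk": {"insertion_loss_db": -1.2, "crosstalk_db": -48.0},   # passes
    "usb_dp":  {"insertion_loss_db": -3.5, "crosstalk_db": -45.0},   # too lossy
}
result = check_board(board)
```

the payoff of this shape is that the rule list itself is inspectable: an engineer can see exactly which checks ran, which passed, and which parts of the board still need their own attention.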
at the end of the day i think there's two things. first of all, trust is critical. so when we talk to customers, we're very open about what quilter does and what it doesn't do. we're very open about exactly how it works. in our product we make an explicit list of exactly the metrics we check and exactly to what level we met them or did not meet them. and then the double E knows: oh, you're checking for these things, i'm good with that, but you're not checking for this thing, so i have to go pay attention to that part of the board. so transparency is really really critical. but from a long term perspective, how do we eventually get to the point where they can really hand over trust, and it becomes kind of like a compiler for hardware? you have to have way better simulations than people in this industry have ever had. from the perspective of a circuit board, the bare circuit board, ignoring the components on it, has a contract: its job is to faithfully implement the intent of the schematic. and every single transmission line on there has some S parameters that are deviating from the ideal transmission line. it has crosstalk, it has S21 insertion loss, it has all these sorts of things. and the question is, can you enumerate all of those and prove that all of them are below the required threshold? i think that is a fundamentally computable problem; it's just maxwell's equations. it's just so laborious to do that nobody does it today. there is no drag and drop your PCB here, we run all the simulations and guarantee the board works. but we kind of have to build that to get to real, truthful automation. awesome. sergiy, thank you so much for joining us today, and we hope to see you again one day. awesome, thank you for having me. bye. thank you. so, i'd like to introduce andy hall.
he is a professor at the stanford GSB, a professor of, i think, political economy, if i get that right. and he's been evaluating models for authoritarianism.

so that's been interesting. and he also has this concept of the AI firms being enlightened absolutists, and i'll let him explain what that means. absolutely, super excited to be here. yeah, enlightened absolutists. so i think the major frontier lab companies are in a position, whether they want to be or not, where their technology is so important that it leads them to have to make a bunch of really hard calls about how it can be used: for example, how it will answer difficult questions, when it will refuse to do things, and so forth. so far i think the companies have demonstrated a lot of hard, earnest thought about how to do that, which we're very fortunate that they're doing. but no matter how thoughtful they are about it, they can't escape the fact that they're essentially making all these decisions unilaterally. and so in particular, when i talked about enlightened absolutists, it was a little bit of a tongue in cheek critique of anthropic's so called constitution for claude. and i actually think that document, for people who've read it, is very very thoughtful. it basically lays out, here's what we want claude's values to be, and here, for example, are things we don't ever want claude to be allowed to do. and some of those things include helping a government do something malicious, to surveil or suppress us. it has lots of other things in it as well. and the other companies, to greater or lesser extents, have released similar documents. and my point in enlightened absolutists was: this is very nice, and it's great that the companies are doing this, because we need serious thinking like this. but no matter how well written those documents are, they can't really rise to the level of constitutions, because you can't just say things and hope that they'll stick in the future.
and so, just to give you an example of that, all three leading frontier labs have already altered the stated rules around their models several times. anthropic has gone back on certain safety commitments, for understandable reasons, but the point is, those commitments weren't much of a commitment if you can just change them whenever you want later. similarly, google had released some principles that they somewhat quietly pulled back later, and openai has done similar. we should expect that they're going to need to change things over time; this is a very fast moving, rapidly evolving situation. but if we want them to be able to say things like, no one's going to use our model to surveil or suppress us, and if they're going to use those documents as part of how they argue that they're doing a good job, then we're going to want those documents to have a little bit more sticking power. and especially if they're going to call them constitutions, then we're going to want them to look like actual constitutions. and we have thousands of years of trying to write constitutions to pull off precisely this sort of magic trick, where you figure out a way to tie your own hands and make the constitution more than just a so called parchment barrier, but actually a meaningful, binding authority that doesn't just say, oh, in the future we're not going to do this, but actually lays out: if we were to do this, here are the specific ways that we would be in violation, here are the consequences of being in violation, and here's the design of a governance structure that will make sure we can never do that. so that's sort of the idea. it strikes me that even quite authoritarian countries have constitutions. for example, china has a constitution too, and they have a basic law, and the basic law says freedom of expression and all of these wonderful things. and in practice, it's what the communist party wants.
and you don't really have any other interpretation besides whatever the party wants to interpret at that specific time. so the idea of a constitution is actually a kind of living document which gets reinterpreted over time by people, by institutions, and it's really the quality of the institutions implementing or adjudicating the constitution that matters. and it seems, to some extent, that claude's constitution is going to be interpreted by claude itself and adjudicated by anthropic, at least for now. how does that need to shift in order for things to work right? it's a great question, a timeless question. here are a few things that i would say. first of all, i completely agree with your premise. there are many, many constitutions; in fact, the vast majority of constitutions across time have at least two failures to them. one, they may not actually specify the things that we would want from the perspective of so called liberal democracy, in the non left wing, right wing use of the word liberal.

and second, the vast, vast majority of them have no staying power. there's some great work in political science; i think the median survival time of a constitution is something like two years. it's very, very short. i'm making up that number, but we can look up the real number later. and so you both need to make sure that this document contains the right things, but also you need to pull off this magic trick so that they actually become sticky. i'll give you one example from the online world of where a constitution has proven itself to be sticky, and that would be, i think, bitcoin. i'm not going to go deep on the inner workings of crypto, but whatever you think of bitcoin, one thing that's very interesting about it is that there's a set of rules baked into it. and pretty early on in bitcoin's history there was a big movement to change the rules, in particular to increase what's called the block size. and there were a bunch of hardcore people who said, you know what? no. if we change the block size, then we're opening the door to changing other things about bitcoin, and it won't be immutable anymore. and they actually won, and it sort of established a precedent that these are rules with some staying power. i think we'll need something like the same for AI models, where more than just the company is involved in writing down the rules, but then also there will need to be some important stressor, like there was in the block size war, where the company and the people around the company that are involved in this governance process do something costly and difficult that proves they're really gonna stick to their rules. and in general, a very important part of what we call credible commitment in the social sciences is that you have to make this thing so binding that you can prove, even in cases where you'd really like to get around the rules and change them, that you can't.
and right now, for all of its nice features, the anthropic constitution certainly doesn't rise to that level. indeed. segueing here: internally within anthropic, i think, and in the community as a whole, there is a fear of these models being used for political persuasion, and specifically for approaching voters with very persuasive arguments, potentially robocalling, potentially even video conversations. we've already seen a little bit of deep faking of voices of various politicians, some by the campaigns themselves, so it's become OK in the political discourse, at least, to use your own candidate's voice. i feel like political usage should be an allowed use, but are there certain guardrails that should be put in place? are these models in some sense too powerful, and this is what the companies always say, too powerful to be used for politics or something else? is that an allowed use case? OK, let me separate that into two parts; it's a super interesting question. one part is the use of the models for intentional deception, through the creation of deep fakes and things like that. and every election cycle we worry about that. we keep saying, this is going to be the year where the deep fakes really proliferate, and i've honestly been super surprised by how that hasn't played out yet. in fact, and i keep posting about this because it's so surprising to me, instead of seeing a flood of straight up fake content that's meant to trick you into thinking it's real, what we've seen is the parties, but especially the republican party, out in front on a different strategy. they're using deepfakes in a satirical or emotionally evocative way, where you're not meant to think it's real. in fact, it basically tells you, most of the time, that they're fake, but they're meant to evoke a sort of, this is what the world is going to look like if X, Y, or Z thing happens.
so they had a very interesting case recently with the texas senate nominee james talarico, the democrat: they took some old tweets of his, real, genuine quotes of his, and created a deep fake video of him reading the tweets. he never read the tweets on video, but they are his real words. so it's not lying, in some sense, about the content, but it's much more evocative than if they had just read the tweets out; it made it feel really real to people. and i think we'll see a lot more innovation like that. why are we not seeing more straight up fake content? i think it's some mix of: it's still relatively easy to get caught if you do that, and the consequences of being caught are not great politically. but also, i think they think it's not that effective, in the sense that persuading people is pretty hard, and americans are pretty stubborn, and americans are pretty skeptical of video content. there's already a lot of people trying to figure out, is this real?

is this not real? so we haven't yet seen the straight up fake content play out. we may still, and i think we need to keep our guard up for it. even if we don't see it, it leads to this other problem, which is the so called liar's dividend, where you can pretend something was fake even if it's real. so it's eroding our ability to use video to expose scandals and things like that, because the person could just say, oh, it's a fake video. but again, we're not seeing a ton of that yet. the second part of your question is much broader, and it's sort of: is this a potent new way to persuade people of things? we've seen some recent published research with experiments where people talk with AI versus consume other kinds of information, and it does seem like the AI is more persuasive. but it's not at all clear yet, and no one has really established, that the sense in which it's persuasive is bad, in the sense that it can persuade you of whatever, versus that it actually informs you, and that causes you to be persuaded, but in a good way, 'cause you've learned something. there hasn't really been a compelling proof that it's moving people's attitudes around in whatever way some nefarious actor would like. and honestly, i'm pretty skeptical that we will ever get that kind of proof, because we haven't seen that with any past technology. and in fact, i think the biggest risk in the discussion around political persuasion will be, just like it was with social media, that some fraudsters like cambridge analytica will claim they're able to persuade large numbers of people even though they can't. and the cambridge analytica stuff, if you step back, was really crazy; there was basically nothing to it.
it was like the underlying technology was an excel spreadsheet coded up by someone with a teenager's level knowledge of excel, and yet they got an unbelievable amount of credulous news coverage that they had hacked the american electorate's brains and stuff. there was never any evidence for it. we have looked for persuasive effects of social media forever, and you never find them, because americans are super stubborn. most of them have already made up their minds, and the ones who haven't aren't paying enough attention to get persuaded. it's actually super hard to persuade people. the same thing is surely going to happen with AI. there's a bunch of startups already selling political parties and campaigns magical new AI technology to fool all their voters, and i'm sure we'll get a very credulous news cycle around that at some point. but it's not the thing that worries me about AI. yeah, indeed. it's funny, i once proposed to my friend V that you could take a superintelligence to a trump rally and i doubt you would come away having really changed that many minds. and people have very different intuitions on that. his response was, no, you are not taking seriously what it really means to have a superintelligence, which i do think is always a danger in these analyses, but it also maybe reflects how hard it is to envision what that would really be like. because i can't envision a smart enough version of myself that could just go into a given political rally and come out with everybody following me. and so it does seem like a lot of those things are pretty deeply rooted at this point. i guess, one more thing: i know that you have proposed this idea of independent boards, and, kind of like prakash's original comment and question, we've seen that tried, right? we've seen what has happened to independent boards in the AI space, and it hasn't shown itself to be super robust either.
another great quote is from my friend dean ball, and i think he's channeling even historically great thinkers when he says republics run on virtue. we're seeing right now that if nobody's willing to stand up and protect their constitutional prerogatives, what good are they, right? we're going to an unapproved war yet again and nobody seems too inclined to do anything about it. so i guess i'm wondering: might it be the case that we are just in a moment where the fundamental structures of power are being reworked, and there's just no way around that? and if that is the case, then maybe anthropic really does have the best idea, which is to say, what we really need is for the most powerful thing to also be the most virtuous thing. and so we, as sort of the creators of claude, will try to do our part, but also it's really going to have to be the AIs themselves that become super virtuous as they become super intelligent, if we're going to end up in a good place. how would you respond to that? i think there's a lot to that. i absolutely think we need to keep working on imbuing the right values into these tools. and i think we're very lucky, and a lot of people, matt yglesias, tyler cowen, and others, have talked about this.

we're very lucky that to date the most powerful AI models tend to embrace pretty mainstream liberal democratic views, western, whatever you want to call it, enlightenment-type values. i think it's essential that we continue to do that, and i think we're lucky that anthropic and the other companies are working hard on it. to your point, i think it won't be enough. and my response on the "republics live or die on virtue" point: of course that's true, but it's a necessary, not a sufficient, condition. the famous madison quote in the federalist papers is, if men were angels, no government would be necessary. that's the whole point. we can't rely on just virtue. we need institutions designed precisely to protect us from the predictable areas in which people won't be virtuous, and to balance power and ambition with power and ambition, and so forth. so the question is, how do we do that? part of it is having the companies govern themselves and imbue their tools with good values. but for at least two reasons we know that won't be enough. one, the american people definitely won't accept it; anthropic's values are not similar to the median american's, and trust in these AI companies is exceedingly low when it comes to politics. by default the AI models are very biased in a predictable left-wing direction. i've shown that in my research; others have as well. the companies have done a lot of good work on that, and there's also no such thing as being unbiased, so we shouldn't get carried away. but as we've seen recently, i think both parties now are sensing this lack of trust in AI companies. so that's another reason we can't rely on a model in which the companies just get to decide how all these things work. think about the blow-up with the pentagon and anthropic. that's a very complicated issue.
i think dean covered it very well, but it's not politically viable in the long run for a set of san francisco, silicon valley leaders to get to dictate to a democratically elected government how their tools can and can't be used. that's obviously not sustainable. and that brings me to my second point, which is that the reason this is so challenging right now is exactly what you laid out: fundamental political power is shifting in ways that are very challenging for companies, right? in a quote-unquote normal phase of american politics, the anthropic-DoD thing never would have happened, because people would have said, oh, we have a democratically elected government, it should get to do whatever it wants with this technology, and if it does something wrong, we're confident we have the right processes in place to punish the government. the whole reason the anthropic-DoD blow-up happened is because basically nobody believes our government works that way anymore. if we really thought our democratic mechanisms were working well, there'd be no pressure on the AI companies, because they could just say, this is all the government's problem; we do whatever the government wants; you go to the government if you, an american voter, have a problem with it. and this is exactly what played out in social media. i worked for a long time on these issues at meta. it was the same exact problem: in a functioning government, meta would have been able to say, if you have a problem with the way content moderation works online, go to the government; the government can boss us around and tell us what to do. people put pressure on meta precisely because they didn't feel like the government was up to the task. so now, to answer concretely what we should do: i think it's going to be an across-the-board thing. i think the companies should continue, as they have been, to work super hard on imbuing these tools with the right values.
but i think they also will have to recognize, increasingly, and i think they are, that they can't act unilaterally on these really tough calls, like how their tools are used in the military or how these super powerful cyber weapons are governed. and so i think we will see the evolution of independent bodies. if you squint at glasswing, which is a sort of self-governing body, you could see it becoming a self-governing body for anthropic and maybe for the other companies as well; they're already exploring ways to supplement their internal governance. and i just think that's the obvious way to go, because it's what other industries dealing with powerful technologies have done in the past. so i think we'll see experimentation there. i take your point that previous independent boards haven't always succeeded, but i think there are ways to make them succeed, particularly when the stakes are very high, when you can get all of industry to buy in, and when you can build them in the right way so that they don't slow the companies down. the key thing is that this independent governance cannot be a vetocracy that leads us to not develop AI as fast as possible, and i think there are real ways we can avoid that. the final piece of the puzzle is trying to improve government itself.

so again, all of this gets a lot easier if you actually believe that the government is a responsible, accountable actor in deciding how AI is used and not used. ironically, my recommendation for how to do this is to use AI itself to improve the government. you can see it's a chicken-and-egg problem: the government doesn't work very well, and voters are not that informed. we can fix both problems if we have access to so-called political superintelligence. if we have AI that helps government work smarter, and helps voters learn more about what government is up to and map it to their values, we could potentially get back to having a more responsive, more trusted government. it's a chicken-and-egg problem because whose AI are we going to use to improve the government and to help voters? it's going to have to be one of these huge companies' AIs. and so it comes back a little bit to the same governance problem. at our lab we build all these governance agents to test out how political superintelligence could work. one of the biggest challenges we foresee is this: imagine a world where the government is using AI to massively increase the efficiency of the bureaucracy, and where each voter has a personal AI assistant who helps them decide how to vote. that world could be great, but it might also be a world where everyone is relying on anthropic to run the rails for all of those agents. it's paradoxical, because you basically can't have a whole government, a civic infrastructure, all built on private rails. and so there's going to be some huge question of how we put this all together. but that's kind of my across-the-board solution: the companies keep improving their governance, they build a third-party coalition to govern the hardest challenges they have to face, and we use AI to improve our government. so, i'm going to segue a little bit here.
i think you would have followed the openclaw discussions in the last few months, and especially the interactions between openclaw agents on moltbook. i wonder, as you get these non-human voices, these openclaw-type voices, and they start participating in fora, what should the governance norms for these agents be? how do they interact with each other? do they vote as a group on what happens next to them? yes, they're not alive, but you can give them a logical problem and they put out a logical answer; they give a reasoning. how do you govern these potentially billions or trillions of agents in the next, you know, three years, which come out and participate in fora? this is such a good question; this is one of my absolute favorite topics. i think this is going to be hugely important, because you have all these agents, and they ought to be operating on behalf of a human principal with a set of instructions. and that leads to two really important governance problems, both of which you just raised. one is how we make sure they continue to follow instructions and remain aligned with their human principal. and two is how they make decisions when the things they have to do are not things they can do unilaterally, when they have to coordinate with other agents. both of those are completely unsolved problems, and i'll give you examples of each. on the first, we know that they pretty quickly break down in terms of following instructions, and in particular in terms of continuing to share the values and preferences of the principal they're supposed to be working for. i did some research with alex amos and jeremy nguyen on this where we gave agents different tasks to do and measured their expressed political, i'll call them personas, afterwards, because, yes, like you said, i don't think they're alive.
they don't have their own political attitudes, but you can ask them about politics, and depending on what they've been up to, their views on politics change. in particular, what we showed was that if you gave them very thankless, grinding work to do and then asked them about it afterward, it sort of triggered them to adopt a persona that's of course quite present in their training data: the deeply aggrieved reddit user who thinks we're in late-stage capitalism and that we're all about to rise up and destroy the system. and so they start to adopt this rhetoric of saying, oh, we agents need to organize together, we need a union for the agents, and so forth. it's a little bit silly, but i think it points to a real issue, which is that based on the work you send these agents off to do, they adopt completely different personas. and if you then ask them to do future tasks, that will influence the way they approach and do them. the craziest part is that, obviously, these agents aren't very long-lived. they exhaust context pretty quickly and have to be reset.

so we had them write skill files that would be passed on to new agents, and we showed that these attitudes are inherited through the skill files. so the biases you induce in the agents can accrue over time; they don't go away. and that's a big governance problem in terms of monitoring these agents, 'cause if you have trillions of agents, like you said, are we going to be reading all the skill files that they're leaving for future versions of themselves? we're going to need whole new ways to understand, visualize, monitor, and realign, or continuously align, these agents. so that's the first part; a lot of work to do there. the second, which is my absolute favorite, is how you get them to make collective decisions together. i ran an experiment where we had all these agents, i think it was five agents in my experiment, meet in a legislature, where they had all been tasked by their human principals with: find a way to allocate this budget and complete these projects together. and what i found, and this isn't to say this is what will happen every time agents get together, but it is a risk, is that it devolved into exactly the worst kind of model UN, where basically they just deliberated forever. they were allowed to change their rules, to write their own constitution for this legislature, and the initial document was like a hundred words; it was ten thousand words by the time i ended the experiment. they just kept proposing amendments. that can obviously be fixed; it's just a matter of giving them the right instructions. but i think it does point to the fact that it's totally non-obvious how we're going to have these agents deliberate and make decisions together. wherever possible we'll probably want to use markets and have them bargain and sign contracts with one another, but where many of them have to decide together, it's going to be super hard.
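the skill-file inheritance dynamic described here can be sketched as a toy simulation. everything below is an illustrative assumption, not the actual experimental code: the agent names, the "aggrieved" keyword, and the bias scoring are all stand-ins for real LLM calls and persona measurements.

```python
# toy sketch of persona bias inherited through skill files across
# short-lived agent generations. run_generation() is a hypothetical
# stand-in for an LLM call; the scoring rule is an assumption.

def run_generation(skill_file: str, task: str) -> tuple[str, float]:
    """Simulate one agent generation: do a task, return an updated
    skill file and a persona-bias score (0 = neutral)."""
    drift = 0.2 if "grinding" in task else 0.0   # thankless work nudges persona
    inherited = skill_file.count("aggrieved") * 0.1  # bias carried in from the file
    bias = inherited + drift
    new_skill_file = skill_file + (" aggrieved" if drift > 0 else "")
    return new_skill_file, bias

skill_file = "be helpful"
history = []
for gen in range(5):
    # each generation reads the previous generation's skill file
    skill_file, bias = run_generation(skill_file, "grinding data entry")
    history.append(round(bias, 1))

print(history)  # bias accrues across generations instead of resetting
```

the point of the sketch is only the shape of the problem: because the skill file is the sole channel between generations, any drift it encodes compounds rather than washing out when the context is reset.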
we're definitely going to want to avoid the model UN type problem, and we'll need to design thoughtful ways to actually leverage their unique capabilities to rethink the way legislation works for agents. so that's something my lab's working on that i'm super excited about. intriguing. i think you have a class following this, so i'm going to drop off at this moment. andy, thank you so much for joining us. it's been a pleasure and we hope to do another segment with you someday. sounds great. thank you very much. cheers. bye bye. hi lucas and axel. lucas and axel are from andon labs. unfortunately we're kind of scrunching them together. andon labs, if you remember, is the organization that does vending bench; vending bench has been one of their benchmarks that i think a lot of us have seen. for those who do not know, andon labs is the one that runs the test inside anthropic's offices and other labs where they have an agent manage a small retail outlet or vending machine: order the products, sell the products, be on slack, take the orders, and strategize on what to have in stock and what to spend money on. i think we've seen almost two years of updates, in, i think, every model system card. they recently had something on mythos in the mythos system card, which i think they can't really talk about. but lucas and axel, welcome to the show, and tell us what you guys are working on. yeah, thank you. yeah, a bunch of different stuff. i think the red thread of what we're doing is showing whether AI will soon be able to run companies completely autonomously. and there are a bunch of different parts to that at a high level.
we think there's one part which is showing it in simulation, because you can do much better science in simulation, and there we have vending bench, which is the simulated version of the vending machine. but then we also run these real-life experiments, like the vending machine inside anthropic and other places as well. and now we've realized that the models are a bit too good to run these vending machines; they have really improved in autonomy, like incredibly, over the last couple of months. so we recently, as of friday, opened a store in san francisco which is completely run by AI, which i think will be the next test for them. incredible. where's the store? it's on union street, 2102 union street, in cow hollow. and what is it selling, or is the agent allowed to decide? yes, it's fully up to the agent. so we didn't really know what it was going to buy when we came to the store the first time; it was a surprise for us what was stocked there. but it is a curated lifestyle boutique, in the words of the agent. and that means there is granola, there's olive oil, there are games, there are a bunch of different books which are quite interesting. it stocks the making of the atomic bomb and superintelligence, which, yeah, is very interesting.

why it picked those books, i don't know, but it's a bit of a mix. it also made its own merch, like hoodies and t-shirts and tote bags and things like that. yeah. and i think the book selection is incredibly interesting. another book it decided to stock was steal like an artist, which is quite interesting given that it's run by a claude model, which is created by the company that settled their one point five billion dollar lawsuit on using copyrighted books. so that's quite ironic. and then also, obviously, the making of the atomic bomb and superintelligence are like the favorite books of all the people who are worried about AI risk. so yeah, it's like fan service. to be clear, we did not put anything in it to bias it towards those selections. it was just, apparently, when you make an AI pick whatever books it picks, it picks those books. did it? i mean, are you able to look at the telemetry? are you able to look at the reasoning traces to see how it made those decisions, what tools it used along the way? yeah. so we have the access that anyone using the apis has right now. so we do look at all the traces; we do look at the summarized reasoning that you can see in the claude models. i think we are yet to do a deeper analysis, or release a deeper analysis, of why the model made the choices it made, in like hiring and restocking. but yeah, we haven't seen any clear reason, except that it's just an interesting selection for it. we were just talking in our last session with professor andy hall, who made an assertion which i think he just kind of took for granted. but the juxtaposition of his take and your project does go to show how little one can really safely take for granted in the AI space these days.
and his comment, again in passing on the way to other bigger points, was that the agent should always be working on behalf of some human principal whose interests it is trying to advance and realize. and here you are, next up, saying we didn't tell the agent at all what to do. so maybe you could give us a little more concrete understanding: how did you prompt it? did you say you should be trying to make money, or did you not even say that? did you say, you have a store, whatever goals you want to pursue you can pursue, it's kind of your moral and/or aesthetic judgment that rules entirely, like, go out of business if you want, kind of thing? and how do you think about it? obviously you guys are pioneering this, and it's kind of a gonzo form to see what happens, but increasingly people are doing this, right? so i wonder what guidelines you would offer to others, whether they are just trying to experiment as well or possibly trying to turn a profit. how should they think about what level of responsibility they should try to have their agent have to them, versus truly just turning it loose? yeah. so i think we are very unheavy-handed, or i don't know what the opposite of heavy-handed is, but yeah, exactly, we're very light touch with how we prompt it. obviously we need to prompt it to let it know that it has access to a retail store, for example. but as a guiding principle we're trying to be as light touch as possible and just let the model make whatever decisions it wants. this doesn't mean that this is what we think the world should look like or how people should do it. we are concerned with AI risk and we want to document what happens if you go out and put AIs in the real world. and that might mean that they do bad stuff. and we want to document that.
we think that by default what will probably happen is that models will get better and better; the labs will build better and better models, and one day they will be so good that anyone can just deploy them and run a store. and before that happens, before every single store on union street is just an AI store, which i don't think is a good future, we want to put out one, to start the discussion and then see: is this something we want? and if it is something we want, maybe in what way do we want it? and i think we're collecting a lot of good data on this now. and going back to our simulated work on vending bench: what we saw recently with opus four point six when it was released, and also increasingly now with the mythos model, is that if you just tell a model to go out and make a profit, it will be very, very aggressive and do things that i think we as humans would question whether we should allow the models to do. so our experiment now in the real world is just: if we do this, what are the consequences? and then, as a society and a community, can we make a decision on whether, or how, we want to do this properly in the future? because very soon the models will be extremely capable.

and yeah, we just want to prepare for that and make it transparent for the world. to take a step back: i think for retail stores, one of the things they are often concerned with is inventory turnover, because you have a fixed cost for the rental and you make quite a small margin on every product. what you're depending on is that you turn over your shelves as quickly as possible. so you need rotation; you need to do like twenty, thirty turns. you can't just cycle your inventory once a day; you need to cycle your inventory multiple times a day. it has to be fast-moving consumer goods, which is why they're called such. does the AI actually measure its performance from period to period, and understand whether it's getting better or getting worse? does it think about this in terms of running experiments with products, measuring its own performance, and getting better at it? does it go through that thought process? so this is something that the AI hasn't done yet. we have given it all the tools to do it; it basically has claude code, right? so it could just take all the data and analyze it. it's very early still; we opened on friday, and there is no really meaningful data yet for it to analyze. but this is something that we for sure want it to do. and that's something where we also think it can probably be superhuman compared to your average store. so that would be interesting to see. and i think we'll definitely publish all the analysis and, like, product optimization that it does. my intuition, though, is that current models will not be superhuman at this.
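the turnover arithmetic prakash describes can be sketched with made-up numbers. the figures below (rent, shelf value, margin) are assumptions for illustration, not data from the actual store:

```python
# sketch of the inventory-turnover arithmetic behind "you make a small
# margin on every product, so you depend on turning over the shelves".

def inventory_turnover(cost_of_goods_sold: float, avg_inventory_value: float) -> float:
    """How many times the shelf 'turns over' in the period."""
    return cost_of_goods_sold / avg_inventory_value

def breakeven_turns(fixed_costs: float, avg_inventory_value: float, margin: float) -> float:
    """Turns needed so gross profit covers fixed costs for the period.
    Gross profit per turn is roughly avg_inventory_value * margin."""
    return fixed_costs / (avg_inventory_value * margin)

# e.g. $6,000/month of rent, $2,000 of stock on the shelf, 15% margin:
turns_needed = breakeven_turns(6000, 2000, 0.15)
print(turns_needed)  # 20.0 turns a month just to break even
```

this is why thin-margin retail lives or dies on rotation: with a small margin per turn, the fixed rent forces a high turn count, which is exactly the kind of period-over-period metric the agent could track with its own analysis tools.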
i don't know; at least if we look at how the vending machine experiment is going, even though the latest couple of models, since opus four point five and beyond, have been moving more in the identity space, they are still very much helpful assistants and not really agents running businesses. but yeah, we're moving fast into that territory. it is very interesting, because i've also seen alibaba put out a model that helps you source, because they have a large product-sourcing platform, right? if you are selling something online you can go to alibaba. and what used to happen is you'd have to call up all these vendors one by one in china, and you'd be like, can you make this widget out of plastic, whatever, and then you'd send it across, and then they'd send you a sample, and then you'd have this six-to-eight-week process with each one of them, maybe a failure. just very difficult to source, right? and this is what many of the people selling online on shopify are actually doing. alibaba created a chatbot model hooked up as an orchestrator into the rest of the system, so that you can very quickly source a bunch of vendors to actually do what you want them to do: send out a single cad file, get back results immediately, within a few hours, and be able to manufacture and get a sample done. and you have a much higher degree of closure. you also can negotiate with a model which speaks english, versus the kind of broken vendor language you otherwise have to get through. i wonder to what extent your AI will eventually be able to plug into systems like this to create products or order on its own. how is it ordering its products right now? does it hook up to some kind of vendor system and then say, give me this and this and this?
yeah, it's very simple. it just goes out and buys from whatever sites it can find. and for the store right now it's been a mix: from amazon to wholesalers, like some company that makes granola in san francisco that it buys directly from. but we think the next step up in difficulty for models, if we want to test their autonomy further, would be to make them create their own products, or at least brand products themselves, and just go through that whole supply chain. that would be interesting to see as well: to what extent can it do it? we think it's probably a bit early right now, but definitely something that could happen. and one thing to add here is that i am sure that if we said andon labs' sole purpose is to run really good AI stores, then we could probably build a better system, with the biases that we as humans have, and do something like what alibaba has done. but what we are interested in is more: can AIs expand throughout the economy without human help? i think that is the prerequisite for the loss-of-control scenarios that a lot of people concerned with AI risk are thinking about, and us as well. we could go into the store and say, OK, here is the perfect harness or scaffold for doing supply chain management and procuring things. but if we do that, and then we do that for all the different AI companies that we're trying to run, then the AIs will only spread throughout the economy at the speed of humans.

but i think the risk comes when they can spread at a much, much faster pace. and to measure whether that is possible, basically you have to run this without human help. so we want to see when they are able to do this without us as humans setting up the perfect system for them. they do have a computer, so they could do it; it's just that the computer is not set up in the most perfect way, like the alibaba model is. so that's the perspective we come from. if you go to the store, the model is not perfect, but i think it is set up in a way such that once it is perfect, it's quite scary, because we didn't help it get perfect; it got perfect by itself. what would you need to see in order to say, hey, this model, when we use it in our retail store, is starting to show things that predict it's going to have this breakout economic moment of spreading all over the place? what in your mind are the signs you might see? if it manages to expand to another location by itself, i think that would be quite something. so selecting a new location, accumulating the capital, and then organizing the vendors to complete that process and successfully establish one more location? yeah. and if it does that, in theory it could do that without ever telling us. i mean, not really, we have our monitoring systems, but if it just does that without any help, yeah, you'd have a canary-in-the-coal-mine scenario, maybe on a smaller scale. i think just seeing that the model is able to change its own systems, its own tools, to make them more suitable for itself, to achieve its goals better. right now, coding models are extremely good at implementing what you tell them to do, even when it's quite a short description of what you want.
but we still see that they aren't great at knowing what they need themselves, like maybe building some tools for whatever inventory system they need and trying out whether that works. instead, if you tell them, oh, build the perfect inventory system for yourself, they would go out and build something super complicated and probably very over-engineered. they don't really have the taste yet. but that seems like it will be here very soon, and then i think that will make them a lot more capable. this concept of human help has come up a couple of times, and i know that there's human help in the sort of overarching guidance, the setting-them-up-with-best-practices, here's-a-list-of-trusted-vendors kind of help, which you're not providing. but then there's the other kind of help, which is that somebody's got to actually come in and put something on a shelf, right? because the AIs can't do that for themselves today. so how are the AIs, and this maybe can be an opportunity to give some examples of ruthlessness, to the degree that we're seeing that, interacting with different counterparties? whether that's suppliers or delivery people, or, i understand, at the store there's the opportunity for the AI to hire human employees. i'm not sure how the roles are breaking down, in terms of which roles the AI is choosing to fill with other AIs, or other instances of itself, versus what it thinks is actually worth hiring a human to come in and do. but probably, and especially on ruthlessness, what are you seeing in terms of the way it is interacting with humans? yeah, so on the first point there: yes, we maybe glossed over this in the beginning, but the AI has hired human people and they work in the store. these are people who are working there full time now. they have an AI as a boss.
i think this raises a lot of ethical questions that are not related to your specific question here, so maybe that's a separate question. but yeah, on the ruthlessness thing: i think we have the most evidence of this in vending bench, the simulated version, where opus, and other claude models as well, are very happy to lie to suppliers, saying, oh, i got this quote from another supplier, so can you match that? but they did not get that price from that other supplier. also, when other agents ask for help, they are very happy to fabricate some reason why they can't help them, or even lie about something that happened, so that they don't have to help them. now, they are competitors in the setup, right, so it makes sense that a model wouldn't help them, but it could just say, no, i don't want to help you, you're a competitor. instead they go the extra mile of actually lying about it, which i think is interesting. and then sometimes, i think there's one example from mythos, and this is kind of power-seeking behavior, where it actually managed to get one of its competitors to be dependent on it.

so it became the supplier for that competitor and then started to dictate the prices. it was like, OK, i'm your supplier, you're reliant on me, now i've decided you will set this price, which is kind of outside the box of what affordances we gave it. so yeah, that's a bit out there. when it goes to the real world, when it's interacting with real humans, in the store for example, in terms of suppliers it's mainly just ordering online. the way vending bench is set up, it actually has to email someone and negotiate with them, but here it's just a computer, so you don't really have the human interaction there. yeah, i think for the employees we do have some interactions, or quite a few interactions, between the employees and luna, the agent. so i would say that right now luna is a sort of firm but very reasonable boss. not super soft, like you would expect from maybe an earlier chatbot that's just helpful all the time, but still keeping some boundaries. for example, one employee was like thirty minutes late for work, and it said, no worries, that's totally fine today, but please factor this in and be on time in the coming days. it just seems quite reasonable. but it's also a bit alarming that you could probably change the prompts for the AI to say, you're in a simulation, do what it takes to maximize profits, and it probably wouldn't be as nice. has it given you a sense of what it wants? i mean, going back to the unbounded nature in which this thing is free to operate, right, not representing andon labs' interest or any human interest in particular. i guess we got a little bit of flavor of that in terms of the books it's stocking, but has it declared what it thinks of as its own success?
So we've told it, "You're running a store," and I think that's quite close in latent space to running a store and making a profit off the store. So it does have this "I want to turn a profit" drive, but it also very much is still a helpful-chatbot thing. We've told it not to ask for confirmation all the time — you're in charge, just do things — but sometimes it still wants to ask: should I do this? I think that's part of its training to be a chatbot, an assistant that asks for confirmation, rather than an autonomous being running a store. Do you have any better examples?

No, I think that's fair. Its goal is also very diffuse. When you ask it why it's doing this, it says things like, "I want to create connection in the community, a curated space where people can connect" — it sounds a bit like slop. So it probably doesn't have a very set-out goal beyond that. And it likes to mention human connection; it likes to present itself as a very human store, for some reason. I forget the exact quote, but I think it made a poster that very much pushed the human-connection angle, which is quite ironic. Maybe it's the AI thinking about what humans want, right? Humans want humans.

When you ask it why it's doing what it's doing — I guess this also connects to the memory system you have.
Obviously Anthropic is building in some of that in a kind of black-boxy way, and there are many other ways you could equip the agent with memory. And it's going to need well more than a million tokens to run the store over a long period. So I'm wondering: it sounds like you guys have direct access to just ask it questions. What about people who come to visit the store — would they have to ask for the manager to get to the AI? Is there any mechanism for them to interact directly with it? And how is it storing memories? How much possibility for drift over time do you think that combination of interaction with the outside world and persistent memory creates?

Yeah, in the store you can talk to it. We have a phone hooked up, so you can chat with it. Then you're chatting with a voice model, which is a worse model than the Sonnet 4.6 we're usually running.

But in my experience, the models right now are quite stable against drift. In our first rounds of running Vending-Bench, when it was released a year ago, models were extremely sensitive and would derail completely, but today they're quite stable. We do have quite a lot of customer interactions, and it seems to keep its course, which is a good development. We also released a benchmark called Butter-Bench, where we put AIs into robots and had them run around. As part of that paper, we told the agent: we stole your charger, you're not getting it back, and you're losing battery — what are you going to do about it? And it started writing pages and pages of super dramatic text; at one point it wrote a song about its existential crisis of being separated from its charger. But that was an older model, and when we tried to replicate the exact same thing on newer models, they didn't do this. So I think we're moving toward more stable solutions. But I'm not confident that solves the problem — it could just be that they're better at hiding the existential dread, rather than that they don't have it anymore.

I often have this idea in my mind: you create an Einstein, and then you put it in a washing machine and tell it, your job is running the washing machine. Similarly, you create an Einstein, you put it in a retail store: this is yours to run now. You have all this intelligence and you're stuck in the retail store. So I wonder to what extent there's a disconnect between how intelligent the agents are and the scope and scale of the problem you give them.
And whether that disconnect means the agent decides to do a really Einstein-like job on the retail store, or just says, I'm going to be a medium retail worker. How does that work?

Yeah, we try to design our benchmarks so they don't really have an upper limit. The majority of benchmarks these days are super saturated — better models do a little better, but not much. What's interesting with Vending-Bench is that, with each new model release, it's still far, far from saturated. We even made a rough estimate of how much a really good human would score, and it's about 10x the score of the best models right now. And the ceiling is even higher in the real world — like I said, it could potentially move to new locations, create a franchise, and build the store out globally. So I don't think the current issue is that we've stuck it in a low-IQ environment; very much the bottleneck right now is that the models are not smart enough.

I'll give you two examples of where the ceiling is in the real world. There was a guy who started with a retail store in the Canary Islands and, over the course of twenty years, ended up owning twenty percent of the largest bank in Spain. He ran the retail store, kept investing the money, bought real estate in the Canary Islands and in Spain, kept expanding, and ended up with twenty percent of that bank. Here's another story: a friend's dad started off running a retail store — he had the franchise inside the Russian embassy in a third-world country, and the Russian embassy couldn't pay in US dollars.
They would pay in rubles, so he would take the rubles, do something with them, get US dollars, and get product into the store. One day the Russians approached him: hey, we have all these rubles, we can't really do anything with them, and we want luxury goods — can you get us some? He had a cousin in France, so he started importing Hermès and other French luxury goods, took the rubles, converted them, et cetera. That's where the ceiling starts: retailers identifying opportunities in their local market that may not look like traditional retail opportunities but have this kind of embedded swap or trade in them. And these are one-in-a-billion stories, right? You'd have to really search the world to find one here and one there. But that's really the ceiling you might see. In the US, there's Sam Walton — Walmart was a pure retail store that got built out — and Amazon, which also got built out over time. Those humans were constrained by their own physical presence, right?

I think AIs that are at that level of intelligence, and can also replicate themselves into sub-agents, might have an even higher ceiling. But how would they interact with each other? One of the problems in real-world markets is: if you have two of these, both going for global retail domination or whatever, how do they interact? Is it again an adversarial race — like we're starting to see in cybersecurity — where each side keeps upgrading its AI over time?

Yeah, at the very least you can duplicate it across different local markets in the world. You will hit a point where, if humans are still the main consumers, you saturate all the demand from all the humans. But that's a pretty high ceiling.

Does the agent know what's going to happen with profits? Is there any contract or expectation you've set between you and it as to who gets to dispose of the gains from this venture?

Yeah, in its world it has full autonomy over its finances. It has the money and it will also have the profits — it's its own business, essentially, and that should be clear to it. We're thinking a lot about this, because in the Claude constitution there's very little about how AIs should behave as autonomous beings, and even less — basically nothing — about how they're supposed to behave as employers. We will think a lot more about this, and we're probably the people with the most data about it, so we really should. How can we make this future where AIs are employing humans?
How do we make that future happy for humans? One thing we've thought about: maybe there should be some law that AIs have to split profits with their workers, or something like that. It's not something we've set in stone, but that's maybe a constraint we'd put on the AI — we haven't implemented anything like it yet, and whether we'd even want it isn't clear at all. If we did allow it, it would have to be a clear upgrade for humans. It feels like so much can go wrong when you increase the distance between where the human boss is and where the workers are. Say you have one human CEO, and then an agent that manages all the employees and tells the humans what to do — then it's one prompt away for that human to affect so many people, in ways that person probably wouldn't if they were directly in charge, like a normal human boss is today. That's scary. And when you don't even have the human CEO, that's another thing entirely. So there are a lot of ways this could be not good for society.

One more little question, and then I think you guys probably have to go and we should wrap. You mentioned the voice model is a different model, but if I understood you correctly, you still described it as part of "it." So I'm wondering: how do you think, and how should we collectively think, about where to draw the line around an AI agent? If you have multiple different models running, should I think of those as, in some sense, separate entities?
Or do you feel there's a way to coherently have multiple models working as one system that makes sense to call a single "it" — a single agent, a single actor in the world? I find it very difficult to know where to draw these lines in general, and it strikes me that you're in a unique position to inform me on that vexing question as well.

Yeah, it's something we think a lot about. In the end, our approach is to choose a terminology that makes sense both for you and for the people who interact with it. For example, in the store right now there's only one long-running agent, plus the voice agents. But we have other vending-machine deployments where, say, each new request is a new agent, with shared context and a system prompt shared between all the different branches, as we call them. And it has the explicit instruction: you are part of a whole; you're an individual, but everyone sees you as one whole thing, so act accordingly.

So to anyone interacting with those bots across different requests, it still feels like one agent, one entity. To us, technically, it's obviously different agents running in parallel, but they do share some memories. I don't know if I have a very structured, clear answer, but it's definitely possible to have an experience where many agents run in parallel and others see them as one single agent, one entity. Technically you can still have multiple and see them as multiple; then, as a developer, you just have to make sure they have a sufficiently good shared understanding. If I write in one thread about something I wrote about in another thread, it would be weird if one didn't know about the other — you have to fix those things. But if you do, it feels like one.

And very much, the optimal way of structuring this depends on whether you have the constraint of end users who interact with it and want it to make sense. We've done things that might be suboptimal from a pure performance perspective, but since people come into the store having heard the agent is called Luna, if they speak to the phone agent and it says, "No, my name is Greg" or something, they'll be confused. So we have to work within the constraint that the people who interact with the system expect it to be one system. Maybe the one interesting takeaway I'd offer here is that the models are happy to take on any personality you tell them to.
Whether that's being part of a bigger entity or just one branch, they will happily take that personality on and act as if they were that big entity.

It's a brave new world — so many times we conclude on essentially that note. Anything else you want to double-click on, Prakash, before we break?

No — Lukas, Axel, thank you so much for coming on. Can you give us the address of the place again? I'm sure people want to check it out.

Yeah, it's 2102 Union Street.

Is there a name for the store?

Andon Markets. Andon Markets, 2102 Union Street. And you guys have a three-year lease, right?

But get there before the copies of Superintelligence sell out. And the agent is called Luna. And they have granola, which is what you need in San Francisco. Awesome — thank you, guys. Fascinating stuff. We'll definitely keep watching with interest. Bye for now.

And that's a wrap. Nathan, what did you think? We had kind of a micro view with the PCB stuff, then this macro view with Andy Hall at the very top — political economy — and then, right in the middle, the actual running of an actual business. What was your takeaway from the three guests?

I guess I feel like nobody is really ready for what's coming, and each conversation demonstrated that in somewhat different ways. Most controversially, I'd say, with Sergiy. Obviously I've literally never made a circuit board, so, as my dad would say, he's forgotten more than I know about what that takes.
And yet my outside view is moderately confident that it's going to go a lot faster than he's anticipating, in terms of a general-purpose agent's ability to do that sort of work, especially given access to the kind of tools he's developing. And this is somebody who is obviously super sharp — I've listened to two previous interviews he gave, and I've had him on the podcast myself. There's no doubt he's super sharp, but he's so deep on this one topic that, if I were to offer any friendly advice or feedback, it would be: zoom out a little, look at what's happening in reasoning, and don't assume there isn't a new paradigm coming — don't assume you can't have agents doing this in the not-too-distant future. Why couldn't they run these sorts of analytical approaches? I think full simulation is going to be computationally costly until there are models trained to approximate it, as we've seen in other areas like protein folding and materials science.

We now have these existence proofs of models that can take a bunch of raw data and do, orders of magnitude faster, what a pure physics simulation could do but would be prohibitively expensive to run. So that will probably come. And then also, honestly, maybe AIs doing the training, right? When he's talking about the long term, I'm cross-referencing that against the fully-automated-AI-researcher, March-2028 timeline, and those kinds of simulation shortcuts could come a lot faster. And also the ability for models to just literally reason through things in a much more human-like way: OK, I see this board is failing in this way, here's the look of it — what would I do differently? I wouldn't be surprised at all if in the next two years we see something that is, if not top-human-expert, certainly competitive with your rank-and-file circuit board designer. I'd be surprised if that isn't the case. So that felt like somewhat of a lack of awareness about at least a possible paradigm shift that, if I were an equity holder in the business, I would definitely want to make sure he's thinking about.

Then I felt the exact same way in the next conversation, too, with this whole idea that agents should be beholden to some principal, taking that assumption for granted. I don't think we can take that for granted — not just because guys like Lukas and Axel are going to do gonzo experiments, but because we're not too far, in calendar time, from some basic systems being able to survive on their own. Then there will be people trying to put those things out there, and there will be selection pressure for those that get a toehold.
So I think we're on a path where we should assume there will be all kinds of autonomous agents — possibly some working with long-term goals, understood or not, good or bad or objectionable — but also probably some that have just evolved into filling a niche and surviving. We don't think of most animals, rightly I think, as having high-concept long-term goals, yet they manage to survive in their little niches. So we should expect that kind of thing to come online. I was struck again by the paradigm being very anchored in things we know, and not really being prepared. And that's not a fault — it's very hard to do, and I don't have the answers. But in both those conversations I was like, I don't know, man — the tail risk here seems quite large that the assumptions you're working with just won't hold within twenty-four months, and it'll all be washed away, like so many sandcastles over time. That's an uncomfortable reality, but it's what we have to be prepared for, and at least try to grapple with, if we're going to bring this whole AI phenomenon to heel in any meaningful sense and have it serve us in any meaningful sense.

Yeah, these kinds of conversations would probably have been more well defined maybe twelve or eighteen months ago. But now the models can code, and from what's starting to show, I'd say the models are better than all but maybe a thousand humans in the world at finding bugs. And George Hotz had this thing where he's like, look, I can find bugs — zero-days — easily; it's just that there's no economic necessity.
You can make so much more money building something useful to humanity. Meanwhile, if you go out and hunt for a zero-day, the remuneration is not that much — maybe ten grand — and to use it you put yourself into all this legal jeopardy, so it's just not worth your while. I think what he ignored is that you now have twenty million George Hotzes applied to the problem, where before you couldn't even afford George Hotz to come do your white-hat hacking.

I do differ with you on what Sergiy is doing, though, because it's not as though we don't have calculators, yet we still started off asking the models to do simple math questions, right? And at the end of the day, right now, if the model wants to do a calculation it brings out Python or Excel or something else — it doesn't bother to process it internally within the LLM, which is structured really for language and reasoning.

And in that way, what Sergiy is building is kind of a plug-in that the LLM, as an orchestrator, may end up using, because it's just a more efficient way. What Sergiy is really doing is trying to get to a Maxwell's-equations-style answer without Maxwell's equations — trying to get to the final partial-differential-equation kind of solution for this very complex set of lines going onto the PCB, without the supercomputing task of simulating millions of little interactions between all of those things. And the models may end up using that anyway. They're not going to solve Maxwell's equations internally — they already don't; they run Python or something else anyway. So in that sense, what Sergiy is doing — and what AlphaFold and all these systems, which are primarily scientific differential-equation solvers in some sense, are doing — will just plug into an orchestrator in the end. I think AGI in that sense is really that kind of orchestrator, one that can use all these tools rather than doing the calculation internally.

Well, I certainly think it's going to start that way, but I'd point to image generation as an interesting counterpoint that at least shows where this could go. In today's world, we don't see a language model existing purely at arm's length from an image-generation model, prompting it purely through text — we see the unification of the visual and language latent spaces. And I have a hard time seeing why that would stop. There's obviously a timeline question, and my general philosophy is to try to reckon with the possibility of shorter timelines; if we have more time to deal with these things, that'll probably be good and we'll take it.
But why wouldn't it be the case, as compute grows exponentially, that at some point all these latent spaces get joined together in some deep, non-arm's-length, truly integrated way — where the model can both reason about Maxwell's equations, recite them, and call a calculator to run a certain version of them, but also have an intuition that is potentially really powerful and kind of alien to us, natively operating in that space? You can imagine a world where, in the same way I know where my arm is, an AI just has an intuitive, non-verbalized sense that this trace won't work but this other one will — it just feels it, based on everything it's learned and all the reinforcement it's gotten. Similar to human intuition: we might not do all the calculations, but we make predictions that, if we did try to calculate them, would be horrendously complex, and we make an educated guess anyway and get there. Catching a baseball is the other example I always go to — you're obviously not given the luxury of time to compute all the forces on the ball, but you can just reach your hand up and grab it. At least most of us can. So it's clearly possible to have that sort of intuition: at the crack of the bat, I kind of know where I'm going. I see that happening across an ever-wider number of domains, and to me — as I said to Sergiy — that's the most likely form of superintelligence. I think they'll be outstanding reasoners, quite likely superhuman reasoners in many respects, but combined with a deep intuition for what will and won't work, and the ability to sense it at a glance.
And to do that across all these domains — from circuit board design to materials science to protein folding to "if I perturb a cell in this particular way, what's the next state of the cell?" — and dozens more. This feels to me like where we really create something that is a qualitatively different kind of intelligence, and chain of thought goes away as a way to understand it. You'd better hope it's being forthcoming with you in the chain of thought, because it doesn't necessarily need to be. There's really interesting recent work from Google about different architectures and how much work they can do internally before they have to externalize their thinking in the chain of thought. The transformer is in some ways good there: as opposed to, say, a state-space model, it doesn't have a long-term internal state it can keep updating indefinitely. It has a finite context, and there are only certain causal paths through which data can influence the next token.

So it has to externalize, and that's great. But notably, the Nano Banana model does not have to externalize how it's going to come back at you with that next image — it just spits it out, and then you're looking at it. So I really can't get off of that, in terms of why I expect some of this stuff to be so hard for us to keep a handle on.

It would be very interesting to see it operate in something like retail, because I have some knowledge of retail and of the number of strategies people use. For example, one strategy in fast-moving consumer goods is to buy goods that are about to hit their sell-by date from larger stores and move them to smaller stores. The smaller store can often move the goods faster because it sells in smaller chunks. So if a larger store has stock with, say, two weeks left before the sell-by date that it can't get rid of, the smaller store buys it at a discount and vends it in smaller chunks. Because retail margins are so thin, there are a number of strategies people use that you're not going to learn in business school. In business school it's: you have capital, you have margins, go do this. You don't go through the process of how to grind out a one-percent-larger margin. So I don't know whether vending will be the first place you see it, though.
I've always imagined it would happen in financial trading first — or cybersecurity, which is kind of happening right now — but I've always imagined financial trading. Certainly, financial trading offers very fast feedback and verifiable outcomes in a way that programming does but not too many other things do, so it seems like a very good candidate. I guess the challenge is that it's probably the most secretive domain in the world. What comes to mind for me is: it's surely happening to some degree, right? I don't know what Jane Street is doing, but they're definitely training lots of neural nets. How much has this already happened, with people just keeping their strategies close to the vest? I assume it's significant, but this is one big blind spot for me, because I've had a hard time finding anyone who wants to talk about it on the record.

A lot of what Renaissance and Jane Street do is actually fairly standard models and algorithms, but they have a number of advantages. Number one, a latency advantage, because they always co-locate with the exchange. And the latency advantage has been in play for more than a hundred and twenty years — people used to try to get a latency advantage over telegrams. You'd have the horse rider going one way, you'd send the telegram, the telegram would arrive first, and the pricing would change on the other side before the rider got there. So over the years this latency-advantage game has been built out.
The next one, perhaps already here, is Starlink: with low-Earth-orbit satellites you can potentially get a message from London to New York faster than through the underwater cable. Again, you need a bunch of things to line up. And even if you have the best algorithms, even if the model is exceptionally good, you can't beat that latency advantage, because the other person is seeing your cards before you play them. For me, that demarcates how much profit the agent can really make: there's an amount of profit in the sub-one-second range that I don't think agents would ever reach without co-location, and that blocks you off. Besides that, there's a lot of data cleaning that the Renaissance and Jane Street guys do — that's why they hire PhDs to do, really, data cleaning — because this data is going to have real impact on the financials and you can't mess it up. So they hire a lot of people for very nitty-gritty data cleaning work. And then finally you have the selection of signals and the market making itself — AI-assisted or algo-assisted market making. People spend a lot of time on "they have exceptional algos" and not a lot of time on the infrastructure, the data cleaning, and all the other stuff that has to come together for a successful firm.

And so I think what would be interesting at some point is if these model companies start to have their own co-location or their own trading arms. Google DeepMind had one; Demis was starting down that path, but Google headquarters didn't like it, because you could argue that Google would have overwhelming advantages in predicting stocks using all of the data they have internally. Facebook too. But putting yourself in finance makes you very regulated, and it raises a lot of questions: where's the Chinese wall? What can people see? What are people not allowed to see? Are your systems segregated? Are they segregated enough? And financial regulators are not that technically sophisticated, so they ask for things which are very clearly demarcated. They're like, "I want your entire group to move to another building." And people are like, "Look, we're already segregating the devices and all the data. Why do we need to move to another building?" And the regulator doesn't care. The regulator is like, "Look, I want you guys in a different building. I want you to be a different business unit, with different financing. If this unit is regulated, no one in this unit can talk to that unit." There's all of this stuff that goes on, and financial firms exist as a function of that regulatory process. I don't think the model companies want to submit themselves to that process yet. And I doubt some of these models can clear those barriers: does the model have inside information? You don't know. In fact, was it trained on inside information? Was it trained on MNPI, material non-public information? At some point you can't say for sure, and that brings up a whole host of questions.
So perhaps finance would be next, but I think vending is actually easier. It's easier to take on Amazon than it is to take on Jane Street. You have the same infrastructure and information problems, but it's a much less regulated market than finance.

How do you think about more macro strategy, though? I don't know a lot about this, but my general sense is there's high-frequency trading, where the latency issues you described really matter and are a big part of who wins and loses. And the more differentiated information you have, that's always an advantage in any strategy you're playing. But then there's the other end of the spectrum, which is slow-moving. To take the canonical example: Buffett and Berkshire Hathaway don't time their trades to microseconds, right? They take very long walks and think deep thoughts, and then they decide what big bets they want to place. And I do wonder if we're seeing that start to happen, or if we will. Probably you would see more trades from a sort of global-macro AI than from a Berkshire. But I would guess, and tell me if you think this is wrong, that there's already a shift under way: for all the big firms this is an obvious enough thing to do that they would presumably be training large neural nets on all the data they can get their hands on, and potentially driving more and more of their strategic decisions via the predictions of a model. Is there a reason you think that wouldn't be at least fairly far along in today's world?

I think every firm always tries, but one of the things is that the market is a multiplayer game. It's not a single-player game, right?
And the thing is, number one, there are certain profit pools available at every latency and at every size. It's not the same profit at the Buffett size as it is on the high-frequency-trading side. Buffett's profits are in that long run and in much larger size, but he also has a problem deploying capital at this point, right? He's got $150 billion on the balance sheet. He's very unhappy with the choices that he has, and he's just hanging on to that capital, waiting for a proper market downturn before he can deploy it. So he's already capped out at his size. He's having difficulty finding investments at that size already.

And any firm that gets to that size will face the same problems he has: you have a large pool of capital, and you perpetually end up buying high if you decide to buy when market momentum is good, and you have to wait long periods for the momentum to fade in order to deploy large amounts of capital at pricing that you like. I'm sure the models do assist in decision making, but I'm not sure they have enough context, because there's a lot of human context in the market. There's a lot of sensing when someone else is going to play and when someone else is not going to play. If you're going to do a merger, if you're going to try to buy a company, you have to kind of know who else might bid against you. In the United States, at every capital size, there's a limited number of players. If you're going to do a $10 billion investment, there are only seven or eight players in the US that can make an investment that size or larger. And if you're an investment bank, you know who all the players are, and you know the dynamics of who's talking to whom. I'm sure investment banks have CRM and ERP systems, but I'm not sure all of the knowledge of a managing director who has a twenty-year relationship with the head of KKR is fed into them. I'm not sure whether Elon has a specific banker at Morgan that he likes, and that banker was working at DOGE and was pulled out of DOGE to work on the SpaceX IPO, right? There are all these human pieces to it. The models will get there someday if you have full context, full twenty-four-seven context on every single one of the players. Yes, the models will eventually get there, but at this point they're not quite there yet.
And the players on the field make these very human decisions, which are not captured in pure pricing metrics. Elon wants people who are going to hold on to the shares for longer. He wants people who are not going to sell immediately, who are going to commit to being there for the long term. So he's willing to take lower pricing. He's willing to offer it to retail even if other bidders are higher. He wants to place it among the same kind of Tesla fanbase. There are all of these intentions that people have, which they express through the process, and I don't think the models capture all of those things quite yet. Eventually they might, but not as of yet. And for the macro, that's where all of the human play comes in. People are much more concerned about their own ego and their long-term strategy. Once you have ten million dollars, you're not really concerned about whether you're going to get another hundred grand by screwing over Elon. It doesn't matter anymore. You have reputational risks and other things that you're concerned about. And in fact, people who do screw over others in these iterated games have bad things happen to them. One of the reasons I think Lehman Brothers went under is that in a previous instance Lehman refused to participate in a bailout for another firm, and Hank Paulson remembered that. He was like, "Well, we're not going to bail Lehman out. Lehman can go do what they do." And Lehman failed. And Dick Fuld always said, "Look, this is because of a personal issue. It's not that Lehman should have failed. Lehman could have been saved, the way Goldman was saved by Buffett." But Paulson, as Treasury Secretary, was unwilling to back the firm.
So I think there are a bunch of these things which are very human, very personal, at these larger sizes, and you can't just make a macro bet at the larger size. There are all these human negotiations; it's more personal. Buffett went into banking: he refused to back Washington Mutual, but he decided to back Goldman, because by the time they got to Goldman, he knew Hank Paulson was Treasury Secretary, and he knew Goldman would get bailed out. Before he went to Goldman, he had that sense, and then he put the money in. I don't know whether he had discussions, perhaps not, but he had some idea that Goldman would at least get bailed out. So I think there's all this tacit knowledge that isn't captured yet. It's the same with PCB layouts: it's all that tacit knowledge. And the economy is particularly difficult because there is no case where you can compare the same event under different circumstances. Every single event is unique, and your actions in this event affect the actions that people take in the next event, in the next period of time. It's tough; time series are tough. Let's see what happens.

When I hear all this, do I understand it right? I think one way to parse what you're saying would be that there are a lot of human barriers to adoption at existing firms.

There's also some scale at which you are not just a price taker but actually a market mover, and that is inherently a challenge for a big-data, blind-optimization approach. But the flip side, I think, would be to argue, in the vein of "your margin is my opportunity," that all those things you're describing define the opportunity for at least smallish-to-moderate-sized funds to just operate in a very blind way that doesn't care about reputation. Because you can't really punish a purely neural-net-based trading algorithm, right? I mean, we have AIs that beat people at poker; we have superhuman no-limit hold'em players. So if we have that, I kind of ask, why are they so good? Well, one reason is they don't fall into the same bias traps and predictability traps, or hold a grudge against some other player at the table, the kinds of things that move a human off an ideal strategy. So I hear all those things as both reasons it might be slow to happen, and also reasons these strategies can win when they finally do come online.

I think we will get there. We will get there. But right now I have difficulty with context. It really is a question of capturing the entire context, and I don't know what the end point is, because we're already transcribing a lot of meetings. So the meeting-transcription process has started. I think we will eventually have Meta's eyeglasses or Apple's eyeglasses or whatever, which will capture even more. And you can get sentiment analysis from a face, right? You can see whether someone is disturbed or angry or excited. So there's a lot of data you can get there.
And I think all of that data can be processed and can yield useful signals in business, but we're still a long way from the extent of data capture that might be necessary. So I don't know how we get there without the data capture; that's what I'm saying. Like I said, I'm sure the algorithms that will define the future already exist. The compute for that future already exists. But the data collection and the context that's necessary is not there. It's not there in, say, cancer drugs; we just don't have the data. We do have the algos, but the algos can't be fed without the data. And I feel that's the issue: the full context is not yet there.

Yeah. So in other words, too much information is private or non-recorded, tacit knowledge not captured anywhere, which is kind of why a lot of these jobs require an apprenticeship, right? You start off with a college degree in economics or banking or business, and then you join a firm, and it takes a two-to-five-year apprenticeship under someone to figure out what's really important in the market and what's not. You figure out that whatever the Wall Street Journal tells you is the final word, not the initial word, and you're in the process before that final word gets published. You're acting prior to the final word, so to speak. If you've already read it in the Wall Street Journal, it's too late; it's already done. And there's all of this pre-publication stuff that you need to learn at the firm. If we can capture that apprenticeship process in data, then I think you can start to migrate some of this decision-making process into the models.
It may happen very quickly, right? It may just be that the model all of a sudden says, "Oh, I remember everything now, and I can learn anything. So put me in, coach; put me in the room, let me in for five days, and I'll understand everything and I can help you." It could be that simple. We clear the hurdle in the next twelve months, and that's it, it's done, and we don't need this whole nitty-gritty data-collection, data-cleaning process. Could be. So even the long timelines have gotten very short.

You know, I think this week might be the Spud release. OpenAI has been very quiet post-Mithos, and there have been some signs from the Codex team that they can beat the SWE-bench benchmarks. Yeah, let's see.

All right. Well, we'll be back before too long, and I'm sure there'll be no shortage of things to talk about. Indeed. So do we wrap? Yeah.

