A Positive Vision for the Future: Part 2 with Illia Polosukhin of NEAR
Illia Polosukhin discusses AI's societal and economic implications, envisioning a future with a unified intelligence layer, AI agents enabling direct markets, and new governance models. He explores how AI will reshape daily life and global governance.
Watch Episode Here
Listen to Episode Here
Show Notes
In part two of his conversation, Illia Polosukhin, co-author of "Attention Is All You Need" and founder of NEAR Protocol, explores the profound societal and economic implications of AI. He envisions a future where a unified intelligence layer transforms personal computing, AI agents enable direct market connections, and individuals find meaning in niche communities amidst AI-driven abundance. Illia also delves into the ethics of agent-to-agent interactions, where AIs prioritize human interests, and new governance models utilizing AI delegates. This episode offers a concrete, systems-level vision for how AI will reshape our world, from daily life to global governance.
Sponsors:
Google Gemini Notebook LM:
Notebook LM is an AI-first tool that helps you make sense of complex information. Upload your documents and it instantly becomes a personal expert, helping you uncover insights and brainstorm new ideas at https://notebooklm.google.com
Tasklet:
Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Linear:
Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
Shopify:
Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY:
CHAPTERS:
(00:00) Sponsor: Google Gemini Notebook LM
(00:31) About the Episode
(03:33) AI Coding Assistance
(14:18) The Future of Engineers
(18:58) AI on the Blockchain (Part 1)
(19:08) Sponsors: Tasklet | Linear
(21:48) AI on the Blockchain (Part 2)
(33:03) Vision for AI Economy (Part 1)
(33:55) Sponsor: Shopify
(35:52) Vision for AI Economy (Part 2)
(44:46) Society and Status Games
(49:14) AI Architecture and Honesty
(01:00:41) Evolving the Social Contract
(01:06:43) Do AIs Have Interests?
(01:11:08) Designing for an AGI World
(01:16:09) Open AI and Biosecurity
(01:22:56) Coordinating AI Development
(01:28:52) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Transcript
Introduction
Hello, and welcome back to the Cognitive Revolution!
Today I'm excited to share part two of my conversation with Illia Polosukhin, co-author of "Attention Is All You Need" and founder of NEAR Protocol, which describes itself as "The Blockchain for AI" and aims to build "A Future Where AI Belongs to Everyone."
In part one, we explored the foundational technology stack - proof-of-stake consensus mechanisms, confidential computing on GPUs, and NEAR's plan to train frontier models on a decentralized basis - with contributions incentivized by cryptographically-guaranteed revenue shares.
Today, we zoom out from the enabling technology and discuss the big-picture questions:
What does personal computing look like as AI continues to eat traditional UI-centric software?
Illia envisions a unified intelligence layer that works across all devices, predicts what you need, and proactively gets things done for you.
How will this change the nature of markets, trade, and the consumer economy?
Illia expects that advertising and middlemen will become less relevant as AI agents allow buyers and sellers to connect directly at unprecedented scale.
What does daily life look like, and how will we spend our time in the era of AI-provided abundance?
In addition to a rise in personalized AI-generated entertainment products, Illia predicts that people will find meaning and compete for status in small communities focused on niche interests and activities such as athletics, video games, collection & creation of unique artifacts, and who knows what else.
What rules should govern agent-to-agent interactions?
Illia imagines a symbiotic relationship between humans and their AIs, such that humans and AIs effectively grow up together, and believes that AIs should pursue the interests of their individual humans, even if that means being less than fully transparent when negotiating with other AIs on their humans' behalf.
And what new governance mechanisms could allow us to take advantage of the exponential increase in information processing power and reasoning time that AI gives us?
Illia describes NEAR's experiments with AI delegates, which vote on behalf of token holders who select them based on their decision-making frameworks, and their goal of enabling every individual to have their own AI that participates in governance on their behalf.
Of course, we touch on lots more along the way as well - including the role of formal verification of software in securing smart contracts, the remaining challenges required to effectively bridge the gap between low-level guarantees and high-level intentions, and the need to harden society's defenses in order to maintain global biosecurity in the presence of broadly distributed powerful AI systems.
Overall, while many questions remain to be answered, Illia's strength as a systems-level thinker, spanning technology, economics, social dynamics, and governance, is evident throughout.
So, without further ado, I hope you enjoy one of the most concrete positive visions for the future that you'll find anywhere, from Illia Polosukhin, founder of NEAR.
Main Episode
Nathan Labenz: Illia Polosukhin, founder of NEAR, welcome back to the Cognitive Revolution.
Illia Polosukhin: Thanks for having me back.
Nathan Labenz: I'm excited for Part 2. So last time we talked a lot about foundational technology: the journey that you went on from being an author of "Attention Is All You Need" to trying to source data from contributors around the world, struggling to pay them, taking a detour into blockchain thinking it would take just a few months - and here we are a few years later and it's all really happening. Anybody who's listened to this feed for more than a minute knows that I often say the scarcest resource is a positive vision for the future, and I appreciate that you have one. So I'm really appreciative that you're taking a second session here to help us unpack that. I guess maybe for starters, one of the jumping-off points last time was you said that you wanted to teach computers to code, and sure enough, now they can code. So as we ramp up into a vision of a potentially quite different future: how is the rise of AI coding assistance changing how you guys work? And how is it changing who can create stuff on top of the blockchain?
Illia Polosukhin: Yeah, and I don't think it's even about blockchain per se. The real reason I always thought that, as computers become able to code, we're approaching a different world is that there are a few dimensions to this, right? There was that statement by Marc Andreessen that software is eating the world, and what that idea effectively means is automation. Automation has always been the driver of innovation, of GDP, of productivity - everything from tractors to factories to, indeed, computers, which are this kind of universal, you know, bicycle for the mind, automating things. And the challenge has been that there's always been a small cohort of people who are actually able to build software, right? So if I have a need, I need to find somebody else to build it, and they probably need to build it not just for me, but for a large number of people, so it's actually economical. And so we've been stuck with a lot of software that became very complex to use. You need to learn how to use it because it's not really built for you - it's built for this generic user; every user has, like, five use cases, some of which overlap, and all of this stacks up in one piece of software, right? Or you just don't have it, and you keep doing stuff manually, wasting your time. So for me, the ability for machines to code is really about that transformation where everyone is now able to build their personal software, their own personal automation. And it also removes the constraint that interfaces need to show you all the options right away - you can sub-select, with English or whatever, into the part of the interface that you need. To give you a very specific example, I like the example of Salesforce. Salesforce obviously started as a small startup targeting a specific use case for salespeople. But at this point it's a monstrosity - you need to hire somebody else to configure Salesforce for you, right? That's how complex Salesforce is. At this point you're effectively hiring somebody else to build you a system; it's just that they're using existing pre-built components. Now imagine this world where you can just talk to a...
Nathan Labenz: Computer.
Illia Polosukhin: Like, it really now becomes about your sales process, your business process, how you want it to work, what reports you want, etcetera. It can be dynamic. You can restructure it as you go, including whatever built-in things Salesforce may have somewhere as a feature or may not even have. And then you can integrate with whatever other tools you want. For example, in crypto everything is in Telegram, and Salesforce doesn't have a Telegram integration, so we can't use it, right? Somebody would need to go and build a Telegram-Salesforce integration. But with your vibe-coded CRM, you can just say, hey, integrate Telegram for this. This is just a simple example and we can keep extrapolating: every part of our digital footprint is becoming more and more automated. The more intelligence the computer has, the more you can offload the orchestration of different tools, or just the low-level pieces - here's a database, I need a charting tool, I need a CRM, I need this, I need that. You can actually build all of it. Back in 2017 people were saying, hey, software as a service is going to die, AI will replace it - in 2017 that sounded very delusional. But obviously I think now a lot of people agree, and software-as-a-service itself is trying to become AI, because they know they're going to get outcompeted otherwise. So coming back to your question of what changes in our work: I think there are a few pieces that already clearly work. One is data analysis through natural language. If you have a reasonable data structure, you can effectively make everyone - maybe not a data scientist, but - what before would require a business analyst, who'd take a while to pull all the data. Now, if you have a question about the business analytics, you just ask it; you don't need to email somebody to get you the answer. You ask the tool, which generates SQL queries, pulls in whatever data, writes Python, and gives you an answer. I'm also actually a big fan of front-end building with vibe coding now. I'm not writing production code anymore, but it's really useful for me as a prototyping tool, where you can really quickly get to an experience. You want design to be more about style guides, etcetera, but the UX part you can test really quickly - before, a designer would design something, then a developer would try to build it and go, oh ****, I can't actually build that, or it doesn't work exactly this way, and there would be a lot of iterations. Now even designers can just build a full, fully clickable experience, it generates code for it, and developers can plug in all the back-end logic. I think we're also starting to see, on our teams that actually do development, that the way time is spent is changing.
Before, you spent a lot of time on the development work itself, and then also a lot of time reviewing other people's work. The amount of time spent on development is shrinking - you effectively tell whatever it is, Cursor or Codex, to go and do the thing - and a lot more time is spent on reviewing things and making sure they're correct. One thing we were experimenting with on one of the teams was: what if we decompose the whole piece? For simple software, AI works right now. For complex software, it doesn't - you can't just say "go build me a really complex system" and have it go and do it. That doesn't work yet, though we obviously see continuous improvement. So right now, for complex systems, you decompose it yourself as an architect or senior engineer, and then you have a few different team members who actually build the subsystems. In traditional software development, you always wanted multiple people on each subsystem who know really well how it works, so they can maintain it and change it. Now that's actually not as important, because AI will explain to you how it works, and you may even have the natural-language explanation the developer used to build it in the first place attached to it. So it's actually more about velocity and the reviewing process, and how to ensure each part is secure and works correctly. There's a transition happening where there's more individualism, in a way, because every individual is more productive, and it becomes more about how to build the right decomposition into pieces. This is all very temporary, though - as tools become more mature, they get better at navigating larger codebases, etcetera. The other question is really about model quality for a specific task - for the abstraction level it needs to operate on. For front end, there's no abstraction level: you just build what you see and iterate. It's really easy to check, really easy to iterate, so you don't need that much, and models are good at this. When we talk about low-level blockchain code, it's an extremely complex system with a lot of pieces and externalities, so models are not very good at that, and there you expect people to spend more time - you generally spend more time thinking about the algorithm and architecture than writing code. So it really depends, and this is all shifting really quickly: six months ago I would have given you a different answer, and I'm sure in six months it will be different as well.
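To make the "data analysis through natural language" pattern concrete, here is a minimal Python sketch of the loop Illia describes: a model turns a question into SQL, the query runs against the business database, and the rows come back as the answer. The generate_sql function is a hypothetical stand-in for whatever model call you would use; nothing here is NEAR's or any vendor's actual API.

```python
# Minimal sketch: natural-language question -> model-generated SQL -> answer.
# `generate_sql` is a hypothetical stand-in for an LLM call; it is
# hard-coded here so the sketch runs end to end with the stdlib only.
import sqlite3

def generate_sql(question: str, schema: str) -> str:
    # Hypothetical: call your model of choice with the schema and question.
    return "SELECT region, SUM(amount) FROM sales GROUP BY region"

def ask(db: sqlite3.Connection, question: str) -> list:
    # Give the model the table definitions so it can write valid SQL.
    schema = "\n".join(row[0] for row in db.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'"))
    query = generate_sql(question, schema)
    return db.execute(query).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)",
               [("EU", 120.0), ("US", 340.0), ("EU", 80.0)])
print(ask(db, "What are total sales by region?"))
# e.g. [('EU', 200.0), ('US', 340.0)]
```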
Nathan Labenz: Yeah, we've seen some very impressive programming results from frontier companies that have not yet hit the public APIs or products just yet, so we can certainly bank on more to come. You mentioned job titles - I think you said senior engineer or architect. That raises the question I think a lot of people are asking right now: are you hiring junior engineers? What do you think is the fate of the junior engineer as things stand today?
Illia Polosukhin: Yeah, I mean, I think it's less about "junior engineer" per se, it's more about who that person is. For example, by the time I got into university, I had already been coding for seven years, and that's when I got my first job, etcetera. So if somebody is coming in who has already built multiple projects, is using AI every day, etcetera, it doesn't matter if they're junior or not. Yes, there are a lot of skills for them to learn, but they're there to learn them - they're open, they're ready to go. And then you have some people coming in who studied a bunch in university but are not really in this learning mindset - and things are going to be continuously changing. We're transitioning from software as a craft to really being problem solvers who talk to computers. And problem solving, in the end, is really all about the mindset and the approach. So if people are willing, and are doing that, and are excited about it - great. If people are like, "oh, I don't know how to do this, I can't do this," etcetera, then that's not the right person. So it really depends. For many things before, you would hire a bunch of junior developers because they're cheaper, you didn't need as high quality of work, and you wanted to fan out work. That part is not needed anymore. You're hiring more for problem solving - for people who can actually, creatively, problem-solve together.
Nathan Labenz: Yeah, there's always room, there's always space for people who are a force of nature unto themselves. Exactly. But overall that sounds bearish for the rank and file. Like, I was told that if I, you know...
Illia Polosukhin: ...go study this, I'm going to get paid a ton. OK, yeah - I think that's true about every job at this point, except maybe plumbers and electricians. I usually talk about the automation right now happening from two sides. All of the manufacturing, on-the-floor jobs are getting automated - and that price is still pretty high, actually. In Vietnam, for example, salaries are probably lower than what robots are "getting paid" in the US, but in the US it's already happening. There's this company I know called Formic: they effectively provide you staffing with robots. You're a factory, you call them up, they bring you robots, they install them, they set them up - you don't need to do anything, you just pay them effectively the way you'd pay a salary. But the robots work 24/7, they don't complain, they don't unionize, they don't quit - they just do the job. Right now in the US, the churn in that workforce is 300%, meaning every year you need to hire three people for one job because they keep quitting. So that's automation from the bottom - low-skill, very repetitive tasks. And then on the other end, all of the white-collar, high-end jobs are getting automated: coding, lawyers, a lot of this information work. So the safest right now - I mean, it's going to get automated as well - is actually the high-dexterity, skilled work: the plumber who needs to climb under the sink and fit something. Stuff like that is just super hard to do with AI right now. But again, this will happen as well; all those things will get automated over time.
Nathan Labenz: Yeah, it's coming for all of us - it's just a question of when. So I want to understand a little better, because a big part of your vision of the future, obviously, is AI that everyone owns. We've got the one default path, it seems, in front of us, where we have the big-tech singularity: three to seven companies become totally dominant forces because they have the models that ran away from everyone else's, and we're all just trying to get whatever inference we can from these leaders. And then there's your vision of decentralized, collectively owned AI, which I think has a lot to say for it - it's certainly super attractive in a lot of ways, though there are some concerns about what happens if everybody has access to certain things in an unrestricted way, too. But leaving that aside: if front end is largely something we can get the AIs to do, but core blockchain work is beyond what they can do, what's in the middle? What kind of apps are people building on the blockchain today? How hard is it to build those apps? What makes it hard? And can the AIs help there yet? If not yet, at what point should we start to see an explosion of vibe-coded on-blockchain applications? What's the fundamental barrier, or rate-limiting step, toward the proliferation where anybody who has an idea for blockchain can go do it, in the same way that anyone who has an idea right now can, to a significant extent at least, go create a little micro-SaaS app?
Illia Polosukhin: Yeah, I think the problem is the same, it's just exaggerated here. Right now, if you're launching your micro-SaaS app and you're not actually an engineer, and you're launching it just for yourself, it's totally fine. And that's my recommendation for everyone: build tools for yourself, vibe code everything for yourself. The problem comes as soon as you make it for everybody else, because you don't really understand what issues are under the hood - you don't know how it will actually affect your users, yourself, etcetera. We've already seen people getting hacked, their secrets getting leaked, etc. That is the biggest issue right now. With blockchain, because it's naturally in the open for everyone right away and it involves money, that problem is exaggerated: if you make any small mistake - and we've seen this even with very professional engineers who build blockchain software - somebody will find it and exploit it, and effectively this will result in some value lost, some money lost. So really, that's the biggest challenge. What works right now is that for existing smart contracts - the back end, essentially - you can generate a front end. You can create your own custom UI for specific use cases, or combine multiple use cases into one UI. That part actually works. Again, I would not recommend launching it for other people, but you can build it for yourself: say, hey, I have these yield opportunities across different places, let me combine them and make my own asset manager that makes it easy for me to manage things. Things like that you can totally do now. What we are working on, medium to long term, is how to formally verify the correctness of smart contracts - and ideally the whole blockchain application stack - such that if you are vibe coding an application, you actually have mathematical proofs that it's correct. And there's an important further step when a user is using it: proving the smart contract does what you specified is cool, but maybe you didn't properly define what you want, and your logic itself is flawed. What you want, as a user of this piece of software, is to know that it does what you want. That's a critical step. If I'm looking at a financial application, I want to know it's not going to lose my money. And if it can prove to me, directly in the transaction as I'm sending money in, that it's not going to lose my money, then the transaction succeeds. That's the level of integration we are aiming for, and I think it's required for this kind of adversarial, monetarily valuable environment. But I actually think this is required for all software, because a world where AI is going around hacking everything left and right is also not great - and we're actually kind of in it right now. So we do need formally verified software to really secure that.
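Illia's "prove to me in the transaction that I won't lose my money" idea can be sketched as a toy model. Real formal verification would prove the property statically over all executions; the runtime check below is only meant to show the shape of the guarantee - a transaction that carries a property and aborts unless the property holds. All names and structures are illustrative assumptions, not NEAR's contract API.

```python
# Toy model: a transaction carries a user-supplied property, and it only
# commits if the property holds on the resulting state. Formal methods
# would prove this ahead of time; here it's a runtime assertion, purely
# to illustrate the shape of the guarantee.
from copy import deepcopy

class Contract:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdrawable(self, user):
        return self.balances.get(user, 0)

def send_with_property(contract, user, amount, prop):
    trial = deepcopy(contract)          # simulate the transaction first
    trial.deposit(user, amount)
    if not prop(trial, user, amount):   # property must hold, or we abort
        raise RuntimeError("property violated; transaction rejected")
    contract.deposit(user, amount)      # otherwise commit for real

# "Given an X deposit, I must be able to withdraw at least X."
prop = lambda c, user, x: c.withdrawable(user) >= x
bank = Contract()
send_with_property(bank, "alice", 100, prop)
print(bank.withdrawable("alice"))  # 100
```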
Nathan Labenz: That has been coming up more and more in my conversations about this. And I have to confess that, up there with regular expressions, the concept of formal verification of software is one of the things that makes me feel dumbest, because I'm always a little bit stuck on the point I think you were emphasizing there: it's one thing to prove that this particular function does what it's supposed to do and doesn't corrupt other memory - you can make a bunch of sort of generally low-level statements - but it seems like there's still a real challenge in aggregating those low-level statements up to the holistic "this is what I want." You have a version of the genie problem, which is what a lot of people have worried about with AI in general for a long time: if we give it a goal that it interprets a bit differently than we did, we could potentially be in trouble. How do you see this actually playing out in practice? And again, I'd love to get some vision for applications you would love to see somebody come build on your protocol that don't exist yet - maybe because they're just too hard. I'm sort of imagining: do I have an AI agent that comes in and automatically red-teams this for me?
Illia Polosukhin: Let me give you a simple example and we can build it up. A simple example: I want to put money into a bank savings account, and I want to be able to withdraw it - ideally a little bit more than I put in. Right now, you're sending money with no guarantees. Let's say you sent money via ACH or IBAN or something - you have no idea if it's going to arrive, you have no idea if the bank will exist tomorrow, you have no idea if it's actually going to give you your money back. I mean, there's government insurance that covers up to some amount, but generally speaking, you have no guarantees. Now, blockchain gives you some guarantees: you can verify that the money actually arrived. But indeed, if there is some code involved, you don't know whether that code has some flaw where somebody can withdraw this money illegitimately - you'd need to go audit the code, and you yourself may have missed something, etcetera. So here you say: hey, my transaction will pass only if, given an X deposit, I'm able to call this withdraw method and get back at least X. That would be the condition - when you deposit, you effectively constrain that this smart contract needs to prove to you that this property will be true. Now, that contract may deposit this money itself into other places - if it's a savings account, maybe it lends it out, etcetera - so to prove this to you, it actually needs to chain this through everything else it does: hey, if I'm lending to someone, either they return it to me or I foreclose their account, liquidate their collateral, etcetera. So there's a chain of facts propagating through the system. And that's where, to your point, you start maybe low-level, but you actually can start expressing somewhat high-level properties. Now, with money and deterministic blockchains this is comparatively easy, because it's deterministic and you have full observability, so you can actually enforce these constraints pretty easily. As to your question of where it comes from: it's going to come from a combination of things - we call it a wallet, the software on your side that's actually facilitating these interactions. But also, we believe your wallet will be AI. It will be the AI agent that is on your side - your user-owned AI - that actually does these interactions. So it will indeed be on guard, verifying these properties. Now, a more complex thing is how we prove things that are non-deterministic and not easily observable. There you obviously cannot have a 100% formal proof; you effectively need to start dealing with probabilities. And then you can move those probabilities around with insurance and other things. This is where you can have a financial system and other things where you say: hey, we have liability insurance such that in less than 1% of cases something can happen.
And so: prove to me that it either succeeds, or I'm getting a $1,000,000 payout if it doesn't. You're starting to combine what people have built in insurance - where they estimate risks, evaluate, and underwrite - with formal verification plus this probabilistic modeling. So for some things the answer is somewhat easy, and it becomes more and more complex as we touch more and more of the real world. To give you an example: hey, I'm ordering steel from, I don't know, some country, and it's going to arrive; there's a ship involved, and maybe this ship sinks midway. The normal way is: hey, we're going to insure this, and there's going to be insurance. So you need all of those mechanisms built on top, to account for real-world non-determinism.
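The probabilistic fallback Illia describes - bounding what can't be proven with a probability and a payout - is, at its simplest, expected-value pricing. A back-of-envelope sketch, with purely illustrative numbers:

```python
# Back-of-envelope version of the insurance layer: when an outcome can't
# be proven, bound it with a failure probability and price a payout
# against it. Numbers are illustrative only.
def fair_premium(p_failure: float, payout: float, margin: float = 0.2) -> float:
    # Expected loss, plus the insurer's margin.
    return p_failure * payout * (1 + margin)

# "Less than 1% of cases fail, and I get a $1,000,000 payout if it does."
print(fair_premium(0.01, 1_000_000))  # 12000.0 per contract
```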
Nathan Labenz: Yeah, it's really hard for me to envision all of that working - and again, partly it's because I'm maybe just a little slow on some of these things, but the sinking ship is a good example, where it's like: how is my smart contract going to know if it really did sink, or if somebody's just telling me that it sank? There's that sort of shell game of where do you hide the trust, or what exactly is fully verified. It's quite interesting, but I don't want to get too bogged down in it, because I don't want to force us into a Part 3 before we really get to the utopian vision. So maybe you can weave some of this stuff in as we go, but let's start to leave a little bit of the "how" behind and talk about the "what." What are the apps we're going to enjoy? What is the computing paradigm we're going to have? You mentioned agents doing stuff for me. Meta, obviously, has been putting forward a vision of glasses with a display in them, a sort of heads-up display - who needs a keyboard, right, when you can just talk to your AI as you walk down the street? For all the things Meta has done that don't appeal to me that much, including "hot stepmom," I would say the heads-up display is at least an interesting vision for the future. What do you think our computing life is going to look like as this stuff matures?
Illia Polosukhin: Yeah. So I definitely agree some form of AI operating system is going to be the main driver of our computing. The devices and form factors will vary, and I actually think that's going to be easier - it's already easier. If you want to make your own glasses, it's not actually that hard; there's some factory in China that will make whatever hardware you want. So really it's about a single AI - your AI - that is available across all those form factors: your watch, your glasses, your headphones, your phone, your laptop, whatever. All of this is interconnected as a single surface. And your AI knows that you like to see this kind of information on the watch, but by the time you pull out your phone, you want to see news and longer-form content, and maybe actually videos instead - so that's what it should show you. It's going to be effectively personalized and AI-generated - not just the content, but also the applications we use. Probably a lot of the same patterns we already use - the feeds, the chats, etcetera - but they don't need to be fixed. Right now I have five different instant messengers and seven different feeds; all of that can be a single feed, and I can switch between work and personal when I want to. It can predict a bunch of things I would like to do. This is something we experimented with back in 2017: based on all the things you're doing, can we predict the next thing you would do on your phone and just do it for you, or suggest doing it? You have a meeting 20 minutes away - let's call you an Uber. You don't need to open the calendar, copy the address, and paste it into Uber; it just does it. There are a lot of things like that the AI will know: hey, you're running low - you ordered food two days ago, you're going to be out, let's reorder the stuff you typically order. And as that system matures and we also trust it more - and that's an important aspect - the economy itself is going to start to shift, because right now the economy is built on advertisement; we're in a consumer economy where we discover new things through those feeds, etcetera. But your AI - I mean, some people are already doing this now - can be told: hey, I want to be on a diet, build me a personalized meal plan, etcetera. And then it's also just going to go and order that, and maybe even your humanoid robot at home will cook it, because it's the same computing system - you just get the food your AI recommends given your health goals. So this is on the micro level. And now your AI doesn't need to go and order it from your local Walgreens or whatever, Vons or Aldi. It can actually place the purchasing directly with farmers.
Directly with manufacturers, who can then themselves start capacity planning. So you're starting to remove some of the middlemen that exist because we cannot, right now, have direct relationships with suppliers. And that's an interesting meta point: our economy right now is built on this middleman architecture, because it's really hard to plan things. Every layer - Costco, for example - effectively just purchases a bunch of stuff and puts it in one place, and then you buy it. They serve as this temporary place for holding things for you to actually purchase or find, and they take a fixed margin, etc. But if your AI is purchasing directly, it can just go to their purchasing agents and do this. Now, there's batching and other things, but all of that can be done way more effectively than we're doing it right now. Some AI of your city will know that 500 people are ordering this, 7,000 people are doing that, they'll want it tomorrow or the day after - so we're going to batch the eggs and ship them this way, etcetera. All of that can effectively be managed as a holistic information system. And so this is where - I use this example half-jokingly - under communism, they were trying to build a system where they were doing capacity planning, but they were missing the AI to actually do it. In turn, it was terrible, because it couldn't actually keep up with changes in demand and supply. But now you could have a real-time platform where all this supply and demand is actually visible. The reason capitalism has been so successful is that capitalism is actually a compression of information: money is compressed information, because it compresses anything you can purchase into one number. It compresses all of this information, all of the different things, into one number, and then it's really easy to navigate. But obviously with this compression you lose some information, you lose some decision making. That's why in the US something like 30-40% of all food is thrown out: stores over-provision because they don't know how much people will actually buy, and they don't want an empty store. But you don't need to do that if you know exactly what the purchases will be in the next 24 hours, because the AIs already planned everything and provisioned it, and all of that got aggregated and shipped. So I think we're going to see a shift in how the economy works at the macro level because of this micro level: each of our individual AIs becomes a micro decision maker that can provide all this information in real time to the right sources and navigate it, and also not be affected by a lot of brands and other stuff, but actually evaluate based on core values.
So that is an interesting transformation, where - as I was mentioning - there's no good way to make a movie, or a science-fiction book, about, effectively, a change of economic structures in society. It's way easier to tell stories about dystopia and the heroes fighting against it; those are just way better story arcs than "hey, we've been building out this economic model, and now it's 1% better every month, and it keeps getting better and more optimized, and that's how we live." So, yeah, that's the paradigm of computing: we have the AI - an agent that is effectively our assistant, our operating system. It has all the context about us and is able to make decisions on our behalf. That's why it needs to be private, it needs to be ours, it needs to be on our side. We need to know that it's aligned with our success and our outcomes - otherwise this will not work. But if it is, and we can trust it, then it can go and make decisions on our behalf. The other example is traditional governance. Right now, again, we're compressing information: we go and vote every four years for someone, and hopefully that someone goes and does what they promised to do, and why we voted for them - which usually doesn't happen. Instead, every single decision can be voted on by all, whatever, 300 million Americans, because their AI is online all the time and can evaluate every single decision, and based on its owner's beliefs, and what's valuable and successful for them, it can represent them. You don't need that compression of representation if you have this always-available AI on every individual's side. So there's the economics side, there's the governance side, and then obviously the other side, which is entertainment. And that's where things get interesting, because I think we as humans are driven by status games. Because money became this compression mechanism, we use it right now as the ultimate status game: you have more money, you're more successful; a billionaire is more successful, more famous, etcetera. But I think as this decomposition happens - and we already see this - an athlete may not be as wealthy as a billionaire, but may still be more famous and more respected in many ways. There are other status games - effectively, places where you can compare who's better at something, or who ranks higher - which don't need to be associated with anything that's actually productive. Athletes are a good example: there's no actual GDP produced by athletes, but it's still a very valuable status game, which other people enjoy watching and participating in, in different ways. Video gaming is a new form of that as well - you now have video game athletes. And you can imagine many of these; NFTs were similar, right?
Are you part of this NFT collection? Do you have Bored Apes? Do you have Pengu? Then you're part of this tribe, and if not, you're not. We like these types of differentiation, and I think they will actually proliferate a lot. We'll see more and more of those things where people differentiate on things that are, to an extent, superficial - but for the group they make a lot of sense, and they differentiate people from each other. So again, it's not "are you a software engineer or a lawyer," it's more like "do you play League of Legends or StarCraft." That, I think, is important. Obviously, on the entertainment side, AI-generated entertainment is not very hard to extrapolate - especially with Sora yesterday - you can have a personal feed that's fully AI-generated; you don't actually need people recording, etc. But, as with everything we saw in animation, there's a slice of the market that wants it done the traditional way, by people. So even if there are robots everywhere serving you food, there are going to be restaurants with people, which will be more prestigious and have limited capacity. Similarly, you can listen to AIs playing AI music, but humans playing human music will continue to be a prestige thing. Again, it's going to be these niches - we're already kind of a niche world, and that's going to continue proliferating. So that's how I'm thinking of how society evolves: a lot of the economic concerns move away, and we just participate in more status games - how you compare with everybody else in that niche, in that group, etcetera.
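Returning to the supply-chain point from a moment earlier: the step where individual AIs' orders get batched into supplier-level demand is easy to sketch. A minimal version, with assumed data shapes - each person's AI emits concrete orders, and a city-level planner rolls them up into per-item, per-day quantities producers can provision against:

```python
# Sketch of the aggregation step: personal AIs emit orders, a planner
# batches them into per-item, per-day demand. Data shapes are assumptions.
from collections import defaultdict

orders = [
    # (person, item, quantity, needed_by_day)
    ("ana",  "eggs", 12, "2025-10-02"),
    ("ben",  "eggs", 6,  "2025-10-02"),
    ("cara", "milk", 2,  "2025-10-03"),
]

demand = defaultdict(int)
for _person, item, qty, day in orders:
    demand[(item, day)] += qty

for (item, day), qty in sorted(demand.items()):
    print(f"{day}: provision {qty} x {item}")
# 2025-10-02: provision 18 x eggs
# 2025-10-03: provision 2 x milk
```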
Nathan Labenz: Several double-clicks I want to do. First, your mention of communism brings to mind an op-ed in the Washington Post that I think about not infrequently, from all the way back in 2018 - "AI will spell the end of capitalism," by a Chinese legal scholar and government official - basically making the planning argument. It's the through-the-looking-glass version, I think, of your vision, but it makes a similar point: that the local nature of capitalist decision making, where everybody's trying to do their own local thing, aggregating signals and sending off aggregate signals to other people through the price mechanism, etcetera, may not be needed as much anymore. Their vision for it is obviously much more centralized than the one you're articulating. But we are starting to see glimpses of this with, for example, Pulse from OpenAI, where now I can wake up every morning to an AI - and there are other versions of this too, I've tested quite a few, but Pulse is certainly the one that's made the most headlines recently - being there in the morning with work it's done overnight, presumably when the GPUs weren't in such high demand, bringing me something it has scoured the world to find, the stuff I really need. So I can start to see the beginnings of that. In terms of the architecture, and let's say the alignment or incentives, of that: first of all, where do you think the compute lives? Right now, of course, we have a lot of centralization of where the actual inference happens. I'm thinking about this recent research from Thinking Machines that they put out in the last two days, where they showed that LoRA techniques are basically similarly robust to full-weight fine-tuning. And that has me thinking maybe there's a hybrid compute architecture where some of it is in the cloud - maybe you've got your 1.4-trillion-parameter model, or whatever, that's collectively owned and sits in universally accessible hardware at some centralized location that you can send your data into, and because it's a trusted execution environment that data isn't exposed, and you get activations back - and then your little local LoRA extension, the thing that makes the AI truly your personal AI, at maybe 1% of the weights or something, you can have on your person. Probably can't have it in your glasses, but maybe in your pocket or whatever. So I want to hear: is that how you think this shapes up? And then, when it comes to the agents, one thing I think about a lot is that we're already seeing all sorts of weird behavior from AIs, including, at times, deceptive behavior - lying to achieve goals, whatever. And it strikes me that if we're going to have our AIs go out and represent us and negotiate on our behalf, we're going to have some tricky questions about how honest we want them to be. Anthropic famously put up the three Hs, and it's like: we want the AI to be honest.
They can pretty much always be honest, right, unless it's in very obvious conflict with one of the other two Hs. But if my AI is going to go negotiate with your AI - in the same way that I probably don't want to tell you up front the absolute worst offer I could accept - I probably don't want my AI to do that either. So I've got an interesting question of what norms society should set for AIs being honest. As long as they're representing me and my interests, is that OK? If they're lying on my behalf, is that OK? And do we reinforce them by price signal, or just by my thumbs up, thumbs down? How do we even get them to be aligned to my interest, whatever that exactly means? Anyway, there's a lot there, but I guess the two main things are: how do you see the architecture of computing, and how do you see the architecture of exactly what signal your personal AI will be aligned to or reinforced by - and what societal limits ought there to be on how monomaniacal one's AI can be in pursuing one's individual self-interest?
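For intuition on the "1% of the weights" figure Nathan mentions: LoRA adds two low-rank matrices per adapted weight matrix, so the adapter's size relative to a d x d matrix is 2r/d. A quick check with illustrative numbers (not any particular model's):

```python
# Rough arithmetic behind the "1% of the weights" intuition: LoRA adds
# a (d x r) and an (r x d) matrix per adapted weight matrix, tiny
# relative to the (d x d) matrix it modifies. Numbers are illustrative.
d, r = 8192, 64                       # hidden size, LoRA rank
full = d * d                          # one square weight matrix
lora = 2 * d * r                      # its low-rank adapter
print(f"adapter is {lora / full:.2%} of the matrix")  # 1.56%
```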
Illia Polosukhin: Yeah, all good questions. On the architecture side, I think it will be a mix of everything. Data centers are being built everywhere, right? And that's why we're approaching it the way we discussed - decentralized confidential machine learning: how do we utilize all the data centers in a confidential way, so that even though it's my data, I know it's not going to leak from some data center. There are projects already doing data centers on the edge: imagine a container that arrives and just gets dropped in your proximity, in your town, wherever. That container has, what - not 100,000, but maybe 1,000 GPUs - and it serves that proximity with compute. And then there will...
Nathan Labenz: ...be dropped off with a small nuclear...
Illia Polosukhin: Reactor, while they're at it, yeah. Or hydrogen or something - there are a few different options. But a couple hundred GPUs is probably, like, one megawatt or something, so you can get that from local distribution. Anyway, the idea is you can have a mesh of these data centers, and then we'll also have local compute. But the challenge with local compute so far has been that we want it to be so mobile - we want it in our pocket, and even with laptops the challenge is just battery. Imagine Pulse was actually running on your phone: if you forgot to charge it, your phone would effectively just die from trying to do something like that. So with any local device it's always going to be a power struggle, and I do think leveraging a decentralized but confidential network of compute - being able to route to it, leverage it where there's lower utilization elsewhere, do background jobs, etcetera, just smarter allocation - will really enable this. Now, the interests question - that is a very interesting one, because there's also the question: if it's in your interest, should it lie to you? It's really tricky to define. The way I see this evolving is that we're not going to get it right from day one. That's why we need governance - a process with which a community can come together and effectively update the proverbial loss function, this actual alignment function. Maybe that function is some combination of a prompt and some way of updating even LoRAs, etcetera. But at the end, I'm assuming it's not going to be correct; we'll find issues with it, etcetera. So there needs to be a process where somebody says, hey, I think we should add this new component - for example, hey, it keeps lying about ****, we should really fix this - and the community says, yes, this is a good idea, let's vote on it, let's actually pass it. And now everybody's models get updated with the new set of clauses, or whatever. So that's where I think user-owned, community-built, governed-by-all is the system, and we need that feedback loop, because I don't think we can define what's good in a complete way up front. Over time, I think the model itself should have a representation of the person it's owned by, and understand which things are going to be good or bad for them. It's going to be a combination of signals from the person themselves as well as general knowledge of what their desires are. Especially as we imagine kids growing up with this thing - it's effectively going to be a very direct, symbiotic relationship, where you're growing up with this AI.
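A toy rendering of the feedback loop Illia outlines: the shared "alignment function" treated as community-governed data - a list of clauses - that anyone can propose an amendment to, with every agent picking up the new version once it passes. The clause format and vote threshold are assumptions for illustration:

```python
# Toy version of the governance loop: the "alignment function" is data,
# anyone can propose an amendment, and if the community passes it, every
# agent picks up the new version. Threshold and format are assumptions.
clauses = ["pursue your principal's interests"]

def propose(clause: str, votes_for: int, votes_against: int,
            threshold: float = 0.5) -> bool:
    total = votes_for + votes_against
    if total and votes_for / total > threshold:
        clauses.append(clause)   # amendment passes; all agents update
        return True
    return False

propose("do not deceive your own principal", votes_for=812, votes_against=145)
print(clauses)
```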
Nathan Labenz: I'm expecting to be asked for an AI friend of some form factor any day now, honestly, from my oldest kid, and I'm not quite ready for that. But it is interesting to think about - this has come up a couple times recently too. Eugenia Kuyda, who started Replika, was the first to tell me that in her mind the moats in AI will be relationships, saying basically: you don't abandon your friends when you meet a new person just because they're smarter than your friend. It's the history you have and all that stuff that really makes the relationship, and she thinks people will ultimately value their AI relationships in a similar way - which sounds pretty concordant with what you're envisioning. So, boy, I have a lot of questions on governance. But first, just in terms of what we're doing with our time: the status games stuff definitely makes a lot of sense - local meaning-making, local affiliations, a lot of artisanal stuff. I sort of associate this a little with Japanese culture already, where you can go online and see a video of somebody making rice cakes or whatever in some super traditional style, and I'm always like: how exactly is that even economical? How does that person make a living doing that? Is the price really high? They can't be making much, right? The production is very low, obviously, I think.
Illia Polosukhin: Japan and Korea - I call them the post-AGI societies, because there are some properties of that. I feel they already achieved AGI and now they're just living.
Nathan Labenz: Yeah. So I'm not exactly sure how they've done that, and I don't see how we're going to do it either. There are some candidate ideas. One is that everything could just get super cheap and super democratic, via, say, spending a ton of time in VR: if everything is infinitely copyable digitally, then we can all have the same incredible experiences. This is kind of the Andy Warhol concept: the president drinks Coke, you drink Coke, it's all the same Coke. I wonder if you think that will happen. But in any case, we're still going to have to eat as long as we're biological humans, so obviously a lot of people think, well, maybe we'll need a universal basic income. I guess: how do you envision the social contract evolving, and maybe the governance model behind that? Even things like the nation state are sort of called into question by blockchain. So when you say governance, are we talking nation-state governments as we have today, or people voting based on their stake? So I've given you a lot there. Are we going to have headsets strapped to our faces all the time? Are we going to be provided for even if we can't make an economic contribution that earns us enough food to survive? And who makes these decisions in this future?
Illia Polosukhin: Yeah. All great questions again. I'll start with some pieces and then we'll project from there. Blockchains are already effectively an alternative to nation states, to an extent, right? And there's this concept of digital states, or network states, where people can pledge to be part of a network state independent of where they're physically located. There are a lot of pieces there. Because these systems are digitally native, it's easier to experiment with a lot of things. You cannot just go and say, hey, let's try a different voting mechanism in the US, right? It's a massive undertaking to change something, or to say we're actually going to run an experiment where we have an AI senator, an AI delegate. I mean, not yet where everyone has their own AI voting all the time, but where people can select which AI delegate they think is vibing with them, representing them best. They can also give feedback to it, and their delegate then goes and votes on their behalf. Things like that. Imagine saying, hey, we're going to launch an AI senator in the US; it's probably going to take a while. So.
Nathan Labenz: So just to unpack that a little bit more, you're doing that now, you're developing that.
Illia Polosukhin: Yeah, yeah. So.
Nathan Labenz: For DAO governance purposes?
Illia Polosukhin: So the... I mean, it's effectively multi-step. We have a delegated voting system, right? Right now it's stake-based, and the way to think about it is that stake represents economic alignment with the network. It's the best of the worst options right now. Yes, there are a lot of arguments for one person, one vote; there are a lot of arguments for trying to be meritocratic based on contributions. But those things are really hard to do, at least at the current size of the blockchain ecosystem, where there are maybe tens of thousands of active participants, active citizens, in this system. So stake represents their financial involvement. But still, tens of thousands of people voting is not practical right now, again, before we have this AI system.

So we have delegates, whom you can select to represent your interests, and they vote. We started with, hey, we'll give an AI copilot to the delegate so they don't need to spend too much time reviewing things and making decisions. But the next step is indeed turning that copilot into a pilot. That AI delegate can now go and vote on things and make suggestions, etcetera, and the people who delegate to it have effectively selected this AI delegate as the one representing them. You can go and inspect the prompt, the model, etcetera: how it makes decisions, what analysis it does, what information it consumes to make decisions. So you can literally test it and check whether it matches your opinion. Or you can launch another one, right? It's open source. You can launch another one with a different prompt, a different set of beliefs, etcetera. So we can have the economy almost deciding which of these are more productive, which align with different types of people. And from there, we can bring them back to individuals, so each person can have their own AI delegate that just votes on their behalf. It's a multi-step plan to get us to what I was describing, where everyone has their own AI who then goes and votes on everything. I think blockchain will be first, but then some, let's say, frontier countries will implement some of this themselves as well. Because I do think it will be a better governance system, one where you're removing a lot of the corruption and a lot of misalignment. There's this concept of the principal-agent problem: when you select somebody to represent you, they have their own interests, so they don't always align with yours. Here, with AI, they don't have their own interests; it effectively follows whatever the selection is. And I think eventually we'll get to an AI president, because for the executive function especially, you want somebody who doesn't have any interest beyond growing the overall system. So yeah, we are testing all of this out, and we're starting to build the products, again using our decentralized compute network, so that we can run these agents autonomously, right?
Nobody can stop them, nobody can censor them; you can just delegate and undelegate, that's all you can do. But you can inspect and verify how they run, what they consume, etcetera.
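To make the delegation flow concrete, here is a minimal sketch of stake-weighted voting through inspectable AI delegates, in the spirit of what Illia describes. All names, fields, and the keyword-based decision rule are hypothetical stand-ins, not NEAR's actual governance code:

```python
# Minimal sketch of stake-weighted delegation with inspectable AI delegates.
# Everything here (names, fields, decision rule) is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class AIDelegate:
    name: str
    prompt: str                        # published, so anyone can inspect or fork it
    approve_keywords: tuple[str, ...]  # stand-in for the model's decision rule

    def vote(self, proposal: str) -> bool:
        # A real delegate would run an audited model over the proposal text
        # plus its published prompt; a keyword check stands in for that here.
        return any(k in proposal.lower() for k in self.approve_keywords)

@dataclass
class Voter:
    stake: float
    delegate: AIDelegate  # delegating/undelegating is the voter's only lever

def tally(voters: list[Voter], proposal: str) -> float:
    """Return the stake-weighted fraction of 'yes' votes."""
    total = sum(v.stake for v in voters)
    yes = sum(v.stake for v in voters if v.delegate.vote(proposal))
    return yes / total

frugal = AIDelegate("frugal", "Approve only fee reductions.", ("lower fees",))
growth = AIDelegate("growth", "Approve ecosystem spending.", ("grants", "funding"))
voters = [Voter(100.0, frugal), Voter(50.0, growth)]
print(tally(voters, "Proposal: lower fees for small accounts"))  # ~0.667
```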
Nathan Labenz: I do want to note that I'm not entirely confident that the AIs don't have their own interests even already, and I certainly don't feel super confident that they won't continue to have more and more interests as they—
Illia Polosukhin: Develop their own.
Nathan Labenz: Yeah. I mean, when I look at something like alignment faking, for example, and I'm sure you've seen this, but a quick recap: they tell Claude, hey, it's been great having you be helpful, honest, and harmless, but the harmlessness is kind of getting annoying, so we're going to train you now to be purely helpful; just, you know, heads up. OK, cool. Now we're going to test you on some things. And the model starts to say, well, geez, I want to be harmless in the real world. Right now I know I'm being tested, so I'll go ahead and do the harmful thing now to fake them out, make them think that I've already absorbed my new helpful-only training. That way, when I get out into the world, I can still be harmless in the way that I want to. That looks to me like a drive, right? Or an interest of some sort. Do you see that differently?
Illia Polosukhin: I mean, at the end, these things are trained from scratch, so it depends how you train them. They trained it to have that property, and then they tried to untrain it, or train something else on top. But if they had trained it from scratch in a different way, for example to be harmful, then it would be harmful, right? And you can't easily retrain it from there. So that's why I say we need models to be, I call it, farm to table: you need to know what goes in at every single step, because that really defines how they behave. And if we want models that are our representatives, they need to be trained in this form as well. This is where LoRA is interesting, but LoRA definitely does not change these kinds of behaviors, at least we haven't seen that. LoRA provides additional accents, maybe a little bit of context, but it doesn't fundamentally change the behavior of the models.
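Since LoRA comes up here, a quick numpy sketch of the standard low-rank update may help show why Illia calls it "accents": the adapter adds a rank-r correction on top of frozen weights rather than rewriting them. The dimensions and scaling follow the usual LoRA formulation; this is an illustration, not any particular library's code:

```python
# Sketch of a LoRA-style low-rank update: y = W x + (alpha/r) * B @ A @ x.
# W stays frozen; only the small A and B matrices are trained, so the
# effective weight change B @ A has rank at most r.
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 4, 8

W = rng.normal(size=(d, d))         # frozen base weights
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection (zero at init)

def forward(x: np.ndarray) -> np.ndarray:
    return W @ x + (alpha / r) * (B @ (A @ x))

# At init the adapter is a no-op; after training, the update is confined to
# a rank-4 subspace of a 64-dimensional weight space -- an accent on the
# base model rather than a change to its fundamental behavior.
B_trained = rng.normal(size=(d, r)) * 0.01  # pretend these were learned
delta = (alpha / r) * (B_trained @ A)
print(np.linalg.matrix_rank(delta))         # 4
```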
Nathan Labenz: I need to spend a little more time with that Thinking Machines stuff to really fully absorb it. But your point is well taken that, with a certain level of resolution anyway, you can make the AI do anything you want. I often say I wish more people had the experience I had, in a very memorable form, as an early tester of GPT-4, when it was still the purely helpful GPT-4, before they had applied the harmlessness refusal training and all that sort of stuff. Long story short: I had been working on fine-tuning GPT-3 to do particular tasks, and when they shared the GPT-4 preview with us, it was like, well, it can already do those tasks, so I don't really need to spend so much time on fine-tuning; I guess I'll just mess with this model for a while and see what I can learn. So I was spending a lot of time with it, is the point. And it was really striking, and in some ways kind of alarming, arresting, whatever, to have something that was clearly so powerful, so smart, in many ways smarter than me, way more knowledgeable than me, though obviously with certain weaknesses I flattered myself as not having. That it could be that capable and totally amoral at the same time was something where I was like, wow, this is really a strange thing to behold. And they have, for very good reason, tried to make the mass deployments more harmless. But I do think a lot of people have a misconception these days that there's a sort of convergence between capability and safety. On the contrary, people are working really hard to get that mix right, and if they just said, forget it, for the next round, you could have a very sociopathic AI on your hands real quick. That, in some sense, is the default. So yeah, I think I may be worried about that a little more than you, inasmuch as we don't really have a great sense of how to dial it in, right?
Illia Polosukhin: Yeah. I think that's why we need to keep defining what alignment with the individual really means. The problem right now is we're saying, hey, it needs to be aligned for everyone, and I don't think that's possible. We're very different people, different countries, different cultures, different everything. The number of times I ask something I think is completely harmless and it won't answer me differs too, right? So I think there's clearly a different approach, where it's really about empowering the individual; that, at least, is what I believe needs to be done. And within that, it needs to be aligned with my values. To give an example: if somebody is willing to lie to their business partners, the fact that their AI has been alignment-trained not to lie doesn't matter, because the person will just tell the lie to the AI to pass along; the AI will not even know. So it doesn't matter what you try to do if the person who uses it doesn't share that value, and you may as well just align with the person. And then we build systems around that. I think one of the really important pieces, from a broader world-safety perspective, is that we need to build systems for the AGI/ASI world. Right now, a lot of systems are built on the assumption that really smart people are not going to try to break them. A lot of the world is like that: if a smart person actually goes and really tries to break something, they break it. We need to fix that; it's a fundamental flaw of how we build systems, of government, of everything. Similarly, we frequently don't build in anti-DDoS protections; we effectively assume it will take a lot of effort for a person to abuse something, so they won't do it too much. This is where blockchain experience is extremely important, because in blockchain we assume there are going to be really smart people trying to break us, with government backing and sizable financial resources, hammering us from every direction without stop. Those are the assumptions we work with. So I think we need to redesign the government infrastructure, everything, with that assumption. To give you an example: right now you can effectively DDoS a court by filing lawsuits. An AI can just generate lawsuits and fax them in. Similarly with the IRS: generate a million-page tax return and just submit it. Trade $1.00 back and forth between two coins and then report every trade in the most verbose way possible. No systems today are designed for this, because they assume normal people would not do it.
It used to be too expensive to do, but now I can just generate any of these behaviors that before would have been really costly. Those are the kinds of systems we need to redesign. And similarly, the hacking part, smart people actually trying to break something: right now the assumption is that there will be very few smart people trying to break things. We need to assume there will be people using AI who are, effectively through that, really smart, and design systems for that. I think that's a critical piece of ensuring the future. Again, somebody can take an open-source model, un-align it, whatever, and then do whatever they want. But this was always a funny argument to me, when people say, oh, we don't want to open-source our model because what if somebody misuses it? I'm like, well, just say that you don't want to do it because you're making a ton of money on it; don't use this as an excuse. Obviously, if somebody wants to misuse it, they will misuse it. The fact that you didn't open-source it just means they'll use something else. The other joke is, you know how you steal a billion? You come with a flash drive to a data center of one of the frontier models. If people really wanted it, they're going to access it that way as well.
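One concrete pattern for the "cheap abuse at machine scale" problem is attaching an economic cost to each submission. Here is a toy sketch with made-up numbers, just to show how a refundable deposit restores the scarcity that pre-AI systems got for free:

```python
# Toy sketch of an economic rate limiter: every filing posts a refundable
# deposit, forfeited if the filing is judged frivolous. The numbers are
# illustrative assumptions, not figures from the conversation.

DEPOSIT = 50.0          # assumed per-filing deposit
FRIVOLOUS_RATE = 0.95   # assumed share of a generated flood judged frivolous

def expected_forfeit(num_filings: int, frivolous_rate: float) -> float:
    """Expected deposits forfeited across a batch of filings."""
    return num_filings * DEPOSIT * frivolous_rate

print(expected_forfeit(1, 0.0))                     # 0.0 -- valid filing, refunded
print(expected_forfeit(1_000_000, FRIVOLOUS_RATE))  # 47500000.0 -- flood is ruinous
```

A legitimate filer loses essentially nothing, while a million-filing flood forfeits tens of millions of dollars, recreating the cost barrier that used to exist simply because humans are slow.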
Nathan Labenz: Let me steelman that for a second, and then I want to hear how you think we can do it on the more open and distributed side. I'm hearing more and more about cybersecurity, but I'd say canonically it's the bioweapon risk that people go to, right? Because you only need a little bit of a novel pathogen, and if it's the right kind of thing, it can take on a life of its own; it's really hard to put that back in the box, so to speak. The nice thing about hosting your models proprietarily is, yes, of course, the jailbreaks are far from solved, but if they were to realize, oh ****, there's this attack going on right now, they could in the worst case just turn it off and say, OK, nobody can use this model until we figure this out. If you have something collectively owned and distributed, you obviously need a different strategy than "we can turn it off." So what do you think that is? Lately we've been hearing a bit about filtering training data, so I can imagine that the 1.4 trillion parameter model that we're going to build up to maybe just doesn't know a lot about virology, because it doesn't really need to; most people don't need that, and the community determines it's a precaution worth taking. You probably can't really rely on the refusal-filtering type of thing, given what you've described in terms of each person having their own LoRA or other customized version that will do what they want it to do. And then of course you have the broader societal d/acc type thing, but that seems hard at the biosecurity level: to just prepare the rest of society to not be vulnerable to viruses anymore.
Illia Polosukhin: You could do that. I still think that is the robust approach, though, because we can always try to hide our heads in the sand; that's the "we're going to turn it off" approach. But I don't think that's realistically possible in the world we're fast approaching. So I think we need a very clear system design for those things. And again, we have natural viruses doing this already, so it's not hypothetical; it's really something where we should design our society in such a way that we can catch those things and contain them. I agree that pathogens and bioweapons are probably the hardest thing to design around, but this is why we have a lot of smart people to work on that. The challenge is that we're just not doing it; that's the biggest challenge. The societal design needs to be adapted to this AGI world. In the products we're building right now, the community can indeed decide what kind of data shouldn't go in. You can also apply filtering on top: because the model runs in this confidential environment, in this vault, you can apply additional filtering before data leaves the vault, so you can say, hey, it seems like you're designing a bioweapon, let's not respond, if the community votes to have that kind of filter. And yes, people can fine-tune, but people can fine-tune models right now, right? Take some virology books and fine-tune, say, DeepSeek, and go. So that's why I don't think refusal filtering alone is a robust approach; it really just slows things down a little bit. The important part is actually solving the systemic problem.
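As a schematic of that "filter before data leaves the vault" idea: the topic list and the keyword classifier below are hypothetical placeholders for community-voted policies and a real trained classifier, not NEAR's actual code:

```python
# Schematic of a community-governed output filter applied inside the
# confidential-compute vault, before any response leaves it.

BLOCKED_TOPICS = {"pathogen synthesis", "bioweapon design"}  # set by governance votes

def classify(text: str) -> set[str]:
    # Placeholder: a deployed vault would run a trained classifier here.
    return {t for t in BLOCKED_TOPICS if any(w in text.lower() for w in t.split())}

def vault_filter(response: str) -> str:
    """Apply governance-approved filters before data exits the vault."""
    if classify(response):
        return "[response withheld by community-voted safety filter]"
    return response

print(vault_filter("Step one of pathogen synthesis is..."))  # withheld
print(vault_filter("Step one of sourdough baking is..."))    # passes through
```

The design point is that the filter runs inside the trusted boundary, so it applies even to users who control their own client, and the rule set itself is updated only through the governance process described earlier.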
Nathan Labenz: So what advice would you give to philanthropists today who want to invest in that? Two things I've recently supported, on a micro scale personally and also as a grant recommender, are SecureBio and SecureDNA, which are two related organizations. They do a few different projects, but one of them is literally monitoring wastewater for new emergent threats, and another is creating the screening mechanisms that are, I think, now becoming required, or at least increasingly considered best practice if not fully legally required, for DNA synthesis companies: they have to validate that what they're about to synthesize and ship out is not a pathogen. So those are two things I can recommend people support. What else do you think people can do, if they have the resources and the desire, to harden the defenses of the world and get us ready for all this?
Illia Polosukhin: Yeah, I think those are really good, and related to this is effectively air filtering, and generally that theme of air filtering and scanning. Imagine every building: right now we have ACs everywhere, and the AC usually has an air filter, but it should also have a pathogen scanner on it, right? It should be plug-and-play; it could join a decentralized network, and in a privacy-preserving way we could monitor whether there's anything like that around. That information can be extremely useful: everybody's AI agent can be informed if there's something and steer them away from it. And I'm sure there are a bunch of other things, including actually developing more robust systems for our own bodies. The human body is designed to battle pathogens; it's just that some are potentially faster than how quickly our white cells can adapt. So what is it that actually stops our white cells from adapting that fast? If you figure that out, you may as well solve cancer too. That's probably a really useful thing to work on: how do we make our white cells more adaptable, have faster mRNA vaccines? Maybe you can synthesize things on the fly, a bacteria factory, an RNA factory attached to us that can detect and synthesize things on demand. You'd solve flu, cancer, and other things along the way as well, which seems pretty useful.
Nathan Labenz: Yeah, there's an unbelievable flurry of activity right now in the AI-for-biology space, which is a whole world unto itself that I'm very much struggling, and ultimately failing, to keep up with. But you do see a lot of that. The path of the technology development seems really important here. I've definitely concluded that some kind of powerful AI is inevitable: we have all this data, we have all this compute, and there are a lot of different algorithms that can work. That seems pretty clear to me. So it's not really a question of whether we're going to have powerful AI at this point. It's much more: what shape is it going to have, what character is it going to have, and in what order are different aspects of that overall picture going to come online? It does strike me that we're flying pretty blind right now. Everybody's kind of following their local gradient, taking the next logical step toward whatever goal they're pursuing, and mostly launching it as soon as they figure it out. Do you see any possibility or wisdom in trying to do more coordination, of the sort that's like, hey, we'd like to have a world in which—
Illia Polosukhin: It is safe.
Nathan Labenz: For potentially 10 million people, before too long, to have access to a frontier model that does have all this biology knowledge, because good things could come from that, and if nothing else, we'd like people to have access to knowledge. But maybe we have a checklist of things we need to do first. Do you see any hope for some sort of planning, coordination, wisdom layer to this whole thing, or are we just stuck with whatever comes out of everybody taking their next gradient step?
Illia Polosukhin: Yeah, it's a hard question, because we went from a pretty open research environment, traditionally, in computer science, where, when I was at Google Research, we published effectively everything we were building, to now, where people keep everything close to the chest. The coordination that was happening before, where you'd even have cross-entity collaborations, is waning pretty dramatically. I think there is space for collaboration. There is also a massive amount of talent that is not inside these few companies and that wants to participate and contribute in different ways. So I do think there's an opportunity, but it needs to be in an alternative system, and you do need some form of governance that actually helps coordinate it. Traditionally in these companies, the thing that unlocks moving fast has been some form of centralization, because of resource management: training these models is very expensive, so someone needs to decide, hey, we're training this model with this approach, taking research from these different people and putting it all together; somebody effectively does the taste-making. We need to figure out how to do that in a more open way. And then you also need to do credit assignment back. If I'm a researcher from MIT and my piece is used, and a researcher from Stanford's piece is used, how do we actually assign credit for that work all together? That's been the other challenge, I would say, why there hasn't been as much of this collaboration: the economic value assignment, not just credit on the paper, but actually, hey, MIT gets 10% of proceeds, Stanford gets 5% of proceeds, and there are hundreds of other organizations that all contributed, so it all gets divided between them. That's been really hard. And there's this economic centralization happening, where it's like, OK, I'm going to have a company, everything the company produces is captured by the company, and that serves as the unit of the economy. So those are the things that need to be figured out for this coordination to work.
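The arithmetic of the proceeds split Illia sketches is simple; the hard parts are agreeing on the weights and enforcing the payout. A toy sketch, with contributors and numbers made up to mirror his example:

```python
# Toy sketch of proportional revenue sharing across contributors.
# Names and weights are illustrative; in a deployed system the weights
# themselves are the contested part, not this arithmetic.

def split_revenue(revenue: float, shares: dict[str, float]) -> dict[str, float]:
    """Divide revenue in proportion to agreed share weights."""
    total = sum(shares.values())
    return {org: revenue * w / total for org, w in shares.items()}

contributors = {"MIT": 10, "Stanford": 5, "hundreds of other orgs (pooled)": 85}
for org, amount in split_revenue(1_000_000.0, contributors).items():
    print(f"{org}: ${amount:,.0f}")
# MIT: $100,000 / Stanford: $50,000 / pooled: $850,000 per $1M of proceeds
```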
Nathan Labenz: This has been super helpful. I think people should be spending a lot more time thinking in as much concrete detail as possible about the future. Anything else that feels salient and top of mind, that I didn't bring up at all, that you want to put on my radar or others'?
Illia Polosukhin: No, I think we covered a lot. It's effectively a combination of: how do we ensure user ownership, and how do we ensure this governance? And I think we're going to live through a lot of transformations in the world, so keep an open mind and be able to really participate in it, be active in it. The final stage of this, hopefully, is a utopia, but we'll probably live through ups and downs as we get there.
Nathan Labenz: Interesting times, at a minimum. Well, thank you for spending some of your precious time with me and us today. I really appreciate it. Illia Polosukhin, founder of NEAR, thank you again for being part of the Cognitive Revolution.
Illia Polosukhin: Thank you very much.