Sovereign AI: Geopolitical Strategy & Industrial Policy for Countries 3-193, with Anjney Midha, a16z
Today, Anjney Midha, General Partner at a16z, joins The Cognitive Revolution to discuss sovereign AI and China's growing semiconductor capabilities.
Check out our sponsors: Gemini CLI, Labelbox, Oracle Cloud Infrastructure, Shopify.
Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at https://notion.com/lp/nathan
- What is Sovereign AI? Varies by stakeholder - for enterprises it means controlling where AI workloads run, while for nation-states it represents both technical independence and cultural alignment with local values
- Semiconductor Competition with China: "Chip sanctions on China have resulted in an enormous doubling down of local investment in Huawei's ecosystem... they're in a full-on tear to try to decouple themselves from American chips"
- Middle Path on American AI Policy: Midha advocates for a Marshall Plan for AI where countries maintain sovereignty over models while partnering with the US on semiconductor infrastructure
- European AI Alignment: "It's a huge win for America that MARA in Europe is going with American chips and not Huawei chips... the European continent has been courted by the Chinese semiconductor industry like never before"
- The Race to Close the Gap: "Huawei is in a much stronger position today than it was three years ago... They will be able to close the gap and because workloads are becoming more efficient, they can decouple at least the inference part of their ecosystem from the US within two to three years"
- Cultural Independence in AI: Nations seek models that align with their values while maintaining technical independence - requiring a nuanced approach to global AI partnerships
Links:
Anjney Midha & Jensen Huang on Winning the AI Race https://a16z.com/podcast/jense...
Sponsors:
Gemini CLI: Open-source, lightweight utility for direct Gemini access—find Gemini CLI on GitHub.
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Sponsor: Gemini CLI
(00:31) About the Episode
(05:45) Introduction and Defining Sovereign AI
(11:00) Enterprise AI Infrastructure Decisions (Part 1)
(20:38) Sponsors: Labelbox | Oracle Cloud Infrastructure
(23:15) Enterprise AI Infrastructure Decisions (Part 2)
(24:26) Rebundling Technology and Insurance
(29:26) National AI Talent Competition
(38:33) Middle East Petrocompute Strategy (Part 1)
(38:45) Sponsor: Shopify
(40:41) Middle East Petrocompute Strategy (Part 2)
(53:08) Open vs Closed Models
(59:30) AI Infrastructure Distribution Challenges
(01:08:04) Sovereign Infrastructure Investment Decisions
(01:15:37) China's Chip Capabilities Evolution
(01:27:35) Middle East AI Deals
(01:37:36) Security and Governance Considerations
(01:43:19) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Sponsor (0:00) This podcast is supported by Google. Hey, y'all. I'm Brian J. Salva, product lead and maintainer for Gemini CLI, a lightweight utility that gives you bare metal access to Gemini for software development, research, and tons of creative use cases. We're building Gemini CLI as an open source project where community members like you have contributed thousands of ideas and bug fixes. Check out Gemini CLI on GitHub to get started.
Nathan Labenz (0:30) Hello, and welcome back to The Cognitive Revolution. Today, we're exploring a fundamental question that governments around the world are wrestling with. If the United States and China are going to continue to dominate the fundamental AI inputs of talent, data, and compute, and thus own the AI capabilities frontier, what exactly should everyone else do? My guest is Anjney Midha, general partner at Andreessen Horowitz, who brings deep software industry investment expertise and a remarkable ability to channel diverse national perspectives on the opportunities and challenges of AI to this conversation. We begin by considering what sovereign AI means to the enterprise, which I honestly hadn't planned on coming in, but does make a lot of sense considering the degree to which companies are both driving investment and adoption around the world and informing their respective governments' priorities. Here, it seems regulatory questions around which companies have to share what information with which governments under what circumstances loom so large that United States-based hyperscalers are beginning to face new competition from national champion AI data factory companies like the ones recently announced in France, the UAE, and Saudi Arabia. With the conversation grounded in that reality, we then turn to what countries should do and are doing. On the talent front, we agree that there's really no way for most countries to compete at the frontier today. But Anjney does argue that governments should play for longer timelines and try to both partner for access to the fruits of frontier labor now and build the complementary and last-mile local delivery talent that they'll need to be successful on a 10 to 20 year time scale. On data, Anjney makes what I think is really an inarguable case that all governments should be tokenizing their cultures. And we then debate whether they should expect better results by partnering with frontier developers or by trying to own fine-tuning and application development locally. Though fortunately for them, as long as frontier open models continue to drop, those options are not mutually exclusive. On compute, it seems that most countries will be asking themselves a complicated mix of questions, including: what level of independence do we really need and want, and what mix of partnership and local buildout is required to get there? Just how reliable are the US and China as partners? And do we have to align with one or the other, or can we play both sides? Obviously, that last question has become much more complicated recently as the United States has shaken the foundations of its global alliances while in the midst of intensifying strategic competition with China. Still, aside from a few countries like the UAE and Saudi Arabia that have extremely deep balance sheets and can offer regulatory arbitrage in exchange for being cut into the global AI buildout on a strategic level, my sense coming out of this conversation is that for most countries, some limited investment in critical infrastructure would be prudent. But beyond that, they're probably better off letting the market develop naturally and accepting really quite a bit of dependence on the United States. At the very least, I sincerely hope that the US proves to be a good bet as an AI partner, first and foremost, when it comes to responsible development of the frontier technology itself. To be honest, if it doesn't, I feel like the rest of the world might have even bigger problems to contend with.
Obviously, this is a super dynamic space, and this was a great conversation. I really think that the empathy Anjney demonstrated for so many different positions that national leaders find themselves in and the build, buy, or partner framework that he brings to this analysis are valuable contributions, and I look forward to doing it again soon. With that, here are a couple of quick disclaimers from me and from a16z, which, as you probably know, recently acquired the Turpentine Network. First, I personally am solely responsible for all content on The Cognitive Revolution, including guest and topic selection, the questions I ask, and the editorial commentary I offer. For this episode with Anjney in particular, I received no consideration to have him on the show. And while I did follow my usual practice of sharing questions in advance and allowing Anjney to review our edit and request any additional cuts, no topics were off the table and nothing meaningful was cut during the editing process. And second, this is directly from a16z. This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings LLC and is not a bank, investment advisor, or broker dealer. This podcast may include paid promotional advertisements. Individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates, including but not limited to a16z Perennial Management LP. Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results, and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy. With that, I hope you enjoy this exploration of how companies and countries are navigating vexing strategic AI choices with Anjney Midha of Andreessen Horowitz. Anjney Midha, general partner at a16z. Welcome to The Cognitive Revolution.
Anjney Midha (5:50) It's great to be here. Thanks for having me.
Nathan Labenz (5:52) I'm excited for the conversation. So today, we're going to be talking, and I'm sure we'll touch on other things, but the core theme is this notion of sovereign AI. And I think it's actually the first time that I've covered it in 200 episodes of the podcast, so that's probably on me for being neglectful of what I see as an increasingly important concept and idea that, as geopolitics is literally exploding all around us, is certainly going to become more and more important. My cards on the table. I think, obviously, AI is really important. I think governments should be taking it seriously. I think they should be thinking about how to invest in this technology wave and bring the best of it to their citizens and also protect their citizens from the worst of it. And yet I find myself a little confused by this concept of sovereign AI, and I'm hoping you can help shed some light on the subject for me. So maybe for starters, what is Sovereign AI and what is the big argument for it? And I have a couple different candidates, but I'd love to hear how you think about the big arguments for Sovereign AI.
Anjney Midha (6:52) Yeah. No. It's a good question. I think we should start with first principles. Right? One, I think we should acknowledge that nobody has a working, canonical definition of sovereign AI that seems to be consistent across different regions.
Nathan Labenz (7:05) Yeah. We don't even have AI agents defined, so there's
Anjney Midha (7:08) Nobody there. Yeah. So we are definitely in this weird regime where you have nation state leaders and CEOs of the world's largest companies talking about how important sovereign AI is. Yesterday, Satya, I think, tweeted that they're launching their European Sovereign Cloud. And if you look on their website for a definition of what they mean by Sovereign AI, you can't find one. Right? And I think that this is one of my icebreakers now when I get together with policymakers or leaders who happen to find themselves in the position of not just being geopolitical leaders but now actually having to have an opinion on AI. So I often ask them, well, what's your definition of sovereign AI? Which is the policymaker version of the "what's your definition of an AI agent" test. Right? And it's a great icebreaker because if you ask five different people, you're going to get five different answers. So let me try to paint a picture of what I think it means to people and then what I think it should mean. What I think it means to people is that it's basically become synonymous with control. If you're a nation state leader, if you happen to be responsible for the future direction of the atomic unit of a country or a large company, I think it's your way of communicating that you want control over your own destiny as much as possible as AI plays out. If you're a technologist, an engineer, a researcher, I think it tends to mean more a collection of services and models that run locally on prem without having to rely on traditional cloud infrastructure that is then beholden to some set of geographical governance. So I'll continue playing out what I hear from different people. If I'm talking to a CIO of a company in Europe and ask them what Sovereign AI is, what I often hear is: to me, it's a way to have our AI workloads located in a place where I know they do not have to comply with backdoor information requests from foreign governments that I don't want to comply with. Some of those are American regulations. Some are Chinese. Some are Singaporean. But if you're a company at any kind of scale and you have customers in various regions, it's a complete nightmare for you if your AI workloads are now located in a place that makes you have to comply with the regulations of that place. So if you're a CIO, Sovereign AI is your way of saying, look, I want to know where my workloads are and what regulations they comply with, and I want control over what regulations I have to comply with. If you're a nation state leader like a prime minister or an elected official, then I think it stops meaning something concretely technical and starts to mean something more cultural. Right? Where you say, look, a large percentage of my population is going to be talking to AI models as their daily companions. A large percentage of my country's mission critical industries are going to be running on AI workloads. And not only are these models computing infrastructure, technical infrastructure, but they're also a form of cultural infrastructure, where they're trained on training data, have a set of post-trained values in them, and I want those values to reflect our own and not some adversarial country's. I think Sovereign AI for them means the independence and the ability to control what value systems that cultural infrastructure reflects. So again, I think it means different things to different people.
But if you're asking for a sort of synthesis, I think it's a set of technologies and services that convey both the technical independence from traditional cloud infrastructure and the cultural independence from having to adhere to some other country's value systems that don't align with yours. Does that make sense?
Nathan Labenz (11:01) Yeah, I think that's pretty interesting. And I hadn't even really considered it from the perspective of just the enterprise, but certainly it does make sense to want to be mindful of what government backdoor sort of requirements you might be walking into without even realizing it if you're not thoughtful and take care to minimize that kind of stuff. I had been thinking about it more from the nation state or cultural perspective coming into the conversation. You know, another way I was kind of thinking about it is just in terms of inputs. Right? I mean, if the inputs to AI are data, compute, and algorithms, you could kind of say that the data is obviously the sort of cultural stuff that can go into a model. The compute is the data centers, and those are very sort of physical, real things that have a very specific place in the world. And then on the algorithms, I would kind of equate that to talent. If I'm thinking from an enterprise perspective, I was actually just talking to a friend yesterday who works for a kind of mid-sized boutique technology consulting firm that serves enterprises. And he was asking me, how do I think about these sort of data issues? Because, you know, the people at our big enterprise customers, they're like very conservative, tons of lawyers. We have a real hard time getting them to take any risk. You know, they're going to want to bring everything on prem. They're going to want to have everything in house. What do you think about that? And my response was, I would try to push through it. You know, it was kind of like, if I'm an enterprise or if I was advising an enterprise, I think I would try to get the legal team comfortable with the Microsoft terms of service and feel like, you know, you probably already trust some big tech provider to have some custody of important data. And this maybe isn't necessarily so different for your queries to be going to the Azure cloud versus your own cloud. Right? In fact, they might even be more secure there in some ways depending on what your threat model is relative to hacking. Like what's more likely to get hacked? Your stuff or Azure? You know, I don't know, but it's certainly not obvious which is more likely to leak. So how do you think about that? I mean, with Microsoft, you know, taking something to the European market, for example, and saying like, okay, hey, we've got data centers here and it's all local in some continental sense. Like, you don't have to worry about American jurisdiction or Singapore or what have you. Would you kind of come down on the same place and say these enterprises are maybe a little overly focused on controlling their own thing relative to using the best provider and getting the benefits sooner perhaps?
Anjney Midha (13:49) Yeah, yeah. So this is a good question. Here's a framework for how I tend to think about it, which is I like red teaming the sort of supply chain that delivers the product or service that you're depending on. Because "enterprise" is sort of a Silicon Valley word; everyday people don't use that word. Right? If you ask, what is an enterprise, really? It's basically either a business or an organization that makes large purchasing decisions on behalf of users. If you trace this concept back to first principles, to the etymology of the word "enterprise," it started to become a thing when the idea of selling something and the idea of using something kind of diverged. Right? So you were often selling a piece of software technology to somebody who actually wasn't the end user because that person was the purchaser or decision maker on behalf of the organization. Right? So that would be sort of classic top down adoption or diffusion of a technology. Then there was a sort of violent disruption of that with bottoms up SaaS, right, or software as a service, where any individual within an enterprise, a company, an organization, even a government could say, hey, there's this cool new tool called Dropbox or Slack. Right? And I can just go swipe my credit card and start using it. And then my colleagues can do that if they like it too. And then at some point, you know, there's enough of us using it that it makes sense for us to adopt that across the entire organization. But that was a huge shift in the way technology and software was diffused. So traditionally, you had this top down buyer model. Right? And when you say enterprise, I often think of governments as the largest enterprises. So you often have the minister of IT for a country deciding what stack the country should run on. Right? And I think what we're seeing right now is a back to the future moment. In the pre cloud era, where you had prepackaged software and a lot of on prem infrastructure, you had CIOs and CTOs making decisions for the entire company: we're going to use Linux. The entire company is going to use Linux, and we're going to go to Red Hat and buy a bunch of licenses to allow us to use the enterprise version of Linux. And then we're going to buy the consulting and the support that came with that contract to manage the deployment, the complexity of deployment of that system. And then the second thing we're going to be buying is basically insurance, which is: when the system inevitably fails, who's on the line? Who is providing the indemnity, and who's on the hook? Who's accountable to raise their hand and say, we'll fix it. This was our fault. We'll fix this problem. And so I think in any conversation about the enterprise, the framework I try to use is, what are they buying? Are they buying technology, or are they buying insurance? And in the top down era, pre software as a service, pre cloud, you were often buying solutions. You were buying technology, and then you were buying insurance from the same person, and that was bundled. That's why we had the rise of someone like Red Hat. That got completely unbundled in the SaaS era. You had individuals buying the technology, and then the risk, or the insurance, was moved to the CIO or the chief compliance officer.
And so we've been in this sort of scramble, this unstable equilibrium, I would say, the last 10, 15 years, where you had individual users within the enterprise adopting the technology just because it was the best product that they wanted to use, like Dropbox or Slack or GitHub. Right? And then you had CIOs constantly chasing them to try to buy insurance and say, well, wait a minute. Like, this tool isn't SOC 2 compliant, isn't compliant with the risk posture of the company, or in some cases, when you're buying for the entire country. So who's on the hook? And then that resulted in the rise of hyperscalers or cloud companies like Azure basically saying, hey, we'll sell you insurance. You should work with us, especially for infrastructure products, because we'll indemnify you. We'll provide you the security, the indemnity, and so when something bad happens, we'll raise our hand and say, you, the CIO, you're not on the hook. You can blame Azure. And so you're buying job security for yourself and your team. And that was an extraordinary offering. If you look at Gen AI terms of service today versus 2 years ago, Azure now indemnifies a bunch of the AI services, whereas 2 years ago there was no indemnity offering. Right? So if your friend is asking you, hey, what should I use, Nathan? And should I trust Azure? The first thing you need to ask is, is he trying to buy technology or is he trying to buy insurance? And I do think we're in a back to the future moment where open source - what I'm seeing, certainly in regions like Europe, is that the largest enterprises, especially the ones who want sovereignty like we've described before, that independence over the supply chain, are turning to companies like Mistral to say, we want you to provide us the technology and the insurance. So you come help - you figure out what the right implementation should be, because it's just way too complex. There's too many models coming out every week. We have no idea how RL from verifiable rewards works. We don't even know what a reward model is, but we want you to come automate - let's say I'm CMA CGM, the third largest shipping company in the world, 70 or 80 billion in revenue a year. And I know I want to automate the entire port operations and cargo operations for my company. But I have no idea whether I should be using Mistral 7B, Mixtral, DeepSeek R1, a reasoning model, non-reasoning. My head is spinning because every week there's something new coming up. And I need somebody to sell me the technology and come and manage the complexity of the technology. And I want somebody to say, don't worry, I've got it - I'm managing your risk for you when it comes to anything going wrong, whether it's cybersec risk, copyright issues with the outputs, whatever it might be. Right? And I think we're getting to a back to the future era where the largest enterprises are turning to 1 or 2 companies to provide all of that. In the US, I often run into Palantir as sort of that provider for mission critical workloads, where they manage the technology deployment and they provide indemnity. And so I think it depends on whether your friend basically wants to buy Azure insurance, or do they want the best technology? And often, right now, you're not getting both of those from the same place if what you're looking for is frontier open source models. Hey, we'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz (20:38) AI researchers and builders who are pushing the frontier know that what's powering today's most advanced models is the highest quality training data, whether it's for agentic tasks, complex coding and reasoning, or multimodal use cases for audio and video. The data behind the most advanced models is created with a hybrid of software automation, expert human judgment, and reinforcement learning, all working together to shape intelligent systems. And that's exactly where Labelbox comes in. As their CEO Manu Sharma told me on a recent episode, Labelbox is essentially a data factory. We are fully verticalized. We have a very vast network of domain experts and we build tools and technology to then produce these data sets. By combining powerful software with operational excellence and experts ranging from STEM PhDs to software engineers to language experts, Labelbox has established itself as a critical source of frontier data for the world's top AI labs and a partner of choice for companies seeking to maximize the performance of their task specific models. As we move closer to superintelligence, the need for human oversight, detailed evaluations, and exception handling is only growing. So visit labelbox.com to learn how their data factory can be put to work for you. And listen to my full interview with Labelbox CEO Manu Sharma for more insight into why and how companies of all sorts are investing in Frontier Training Data. In business, they say you can have better, cheaper, or faster, but you only get to pick 2. But what if you could have all 3 at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have seen since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with 0 commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Anjney Midha (23:15) Does that make sense?
Nathan Labenz (23:17) Yeah. I mean, that all sounds to me like it bottoms out to probably you should go with Azure. What would be the argument for actually trying to bring all this stuff to your own enterprise on-prem servers?
Anjney Midha (23:36) Well, if you don't trust Azure. Azure has to comply with government regulations that today are governed primarily by American law. So if the workloads that your friend is running are in mission-critical industries outside the US, and their customers would not be comfortable with backdoors to the US government, then you should probably not go with Azure. For example, the US Cloud Act governs all workloads run by US cloud companies. So that's one way to break it down - do your friend's customers care about the sovereignty of their workloads? Would they be fine with the US government having total access to those workloads? And if the answer is no, then that's one reason that they should be going with their own infrastructure.
Nathan Labenz (24:28) Yeah. So I guess to maybe try to summarize that again, the trend seems to be pretty clearly toward a rebundling of technology and advisory services and the insurance component. But because of some of these laws like the US Cloud Act, as a European company, for example, you might want to go with a European provider of that bundle as opposed to an American provider of that bundle. And obviously, there could be some nuance in terms of exactly how the US Cloud Act applies internationally. I'm certainly not an expert in that, but I can understand why you might be more comfortable going with a Mistral as opposed to Azure based on that, while still basically seeking the same bundle.
Anjney Midha (25:14) Correct. Yeah. Now what's happening - and this is playing out, I would say, in many regions across the world - you have an effort like Stargate here. It's a new kind of fully integrated cloud offering that starts with a single AI provider integrating the chips, the data center that then runs the chips, the compute part of the stack, the entire model part of the stack, as well as the end applications like ChatGPT. That's certainly working in the US. And for the first time, I think we saw that get internationalized two weeks ago with the deal that OpenAI did with the UAE, where they said, "We're going to offer ChatGPT to the entire country," and the government will pay for it, so it's essentially free for all residents of the country. But the idea that those workloads are now running on the OpenAI offering for countries versus being run on Azure is the first clue that there's a rebundling of the cloud happening, where hyperscalers are getting unbundled and rebundled by AI companies. In the previous era of the rise of the hyperscalers, the rise of the cloud, the primary organizing function was the marginal cost of compute. And by compute, I mean storage as well as CPU workloads. And by centralizing almost all of these workloads in one place, the economies of scale were really, really hard to fight. So let's put aside for a second the whole insurance argument that we talked about earlier. Just from an economic perspective, it just didn't make sense for most companies to run their own workloads, which is why the classic technology purchasing decision is build, buy, or partner. In the pre-cloud era, you had to build your own infrastructure, your own data centers. And then when the clouds came and started presenting you with the value proposition of "we can just offer you all of that but cheaper and abstract away the complexity for you so you don't have to manage your own infrastructure," most enterprises, most countries and companies chose to buy instead of building. Until a few came out on the other side who were such large customers, or had regulatory reasons, that they had to then go back to building. And I think that's what's happening now - a country like the UAE is basically saying, "We'll just buy the sovereign AI stack. And we trust OpenAI, we love them, and we'll just buy it." There are other countries and regions that are going, "No, we're going to build. We're going to own our own chips. We're going to want open source models running on it, and we're going to handle the fine-tuning, the post-training, as well as the deployment. We'll take some open source application, our own version of ChatGPT, and we'll handle that. We will deploy that for our customers." And so I think that's how you want to analyze it - in those regions, they're not purchasing Azure; they're purchasing a full stack rebundled offering, or they're choosing to build that full stack themselves. So in the case - this is what Mistral Compute is, which is something they announced last week. NVIDIA and Mistral are building a cloud - this is a new animal - where it's NVIDIA chips, so American chips, being racked in a data center, which will be the single largest data center locally, with a cloud platform on top that runs both Mistral's models as well as other open source models, with the customer's choice of whatever application they want, open source clients or not.
And so that's a rebundled cloud offering along the dimensions of open source workloads first, where open source workloads are the first-class citizen versus what has been the dominant paradigm for the last 15 years, which has been clouds providing cheaper and cheaper marginal cost of compute to customers for centralized workloads, but running primarily on proprietary technology.
Nathan Labenz (29:26) So let's maybe shift to the value to countries and citizenries. And maybe I'll go in order of talent, then sort of data, models, culture, and then infrastructure. On the talent side, it seems like Mistral - am I saying that right? You're the authority on this. They seem to be one of a very small number of sort of national champions that has continued to pretty much keep up with the frontier. I mean, I wouldn't say they're quite at the frontier frontier, but they're not far behind. And certainly, there is a critical mass of talent there, and that is something that is unique in Europe and close to unique around the world outside of the US and China. So I guess that seems like a win sort of just on its own merits. Like, you're a country and you're like, "Well, who knows what might happen? Geopolitics is exploding in all sorts of ways, all over the place. We'd like to have a domestic talent pool so that, if walls really come down, we can still do this stuff within our own borders, with our own people, and feel confident about that." That seems like a pretty clear value prop. I do wonder: how many countries do you think could actually achieve that? It's notable that Germany doesn't seem to have a similar thing, and you can go down the list of most populous countries and there are a lot of countries with a lot of people that don't have anything like this. Do you think they should be trying to create them, could create them, or would you worry - if you were advising Russia or Brazil or big countries with lots of resources - that they would just be kind of wasting money on something that's never really going to amount to anything?
Anjney Midha (31:19) Yeah. Okay. So this is a good question. I'm going to answer your question with a story, which is there was a moment post-World War II when a lot of folks were looking at the way the diffusion of another modern technology was changing the world, which was modern finance. Toward the end of World War II is when the Bretton Woods conference happened. A number of countries met up to talk about adopting the US dollar as a sort of global reserve currency. And then post-World War II is really when the adoption of that started accelerating. And a lot of countries got together and said, "Wait a minute, this dollar thing is really working. And that's resulting in two clear power centers. There's America, which is the purveyor of the single global reserve currency. And then there's Europe, which is turning out to be one of America's largest trading partners." Of course, this is when we're seeing the rise of the Middle East and Russia as well because of the petrodollar. And a bunch of other countries were going, "If we're not a trade partner of the US, what do we do? We're just going to get left out in the modern finance era. How else do we amass reserves of our own reserve currency?" And so most countries navigated that build, buy, or partner trade-off differently, but it resulted in the rise of a new kind of nation that I call essentially a hypercenter partner nation, which is a country that may not have the size of the big guys, may not have the talent of the big guys, but understands how to insert itself in the flow of value in the global economy. And that's what happened with countries like Singapore. Singapore is a tiny island nation that at the time had fewer than 3 million people. So it did not have a massive talent base, did not have vast oil reserves like the Middle East did to trade with the United States, and was not a net producer of currency like the US is. So how did they navigate that? Well, they said, "What we're going to do is make ourselves essential to the flow of global dollars going from the US and Europe into the East, and we'll provide a stable rule of law regime." Lee Kuan Yew, the founding father of Singapore, did a lot of work to basically give Singapore the lowest corruption index in the world, the lowest cost of doing business index, and the highest ease of doing business score in the world. And that resulted in them amassing this extraordinary wealth, creating an extraordinary base of wealth for the country. And at this point now, Singapore has one of the highest GDPs per capita in the world and one of the top 10 sovereign wealth funds in the world. And so my answer to your question is basically you either have to build, buy, or partner. You are either a hypercenter with the compute reserves and the talent, like America, China, and now increasingly Europe, or you have to find a way to partner and insert yourself in the flow of tokens in the global AI economy. And I think that's the race we're seeing play out right now. The result is that you're essentially directionally correct. I don't think most of the world's economies have access to the kind of frontier pre-training or post-training talent that Mistral has. The vast majority of that talent is concentrated in the US. Europe is one of the few places that has it, and most countries outside of Europe, China, or the US either have to build, buy, or partner for that talent.
Nathan Labenz (34:56) Right? And boy, is it getting expensive. Zuckerberg is gonna be hard to outbid, it seems.
Anjney Midha (35:03) That's right. I mean, you know, Meta - that's why when you say enterprise, I think of nation states and Fortune 500 companies as essentially in the same atomic unit. Right? So if you're competing against Mark Zuckerberg for a researcher, you cannot bring a knife to that bazooka fight. Right? And outside of the US, Europe, and China, there's not that many private companies that can fight that battle. So it ends up being nation states. Right? And that's why there are so many countries going, wait a minute. If AI is a core piece of national infrastructure and we're bidding on talent against Meta, then we can't leave it to our private sector, which is tiny, to fight that fight for the country. We need to back it up with essentially sovereign dollars. And that's what's happening. Right? If you look at the rise of G42 in the UAE, right, which is an infrastructure company but backed by the sovereign wealth fund, they are playing the kind of role in the economy that AWS or GCP plays here, but they're entirely state backed. Right? So governments have had to step in because the sheer capital required to play at the frontier is outside of the purview of the buying power of most companies.
Nathan Labenz (36:15) Yeah. So would you analogize then what the UAE and perhaps also Saudi Arabia are doing right now to the Singapore play from years past? Like, they're saying, we're gonna find a way to insert ourselves into the global flow of tokens.
Anjney Midha (36:32) 100%. Except they don't need their Singapore moment right now because they've got oil. So what they're going through is their petro - if you trace the history of Singapore, what's so interesting is they navigated the rise of the petrodollar through the concept of being an entrepôt, where they said, okay, we don't have our own oil, but we see this massive flow of one resource turning into another, which is, you know, petroleum turning into dollars on the other side of the world. And the only thing they had going for themselves was their location. And so Lee Kuan Yew did a bunch of strategic partnership deals, very reminiscent of how Sam and OpenAI have been doing a bunch of ecosystem deals, where all of the crude from the Middle East, or the vast majority of the crude oil, would ship to Singapore and then be refined in Singapore. Value was added, and then it was shipped out to multiple other Western countries that would buy it. And that's how Singapore navigated the petrodollar regime. The UAE does not have to do the same thing because they've got the petrodollars. What they're trying to do is convert their petrodollars to petroflops, right, by buying vast amounts of GPUs, basically compute infrastructure, primarily from NVIDIA, and saying to the world, well, come run your workloads here. That's the whole G42 pitch, right? Over the next 30 to 50 years, let's convert our petrodollars to petrocompute and attract the world's largest customers to run their compute workloads here. And then the vast majority of our GDP should be decoupled from the price of an oil barrel. Like, the more time I spend in the Middle East, the more I realize that the way we follow interest rates here in the US is how they follow the price of a barrel, right? And entire investment projects are greenlit or not depending on the price of an oil barrel that week. And they would like to decouple their future, certainly their technological future, from the price of oil, and compute and AI inference is that path for them.
Nathan Labenz (38:35) Let me come back to that infrastructure in the Middle East question in a second, and I have a number of questions on that.
Anjney Midha (38:40) Hey, we'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz (38:45) Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one, and the technology can play important roles for you. Pick the wrong one, and you might find yourself fighting fires alone. In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in the United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready to use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.
Nathan Labenz (40:42) But just to take one beat on the data side and the linguistic and cultural and that sort of value side. Because we've covered talent, and I think we're basically on the same page that, like, it's gonna be very hard for almost anybody to amass enough talent density to really compete for Frontier AI.
Anjney Midha (41:03) That's right.
Nathan Labenz (41:03) It doesn't mean you don't wanna develop your workflows generally, but that's sort of a frontier versus diffusion question. And, like, all these companies or countries should be promoting diffusion. But that's quite distinct from, like, trying to amass the density to compete at the frontier. Then you also hear this argument around, well, you know, we want the AIs to be good in our language and we want them to reflect our cultural values and all that kind of stuff. Right. There again, I feel like if I'm advising, you know, let's say 80, maybe even 90% of the countries in the world, and I was like, okay, you've got your data, your culture, your language, you've got all this stuff. How can we make sure that you are best served by AIs with respect to all that makes you unique and special? I think what I would say for almost all of them is do a massive data collection and curation project and then literally just take the data on a silver platter to all the hyperscalers and say, hey, please include this in your future training runs because we want your models to be fluent in our language and, you know, understand our idiosyncrasies and talk to us the way we wanna be talked to and reflect our values back to us. Right. Would you - I mean, would you give the same advice? And here I'm thinking like Mexico, Argentina, Australia, you name it. Right? All these sort of middle tier countries that we've kind of established probably can't compete at the talent frontier, but there's this notion that if they don't do that somehow that they might miss out and my suggestion is just like, they want data. You have data. If you can bring it to them and allow them to use it, they'll happily incorporate it, and then you'll get kind of what you were looking for in maybe the most efficient way.
Anjney Midha (42:46) Well, okay. So to be clear, I don't believe that these countries can't compete at the talent frontier. I just don't think they can compete today. They need a short, medium, and long term plan to get there. Right? So in the short term - but look, at the end of the day, there are no secret recipes for RL. Like, everybody knows how to - Mistral published Magistral, you know, their reasoning paper. DeepSeek put out the recipe. So the recipes are known. You know, I would say there are some tips and tricks, especially when it comes to the infrastructure required to do RL - sort of online RL - that are hard to learn. But there's no fundamental reason why Singapore or the EU or Mexico could not, long term, build out a talent pool that can do pretraining and post training of their own models if they wanted to. Over a 20 year arc, if they started investing today in the right partnerships, the right education institutions, the right infrastructure, there's no reason they fundamentally can't get there. Right? So the question is what do you do in the short and medium term - what's the road map to get there?
Nathan Labenz (43:49) Yeah. I'd put the singularity somewhere in the, like, 5 year time frame. So the 20 year sounds long to me. Although, obviously, I don't know for sure that that's gonna happen.
Anjney Midha (43:58) Yeah. So that's it. I think that's one way to organize the timelines. Right? And there are certainly some countries that are more AGI-pilled than others. Right? So the UAE is more AGI-pilled than Mexico today. So the UAE is saying, yes, you know, it's hard to plan outside of 5 years, and so let's do whatever it takes to build out our infrastructure and our talent pool here. And they're choosing to partner with OpenAI. They're choosing to partner actually with multi - they're trying to partner with everybody, because their belief is, like, they have a 5 year period to establish their independence before the singularity arrives. The countries that are less AGI-pilled, that are less urgent about AI, are planning on, like, a 20 year horizon. Right? But let's say for a second that the singularity arrives. I'm quite sympathetic to the idea of the gentle singularity. Right? That it's not some violent rewrite 5 years from now where suddenly, magically, the entire economy collapses, and then you're no longer relevant. I still think if you're Mexico, you want to plan for a 20 year horizon on which your young population can graduate to having useful jobs and purpose in life. And a big percentage of that is doing useful post training work on AI models or doing integration work - taking whatever the frontier models are and integrating those into the rest of the economy. That talent pool, I would argue, is really hard for them to get even today, but you have to invest in your capability to build out that talent pool over 10 years. For example, I do think one of the most sought after skill sets today is the equivalent of a forward deployed AI solutions engineer. Right? Someone who can bridge - is that what we call them here?
Nathan Labenz (45:48) That's my own personal branding. But
Anjney Midha (45:50) Right. But it's someone who can take a model and then integrate it into the right workflow. Often that happens in some real world task. Right? Often that requires genuinely knowing how to do verifiable reward design. And maybe all of this will be automated 5 years from now. But as a country, I think it's possible to do what countries like India did in the nineties with auto manufacturing. Right? So in 1992, India started liberalizing and said, well, we would like to go from being a socialist country to being a modern capitalist country. We think auto manufacturing is one of the frontier sectors of the world, and we need to develop that capability. We just don't know how - we can't leapfrog to doing that overnight, so let's partner. And so they invited companies like Suzuki, who came in and set up these joint ventures locally, where they trained up the local population. They created this company called Maruti Suzuki, which was responsible for producing the vast majority of cars in India for like a decade. And over time, they transitioned the running of that industry and that sector to local talent. By the way, the US did the same thing with oil in Saudi in the fifties and sixties with Saudi Aramco. That was originally the Standard Oil Corporation of America. Right? The Standard Oil Corporation of America went and did oil exploration in the Kingdom of Saudi Arabia, and then over time transitioned that to a local talent base called Saudi Aramco. So I think that's the healthy way for the US to bring its allies along into the Singularity: do joint partnerships, kind of like what NVIDIA is doing with Mistral in Europe, to bring along allied partners. Because if we don't bring them along, they're going to turn to adversarial countries like China to do that. And so I just want to be clear, in the short term, that talent race is intense. That doesn't mean in the long term you can't partner with the right folks to set you up for a better future at the talent frontier. And I don't think it's just a talent thing. I think that extends to all parts of AI frontier progress, whether that's data or compute. So to your question more directly about data, I think what you should be doing, if you're a country, is certainly figuring out ways to get your culture tokenized. If your culture is not tokenized, then fundamentally, you're reliant on some other country to tokenize it for you. And so I think tokenization is the most under discussed part of sovereign AI. Right? It's: get that corpus that reflects your culture and values, tokenize it, and then figure out how to do pre training on that corpus. And I think there are several countries actually racing to do that today, certainly for languages that have a different root than sort of base European languages. Does that answer your question? I don't know if that was
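[Editor's aside: for readers who want a concrete picture of what "tokenizing your culture" can look like in practice, here is a minimal, illustrative sketch using the open source Hugging Face tokenizers library. It is not taken from the conversation; the corpus file names, vocabulary size, and special tokens are placeholder assumptions, and a real national-scale effort would involve far more corpus curation than tooling.]

```python
# Illustrative sketch only: train a small BPE tokenizer on a hypothetical
# local-language corpus so the language is represented natively rather than
# fragmented by an English-centric vocabulary.
# Assumptions: file names, vocab size, and special tokens are placeholders.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.BpeTrainer(
    vocab_size=32_000,
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)

# corpus_hi.txt / corpus_ta.txt stand in for a curated national corpus.
tokenizer.train(["corpus_hi.txt", "corpus_ta.txt"], trainer=trainer)
tokenizer.save("local_tokenizer.json")

# Rough quality check: fewer tokens per sentence generally means the
# language is better covered by the learned vocabulary.
print(len(tokenizer.encode("यह एक छोटा सा उदाहरण वाक्य है।").tokens))
```

[The resulting vocabulary, and the corpus behind it, is what a country would then bring to a pre training partnership, as discussed above.]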
Nathan Labenz (48:36) It does. Although I still am kind of like, where are they really gonna get the best results? You know? And it still feels to me like just bringing all that data on a silver platter to the leading developers. And you could maybe do both. But, like, if you were Brazil, for example, and you're like, okay, we've got all this massive data and whatever, and now we've curated it and tokenized it. Now we could do - one of the great things about information, and also, I think, maybe a disanalogy with some of these earlier industries, auto and oil, is like, this stuff is so intangible. And to some degree, some of these things kind of only have to happen once, whereas you have to actually stamp out every car. You know, the cost of duplicating a trained model is obviously vanishingly low compared to the upfront cost that goes into it. Right. But if I'm Brazil and I'm like, okay, I've done all that work. Now I can do all these things. I can go to OpenAI and give them our data. I can go to Anthropic and give them the data. I can go to Google and give them the data, and I can try to develop a national champion. I would be pretty confidently willing to bet that the Brazilian national champion is going to be worse, on a 2 to 5 year time scale at least, at using that data and actually even serving the local culture, as opposed to the hyperscalers, who I just think are gonna be basically better at everything. And I don't know if you take the other side of that bet, but it seems hard to win.
Anjney Midha (50:01) Well, yeah, I think it depends on what your definition of a local champion is. Right? If the local champion is the provider of Brazilian language models to the vast majority of the country's citizens or mission critical industries like defense, health care, and finance, the value lies in the last mile, not the pre training. So at that point, they should absolutely not try to develop their own pre training capabilities in the short term, right, because I think that just gets commoditized. They're much better off finding a pre training partner, and ideally somebody who's an open source pre training partner, because then they have a ton more customizability over the post training. Right? Because if you have access to the weights, your ability to do weight adjustment and on policy updates to the weights, if you have that direct access, is dramatically higher than if you don't. But I completely agree with you. In the short term, you should find a pre training partner instead of trying to replicate that. But the last mile distribution effects you have in your country, the last mile advantage you have of knowing which companies are the best positioned to consume that technology, how to productize it for them, how to do RL, certainly on non-verifiable rewards, which is really where this stuff becomes really fuzzy and you need local handholding - there, you are tremendously more advantaged than OpenAI or Anthropic. I mean, have you talked to ChatGPT in, like, any local language?
Nathan Labenz (51:36) Minimally, I speak enough Spanish to experiment with that a little bit. But, you know
Anjney Midha (51:40) So my Spanish is terrible. I don't speak it, but I'm a native Hindi speaker. And when I talk to ChatGPT in Hindi, it speaks like an American tourist visiting India, with an American accent. And its diction is different, and its linguistic choices are different. It's like a tourist. And so for any consumer use case in Brazil where your citizens don't want to talk to a tourist, they want to talk to a local, or any mission critical industry where you only trust a local partner for the last mile integration, you're best off developing a local champion as the AI partner for that. Right? And then I completely agree with you. Pre training should be a partnership effort with whoever is the right technology partner that has it, and we've talked about how there's only 5 or 6 teams who can do that today. But over the long term, I think what happens is, you know, knowledge diffuses out, and that company, which is today your local AI champion, largely a last mile provider, will over time learn the knowledge required, just like with Aramco and refining or with Suzuki and manufacturing. I don't think there's anything - you're right, the marginal cost of producing a model is different from producing a car, but inference is an ongoing muscle. Right? And continual post training is a muscle that diffuses out, and you should have local capability in the AI economy of the future.
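[Editor's aside: to make the point about weight access concrete, here is a minimal, illustrative sketch of attaching LoRA adapters to an open-weights model for local post training, using the open source transformers and peft libraries. This is not from the conversation; the model name, example text, and hyperparameters are placeholder assumptions, and a real deployment would add a curated dataset, a full training loop, and evaluation.]

```python
# Illustrative sketch only: with open weights, a local team can attach small
# trainable adapters (LoRA) and post-train on locally curated data, something
# that is not possible against a closed API. Names below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # any open-weights checkpoint (assumption)
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Freeze the base model; only the low-rank adapter matrices will train.
cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                 task_type="CAUSAL_LM")
model = get_peft_model(model, cfg)
model.print_trainable_parameters()  # typically well under 1% of all weights

# One toy gradient step on a locally curated example sentence.
batch = tok("Exemplo de texto em português do Brasil.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # gradients flow only into the adapters, not the base weights
```

[The adapters produced this way stay with the local team, which is the practical meaning of owning the last mile while partnering for pre training.]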
Nathan Labenz (53:08) How much does that depend on the trajectory of overall capabilities, and specifically closed versus open model capabilities? Obviously nobody has the answer to what that's going to look like, but one could imagine that, you know, if you take, like, the AI 2027 scenario, a big part of that scenario is the frontier developers start to close down. They're kind of running away from the rest of the competition. Nobody can match their ever-10x-ing training runs. Right. And they also start to even keep some of their models secret in that scenario, which is a whole other can of worms. But I think it sort of depends on: would you rather use, like, o5 in Hindi even though it's maybe not as native, or would you rather use an o1 equivalent that has that local post-training?
Anjney Midha (54:05) Look, it took 26 days to go from o1 to DeepSeek R1. So what China has demonstrated is that they're able to fast follow the frontier of reasoning within 60 days. Right? And we're sitting here in June. Right? It's been 6 plus months since o1 came out. And, of course, o3 has come out. But the distance between o3 and the new version of R1 is not some step function that's hard. The current data seems to be that the frontiers of closed source and open source are basically moving in lockstep, at roughly 6 months or less. Right? And I think there were a lot of experts up in front of Congress a year ago very confidently claiming that the US was 5 to 6 years ahead of China, which to me felt like I was taking crazy pills. I was like, what planet are you on? DeepSeek had been putting out incredibly strong open source models for like 8 months before R1 came out. So for anyone who was paying attention, it was clear that the gap between closed source and open source was not some multiyear race. Right?
Nathan Labenz (55:22) Still, I mean, that's an interesting dynamic too. And, again, I don't think anybody has a great crystal ball for this sort of thing, but one does wonder it's not like these are open source projects in the traditional software sense of being, like, community driven, you know, and sort of anybody can
Anjney Midha (55:37) No. They're nation state backed at this point.
Nathan Labenz (55:38) Yeah. There are still, like, hyperscalers who are just making a strategic decision for the moment to open source. So I do still wonder, you know, how long will China continue to do that?
Anjney Midha (55:50) So DeepSeek is not a hyperscaler. Right? They are at this point a hedge fund that was arguably under a ton of regulatory pressure to shut down their core business, because the CCP believes that the financial instability introduced into markets by market makers like hedge funds is net negative. They then turned to doing frontier AI research really well, got a meeting with Xi, and continued doubling down on their frontier AI research efforts. So I don't think analyzing the incentives of DeepSeek as a hyperscaler is an accurate lens. I think analyzing their game theoretic optimal strategy from the position of a geopolitical player with the blessing of the CCP to stay at the frontier is more realistic. And I think, from within the current regime, they will continue doing whatever it takes to have a Chinese model open sourced at the frontier and adopted by every country on Earth as a form of soft power. Right? DeepSeek was one of the most extraordinary moments. Between the three of them, the release of DeepSeek R1, the Unitree humanoid, and the animated movie Nezha 2, which was a blockbuster hit, there was a complete soft power revival in China in the first 6 months of 2025, relative to the last 2 years of malaise in the Chinese economy. Right? The primary narrative had been: we're losing technological supremacy, we're losing cultural supremacy, and we're losing financial supremacy, because the financial markets had been on a downturn ever since the real estate bubble there. Whether or not you consider DeepSeek lucky, the regime we're living in now is one where the CCP views frontier open source models as a core national capability, and will continue open sourcing them as long as they see that as a way to ensure soft power globally. Right? It doesn't have anything to do with hyperscaler revenues.
Nathan Labenz (58:00) Yeah. No, it's definitely a good point. I feel like I have a hard time predicting what exactly they're gonna decide to do, but that's a good baseline.
Anjney Midha (58:09) That's right. What's actually unique about China is the way the market is structured. As long as Alibaba keeps monetizing DeepSeek as the enterprise distribution arm of DeepSeek, which is the stable equilibrium they're in right now, where DeepSeek is the AGI frontier research lab that's sucking up all the best frontier research talent and continuing to publish and open source ruthlessly to further, I would say, the country's reputation on the global stage, and Alibaba keeps monetizing it in the enterprise, I think the CCP will maintain that as a healthy sort of steady state equilibrium. Right? Because it benefits both the party and also benefits GDP. There's now an enterprise infrastructure software deployment arm within China that knows how to do all the solutions engineering work we were talking about earlier to actually diffuse DeepSeek into the economy. Because actually, if you remember R1, it's a pretty unwieldy model. Like, it's a very hard model to actually use. It topped all the leaderboards, and from a raw capability standpoint it's great, but it's an MoE that's really massive and very hard for most businesses to know what to do with, and Alibaba is doing a lot of that work for them now. So that's the sort of steady state equilibrium I see for a while.
Nathan Labenz (59:32) So let's turn to the infrastructure. This is obviously what gets the most attention, because that's where the biggest dollar values are, and you've got these sort of exquisite assets getting plopped down all over the globe. Just to get a baseline, because I'm no Dylan Patel here, but I do have access to DeepSeek Research: one of my working theories over time has been that AI basically converges to look like the cloud, because in the limit, it sort of is cloud. Anywhere there is compute, you could run AI. So why would it diverge that much from that? What DeepSeek Research told me when I asked it to characterize the distribution of data centers today was that, first of all, the US has about 45% of the global data center infrastructure. The top 25 countries account for 88% of data centers, which obviously leaves the bulk of the countries with only about 12%, and many countries with no significant data center footprint at all. And then one other angle I asked DeepSeek Research to look at was how different parts of the compute stack might be distributed differently. It basically said that compute is the most centralized. Storage is less centralized; we've got CDNs and all that kind of stuff. And then network is, of course, the least centralized, because you gotta actually get to all the end users. So I guess the big question is: in the fullness of time, meaning maybe the 5 years between now and the Singularity, do you think the AI infrastructure footprint looks meaningfully different than that? And if so, how would it end up being different, and why?
Anjney Midha (1:01:27) Yeah. So I do think it will look very different in some ways, but almost identical in others, and I'll explain which is which. As we discussed earlier, I don't think there's gonna be a fundamental change in the traditional non-AI parts of the data center economy. Right? Compute, storage, networking, good old fashioned EC2 instances, good old fashioned CPU workloads. Those are the workhorses of the economy today, and most businesses should benefit from centralization, where the vast majority of your workloads are running on Amazon East or whatever. So for traditional non-AI workloads, I don't see that changing anytime soon. Public clouds have been fantastic for most software and technology adoption, and that's gonna remain the case. AI workloads are a whole different story, for 2 reasons. One is, if you X-ray a data center, a data center is not a data center anymore, in that it does not refer to the same bill of materials that it did just 5 years ago. Something like 60 to 70% of a data center now is GPUs, whereas 10 years ago, less than 10% was GPUs. This is a huge change. This is why Jensen, I think, is on a tear to try to get everybody to realize it, why he's calling them AI factories. He wants to call them AI factories for a reason, because he thinks this is not just marketing. He's like, guys, under the hood, the data centers look fundamentally different now. We should not be calling them data centers anymore. The bill of materials has changed completely. The workloads they're running have changed completely. So for that part of the new data center economy, it's the most aggressive rewriting of the cloud infrastructure business that we've ever seen. Because if you don't have the frontier model that developers want to use, they literally are switching clouds. This is why, arguably, Amazon had to invest $8 billion into Anthropic when they were losing 100 plus million dollar contracts to Azure in 2023, because they didn't have a GPT-4 alternative. They literally just didn't have a GPT-4 alternative. Now, I was an early investor in Anthropic, and so I got a chance to see some of the calculus behind the scenes. And it was very, very painful. The public cloud workloads were being moved on the basis of who had the best AI model. And that's why we're currently in a race where Google is trying to integrate as deeply as they can across the entire stack, between Gemini and TPUs. Amazon is in a race to integrate across Anthropic and Trainium, their custom silicon. And, arguably, Microsoft is trying to work on custom silicon as well; they have had an 8 year effort there. I think they're behind, but Satya would like to control his own destiny. They don't really have their own frontier lab today that is their clone of Anthropic, but they're trying to build that capability in house. So I would say that the data center economy is not gonna change very much for traditional workloads. For the AI workloads, we are in a full arms race. And it seems like tightly integrated workloads within the Google stack and within the Amazon stack, basically the Gemini and Anthropic economies, will continue to be highly centralized. That begs the question of what happens to NVIDIA. Right?
Because if the largest proprietary models are fully integrated on custom silicon with the clouds, and the clouds are going full stack, then NVIDIA's core business is under threat. Right? If by the end of this year 2 out of the top 3 models, Gemini and Anthropic, are running on non-NVIDIA hardware, that's a core existential threat. And to Jensen's credit, I think he saw this and started investing in a whole ecosystem of new AI first clouds, like CoreWeave, like Nebius, and most recently, Mistral. And so I think where we will go is basically 3 or 4 different types of clouds. You'll have the closed scalers, like the Google Cloud and the Amazon Cloud. Then you'll have the open scalers, like Mistral Compute, which is a tightly integrated offering between an AI frontier lab, Mistral, and NVIDIA. You will have regional clouds, like Nebius; CoreWeave is another one. I think there are a couple of these in The Middle East: G42, and there's one called Humain that the Kingdom of Saudi Arabia just launched. And then the rest of the world is gonna have to decide, do they wanna be on an open stack or a closed stack? Your guess is as good as mine where things go 18 months from now. But I don't think Jensen's gonna give up the entire AI factory economy to the custom silicon winners of the cloud. Right? And what's beautiful about him is that because he's such a deep technologist, he's like 4 steps ahead. He knew his biggest customers were also trying to kill him starting a decade ago. And this is why he's been such an aggressive partner to frontier AI startups. Because he sees that as long as a developer like Mistral, a frontier research team like Mistral, keeps reaching for NVIDIA chips, NVIDIA will see the healthy ecosystem continue growing globally. I think the natural arc of centralization, like you said, is very strong, but it's not unchallenged. And the primary live players, I would say, are now the AI startups that are changing the destiny of these workloads. If you had told me a year ago that Mistral was gonna be running a compute offering with tens to hundreds of thousands of Blackwell chips serving workloads in Europe, I would have laughed. But that's literally what they announced last week. So these new open scaler wars are going to be one of the primary stories that get written in the next 18 months. And I don't think it's a foregone conclusion at all that the traditional clouds end up winning that. But they can have the CPU workloads.
Nathan Labenz (1:08:04) So if you're a country that maybe hasn't had data centers on your sovereign territory historically, I guess I'm wondering what the case is to try to get them on your actual soil now. Certainly, we could go find countries in Africa where they just don't have it. And then maybe, I don't know, I'm guessing here, but maybe Mexico, for example, probably doesn't have near as much data center presence as they use, and they're probably just effectively importing that across the network from The United States. Again, I'm guessing about that, but it seems like a pretty good guess. Should those countries sort of just say, okay, well, it's worked for us thus far for traditional compute, we may as well just let the market run its course again this time, and we can import it. Who cares if we're calling across a border for inference? We're not really in a great position to play that game anyway, so fine. Or should they be saying, this is different, we have to have GPUs running on our own sovereign territory? It seems like that impulse is out there, but I'm not quite sure why, or what they hope to gain from it or protect themselves from by doing that.
Anjney Midha (1:09:31) Yeah. I mean, this is the idea of cultural infrastructure. Right? If you don't have the infrastructure locally, the compute, at the end of the day, hosting and running inference for your models, then you'll be beholden to somebody else to run that infrastructure for you. And that means, from a supply chain and strategic autonomy perspective, if that compute's not in your country, you're dependent on the API calls you're shipping out coming back with full uptime, and not returning a 404 error if some government that governs the infrastructure your workloads are hitting turns it off. I don't think the analysis is net new. If you see AI workloads, particularly inference, as core national infrastructure, the same way you view parts of your defense infrastructure, then you just have to decide which parts of this supply chain you need strategic autonomy over, and set those up locally, and which parts you don't, and get those from partners.
So look, the vast majority of the world's countries don't build their own jets. They don't build their own tanks. They buy them. But then there are some parts of their supply chain where they go, wait a minute, actually, we do need strategic autonomy over this subcomponent of the aviation industry, and then they build that out locally. Some governments, by the way, realized that they depended too much on foreign governments for critical defense parts of the supply chain. This was the case with Europe. Right? For the vast majority of the last 70 years, Europe has basically consistently outsourced core national infrastructure to other parts of the world. They are now starting to correct that with an $800 billion defense bill, saying we're going to have to fight for Ukraine because, as Vice President J.D. Vance said at the Munich Security Conference, the Americans aren't going to subsidize this for us anymore. Right? And so we need to do our part of the funding and spending for our defense if we wanna keep being part of NATO.
And so I think if you view compute through the lens of core national infrastructure and cultural infrastructure, it comes down to: yes, I could ship my workloads off to somebody else, but what happens when they turn it off? Is that okay? For some workloads, I think the answer is yeah, sure. Look, if 30% of my country's citizens are using a cloud in some other country for entertainment, and they turn it off, that may not be a strategic autonomy problem. But for mission critical industries like defense and health care, we probably need strategic autonomy, and so I want those compute workloads running locally. Does that make sense?
Nathan Labenz (1:12:16) Yeah. It's tough logic right now. I'm a proud globalist, I think. And I feel like as we head into this AI transition, which I hope is gentle, but I'm not so sure, I kind of wish we were maintaining or even deepening the level of integration across countries and sort of increasing the barriers to conflict. And this seems like a classic prisoner's dilemma sort of situation, where from every individual perspective there's an argument for not wanting to be the ones dependent. But there is a common good in mutual dependence, of the classic sort: I think it's now been violated once or twice, but traditionally, no 2 countries with McDonald's had ever gone to war. There seems like there's something there that was worth preserving. And instead, we're kinda liquidating that. And...
Anjney Midha (1:13:10) I agree with you. But look, let me put it this way. If that's a future we wanna live in, then we need to acknowledge, one, that the Chinese adversarial threat is very real. And then we as America need to bring along our allies and be stable, reliable technological partners. But if we're not gonna be sending those signals, then what more can we expect than for the rest of the countries to try to develop their own plan?
That's why I'm a huge proponent of a Marshall Plan for AI, by the way. I think it's a huge win for America that Mistral in Europe is going with American chips and not Huawei chips, because they could. The European continent has been courted by the Chinese semiconductor industry like never before in the last 18 months, and yet they're choosing to proceed with American semiconductor partners. This is a huge win for globalization. This is a huge win for America, and I think that's the template that we should want, like, globally. Right? Open models running on American chips is a great template for global integration like the kind you just described.
I just think we have to acknowledge that the model layer, as it currently stands, has a ton of vulnerabilities if it ends up being centralized and closed source, and does not confer the benefits of sovereignty, of independence, of self determination, of cultural independence that most countries, or at least most wealthy countries, want. And so I think the sooner we acknowledge that they want sovereignty over the model layer but are totally happy partnering with us on the semiconductor part of the stack, the sooner we reach at least 2 stable blocs in the world, because it doesn't seem like China is interested in pausing their rush for frontier progress anytime soon.
Nathan Labenz (1:15:14) Yeah. I wanted to get into these recent deals, especially the ones in The Middle East, but the one with Mistral in Europe is also really relevant. I guess my question on that is: do the Chinese really have chips to sell? I know that they have 5G infrastructure to sell, and that's been a whole battleground as well, who will use Chinese equipment and who will not, with pressure applied in all sorts of different directions. But my broad sense in the AI era is that, for example, DeepSeek has said publicly, despite and in contradiction of the sort of official Chinese narrative, that they are limited by their inability to acquire more GPUs. And so I am wondering, is it really realistic to think that all these countries The US is currently partnering with could actually go to China and buy? My sense is the supply is just not there, and their domestic market would eat up all that they can produce and lots more.
Anjney Midha (1:16:21) I think the evidence is that chip sanctions on China so far have clearly had one impact, which is an enormous doubling down of local investment in Huawei's ecosystem. There's an alternate future where, if we hadn't put any sanctions on them, we could have kept the 20 year arc of their semiconductor industry dependent on The US. I think that ship has sailed now. Partly, that was started by the Biden administration, when they made it clear that China could not count on a stable supply of frontier, let's say 3 nanometer or below, chips from American vendors.
But until that happened, Huawei wasn't even in the game for frontier chips. They are now in the game. They have sovereign backing from their country. They have a ton of momentum and motivation now to close that capability gap, and they're not there today. But there's a national level race to catch up, certainly starting with inference. They may not get there with frontier training capabilities for, let's say, 3 to 5 years. But as long as the current arc of compute efficiency stays on track, I don't think there's any reason why you can't be running frontier reasoning model workloads, circa 2025, on Huawei Ascend chips by the end of this year. In fact, I wouldn't be surprised if the next DeepSeek release, whenever it comes out, makes clear that it's Ascend compatible on day 1. Right? They are now full on trying to decouple themselves from American chips, and I think that's the position The US has created for them. Time will tell if it was the right one, but if you look at the data, it's very clear that Huawei is in a much stronger position today than it was 3 years ago in terms of narrowing the gap between their chip capabilities and The US's. And that's the arc. I don't think you should do a static point in time analysis of their capabilities - it's dangerous. What we need to be doing is a 5 year analysis, and where they're heading is that they will be able to close the gap. And because workloads are becoming more and more efficient, I don't think it's that crazy for them to be able to decouple at least the inference part of their ecosystem from The US within 2 to 3 years.
Nathan Labenz (1:19:08) Yeah. I've seen them put up a hospital in, what was it, 6 days or something. So never underestimate the speed.
Anjney Midha (1:19:16) Never underestimate the Chinese.
Nathan Labenz (1:19:17) China might be able to do something. Yeah. Yeah. There's a funny incoherence, I think, sometimes in our policies where we simultaneously make China out to be this, like, you know, big terrible bad guy that's gonna - I'm not even sure what exactly the threat model is supposed to be. I often ask people, like, what exactly am I supposed to be afraid of China for? Like, are my grandkids gonna be speaking Chinese? Like, what is it exactly that you expect? You can offer an answer to that if you have one. I usually don't get great answers, but we have this sort of, you know, oh my god. Like, China, they're so scary. And then on the other hand, we're like, but if we cut them off from this, they'll never catch up. You know? And I'm like, I don't know. Which side of that - it doesn't seem like it's a coherent idea. Or at minimum, it seems like there's sort of a needle threading that we're trying to do with this policy where, you know, we can stay ahead just long enough to get to the singularity and they won't catch up and we'll have, you know - but it seems to me a lot of things have to come together just right for this policy to ultimately pay off in a sort of, you know, new American hegemony via, you know, our values being dominant in some sort of superintelligence or whatever.
Anjney Midha (1:20:24) Yeah. Look. I know earlier I said that there's nothing secret about RL. The recipe is known, and nations can catch up if they want, but the corollary is that they have to invest at the right time. And I'm very sympathetic to the line of reasoning that AI is a tech tree, and you have to get involved early enough in the tech tree to have your own capability years from now if you want it. You can't just say, I'm going to bury my head in the sand for 5 years, and then at some future date I'll show up and try to catch up on RL or AI progress then. Chips are very similar. Right? And I think we have accelerated China's climb up that tech tree.
The threat now is that if America and our allies stop progressing along the tech tree, and instead there's a future where you're entirely reliant on China's tech tree, then, make no mistake, our mission critical workloads and so on are gonna be dependent on whatever governance they choose to impose on the infrastructure running those workloads. And look, from a technological perspective, I think it's okay to admire their engineering prowess. But from a cultural perspective, we have to acknowledge that this is one of the most ruthless authoritarian governments, whose biggest victims are often its own population. Right? And they're willing to do things that I don't think we'd ever want our families to be subjected to. And so unless we are all okay with the trade-off in civil liberties, and oftentimes persecution, that comes with the Chinese form of governance, then I think we have to ensure that we have our own frontier stack that's independent of China. Right?
Nathan Labenz (1:22:22) Yeah. I'm totally sold on that, by the way. We should build our own stuff. We should have our own fabs. And by we, I mean The United States. I think that makes total sense and is almost inarguable. Where the rubber hits the road for me is more on the question of: should we try to deny them similar access? And if so, why? And how do we make that not self defeating? Those, I think, are much, much harder questions. I certainly wouldn't want to cede the AI territory at any level of the stack to China, but I'm much less convinced that we should be trying to deny them their own version of it, if only because it just ratchets up tensions and makes everything else harder. And I'm not sure that we're necessarily accomplishing all that much.
Anjney Midha (1:23:11) No. I agree. I believe in the opposite, which is let's ship the best technology and export it to everybody so everybody adopts it, including the Chinese.
Nathan Labenz (1:23:23) Yeah. I've made that argument too. Like, if you really want American values to dominate AI, why not just give them free inference? Now they'll probably not even accept it. That's a whole other question.
Anjney Midha (1:23:34) Yes. That's what they did with the Internet. Right? They created their own Internet and prevented our companies from operating there. We never reciprocated that. And that's arguably why we are stuck in a situation where ByteDance and TikTok are more successful in America than Instagram is. Different conversation. I'm certainly concerned that the arc of post-training on short form video like that is going to eat the brains of the next generation in a way that I don't think we are prepared for, even though I've ordinarily been sympathetic to the argument that humans are resilient. Arguments were made in previous technology waves that video games were terrible for our kids, and TV was terrible for the kids, and yet we turned out to be fine. But I do think RL, human preference tuning, can be weaponized at a speed and scale that humanity has not really been prepared for before.
Nathan Labenz (1:24:41) I think that's totally right. I mean, if you take what Facebook has done, and this might be, roughly speaking, OpenAI's plan too: they literally just hired a person to run applications who was a big player in developing the news feed into what it now is. If you apply that same level of optimization to one-to-one AI interactions, I think you do end up, or at least there's a pretty good chance you end up, in kind of a dark place.
Anjney Midha (1:25:08) I would not over index on the executive hires, but I understand your greater point, which is that AI RL on human preference is a very, very, very powerful way to drive addiction to AI products. Agreed. I would not want the Chinese government doing that to our population.
Nathan Labenz (1:25:27) And I agree. It's certainly premature to judge any individual. And you could also make the argument that who better to avoid the mistakes the second time around than the person who was deepest in the details of it the first time around. So I could see that going either way. I wanna also just real quickly say, just to put myself on record, I remain very open minded to the idea that China really is a terrible bad guy. I live in Detroit. Not too long ago, there was the story that came out of literally the Detroit Airport where somebody came in from China with some fungus sample that is like a
Anjney Midha (1:26:07) Oh, that's right. Yeah. Yes.
Nathan Labenz (1:26:10) I've been poking around trying to get a little more information on that. It's hard to get good information on such a story. You could wish that we had people in key positions that were making these announcements like at the FBI, at the DOJ and whatnot that were maybe a little bit more universally trusted than the people we actually have. But what I have heard has been basically like, yeah dude, that kind of checks out and you probably should be afraid. So I'm updating on that and I'm like, man, if they're willing to try to release bio agents that would destroy - this wasn't necessarily about to be released, but anything messing around in the domain of bio agents that destroy crops
Anjney Midha (1:26:53) Right.
Nathan Labenz (1:26:53) That's insane. Not only is it just straight up evil and dangerous, but I have to imagine it puts their own population at risk. Right? I mean, these things are not the kind of things you can just release in a controlled form or fashion and expect to only harm the other side of the world. Like, spores can float a long way on the winds, and I just think that's totally insane. So for people who have framed me in the past as overly dovish on China, just know that I'm at least trying to continue to pay attention to this and update my worldview as new evidence comes in, which from time to time it does. Okay. These deals in The Middle East. I believe you were even there for some part of this conversation. Right?
Anjney Midha (1:27:36) Yes. I was not physically there for the announcement of a number of the Trump administration's new deals there. But a number of the founders and companies I work with have been approached by both American and Middle East data center providers, compute providers, and government folks, who are all actively trying to create a joint ecosystem across the two regions, across The Middle East and The US. So I've been peripherally involved in some of these conversations, but I was not there for the President Trump trip.
Nathan Labenz (1:28:17) Gotcha. I guess it seems pretty intuitive. We talked an hour plus ago now, I think, about the strategy the Middle East countries are running to try to do something analogous to what Singapore did in the financial system a generation ago. They wanna insert themselves into the global flow of tokens. That makes total sense. It's obvious to me why these countries would wanna do it. I also understand why the AI industry would wanna do it, inasmuch as they are deep pocketed customers and all that good stuff. What's a little less obvious to me is what The United States is getting out of it. So one question is, what is the bottleneck that these countries are relieving for us? Is it that they can muster energy faster? Is it that they can just green light projects faster? Is it that they don't need these things to be profitable, and so they're willing to pay more than American customers counterfactually would be willing to pay? Like, what's in it for The US? Why don't we just do it all here?
Anjney Midha (1:29:21) So one, they're required to do half of it here. There's a one-to-one match. For all infrastructure they build out there, they have to build out a one-to-one match here. That's one. So we're getting a ton of foreign direct investment, a ton of infrastructure. That's just a baseline requirement for all the deal making there on infrastructure deals. The second is we get them to build on the American stack, not the Chinese stack. I mean, Huawei has been courting every one of these Middle East hyperscalers or sovereigns aggressively for the last 2 years, and I'm very glad that we got continued alliances in The Middle East. This is what we were talking about earlier. This is an extended version of our Marshall Plan. So what do we get? We get more folks in our camp than in China's. We get a bunch of capital. We get more infrastructure funding in The US, which results in more chips and compute capacity for our companies and founders here. And we get more allies in The Middle East instead of them becoming allies with China. So we get a bunch of investment, a bunch of compute, and a bunch of geopolitical wins. Yeah, I think it was a win-win in that situation. The thing we've got to watch out for is whether there's a defection from that alliance. Right? If I was one of the regions in The Middle East, by basically doubling down on America as my ally, I'm running the risk of alienating China. Right? How do I protect against a future where America's not, for whatever reason, a reliable partner of mine? For the time being, they seem to think we are reliable partners, which is great, so I'm happy. But that wasn't clear, by the way, under the Biden administration. I think The Middle East was in a place where they were being denied a local AI ecosystem, they were being denied certainty of chips, and that was driving them into the arms of China. Things were getting pretty tense. The thing about infrastructure is that infrastructure is destiny, so you have to plan on 20, 30 year horizons. Right? And so if you lose an ally in the first chapter, you're essentially predetermining the outcome 20 years in advance. And there were moments in the Biden administration where we were basically driving the Middle East into the arms of China and therefore giving up the 20, 30 year arc of the destiny of the infrastructure in that region to Huawei versus our guys.
Nathan Labenz (1:31:57) I think that's correct. That's why I still come back to that question of just how ready China is to actually meet that demand. And I do agree with you that we should not underestimate them, and we should not just take one static point in time. It does still feel like it's at least a few years away, though, right, before China would actually be exporting AI infrastructure? A deal with China for these Middle Eastern countries seems like it's not a now thing, but 3 to 5 years from now, if Huawei can scale to the point where there's surplus beyond what Alibaba and DeepSeek and all their own major domestic companies wanna buy, which I assume is a lot. So
Anjney Midha (1:32:44) Yeah, totally. 3 to 5 years, but that determines the next 30 years. If you lose the first 3 to 5, you've lost the 30. That's the nature of infrastructure deal making. You lose the first mile, you've predetermined the outcome 30 years hence. So yeah, I agree with you, but that's why the 3 to 5 years is so critical. In a sense, it doesn't really matter when. Yes, racking and stacking a frontier data center with Ascend chips might take 3 to 5 years, but the point is that it's an Ascend data center. It's a Huawei data center. And then what happens to the build out? What happens with the next 500 megawatts of capacity, and the next 500 megawatts after that? You have now essentially set The Middle East on a path down the tech tree where the outcome is predetermined over the 30 year arc in China's favor versus ours.
Nathan Labenz (1:33:38) So aside from this competition with China angle, what would you say is the next most compelling reason to do these deals? Is it because they can turn on energy super fast and we can't?
Anjney Midha (1:33:51) Yes. So on a per-FLOP basis, from what I can tell, some of the marketing would say you get 50% energy gains. But on liquid cooled footprints, once I handicap it for a bunch of other factors, I think you get an 18 to maybe 20% lower cost of energy per FLOP by locating a Blackwell node in The Middle East, in The UAE or The Kingdom, than by locating it in The US without government subsidies here. So yes, there's a lower energy footprint there, partly because a bunch of these data centers are liquid cooled with oil. So there's a pure energy reason to do it. Look, I also think one of the realities we have to contend with, at least until recently with the new administration here, is that the Kingdom Of Saudi Arabia and The UAE are countries with their own form of governance, where they don't have democratic elections, but they do have the ability to move quickly when it comes to infrastructure like highways. They can get through regulatory red tape really fast. And now that those regions are very aggressively pro America and pro modernization, they're dramatically changing the fabric of their societies. If you go to KSA, you'll often see a complete change in cultural values relative to 5 years ago. The speed at which you can get frontier data centers up, because of the lack of red tape relative to other regions across the world, is kind of crazy. There are still parts of Europe where putting up a frontier data center can take 2 years of getting through red tape and approvals, whereas in The UAE, that's just not the case. So yes, there's a technical reason why the compute in these places is more energy efficient, but there's also, I think, a speed of execution thing here.
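For a rough sense of where a per-FLOP energy figure like that could come from, here is a hypothetical back-of-the-envelope comparison. Every input below (rack power, throughput, utilization, PUE, electricity price) is an illustrative assumption rather than a number from the conversation; the values are chosen only so the output lands near the 18% differential Midha estimates, and different assumptions would move the result up or down.

```python
# Hypothetical comparison of energy cost per delivered exaFLOP (1e18 operations)
# for the same accelerator rack sited in two regions. All inputs are illustrative.

def energy_cost_per_exaflop(power_kw, peak_flops, utilization, pue, price_per_kwh):
    """USD of facility energy needed to deliver 1e18 floating point operations."""
    sustained_flops = peak_flops * utilization   # realistic sustained throughput
    seconds = 1e18 / sustained_flops             # wall-clock time for 1 exaFLOP of work
    kwh = power_kw * pue * seconds / 3600.0      # facility energy incl. cooling overhead
    return kwh * price_per_kwh

# Assumed rack: ~120 kW of accelerators, ~1e18 peak FLOP/s at low precision, 40% utilization.
us_site   = energy_cost_per_exaflop(120, 1e18, 0.4, pue=1.30, price_per_kwh=0.110)  # assumed US rate
gulf_site = energy_cost_per_exaflop(120, 1e18, 0.4, pue=1.25, price_per_kwh=0.094)  # assumed Gulf rate

print(f"US site:   ${us_site:.4f} per exaFLOP")
print(f"Gulf site: ${gulf_site:.4f} per exaFLOP")
print(f"Energy cost differential: {100 * (1 - gulf_site / us_site):.0f}% lower at the Gulf site")
```

Under these particular assumptions the differential prints as roughly 18%, but the takeaway is the structure of the calculation rather than the specific numbers: the advantage shows up in the electricity price and cooling overhead terms, which is where subsidized power and liquid cooling enter.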
Nathan Labenz (1:36:03) Yeah. As this develops, what do you think it's gonna look like in terms of personnel, control and governance, and literally military protection? What I mean is: okay, we know it's gonna be in The UAE or Saudi Arabia, but who's gonna build it? Are we gonna have Americans over there doing the building? And does that create a vulnerability for these countries, where The US could at some point say everybody has to come home? Are there other governance things? I've heard, and this is an unverified rumor, but I've heard that in some of these deals there's an agreement on an off switch that is supposedly remotely controlled. I don't know if that's true or not, but that obviously would be an interesting sovereignty question. And then literally, missiles are flying right now. We've had recent episodes of US aircraft carriers reportedly having to turn fast to avoid missiles, to the point where they're dumping planes over the side of the carrier. And data centers aren't so maneuverable. So does this also imply a major expansion of the American defense umbrella to protect these assets? Because if we have gigawatt data centers in the desert in The Middle East somewhere, everyone's gonna know where they are. They're not gonna be that hard to hit with a missile, and they're much harder to hit in The US than they would be over there. So what deterrence do we have, and what sort of promises do you think we might be implicitly making? So anyway: personnel, governance of the off switch variety, and the risk of missiles.
Anjney Midha (1:37:58) Yeah, so this is a good question. The framework you wanna apply to this is dual use technology. Because AI is a dual use technology, it's often governed by a body of rules that is separate from the laws that govern non dual use technology. So if you're talking about a frontier data center that is classified for mission critical workloads, the construction of these data centers is actually classified information. It's confidential. Like other critical national infrastructure, the set of protocols that go with the setup of those data centers is different from a traditional Google data center's, for a number of reasons. It doesn't show up in regulatory filings, and the construction protocols do not look like traditional data center construction protocols. And for better or for worse, because the model training happens in a different place than the inference, the data centers running mission critical inference workloads can be quite small. You don't need 100,000 Blackwells in one place to run military workloads or national security workloads. So I just wanna say that from a pure physical, brick and mortar perspective, they're not that easy to trace. Their footprint is different. They can often just be inside what looks like a department store from the outside. Then there's the whole question of counterintelligence and so on. Let's say there's counterintelligence from an adversarial nation that figures out where these workloads are running, and they will be targeted. Then the analysis you wanna do is basically: what is the threat vector, and what is the frontier of warfare in that situation? If it's an aerial threat vector, then you're essentially running the UAV playbook. And these data centers, at least the ones that are civilian data centers in the kingdom and so on, are governed by their laws and their military posture and their aerial playbooks. So my point is that the analysis isn't AI specific. It applies to any kind of asset, and the risk of that particular asset being attacked is no greater or lower than the risk to any national security asset there; it depends on how good the government there is at defending its best assets. So I don't think it requires a net new expansion if you're talking about the physical footprint of these data centers. There's a whole other analysis we could talk about, which is the weights. And this is why open source is so effective. Because if your entire national security apparatus is predicated on the idea that weights don't get stolen, then I don't know how to protect you.
Nathan Labenz (1:41:00) Yeah. Good.
Anjney Midha (1:41:02) Exactly. I think that remains, including for The United States, a completely unsolved problem. Look, there have been Chinese citizens boarding planes from The US with TPU schematics. I think there was a Google engineer. I don't think there's an apparatus right now within The US frontier labs to prevent model weight exfiltration that is real. If our adversaries, if China, wants the weights, they're getting the weights right now.
Nathan Labenz (1:41:38) I think everybody yeah. I think that's relevant
Anjney Midha (1:41:40) at this point. Yeah. So I don't think we should expect other countries to be able to do something even we can't do yet.
Nathan Labenz (1:41:47) Yeah. All good. I mean, you've been very generous with your time, and I really appreciate it. Do you have any closing thoughts? Anything we didn't touch on that you wanna share? We've covered a lot of ground. That's for sure.
Anjney Midha (1:41:58) No. We should do a part 2. I don't think we got to talk about Taiwan at all, which is a whole can of worms, but I think that's a critical part of the story. Depending on how Taiwanese sovereignty plays out over the next 2 or 3 years, the arc of how the rest of the compute infrastructure, the rest of sovereign infrastructure, plays out will be quite different. So that's for next time.
Nathan Labenz (1:42:17) Yeah. Talk about a big domino to say the least. That's one of those reasons that I'm still not sold on the export controls just because I feel like if nothing else, it's clearly making it that much easier for some big move to be made on Taiwan as opposed to the scenario where China was getting the chips. But I've said that on this feed many times. So cool. Any other closing thoughts? I mean, we can definitely check in. The cycle times are getting shorter and shorter. So, I don't know if it's 6 months away or 3 months away when we'll have to come back and evaluate all of AI and geopolitics. But something tells me it won't be all that long. But any other closing thoughts?
Anjney Midha (1:43:00) No. That's exactly right. Everything we discussed today should be treated as subject to shorter and shorter time spans of relevancy. So see you Monday.
Nathan Labenz (1:43:12) Sounds good. Anjney Midha, general partner at a16z. Thank you for being part of the Cognitive Revolution.
Anjney Midha (1:43:18) Thanks for having me.
Nathan Labenz (1:43:20) If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcasting. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI meeting notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI meeting notes free for 30 days.