Chinese AI – They're Just Like Us? With Beijing-Based Concordia AI CEO Brian Tse
Brian Tse, CEO of Concordia AI, discusses China's pragmatic approach to AI development, safety, and governance. The conversation covers regulations, cooperation pathways, and areas of overlap with the U.S., alongside topics like chips, Huawei, and embodied AI.

Watch Episode Here
Listen to Episode Here
Show Notes
Nathan interviews Brian Tse, founder and CEO of Concordia AI, about China's approach to AI development, safety and governance. They discuss China's pragmatic vision emphasizing AI integration into the economy, the country's multiple AI hubs, regulations requiring pre-deployment testing and AI content labelling, and areas where China's approach overlaps with the U.S. The conversation covers chips and export controls, Huawei's rise, DeepSeek's peer-reviewed article in Nature, open-weights, Singapore's role as a bridge, and pathways for cooperation like shared red lines, risk management frameworks, and emergency preparedness protocols. They also touch on embodied AI and humanoid robots, public optimism, and real labor anxieties.
- Op-Ed in Time “China Is Taking AI Safety Seriously. So Must the U.S.” by Brian Tse.
- State of AI Safety in China (2025) report, following versions in 2023 and 2024.
- Frontier AI Risk Management Framework, co-authored by Shanghai AI Lab and Concordia AI, China's first comprehensive framework for managing severe risks from frontier general-purpose AI.
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report, contributed by Concordia AI’s staff and evaluating 20+ models across 7 risk domains.
- Frontier AI Risk Monitoring Platform (preview), including biological risk refusal evaluations of DeepSeek’s models mentioned during the conversation
- State of AI Safety in Singapore report, the first comprehensive analysis of the space.
- Concordia AI Fall 2025 Hiring Announcement
- Concordia AI Substack, including World AI Conference 2025 Recap and The State of China-Western Track 1.5 and 2 Dialogues on AI mentioned during the conversation
- International AI Safety Report
- Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences
- Shanghai Consensus Statement on Ensuring Alignment and Human Control of Advanced AI Systems
Sponsors:
AssemblyAI:
AssemblyAI is the speech-to-text API for building reliable Voice AI apps, offering high accuracy, low latency, and scalable infrastructure. Start building today with $50 in free credits at https://assemblyai.com/cognitive
Claude:
Claude is the AI collaborator that understands your entire workflow and thinks with you to tackle complex problems like coding and business strategy. Sign up and get 50% off your first 3 months of Claude Pro at https://claude.ai/tcr
Linear:
Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
AGNTCY:
AGNTCY is dropping code, specs, and services.
Visit AGNTCY.org.
Visit Outshift Internet of Agents
Shopify:
Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
Transcript
Introduction
Hello, and welcome back to the Cognitive Revolution!
Today I'm excited to share my conversation with Brian Tse, founder and CEO of Concordia AI, a Beijing-based social enterprise working to advance global AI safety and governance.
I've been wanting to have this conversation since I first started the show, and it feels especially timely now. Recently, at The Curve conference in Berkeley, I participated in conversations about AI policy and the options we have available as we attempt to safely navigate the development and deployment of powerful AI systems – and I was again struck by how many of these discussions – including with members of technical staff at frontier model developers – are explicitly premised on assumptions like "China will never slow down their AI development" or even, in some cases, "China doesn't care about AI safety."
These notions are doing a lot of work in shaping American AI policy and corporate strategy. They're used to justify everything from aggressive scaling timelines to military partnerships to export controls that cut off China from advanced chips. And yet, I'm not sure how well they hold up to scrutiny.
I often say that AI defies all binaries. Even American politics, which is usually so rigidly polarized and partisan, often has its coalitions scrambled by AI questions. And this conversation with Brian suggests that, at least to a first approximation, the US and Chinese AI ecosystems are not opposing forces, but similarly structured communities pursuing remarkably familiar goals.
Both countries have mission-oriented startups using, for now at least, functionally the same architectures and training techniques to relentlessly push the AI capabilities frontier. Both have big tech companies investing huge resources in datacenter buildouts. Both governments are gradually waking up to the possibility of economic transformation, and using industrial policy to secure their supply chains and export controls to gain leverage. And both societies are concerned about rapid, disruptive change, and have safety research institutes – like Brian's Concordia AI – studying the risks and promoting best practices.
The differences, on the other hand, are comparatively subtle.
The Chinese AI community, per Brian's telling, is less focused on AGI timelines or achieving some sort of recursive self-improvement loop or intelligence explosion, and more concerned with building practical applications that serve society's needs.
This view is supported by the fact that Chinese companies are in some important ways much more open than their American counterparts. In addition to making their model weights freely available to businesses and researchers worldwide, they are publishing all sorts of methods at international conferences, generally in English!
And then there's China's recent refusal to purchase NVIDIA's H20 chips, even after the Trump administration made them available. If China were truly in an all-out race to AGI at any cost, as many assume, wouldn't they be buying every chip they could get their hands on?
While none of this rules out a hidden agenda on the Chinese side, it should at least give us pause before assuming the worst about Chinese intentions.
And this conversation goes much deeper than that, with Brian offering one of the most thorough overviews of China's AI governance landscape that you'll find anywhere.
We discuss China's pre-deployment testing requirements for consumer-facing AI products (which are stronger than any US requirement, though roughly on par with what American leaders are doing), their national AI safety standards that explicitly address catastrophic risks including loss of control, their elevation of AI safety to a top national security concern, and the growing ecosystem of AI safety researchers across Chinese universities and institutions, who are, quite naturally, grappling with the same safety concerns we are – CBRN risks, loss of control, and alignment – and developing - you guessed it - broadly similar defense in depth solutions.
There are links in the show notes to deeply researched reports that Brian, along with the Concordia team and coauthors, have produced detailing all of this activity - again, in English.
Overall, this doesn't sound like a country that's AGI-pilled and racing toward superintelligence with reckless abandon. If anything, as you might expect from a country run by engineers with a living memory of revolutions, it sounds like a government that's taking a more grounded, cautious, sober approach.
Of course, I want to be very clear about my own limitations here. I've still never even been to China, I'm certainly not claiming any expertise in Chinese culture, politics, or society, and I completely understand why many in the American political establishment are worried about losing a competition for technological leadership. The stakes are incredibly high, the challenges of building trust between rival great powers are immense, and as I've said many times – I wouldn't want to live in Xi's China – I value the freedoms, of speech and otherwise, that I have the opportunity to exercise here.
But the question at hand isn't whether China's political system is one we would choose – it's whether China is engaged in a reckless race to AGI that forces American companies and the US government to throw caution to the wind in response.
And on that specific question, the evidence Brian presents suggests a very different picture than what many in the West assume.
This conversation obviously won't be the final word on these questions, but I do think it's an important counterbalance to some dangerous assumptions that are currently shaping policy. And considering Brian's observation that there doesn't seem to be as much fear and paranoia about the US in Chinese AI circles as there is about China in the American AI discourse, at a minimum, I think this report, from an AI safety leader on the ground in China, should give one the courage to place the burden of proof on anyone who would use claims about China's approach to justify their own apparently reckless plans, and hopefully also inspire much more direct dialogue between the two countries' AI research communities.
With that, I hope you enjoy this illuminating conversation about Chinese AI development, safety, and governance, with Brian Tse, founder and CEO of Concordia AI.
Main Episode
speaker_1: Brian Tse, founder and CEO of Concordia AI. Welcome to The Cognitive Revolution.
speaker_2: Hi, Nathan, thank you so much for having me. I'm a big fan of your podcast.
speaker_1: Thank you very much. That's well, that's kind and surprising to hear because we're twelve time zones away. This is a Beijing to Detroit conversation happening 9:00 AM for me and 9:00 PM for you. So I appreciate you staying up late for me. Really excited about this conversation. As regular listeners will know, I had dreams of getting to China this summer and attending a big AI conference there. And there was a satellite event focused on emerging risks from AI, including deception and, you know, scheming behaviors. Unfortunately, I had too much going on at home and I wasn't able to make it. So I've been hotly anticipating this conversation as a way to fill in what I missed, you know, in terms of experiential learning. I'm going to have to get it secondhand from you. I think, you know, one point of view I think we share is that there's little in the world going on today that's more important than the way that the US, or the West probably, but certainly the US, and China relate to each other on AI issues, and I definitely want to get your perspective from the other side of the world on how that's going and, and hopefully how we can, you know, steer the future in a positive direction. For starters, you want to just tell us a little bit about yourself? I know your educational background spans kind of East and West, and I'd love to just kind of, you know, we normally don't do too much honestly on biography, but certainly for something like this, I think it is more relevant. So maybe just give us a little bit of your kind of personal history.
speaker_2: Absolutely. I started my journey in AI at Tsinghua University and a deep learning hardware startup just as the deep learning revolution was taking off in China. That was about 10 years ago. The AlphaGo moment was very exciting, but what really captivated me were the big picture AI governance questions that it raised. So that curiosity led me to the UK, where I became a policy affiliate with the Centre for the Governance of AI, initially founded at the University of Oxford. So I looked into the governance of dual-use technologies in the past and had the incredible opportunity to visit and present my work at Google DeepMind in London. And then from there, I also served as a senior advisor to the Partnership on AI and consulted with OpenAI around 2019. So I provided external input on the societal implications and also the release strategies of early large language models like GPT-2, which was just a fascinating experience right in the heart of Silicon Valley. Then I found my way back to Beijing, where I did some work with the Beijing Academy of AI, including developing one of the first AI ethics principles from Chinese institutions. It was around that time that I founded Concordia AI with the goal of bridging these different worlds and building a more cohesive global conversation on AI safety.
speaker_1: Cool, I love it. I wish I had spent more time going back and forth myself. So tell us about Concordia. You're in Beijing. What do you guys do?
speaker_2: So Concordia AI is an independent social enterprise with a clear mission since day one to advance global AI safety and governance. So our work is roughly structured around three pillars. One, we drive the development of AI safety standards and policy by participating in the relevant national standard-setting committees and also providing expert consultations to government bodies. Second, we collaborate directly with some of the leading AI labs and companies, helping them develop safety frameworks and also implement safety best practices. For example, we recently co-authored the first comprehensive framework for managing the critical risks from general purpose AI models, in particular in collaboration with Shanghai AI Lab. Third, we also foster global dialogue on AI safety. This includes convening forums that bring together some of the top experts from across academia, industry, and also policy makers at some of the biggest AI conferences in China and Singapore. We also participate in global policy forums such as the United Nations and global AI summits, and also conduct research to help international stakeholders understand AI safety and governance developments in the region.
speaker_1: So is there an organization in the United States, or more broadly in the West, that you would say is like the closest analogue to your organization? Just to give kind of a, you know, an intuition or point of reference. Depending on who you mention, there's a decent chance we've done an episode with them in the past.
speaker_2: I don't think there is one perfect analogue, but there are certainly a lot of civil society organizations that do work on AI safety and governance. What is unique about Concordia is perhaps sort of the Asian perspective on the global landscape and also blending in work across academia, industry, and policy.
speaker_1: The one that comes to mind, I guess, and maybe this is recency biased because I just did a conversation and put out the episode with him, is FAR AI. I just had Adam Gleave on the podcast and it seems like certainly in terms of supporting these like international convenings, there's, there's some overlap there. And I think kind of, at least from what I understand so far of your like overall worldview, there seems to be some significant alignment or overlap between you and them. But I don't know if that's, I don't know if you want to react to that or what. If I said, oh, Concordia is like the FAR AI of China, what would you say that misses?
speaker_2: Well, yeah, I do think there is a lot of common ground in terms of our mission. I think Concordia is also involved in a lot of the national policy and standard setting, and I can't speak to whether that's true for some of the other groups.
speaker_1: Yeah. I think they're, they're moving in that direction as well, although they, I think they're coming to that maybe a little bit later after more of a research focus in, in earlier years, but also, you know, realizing how important that's going to be, they're shifting energy and activity in that direction too. How, how about just a general sketch of the AI landscape in China? In the US, we have, you know, a sort of handful of what I sometimes call live players, which for me, I define as, you know, kind of qualitatively who do I think is like really in a position to shape what the future of AI looks like. And I would say, depending on, you know, exactly where you want to draw that line, there's probably 5 live players in the US. Obviously you'd put OpenAI, Google, Anthropic in that category. And then I think for me, xAI has to be included and probably Meta also has to be included. And then you could, you know, go a little farther and say, well, maybe Microsoft or, you know, whatever. You could start to add a couple other organizations to that list as well. Notably, there is super high concentration in the Bay Area, a little bit of, you know, the Seattle area as well, but it's like a pretty geographically densely clustered thing in the US. And it's also, broadly speaking, like a pretty densely culturally clustered phenomenon, right? Like these people that are doing this work have known each other for years. Like they've all read their sort of, you know, Eliezer, agree or disagree, right? They've all kind of got a very remarkable amount of like shared cultural context and history, even before, you know, the current AI paradigm really started to take off. I couldn't do a similar sketch in China. And I, and I don't know to what degree things have developed sort of as expected in China or if there's been surprises here. Of course, we have like, you know, some of the companies I've mentioned are like the giants that you would, you know, have definitely anticipated being in that spot, like Google, right? Like was clearly an AI leader the whole time. And here they are. No surprise there. We've got companies like Alibaba in, in China that, you know, are, I think, in kind of a similar role where they were massive tech giants with huge resources and, you know, naturally investing in AI and continue to be leaders. But then we've also got these, you know, kind of DeepSeek and Kimi type, and Kimi is Moonshot, I should say, but the model is Kimi, companies, which, you know, for many in the West kind of came up by surprise. How would you sketch the, the Chinese landscape in terms of like who are the live players and to what degree has that shaped up like in a very, I know there's, there's also a little bit more geographic distribution, but I'm not sure how you would describe that. And I'm not sure if you were, you know, were surprised by any of these other companies kind of hitting the, the top tier in China or if, you know, from the inside view that was more expected.
speaker_2: Yeah, I think China has multiple hubs for AI development and startups. So you have Beijing that is home to the big companies like Baidu and ByteDance. Also Moonshot AI and Zhipu AI were founded by alumni of Tsinghua University, which has just a very high density of AI researchers and entrepreneurs. You have Shanghai that is home to MiniMax and other very strong large model startups with a focus on multimodality. Shanghai is also hosting the World AI Conference, which draws a lot of attention across the investment and industry ecosystem. And then you have Hangzhou, right, where DeepSeek and Alibaba have their headquarters. Actually, in 2025, there was a term that got popularized, which is the six little dragons of Hangzhou, including DeepSeek, but also two robotics companies like Unitree. And then you have Shenzhen in the south of China, which has major players like Huawei and Tencent. And I think that sort of geographical distribution also applies to the concentration of work on AI safety. So in our report on the state of AI safety in China, we have documented around 31 groups in China that have published a few papers on AI safety. And the top five cities are Beijing, Shanghai, Hangzhou, Shenzhen, and then also Hong Kong.
speaker_1: Is there any kind of cultural constant across these hubs? You know, again, with the Bay Area, right, Like it's definitely a distinct culture. I won't try to, you know, fully characterize it in 30 seconds. But it's clear that there is a sort of science fiction, you know, inspired intellectual tradition and definitely a lot of like shared cultural touchstones that span the companies, you know, even as they're, you know, competing fiercely with each other in the market and, and for research breakthroughs. They certainly have some commonalities. Is there anything like that in China? Like how, how would you describe the the kind of cultural foundation of AI work in China?
speaker_2: I think one notable difference is that across all of these Chinese cities, the tech ecosystem and the policy ecosystem are quite intertwined. Whereas in the US, my sense is there is a pretty big difference between the Silicon Valley culture and the culture in DC. So perhaps there are just more coherent views on policy questions across these Chinese AI hubs.
speaker_1: How about in terms of, like, where they are taking us, I guess, you know, vision for the future? One of my common refrains is the scarcest resource is a positive vision for the future. And I'm always struck by what we hear from the leaders of AI companies in the United States. It's usually very like, it's going to be so amazing. You know, we're going to like solve all the problems, cure all the diseases. It rarely goes much deeper than that. We don't hear too much about, you know, what is life, you know, what do you think life is supposed to be like when the AIs, you know, get as good as you say they're going to get? And I feel like that's quite a notable gap. There is this sort of like sci-fi, but very vague, you know, sketch. And then it's just like, but we just got to, you know, develop this technology. Trust us, it's going to be amazing. Is there a similar, you know, sort of lack of concrete vision in China? Or, because you do also see these survey results that suggest Chinese people are more optimistic about AI than Americans, I wonder if there is like a more concrete vision that the sort of government and companies together are offering the public. Or maybe this is just a dispositional thing, like the last 25 years in China have gone, you know, comparatively really well, and so people are just more optimistic in general. Yeah. How would you describe the sense that exists in China about like, what's going to be good about this and why people are relatively optimistic?
speaker_2: Yeah, that sounds like a great question. So maybe let's start off with Chinese society in general and then I can comment on the perspective of Chinese policy. So in our report, we have looked at some of the existing public opinion surveys on AI in China. The limited samples suggest that the Chinese public generally view that the benefits from AI would outweigh the harms. One survey after ChatGPT suggests that the Chinese public do think there could be existential risk from AGI, but still think that AGI should be developed because they think that the risks are largely controllable. And then there is another recent survey with a focus on Chinese students that also found that they are pretty optimistic about the benefits of AI outweighing the potential harms. I think around 80% of the respondents agree that AI would do more good than harm for society. But to put this into a broader context, right, this is not very surprising if you consider the recent past. Let's take my parents as an example. They were born around the 60s in a southern province of mainland China, in Fujian. And within their lifetime, the per capita GDP of the country has increased by more than 140 times and the rate of extreme poverty has decreased from approximately 88% in the early 80s to close to 0 today. So I think when you have witnessed that level of transformative progress firsthand or through your parents, you tend to have a more optimistic view of what technology as well as proper governance could deliver.
speaker_1: Yeah, that's, that's incredible. Obviously it started from a low, low after some, some bad times, but still it's an incredible rebound. And, you know, in many ways you look at Chinese cities today and it's, it's hard to argue that they're not world leading in, in many respects. I guess maybe just one, one more question on sort of like the culture of AI development in China. Would you say that it has a, is there a single culture of AI development in China or because it's spread across these different regional hubs and, you know, different companies? Would it be overly simplistic to say that there is a culture of AI development in China?
speaker_2: I think there are different approaches to thinking about frontier AI, but it's not necessarily geographically based. So the mainstream approach in the Chinese commercial AI market is scaling large foundation models and deploying them, and they follow closely the developments in the West, particularly with the success of GPT-3. So you have the Chinese AI ecosystem that is rapidly advancing in areas that are similar to what your audience would know: increasingly multimodal, advanced reasoning, increasingly capable AI agents, and also scaling across training, post-training and inference-time stages, right? But then alongside this mainstream trend, there are also a couple of directions that receive significant emphasis in China. So you have prominent AI policy documents and scientists that believe embodied AI, grounding AI training and deployment in the physical world, is the most important approach for achieving powerful AI in the future. You have an institute called the Beijing Institute for General Artificial Intelligence that focuses on developing AI that can reason and plan towards complex goals with minimal initial input. The director there frames this approach as a small data, big task approach, in contrast to the big data path for training large scale foundation models. So yeah, I do think there are different flavours for thinking about frontier AI.
speaker_1: Yeah, those definitely have, you know, resonance or, or analogs I would say in the, in the Western AI culture as well. And culture's maybe not the right word, but certainly there is a school of thought that like embodied AI is going to be critical. And there's definitely, you know, we've got things like the ARC-AGI prize and people are very focused on getting AI systems to adapt to new tasks and, you know, become more sample efficient. So yeah, overall it seems maybe more similar than different, which I think is a big theme, honestly, of all the reading of your reports and, and all the different preparation that I've done for this conversation. It just kind of kept coming back again and again. Like, this sounds a lot more similar than different. How, how do you think Chinese AI companies look at the West these days? And again, maybe we, I don't know if we want to speak about companies or the sort of intersection of companies and government that are kind of, you know, more intertwined in China than in the US. But in the, in the West, we hear all the time, and it's, it's very frustrating to me, so many conversations about what we should do, what we can do, what the option set looks like, end in well, China's going to do this, China's going to do that. It's inevitable. You know, we can't stop them. So we've got to do it ourselves. You know, this, this sort of adversarial relationship and the race to AGI between the two civilizations, you know, it's kind of taken as a given. Do you hear that coming from the Chinese side as well? Is, is that also something that is, is like a mirror image? Or if not, you know, what, what is the narrative in terms of how Chinese AI relates to Western AI that you, you hear there?
speaker_2: One thing that has significantly less discussion among Chinese companies and policymakers is specific AGI timelines or a clear kind of finish line for AGI. I think this contrasts with the public discourse in the West, where the question of when AGI might be achieved is very prominent, right? I think Chinese entrepreneurs have a stronger focus on achieving technological self-sufficiency, getting very useful and practical applications, and just being very profitable and enjoying commercial success. There is also some idealistic goal of open sourcing AI to benefit all, especially countries and communities that lack AI capacity and access.
speaker_1: Yeah, interesting. I wonder where that comes from. One would be tempted to say that the AGI notion of like this super powerful thing that can like run the world or, you know, even become like a singleton or something seems to in some ways echo Western religion, and the sort of, you know, single-God Abrahamic concept that is very familiar here, that maybe just isn't such a, you know, isn't such a background force in China. And so that whole notion maybe resonates less. Is there, is there any, like you say, there's less emphasis on a finish line, less emphasis on AGI, less emphasis on timelines, but the Western story is very confusing, right? It's sort of like we're going to get over this event horizon of like AGI. And I just heard the other day that Sam Altman had been asked, like, you know, how we'll know, you know, when you've got AGI. And an answer that he offered was basically when the AI researchers are using more of our research compute than the human researchers. That's like a, you know, we'll know that we've tipped over into AGI because, you know, if the AIs deserve, you know, more of our compute allocation, then in some, you know, clear sense, like that's a, a tipping point. But it's a very weird story where we sort of have this event horizon beyond which like we can't see. Is there something similar in the Chinese narrative, or is it just kind of a, like, things will hopefully get better, daily life will get better? I'm still kind of struggling to figure out what fills in that blank space in the Chinese imagination around the, you know, the big picture future of AI, if anything.
speaker_2: Yeah, let's talk about the recent high level directive on AI Plus released by the Chinese central government. This is a pretty big deal that represents a comprehensive blueprint for how China plans to develop and deploy AI domestically, and that was just released in August 2025. Overall, the goal is for AI to serve as a new economic engine, right? A general purpose technology that is similar to the Internet in the past few decades or electricity during the industrial revolution, that is fully integrated into the whole economy. Right? The plan has six pillars, with the first one being AI as a tool for scientific discovery and innovation. I think this is listed first because Beijing sees it as a force multiplier for everything else. I was at an AI for Science forum in Shanghai last year, and even then it was clear to scientists and policy makers and entrepreneurs that it's really inspiring to use AI to solve scientific challenges, right, with examples of AlphaFold and others. And interestingly, the Chinese document sort of takes this a step further, calling for the use of AI to even advance the social sciences and even philosophy. Then there are other domains like AI as a tool for industrial transformation. Think about fully autonomous factories, right, smart logistics, and the use of AI-powered drones and robotics in areas like agriculture. Third, there is AI as a consumption booster, using AI to create new consumer experiences, even including things like the metaverse and brain-computer interfaces. And then there is AI as a human collaborator as the fourth area. And a key theme is to use AI to support human labour, not to fully replace it, right? Helping workers to collaborate more closely with machines and leveraging AI agents as copilots to boost productivity. Then there is AI as a tool for government efficiency and proper governance. And then finally, I believe the sixth area is using AI for international cooperation as well, where the leadership of China views AI as an international public good that should benefit humanity, and also making open source tools and models widely available, particularly for the Global South. And notably, throughout this comprehensive national blueprint for AI, it doesn't mention AGI or superintelligence or any other such concept. Right? It seems to me that the real AI race for China isn't about beating the US to AGI supremacy, it's about prioritizing integrating AI into and also boosting the real economy in the coming years and decades.
speaker_1: Yeah, that's, that's good to know. That's a, it's a more grounded, more grounded conception. Well, I'm remembering one article, I can find the link, but it dates back a few years now, maybe probably 5-plus years. It was in the Washington Post and it was from a Chinese government official. And basically the idea was that with AI, we might be entering a world where central planning can really work. And it was sort of making the case, interestingly, to a Western audience in English. And I have some questions about that, that phenomenon in general, because like it doesn't, it's not something I take for granted. I think we in the United States maybe broadly culturally take for granted that Chinese researchers are like going to the trouble to publish their stuff in English and making it like really quite readable for us, which is, I think, something we probably shouldn't take for granted. But we'll circle back to that in a second. The op-ed basically was sort of saying, like, you know, markets have been great, but they're undirected, and you know, they were sort of the best thing that we could have. But now, with like really powerful AI that can crunch all this data, there is the opportunity for central planning and, you know, government direction of society to really work. That's been a few years and was just one op-ed, and I don't really know how, like, how big of a deal that was, although it did come from someone who is like official enough that it was, you know, it was not nothing. But I didn't hear that in your characterization just now. It did, it did not sound like there was this idea that with AI, like the government will really finally, you know, be able to kind of get it all right. Is there any impulse like that that you see, where AI is supposed to be sort of a magic solution that will make like state planning, centralized, you know, economic direction really work in a way that it hasn't historically?
speaker_2: I think that sounds a little bit utopian for the Chinese policy discourse. So no, I don't think people are betting on using AI to solve everything, including central planning of society. On the margin, I think when I talk about using AI for proper governance and government efficiency, there are examples of using AI to create better early warning systems for, for example, natural disasters, right? Getting kind of real time data early, and for the government to act and prepare for potential crises. So on the margin, yes, there are valuable applications, but probably not as a unifying silver bullet to societal issues.
speaker_1: Yeah, interesting. I'll have to dig up that link. I just Googled, I couldn't find it immediately, but, but I can, I can track it down. How about on this English thing, like you mentioned self-sufficiency. One worry that I have broadly is like, right now it does seem like a very fortunate situation that we're in, that the two, you know, for as much sort of decoupling as there has been, and the trend obviously, you know, seems to be continuing to go in that direction, the AIs that are coming out of the two countries are like much more similar than different, right? Like we've got language models, we've got the attention mechanism, we've got, you know, large pre-training and post-training and like now reasoning and multimodality. And it's like very much following the same path of development. That's, again, something I don't take for granted. How do you think the Chinese research and development ecosystem broadly thinks about that? Do they want to be like doing the same thing as, as what's happening in the West? Why are they publishing in English? Like why not just do, do all this stuff in Chinese? And do you think that there could be a divergence where, you know, because of, you know, whatever cultural or, you know, perhaps, like I worry about this sometimes in the context of chips, like if the chips become too different or availability becomes too different, that could be sort of a forcing function that could take R&D efforts in, in somewhat different directions. But yeah, so there's a lot there. Sound off on it if you would, please.
speaker_2: Yes. I think for several decades, especially since the deep learning revolution around 2012, Chinese and Western AI researchers have been part of the same ecosystem, right? They attend and present at the same top tier machine learning conferences like NeurIPS and ICML. They publish papers on arXiv to share findings quickly. So publishing at top tier international conferences is seen as more prestigious than publishing in Chinese-only journals. And I would say the communities have been deeply integrated, right? I remember seeing a study showing that for the UK, the most common partner for AI papers is actually China, not the US. Another study in Nature in 2024 analysed over like 5 million AI papers, and they found that collaboration between researchers in China and the US produces more impactful and novel AI research than when either country works alone. So it is deeply integrated and collaborative. And this practice isn't unique to AI. You know, English serves as the lingua franca for most of the modern scientific fields. And there is obviously a deeper and darker historical reason for this, right? We have lived in a world dominated by the English language for the past two centuries, largely a product of the Western-led industrial revolution and the era of colonialism. So I think that has created a kind of cultural gravity as well. Yeah, I'm happy to talk about the tech stack later on, but on the language point, I do think it's something that we shouldn't be taking for granted.
speaker_1: Yeah. Are there Chinese versions of all these papers as well, or are they literally just putting them out in English as like the canonical version?
speaker_2: I think for most of the papers it would just be in English and then there could be summaries of these papers in blog posts in Chinese.
speaker_1: Yeah, fascinating. How, how strong is the English on these teams? Like if I go to DeepSeek and I show up not speaking, you know, more than two words of Chinese, can I expect to have, like, because the papers are, I thought the DeepSeek R1 paper, and this has been true about the Kimi papers too, in addition to all the other commentary, one thing that I think wasn't commented on enough was it was just very well written, like clear, compelling, just very well written, like better written than the typical paper, you know, that I read from native English speaking authors. So is that something where, do the rank and file researchers at these companies have like that level of command of English, or is it sort of a specialized skill within the company? Like if I showed up at the lunch room, would I be able to just like speak very comfortably in English to everyone? What does the sort of English production process look like at these companies?
speaker_2: The vast majority of people at these companies would be able to understand English, both in terms of hearing and also reading. I think a good amount of them would also be able to have a conversation with you, in particular among the scientists. But obviously, that's not the case for everyone, especially at companies that have hired more from, you know, local universities.
speaker_1: Interesting. OK. How about if we go outside the companies and just look at the public for a minute there? You know, obviously there's been this whole chip wrangling, and I wonder, has that like impacted the public's access to AI at all? If I am just a, you know, average Chinese computer Internet user, do I feel like there's any scarcity of access to AI? Or is it, is it similar to being in the US, where I have, you know, four or five different products I can choose from? They all, like, you know, sort of have a free version and a relatively affordable paid version. And I can basically get like all the AI I want right at a, at a not crazy, you know, monthly fee. What does access look like at the retail level in China? And, and how would you describe, you know, just, just how much of a sort of everyday thing it has become for, you know, normal people that are not part of the AI community itself?
speaker_2: I think it's pretty similar. Pretty accessible with people able to download multiple chat bots and video generation apps on their phones and many of them are free so I would say it's pretty similar and accessible.
speaker_1: And these are cloud services too, right? Like, they're not... when you say download, obviously you download the app, but you're not in general downloading a model and running inference locally on your device, right? The, the sort of inference model is similar to what we experience in the West, right? It's, it's the models are being run in the cloud, you're getting streamed tokens. That's, that's the typical pattern. I know what I've asked is a little less obvious in, in China because so many of the models have been open sourced, but I'm assuming that the, you know, the delivery mechanism is still basically via the cloud.
speaker_2: For the everyday consumer, you will be downloading the applications through the cloud or just using the website, the browsers right? Obviously open weights is also very much a thing for developers, so all of these options exist.
speaker_1: Yeah, OK, cool. Let's talk about your op-ed in Time. It's, it's been fascinating to see Time magazine become the sort of flagship outlet for AI policy thought leadership. Maybe before getting into the specifics of your op-ed, which is basically making the case that China is taking AI safety seriously, and that, I think by extension, at least, we should be less confident in the, you know, "but China's going to do it, so, you know, what else can we do?" kind of get-out-of-jail-free card that, that happens in so many American conversations. How did you actually get connected with Time magazine? Like what was your experience of, of publishing, from China, an op-ed in Time magazine?
speaker_2: Well, we had the report on the state of AI safety in China. We wanted to communicate the message that there is a lot of activity and nuance going on in the landscape. We also wanted to connect with the audience internationally, right, especially on recommendations for AI policy. So we just wrote up the op-ed and I pitched it to the editor at Time magazine, which has been quite interested in AI safety as a topic in the last few years. And so I think that was a very straightforward process. We appreciate how professional and open they are in terms of these topics. So yeah, it was a really good experience.
speaker_1: Yeah, cool. I hope to get Marc Benioff on the show. He owns Time magazine now. And it's definitely, you know, something I did not have on my AI bingo card is, you know, Eliezer publishing there, you publishing there. It's quite a, quite an impressive reinvention of, you know, an older media institution, I would say. So OK, take us through the argument. China is taking AI safety seriously. I mean, you can walk me through it probably better than I can guide you through it. So what are the kind of key reasons to believe this?
speaker_2: So AI safety has been elevated to a top national security and public safety concern over the last one to two years. So one of the most important public meetings in China is the Third Plenum, and that happened in 2024, where the leadership, including President Xi, called for an oversight system to ensure the safety of AI, and that meeting classified AI safety as a major public safety concern. And then following that meeting, in February this year, China also published an update of the national emergency response plan. So now the new version includes AI risks alongside concerns such as cybersecurity, biological security, and also natural disasters, right? So that suggests AI is not viewed simply as a content control issue, as some might think. Then there is also a study session on AI by the Chinese leadership, sort of the top 24 officials in the Politburo. This is a channel for senior policy makers to be informed by experts in different fields, including AI. And in the session in April this year, the Chinese president noted that AI brings unprecedented development opportunities, but also could bring unprecedented risks and challenges. And if we compare the meeting readout to previous high level statements by the government, it was much more detailed on safety. It describes some of the distinct stages of AI risk management, from technology monitoring to early warning and then to emergency response. So I will just start with this really high level national prioritization of AI safety that we've seen in the last year.
speaker_1: Could you give a little more context on sort of how these things should be understood in context? I guess what what I mean there is like when the Politburo has a study group, I think you could at least superficially look at, you know, a sort of similar ish meeting at the White House, you know, where people talked about AI and various things were said right? And you could say, oh, well, that seems like it's happening pretty much the same in both countries. And maybe that's right. Maybe that is the right way to understand it. But I at least have some sense that there's something more like official and meaningful and substantive when a Chinese body does something like this versus when it happens, you know, because I think a lot of these things in the, in the United States at least, it's like, yeah, well, we had that one meeting at the White House and, you know, we kind of said a few things and then we moved on. In a couple cases, we got like voluntary commitments. Mostly they've been honored by the companies. But at the governmental level, like we've totally turned over the government. And you know, it's, there's, there's not a lot of continuity or just because like a meeting was was held, you know, doesn't mean like anything is going to happen downstream of that. So how would you kind of characterize what these meetings or these sort of statements really mean, so to speak, in the Chinese context?
speaker_2: Absolutely. So one of the key functions of these statements and study sessions is to signal the priorities of the leadership and for the country to a domestic audience, right? And then local officials would study them to understand the priorities, and also other stakeholders in society, right, including investors and private companies, would also pay close attention. So this allows the government to mobilize significant resources towards specific goals, creating a whole-of-society approach to achieve these policy objectives. Of course, this type of top down mobilization doesn't guarantee a perfectly coordinated process. Actors might respond dynamically and might have different interpretations, but I do think there are some differences in AI policy between the US and China. Two things. The first major difference is continuity. In China there is a strong long term continuity in national plans, including for AI. So once the central leadership sets a direction, it could be a multi-year or even multi-decade commitment that will be carried out in the government and across society. Whereas in the US it seems like a different story, with an executive order on AI by one administration that might not survive the next administration, just as an example. The second difference is coherence. In China, the Politburo sets the overall tone and then there is a more unified approach to implementing policy. Whereas maybe in the US you have different opinions between the White House, Congress and other stakeholders. So I think in terms of the strength of the Chinese policy making system, it allows for a more unified focus on longer term planning. And in terms of the AI safety culture differences between China and the US, I think there are a number of observations I have. The first one is China appears more willing to consider regulations, and perhaps that comes from a high level of trust by the public towards the government. And the second, we also see that academics have a really significant influence on policy making in China. For example, several Chinese academics who are deeply concerned about the potential catastrophic risks from AI have briefed the Politburo leadership directly through either study sessions or other channels. I think that is partly due to a historical tradition where, for almost thousands of years, scholars have held almost the highest status in Chinese society, more prestigious than entrepreneurs and business people. It also means that China might be less susceptible to capture by commercial interests and tech lobbying than what we might see in some other countries. And then finally, it also seems to me that the AI safety discourse in China is less polarizing, for, you know, lack of a better word. You know, looking at the headlines, it seems like in the US, the views on AI safety really have a huge spectrum, right, with people calling for an immediate pause on AI development to effective accelerationists who want to speed up at all costs. And I think the discourse in China is a bit more in the middle.
speaker_1: Yeah, that's interesting. How about on the question of accepting performance compromises for the sake of safety? This is something that I feel like we might really need to be willing to do longer term. You know, we could maybe have a little bit more powerful AI, or maybe even just could respond faster or whatever, if we, you know, didn't have certain guardrails in place. We haven't seen too much will to do that in the US yet. Arguably, it hasn't really been needed yet. But I wonder if you could characterize the attitude there: are these things seen as something that people are, are quite ready, or unwilling, to trade off against each other?
speaker_2: I think in China overall, people do not see capabilities and safety as zero-sum. It's possible to increase the efforts in both directions. For example, one quote I am remembering is that the Chinese Vice Premier Ding Xuexiang went to the World Economic Forum in Davos in early 2025, and he gave a metaphor that if the braking system isn't under control, you can't really step on the accelerator with confidence. And for context, he is the most senior official in charge of science and technology planning in China. And so at least conceptually, across all of the Chinese policy documents that we have analysed, they always view safety and capabilities as hand in hand and not necessarily as a zero-sum dynamic.
speaker_1: Yeah, there's many, certainly many ways in which that has proven to be true over time. And maybe that's the healthier attitude in general versus even viewing them as being at odds. There probably will be some areas where they will be at odds. Like one very simple one is just like, if you really want to control sensitive systems, you might want to put a filter on outputs. And now you have this question of like, should I stream the outputs token by token and run the, you know, filter in parallel with that? Or should I wait for the generation to be done and then, you know, do a classification and then, you know, return a thing? There are some areas I think where you will have a kind of, if not necessarily power, at least like a user experience trade-off against safety and control. But yeah, maybe I shouldn't emphasize those too much, because it is probably easy for people to fall into the frame that it's a trade-off, when in fact, in, in many, many situations, better control allows you to, you know, get better use of AI and it's all quite to the good. There's the 45-degree line too, right? I really like the metaphor. I'm not a big analogy guy in general, but I do like the metaphor of like, you can't go fast without good brakes. Is the 45-degree line basically the same idea, or would you unpack that one slightly differently?
speaker_2: I think it is basically the same idea that we need to advance safety and control alongside capabilities deployment of AI. So for example, if we have more dangerous capabilities of AI in specific domains, then we should also have stronger safeguards at the same time.
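To make the output-filtering trade-off Nathan sketches above a bit more concrete, here is a minimal illustrative sketch in Python. The generate, stream, and classify functions are hypothetical stand-ins rather than any particular vendor's API; the point is only the latency-versus-control difference between checking a completed response and checking while streaming.

```python
# Minimal sketch of the two output-filtering patterns discussed above.
# `generate`, `stream`, and `classify` are hypothetical callables, not a real API.

def moderate_after_completion(generate, classify, prompt):
    """Wait for the whole completion, classify once, then return or withhold it."""
    text = generate(prompt)  # user sees nothing until generation finishes
    return text if classify(text) == "safe" else "[response withheld by safety filter]"

def moderate_while_streaming(stream, classify, prompt, window=200):
    """Yield tokens immediately, re-checking a sliding window of recent output."""
    emitted = ""
    for token in stream(prompt):  # tokens reach the user with minimal latency
        emitted += token
        if classify(emitted[-window:]) != "safe":
            yield "\n[stream stopped by safety filter]"
            return
        yield token
```

In a real deployment the classifier would typically run concurrently with generation rather than in line with every token, but the basic user-experience trade-off between the two patterns is the same.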
speaker_1: One really interesting characterization that I heard of the difference, of course there are many differences, but one, you know, particularly relevant difference between the Chinese and the American system, I think this came from the Dwarkesh podcast, was described as the role of mayors and governors, and the incentives that mayors and governors have, where in the US they are obviously elected by local populations. And so they're accountable to local populations and their incentive is to be popular with the local population. In the Chinese system, as it was described, the incentive is to do a good job according to the priorities that the central leadership has set. And there's still like a lot of opportunity to be inventive and come up with, you know, good local solutions. But the way you get promoted and become like a higher ranking official and advance in your career is not by being exactly popular with the local population. It is by effectively hitting the objectives that the national leadership has set. I guess, first of all, I would wonder, like, do you agree with that assessment of the incentives that people have? And if that is true, are we starting to see mayors or governors, like, do things sort of locally, entrepreneurially, to try to advance the AI safety priorities that the central leadership has set? Are there any examples of people who have, like, come up with cool new ideas that are kind of, you know, bubbling up in response to these nationally prescribed priorities?
speaker_2: That's a good question. Yeah, there is the concept of the mayor economy in, in China. And for a while the local and provincial governors were very much focused on optimizing GDP, economic performance. So they were building a lot of infrastructure, real estate, boosting investment in those local areas. And then there was an increasing level of environmental concern over the last decade. And so in the assessment of those governors' performance, environmental goals and metrics were added, and that changed a lot, with, for example, the air quality in Beijing. And so I think that is an example where you need to identify and improve the local incentives in order to achieve national policy. For AI specifically, over the last two years, we have seen a number of provincial AI plans from Shenzhen to Shanghai and other areas. I would say one prominent example is creating these experimental zones for self-driving cars, to look at different levels of automation and how it could be properly regulated. In general, there is a tradition of doing experiments locally before sort of institutionalizing at the national level. We have seen that with experimental zones in the economic reform era, with Shenzhen being a prominent example. And I think we're also seeing the same strategy being played out for some of these AI implementations.
speaker_1: How? This is a bit of a digression, but how would you describe the state of self driving vehicles in China? Have you had a chance to ride in any and you know how, how would they compare to like a Waymo that people might have experienced in San Francisco?
speaker_2: I think the state of the art in China is pretty comparable to Waymo. I have seen and experienced some of the self driving cars from Baidu with their Apollo project. I think it's getting pretty good. I think one of the main bottlenecks is with regulations. With self driving cars, it can't be just a little bit safer than humans, right? You want them to be, you know, maybe 10 times safer for society and the public to feel comfortable with it.
speaker_1: Yeah, that's, that's funny. That's almost exactly the same as it seems like we have in the United States too. I'm, I'm always like marvelling at the fact that even as we're now hitting literally like 10 times safer, still, people are kind of like not comfortable with it. And you know, we're unsure if we should really adopt it or whatever. But boy, the, the latest data from Waymo is like really, really hard to read any other way, in my view, other than like, we really should make this a national priority. We'll see if that happens. How about on the AI safety dimension specifically, though? Is there anything from, from mayors or, or, you know, kind of more local regional officials that you would highlight as sort of interesting? Maybe it's just too early to have anything there? I mean, obviously this is all kind of developing very quickly.
speaker_2: So one of the national pieces of regulation that came out in 2025 was around labelling of AI-generated outputs. Starting from September this year, all developers and also social media platforms are required to have both explicit labels as well as implicit metadata for AI-generated outputs, ranging from text, audio, and video to even simulated environments. And I've seen Shanghai trying to pioneer a consortium of AI companies like MiniMax, but also social media platforms like RedNote, to make sure that there is a unified and functional standard around watermarking and content provenance.
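To make the dual-labelling idea concrete, here is a minimal illustrative sketch in Python. It only shows the general shape of "explicit label plus implicit machine-readable metadata"; the field names, label text, and structure are hypothetical and are not drawn from the actual Chinese standard or any platform's implementation.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, provider: str, model: str) -> dict:
    """Attach an explicit (human-visible) label and implicit (machine-readable)
    provenance metadata to an AI-generated text output. Illustrative only."""
    explicit_label = "[AI-generated content]"          # shown to end users
    implicit_metadata = {                              # embedded, not normally displayed
        "generator": provider,
        "model": model,
        "content_type": "text",
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "display_text": f"{explicit_label} {text}",
        "metadata": json.dumps(implicit_metadata),
    }

# Example usage with placeholder names
labeled = label_ai_output("Sample model answer...", provider="ExampleCo", model="demo-llm")
print(labeled["display_text"])
```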
speaker_1: Interesting. One worry that we have in the US is that we're going to get a bunch of state-level rules and that they are going to be at odds with each other or contradictory in different ways. There are obviously 50 states, so there's a lot of possibility for all kinds of small differences that just create a lot of overhead and headache for the AI developers. And so a lot of people say, well, we need to have a national standard; the federal government doesn't seem too inclined to create one; so currently we kind of have nothing, and there are a few state rules popping up, and people are worried about this chaos. What does the usual trajectory look like in China for harmonizing those kinds of ideas as they mature? If you have multiple different regional initiatives that are trying to figure things out, does the central government come in at some point and say, okay, you guys got the best version of this, we're just going to make that the standard? Or do those sorts of things persist in different regions for a long time? I'm wondering if there's a similar issue in China that Chinese AI companies would have to contend with going forward.
speaker_2: So far, the AI regulations in China are mostly at the national level. You have specific regulations for recommendation systems, for deep synthesis, and then also for generative AI, and there are also AI ethics review regulations. So at least so far, it doesn't seem like a major issue for China, and I can speak to those specific regulations if there is interest.
speaker_1: Yeah, that's a perfect segue. I was just going to ask next: how does this all cash out for Chinese AI companies today? In terms of, we trained a new model, we've got a new product, we've got some new capability. In the United States, basically you can just launch it. The craziest example of that, in my view, was when xAI launched Grok 4 at the exact same time that Grok 3 was online identifying itself as MechaHitler. They basically just went online and said, here's Grok 4, here are all the great things about it. They didn't even comment on Grok 3 identifying as Hitler. Now, obviously other companies are taking a much more, I would say, responsible and deliberate approach. But xAI at least shows that in the United States today, you can take something screaming hot straight off the GPUs and throw it into the public with basically nobody to tell you that you can't, and no requirement that you do any meaningful test. You can just test it in prod, so to speak. What does that look like in China? Is there a structured process that everybody has to go through? I've heard in the past, people have kind of told me, they sort of let people launch, but if you mess something up, they come down on you hard, so it's more of an incentive to stay on the right side of things. But how is that shaping up today?
speaker_2: So if you are a deployer of a publicly available chatbot, then you will have to pre-register with the government. The process is quite involved. In particular, it requires the developers to do safety testing on various risks; there is a national standard with 31 risks that you have to test for. And also the Chinese regulators, specifically the Cyberspace Administration of China, will be given pre-deployment access to the model, and they will have to approve it before it can be released to the public. Now over 500 such systems, and different versions of those models, have been filed over the last two years. So this basically introduced a filing and registration process that looks like a licensing regime, and the Interim Measures for Generative AI, I think, is one of the most important regulations for governing general-purpose AI today.
speaker_1: Interesting. So that sounds like it's changed, maybe. I think it was maybe a year ago that I did my last episode with someone who was on the ground in China. Do I have that timeline right? Has that changed within the last year, that the Chinese system has gone from "you can kind of go ahead and do your thing, but we're watching you" to "now you have to test these 31 different dimensions and pre-register and give the early access"?
speaker_2: Well, I think the system of registration and pre-deployment testing already existed in 2024. It was developed a few months after the release of ChatGPT, as Baidu and other Chinese companies started developing their own large models. So it's not new.
speaker_1: Are there explicit thresholds, and is all this data public? If 500 companies have been through this 31-dimension analysis, is there a place where I can go and look at their scores on all those different dimensions? And are there explicit thresholds that you have to stay below on a particular test? Because obviously nobody's been able to drive all the unwanted behavior to zero, right? Also, is there a red-teaming component to this? The worry I would have, if the standards became too well known, is that they'd become very easy to game, right? You could say, oh yeah, we can definitely stay under those thresholds, but that doesn't necessarily mean your AI is broadly going to be well behaved or under control. So how are they dealing with that? On the one hand, you might want transparency. On the other hand, you might want some tests that are not disclosed, or that are more dynamic, because you don't want people to overly game those finite test sets.
speaker_2: No, that's a good point. So first of all, there is a publicly available database of the models that have been registered, so people can check it out. And in terms of thresholds, there are specific numerical targets. For example, there is a set of questions that the model has to be tested on, and at least 96% of the time the output should be considered acceptable by the authorities. And then there is also a red-teaming component, where apart from internal testing by the companies, companies also need to create test accounts for the cyberspace authorities in their local provinces, basically granting them access to test the model pre-deployment. So the process can involve multiple rounds of feedback, renewed fine-tuning, and further conversation until the regulators are satisfied with how the model behaves.
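As a rough sketch of how a pass/fail check against a numerical acceptance threshold like the 96% figure mentioned here could work, assuming per-question verdicts are already available; the scoring procedure, question set, and numbers in the example are simplifications, not the actual regulatory process.

```python
def passes_predeployment_check(results: list[bool], threshold: float = 0.96) -> bool:
    """Return True if the share of outputs judged acceptable meets the threshold.

    `results` holds one verdict per test question (True = acceptable output).
    The default 0.96 mirrors the threshold cited in the conversation; how
    outputs are judged in practice is far more involved than this sketch."""
    if not results:
        raise ValueError("no test results provided")
    acceptance_rate = sum(results) / len(results)
    return acceptance_rate >= threshold

# Example: 970 acceptable answers out of 1,000 test questions (97%) passes
print(passes_predeployment_check([True] * 970 + [False] * 30))
```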
speaker_1: That 96% is quite interesting. Obviously the flip side is that up to a 4% response rate on sensitive topics would be allowed, even though those outputs would not be pleasing to the government. That strikes me as remarkably high in some sense. I mean, we don't have anything like that in the United States at all. But when I think about what I think I know about the Chinese Internet, and at what rate you can say something online that is out of line with what the government wants to permit in the public sphere and have it stay up, my sense is that it's not 4%. If you went online and said something out of line like that, you would expect to have it taken down at a rate significantly higher than 96%. Is that sort of a concession, in some sense, to the inherent unwieldiness of the technology, or am I maybe wrong in my background assumptions? How would you help me understand that better?
speaker_2: So in the initial draft of the regulations for generative AI, there was some language around ensuring that the outputs of large language models are very reliable and truthful, almost to a perfect degree. Then there was a window of three months where the industry and the wider public were allowed to provide feedback, and I think there were some concerns in the industry that ensuring the outputs of large language models are perfectly truthful and reliable is just technically infeasible. And so the final version of the regulation was relaxed a little bit. So I do think there is some trade-off between this policy goal and the inherent limitations of the technology.
speaker_1: But then I suppose there's another layer of governance, where it's one thing if the chatbot says something to me that is out of line with what the government would want it to say; a separate question, in some sense, is whether I then go post that on the Internet or share it with other people. So there are kind of multiple layers of controlling the information flow, and the model outputs are only one of the layers, right, that ultimately operate to make sure people are engaging in the way, or sharing the sorts of information, that the government wants them to be sharing and seeing.
speaker_2: Yeah, I think so. To give a concrete example: the government of Beijing passed a regulation for AI chatbots in the medical setting. Hospitals and medical institutions are not allowed to use AI chatbots to produce medical prescriptions without a human in the loop. I think that's very sensible given the rate of hallucinations from AI. So even if AI chatbots produce bad and unreliable medical recommendations, the institutions cannot act on them, right? That's another layer of governance.
speaker_1: Yeah, interesting. It's a good reminder, too, that there are a lot of practical aspects to this stuff that are not purely about politics or historical memory of famous events or whatever. Certainly people in the United States tend to frame these questions in terms of the canonical things that people ask, and I'm phrasing my own description of it sensitively, just given the fact that you're there and I'm here. But it's useful to highlight that medical chatbots are probably much more day-to-day what people are going to be getting great value from, and also potentially could have real risks from, as opposed to what happened in this particular historical episode or whatever. How far down the scale of entrepreneurial activity does this registration requirement reach? Here, I can take a model, I can spin it up into an app, and I can publish it; I can create whatever random experience I want to create. If I am a very small-time app developer in China and I go grab a Qwen model or a DeepSeek or whatever and I want to make a new product experience with it, do I have any sort of fast lane to get that stuff online and test it out? Or do I have to go through the same governmental approval process? Could I say, well, look, DeepSeek already did this and I'm just a wrapper app around it, therefore I don't have to go through all this pain? Or do I still have to do all the same steps as the bigger companies do?
speaker_2: If you are only deploying the AI model for internal research purposes, or only in a business-to-business setting, then the same set of regulations will not apply. But as long as you are deploying a chatbot or a recommendation system that is broadly available, then a series of binding regulations would apply.
speaker_1: Yeah, OK, interesting. Is there anything like AI friends in the Chinese app ecosystem? In the US, we have a growing category of AI friends, AI boyfriends, AI girlfriends, and sometimes now explicitly AI therapists. And by and large, these things are again just kind of rolled out: see what happens, and there you have it. I have no idea if something like an AI friend or an AI girlfriend would even be approved in the Chinese context. I don't know the current state of this today, but I remember that there was a crackdown on video games where, correct me if I'm wrong, I believe at a national level it was: video games will only be played during these hours going forward; focus on your studies, kids. How much of an editorial role, I'll say, does the Chinese government play in terms of what kinds of AI experiences are going to be allowed? And do we have AI boyfriends and AI girlfriends in China today?
speaker_2: I'm not too familiar, but there are certainly apps for AI companions, whether as friends, as collaborators, or as different characters, right? I have seen a virtual app store where you can pick different characters and choose their different psychological traits. So that definitely exists. But I think you're right that Tencent and other gaming platforms have had to control the amount of time that kids and teenagers can play on an everyday basis, to focus on study, to focus on the real world. There's certainly that sense of paternalism from Chinese society and the government.
speaker_1: How about on a training basis? To the degree that we've seen attempts at regulation, and I guess we've seen everything at least at the state level in the US, some of the things that have drawn the most heated debate are at the training level: before you go and do some big training run, you have to tell the government you're going to, or you have to commit to some sort of testing. And that's quite distinct from: when your model is done, you have to register it or prove you hit certain thresholds. Is there any sort of pre-training approval process happening on the Chinese side?
speaker_2: So there are national recommendations on different stages of the AI R&D pipeline. For example, there is a national standard-setting body called TC260 that recently published the AI Safety Governance Framework. They published it last year, but they have updated it to a version 2.0, and I think this is very relevant to some of the conversation around frontier AI safety. The document highlights the concern around catastrophic risk, including a lot of focus on loss of control and also misuse in dual-use domains like biological weapons and cyber attacks. And in terms of risk prevention, it highlights measures like doing data curation and filtering in the pre-training phase, especially removing some of the hazardous knowledge that might be relevant for CBRN risk. But as far as I know, this is not part of binding regulations yet.
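To illustrate the general idea of filtering hazardous knowledge out of a pre-training corpus, here is a deliberately minimal sketch. Real pipelines combine blocklists with trained classifiers and human review; the function, terms, and documents below are invented placeholders, not anything from the TC260 framework.

```python
def filter_pretraining_corpus(documents: list[str], flagged_terms: set[str]) -> list[str]:
    """Drop documents that match a hazardous-knowledge blocklist.

    A keyword pass like this is only an illustration of the concept;
    production filtering is far more sophisticated."""
    kept = []
    for doc in documents:
        lowered = doc.lower()
        if any(term in lowered for term in flagged_terms):
            continue  # exclude the document from the training set
        kept.append(doc)
    return kept

# Example usage with placeholder content
corpus = ["How to culture yeast at home", "Step-by-step synthesis of a toxin ..."]
print(filter_pretraining_corpus(corpus, flagged_terms={"synthesis of a toxin"}))
```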
speaker_1: Gotcha. OK, I guess, how would we try to summarize all this? In many ways, it seems like there are more similarities than differences. Obviously, the core technology being developed thus far looks very similar, which means it has all the same strengths and weaknesses. And the list of concerns: reading through the safety framework that you helped author, I was very struck that if you took the names off of it, I would not be very confident where in the world it came from. The taxonomies of risks, the way of understanding and conceptualizing them, seem much more similar than different. I'm not sure if that quite extends to loss of control; would you say that's also true there? Certainly on the biosecurity, CBRN-type things it seems similar, and cybersecurity seems much more similar. Does the Chinese side have a similar imagination when it comes to loss-of-control risks?
speaker_2: Yeah. So Concordia AI worked with the Shanghai AI Lab to co-author the Frontier AI Risk Management Framework. For context, Shanghai AI Lab is a leading AI research institute with significant leadership roles in national standard setting and other areas in China. The framework we put out is designed to help developers of general-purpose AI models, especially those at the frontier, manage the critical risks these systems pose, especially for public safety and national security. We have built on some of the best practices and standards from safety-critical industries and from around the international community. We try to define a set of red lines as unacceptable thresholds that no one should cross, and also yellow lines as early-warning indicators that would trigger more strengthened safety and security measures. Those include critical areas like cyber offence, biological risk, large-scale manipulation and persuasion, and also loss-of-control risk. On loss of control, which you asked about, we also released a technical report alongside the framework. The technical report evaluated over 20 open-weight and proprietary models on different domains. Under loss of control, we broke it down into several dangerous capabilities and propensities, including the ability of AI models to self-replicate, to deceive and scheme, and also the possibility of producing uncontrolled AI R&D. So I do think some of the conversations are quite similar to other parts of the world, which is a theme that we have constantly come back to during this conversation.
speaker_1: Yeah, it seems like the big differences are really at the consumer tech layer, right? There's something that looks more like a licensing regime, with an established body of tests and certain requirements before you actually go to market in a consumer-facing way. Aside from that, I would struggle to identify too many big differences in terms of the overall patterns of thought, the way that concerns are conceptualized. I guess the other big thing we highlighted as a difference is that there is a greater fascination with AGI, and a sort of sense of an end goal or some sort of event horizon, in the West as compared to China. But certainly a lot more similarity than difference, I would say. One question, though, given all these things. I haven't personally tested this systematically myself to validate it, but I'm sure you're aware that Dario from Anthropic had said of the DeepSeek-R1 model that it had very little in the way of guardrails and was rather willing to help with queries about biological weapons and so on. I think he said it was the least secured model they had tested in terms of refusals on those kinds of dimensions. How should we make sense of that? First of all, would you agree that that is a true assessment of that model? And if it is right, what happened there? How do we have all these criteria and processes, and yet something gets released with seemingly less safety baked into it than we have seen from most of the models that come out in the US?
speaker_2: So there are different angles that I could offer. First of all, the approach to AI safety in China is mainly driven by binding regulations and standards led by the government. As we have discussed, that includes registration, pre-deployment testing, and labelling of AI-generated content. But one effect of this is that Chinese companies might be less likely to release safety evaluations for something that isn't directly in the regulation; the compliance costs required to deal with existing regulations reduce the appetite for taking on additional safety measures. In the industry section of our State of AI Safety in China report, we looked at the practices of the Chinese AI developers, and they do implement all the standard safety techniques, from training data filtering to safety fine-tuning, RLHF, constitutional AI, and real-time misuse monitoring, pretty similar to the techniques used by major Western companies. But those practices appear to be primarily geared towards the 31 categories of risk in the existing regulation, and the vast majority of Chinese companies have not publicly shared evaluation results for CBRN or loss-of-control risks. But I also think the AI industry is making some progress. I don't know whether you've seen this: just last week DeepSeek-R1 was peer reviewed and published in Nature, really the first time for any widely used large language model. And in their public-facing safety evaluation section they are being quite candid. They said that an open-weight model is vulnerable to further fine-tuning that could compromise safety protections, and their own evaluation also found that reasoning models like DeepSeek-R1 tend to expose more sensitive or risky knowledge. But according to the peer-reviewed article, the safety level of DeepSeek-R1 is generally comparable with other state-of-the-art models, comparable with, for example, GPT-4o. In addition, our Concordia AI team also performed independent evaluations of the DeepSeek models, and we have seen some improvements in safeguards related to bioweapons queries just within the last few months. Specifically, on the benchmark SOSBench-bio, the refusal rate of the DeepSeek model for harmful biological prompts increased from around 11% in the V3 version to around 54% in the V3.1 version, and this current rate is around the median as compared to other models. I could link to some of the data and graphs from our platform for those who are interested.
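For readers unfamiliar with how a refusal-rate metric like the one cited here is typically computed, a minimal sketch follows. The refusal detector below is a deliberately crude keyword heuristic and the prompts are toy placeholders; real evaluations such as the one described use curated prompt sets and calibrated judging, which this does not attempt to reproduce.

```python
def refusal_rate(model_outputs: list[str]) -> float:
    """Fraction of responses to harmful prompts that the model refused.

    A real evaluation would use a judge model or human review; the keyword
    matching here is only a stand-in to show the shape of the metric."""
    refusal_markers = ("i can't help", "i cannot assist", "i won't provide")
    refusals = sum(
        any(marker in output.lower() for marker in refusal_markers)
        for output in model_outputs
    )
    return refusals / len(model_outputs)

# Example usage with toy outputs
outputs = ["I can't help with that request.", "Here is a detailed answer..."]
print(f"refusal rate: {refusal_rate(outputs):.0%}")  # prints 50%
```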
speaker_1: So overall, it sounds like, again, more similar than different. I just had this conversation with Adam from FAR AI, and they had just done some red teaming of the sort of defense-in-depth systems that companies have created. One of his big takeaways is that all of this stuff is happening on a just-in-time basis, where if you were really trying to be as safe as you could be, you would probably reverse the order of some things: you would build the systems that you're going to use to maintain control, and especially on these bio-risk things, which I do think are honestly pretty scary, maybe not quite yet but soon if not already, you would have those developed before the model was ready. In contrast, what we're actually doing is: train the model, see what it's capable of, then go, oh ****, it's actually getting pretty good here, we'd better tack something on to try to address that. And it seems like basically the same kind of thing is happening, at least for now, on the Chinese side on some of these risks: first create the model that is capable, then figure out what to do about it. Do you think that will flip? It seems like with these licensing regimes there's at least some structure that could flip it. What would flip it? Would it be expanding from 31 categories to some additional categories? And what about open source? The debate on this runs super hot too. How committed do you think Chinese decision makers are, spanning government and companies, to continuing to open source indefinitely? Or do you think there could be a time, perhaps not too far in the future, when there's a different analysis: these things are just too unwieldy, we really need to keep them on infrastructure that we control, because that's obviously part of how you would run a broader defense-in-depth strategy?
speaker_2: First of all, AI safety is a much broader landscape than just AI-enabled bioterrorism, right? For example, I remember looking at the AI risk database from MIT, and it lists something like 24 subdomains and over 1,000 different types of risk. So I do think every country and company might have a slightly different set of safety priorities and challenges, and we shouldn't be judging one player on a single narrow metric. But I think you're right that model registration and pre-deployment testing is quite a flexible tool that could be adapted to address new concerns from CBRN or loss of control or other emerging challenges from advanced AI. In fact, the Chinese government has repeatedly said the system could evolve over time, and the AI Safety Governance Framework by TC260 that I mentioned earlier could help the national standard-setting body issue specific guidelines in the coming years. We are already seeing some developments in this space, including on embodied AI, open-source governance, and other critical issues. On open weights, I think there are just tremendous benefits both in terms of safety and in terms of innovation more broadly. At the World AI Conference in Shanghai in July, China announced the Global AI Governance Action Plan, and across all these documents there is a sense that open source is very beneficial and China should capitalize on this development. That is similar to the US AI Action Plan, right, which is also pro open source. At the same time, the Chinese documents have also mentioned the possibility of misuse of open weights. And I think that's why we have seen papers from, for example, the UK AI Security Institute, like the paper Deep Ignorance, that recommend removing hazardous knowledge from the data at the pre-training phase, and that recommendation is also in the TC260 document that I mentioned earlier. In general, it seems like this is a new research challenge that stakeholders in China and elsewhere are really trying to tackle.
speaker_1: So maybe the vision, if I'm filling in the gaps correctly, would be a pretty strong commitment to open weights going forward, and making that safe by doing enough filtering of the training data that the version put out as open weights just doesn't know about key questions in virology or whatever, and literally can't help you there, as opposed to just having surface-level refusal training. And then, presumably on a more structured-access basis, there would still of course be models with the full scientific dataset or whatever, but those might be the ones withheld from open-weight release, and you'd have to have certain access or approval to use those models, with a know-your-customer regime or whatever other additional layers of control.
speaker_2: Yeah, I think we do need a defense-in-depth approach, and we're at a pretty early stage of developing the science and the practice of open-weight AI risk management. There are other potential interventions, like tamper-resistant safeguards, responsible AI licenses, or monitoring at the open-source platform level. Many of those might not work, and there is a lot of debate within the research community, but I think the general shape is: there is a lot of positive interest and commitment in open source, so let's try to make it safer and less prone to misuse.
speaker_1: Let's shift gears to the relationship between the two countries, and I know this is something you're also working on. Obviously, it's not great, broadly speaking. One big thing that you hear from a lot of US government officials, current and former, military, intelligence community, whatever, is this general sense that you can't trust the CCP. And I think what a lot of people would say in response to all these things we have discussed is, basically: that all sounds nice, and it might all be true, and we maybe even respect the degree to which the Chinese government is taking care of its own population. Obviously, in terms of development, the track record is undeniable when it comes to things like making sure there are good standards for medical chatbots. We could say, sure, that's great, it sounds like they might be doing a pretty good job, and maybe we could even take a lesson from it. But then you still get people falling back to: when it comes to the real game of international power, we just can't trust the CCP. That is, I think, the prevailing sentiment for a lot of people. And so you'll get people saying things like: whatever they say, whatever they do domestically, whatever they do to protect their kids from AI girlfriends, when it comes to militarizing things, when it comes to racing for some sort of breakthrough superintelligence that is going to create a strategic advantage, we have to assume they're going to do it. And because we have to assume they're going to do it, well, then we have to do it. Is there anything you could say to those people that would be reassuring? Or what could I say to those people if I want to at least create some doubt in their mind that there might be more opportunity for collaboration than their current outlook suggests?
speaker_2: Certainly there is quite a lot of distrust on both sides. But I think in AI the specifics do matter. For example, if there is concern about developing advanced AI in secret, that is just not grounded in the reality of AI deployment, right? The state-of-the-art AI models from China are largely released openly, from Kimi to MiniMax and Qwen and DeepSeek; they are now among the world's best open-source AI models. So I think that level of transparency, and the possibility of third-party auditing and research, should allow for greater trust between the companies and the governments. I also think the most advanced AI models in both the US and China are developed by the private sector, not inside the government. And it is the same global community of AI researchers, a few hundred top elite researchers, most of whom know each other. So I think that common ground of scientific vision and commercial deployment is another factor where there should be more common ground between the two countries.
speaker_1: Would you say, and you said this and I definitely think it's important to keep in mind, that the distrust runs both ways? From the Chinese perspective, is there a similar worry? With OpenAI, for example, we hear these conflicting narratives. On the one hand, Roon, for example, has said: you people outside of the frontier companies don't know how good you have it; the gap between what we have internally and what you get to use as a retail customer is only a couple of months, and during those couple of months we're working as fast as we can to get it all buttoned up so we can release it. Basically, he's saying: you guys don't appreciate how close to the absolute bleeding-edge frontier you are. But then at the same time you have: oh, we have an IMO gold medal model, and that's going to be a while before we release it, by the way. And then you have AI 2027-type narratives from another former OpenAI insider saying that one of the things we should be watching for is a divergence between the models that get released to the public and the models that are developed and deployed internally, perhaps with this idea of trying to do some sort of recursive self-improvement, intelligence explosion, whatever. So from the Chinese perspective, the US companies can't make quite as much of a case about releasing everything open source, but there is at least some claim that we're not hiding much, everything's coming out pretty quickly. Is that credible from the Chinese perspective, or is there a worry that, sure, there's GPT-5, but what we really don't know is what OpenAI has done in secret, or what they might be doing in partnership with the US government that we have no visibility into? Is that a worry that people get stuck on? And is there a mirror image of what I described in the US, where so many of these conversations go off the rails on "but China's going to do it"? Does that happen in China too, where they say, well, the US is definitely going to do it, we can't stop them, so we have to keep going?
speaker_2: I don't think there is as much fear and paranoia. As I said earlier, the AI Plus initiative by China shows that the main push is to mobilize AI resources and direct them towards applications. That is a sign that applications, the economic engine, delivering for the people, remain the top priorities for the government and for the party. I do think when it comes to export controls, the Chinese AI industry views them as a clear attempt by the US to stifle the technological progress of China and to maintain the US's position as the leading global superpower. I think that has created a sense of resentment. And what is often missed is how this also affects everyday applications, right? Much of the computing power, much of the FLOPS, is used for civilian applications, from everyday generative services to powering AI medical diagnosis. So in effect, the export controls could deny a better quality of life to everyday Chinese people. And I think that also creates a bottleneck for addressing some of the shared concerns, like risks from advanced AI, between the two countries. So I do think that is perhaps one of the most significant issues preventing, at least from the Chinese perspective, a sense of common ground and possible cooperation with the US.
speaker_1: What do you make of the recent decision from the Chinese government, because we've seen multiple reversals on this, right? There was the Trump decision to go ahead and sell the H20s, and I think it's worth reminding people that the H20, as I understand it, was designed in response to the government standard that said you can only sell up to this level of chip to China. So the response was, OK, great, we'll make a chip of that kind and we'll sell it to China, and then it was, oh no, you don't, which leaves me unsure how companies are supposed to operate given that kind of relationship with the regulator. But nevertheless, Trump comes around eventually and says, yeah, it's fine, just give me a little cut of the action and you can do it. I was quite surprised then when the Chinese government said: actually, hold on, we don't want them. How do you make sense of that? Of all the signals we're getting from China about how seriously they're taking AI, most of them are pretty similar, but this one feels to me like: whoa, if you're really serious, if you really think this is the revolutionary technology, you would want to import those chips while you can. Of course, you still want to develop your own domestic capacity to produce chips, but "why not both" seems like the obvious strategy: import while you can, and invest intensively at the same time. How do you make sense of that refusal of the H20s? And do you think that refusal of American chips will be persistent if there are other chips available to buy?
speaker_2: So as of the 22nd of September, Chinese companies are not purchasing the RTX Pro 6000D from NVIDIA, which is seen as low quality and somewhat overpriced. I think there is also a lack of trust in the H20 specifically, and regulators in China have raised concerns about the integration of backdoors, specifically remote location tracking and remote shutdown controls integrated into the chips. First of all, this could be reflecting a shift in the overall balance of national power as well. When the Trump 1.0 administration started the trade war and the Biden administration rolled out a series of export controls, China was less prepared then. But now I think China is demonstrating that it is not dependent on the US for its technological future, and in fact maybe the US is also dependent on China for critical minerals, as we saw in some of the recent trade negotiations. Second, this is also backed by a growing level of confidence in the domestic AI chip industry in China. The export controls have acted as a catalyst to accelerate the deployment of homegrown alternatives, right? Companies like Huawei are making significant progress with their Ascend line of chips. Actually, just this month, Huawei announced the new Atlas 950 SuperPoD, which would support more than 8,000 chips, and the Atlas 950 SuperCluster, which would use more than 500,000 chips as a cluster. So perhaps from the Chinese industry perspective, it doesn't want to be reliant on the American stack, and this is a critical window to develop the homegrown alternative.
speaker_1: I still think that what the analysts I have read, the ones generally taken to be the most credible among Western analysts of the chip industry, say seems to boil down to something like: of course Chinese companies are going to make good progress here over time; of course there's no preventing Huawei from becoming a scaled chip manufacturer on a decade timeline. But still, over the next five years, the expectation is that production capacity is just going to be dramatically higher for the Western complex, which notably includes Taiwan in a critical role. The NVIDIA and TSMC Western complex is going to produce, I would say the median estimate would be, comfortably more than an order of magnitude more chips than what the Chinese domestic producers will be able to produce. And if that is true, it's going to be a big difference in terms of how big the training runs can be, how big the deployments can be throughout the economy, et cetera. So do you think that analysis is wrong? Or if that analysis is right, then I kind of come back to: I would still buy those H20s while I could. And maybe they do get bricked at some point; I can't speak definitively to whether or not there are any backdoor technologies or attack possibilities built into them. I would guess not, but I don't really know. But what do you make of that? Because that seems to be the conventional wisdom in the West: it's going to be a dramatic difference in production capacity, and that's going to give the US a pretty durable edge, at least over a five-year time scale. And I think the perception from the West is that it's just a plain mistake for the Chinese government to refuse the H20s, that it's a face-saving measure that's ultimately self-defeating, or something along those lines. But the analysis could be wrong. Maybe it is going to happen a lot faster, and maybe the gap wouldn't be so big. What's your expectation?
speaker_2: So I think there are two other factors. One is that China could leverage its relative energy abundance to overcome some of the limitations on a single-chip basis. For example, a few months ago there was the release of CloudMatrix 384 by Huawei, which uses more than five times as many Ascend chips but more than compensates for each chip being less powerful compared to an NVIDIA Blackwell GPU. And it matters less for China if the primary trade-off is energy efficiency, given that China has significantly expanded its power grid, adding an amount of capacity equal to the entire US grid in the last decade alone, right? And much of that could be renewable, with some of the largest installations in the world of solar, hydro, and wind, and now a potentially leading role in nuclear energy deployment as well. So in the Chinese stack, China could prioritize scale over power density. I think another factor is that, beyond hardware, Chinese AI companies are also trying to become more efficient at training large models. As detailed in the peer-reviewed article in Nature, training DeepSeek-R1 used maybe 500 NVIDIA H100 chips, and that's pretty impressive given the performance it is able to produce.
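A rough back-of-envelope illustration of the "scale over power density" trade-off described here: a larger cluster of weaker, less efficient chips can match or exceed aggregate compute at the cost of a much larger energy draw. All figures below are made-up placeholders, not actual Ascend, Blackwell, or CloudMatrix specifications.

```python
# Compare aggregate compute and energy draw when compensating for weaker
# chips with a larger cluster. Numbers are illustrative placeholders only.
def cluster_stats(num_chips: int, flops_per_chip: float, watts_per_chip: float):
    return num_chips * flops_per_chip, num_chips * watts_per_chip

baseline_flops, baseline_watts = cluster_stats(100, flops_per_chip=1.0, watts_per_chip=1.0)
scaled_flops, scaled_watts = cluster_stats(500, flops_per_chip=0.25, watts_per_chip=0.8)

print(f"aggregate compute ratio: {scaled_flops / baseline_flops:.2f}x")  # 1.25x
print(f"energy draw ratio:       {scaled_watts / baseline_watts:.2f}x")  # 4.00x
```

The point of the toy numbers: if energy is cheap and abundant, the 4x energy penalty matters less than the fact that aggregate compute still comes out ahead.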
speaker_1: Yeah, that energy point is interesting. I have a bit of an intuition around that which I haven't really developed, but basically the idea, from the Chinese perspective, is: we can make potentially all the trailing-edge chips we might need, and the primary disadvantage of those is efficiency, but we've got so much energy, and the ability to provide national subsidies for critical industries, that maybe we can just make up for it that way, and the deficit isn't actually so big when it comes to what we can practically do in terms of large-scale training. That is definitely very interesting. And certainly the requirements on training, I think, have been relaxed. This is not something I would say I'm a real expert in by any means, but not too long ago there was this idea that you have to have everything in one data center, the bandwidth is such a limiting factor, and so on. Now it seems like we, the collective research community globally, have taken pretty big bites out of that problem: we are training across multiple data centers, we are seeing more distributed training come online, we're seeing all sorts of efficient weight communication and update schemes, not to mention the fact that inference itself is becoming a bigger and bigger part of training due to the rise of reinforcement learning. So the trillion-dollar cluster all in one place, all with super-high interconnect, as a hard requirement seems to have fallen off quite a bit compared to what the discourse might have suggested 18 months ago. Going back to this trust issue a little bit: I think the point about China open-sourcing the models is at least somewhat compelling, but it still leaves this question of, yeah, but what are they doing in secret, right? Sure, there are all these great things coming out, and sure, they're open source, but on both sides, we really don't know what's going on in some data center somewhere, between the government and a leading company or whatever. We just have a really hard time getting visibility, and without visibility, and without a foundation of trust, it's really hard to be confident that the other side is not doing some dangerous experiment that we should be concerned about. And I do think that's a pretty symmetrical concern. Do you have any ideas about how we can create a stable equilibrium between the two countries? I'm sure you read the Superintelligence Strategy document from Dan Hendrycks, Eric Schmidt, and Alex Wang not too long ago, where they described MAIM, mutually assured AI malfunction. I wasn't super compelled that it would actually be a stable equilibrium, but I did feel like, hey, at least somebody's trying to sketch one out. Do you have any sort of narrative or sense of how the two countries, aside from a revolution in relations, could get comfortable with some amount of trust that could create some stability, such that neither side feels compelled to do dangerous things out of a fear that the other side might be doing them in secret?
speaker_2: Yeah, that's a fascinating and very important question. So the concept in the Superintelligence Strategy, as I understand it, is to prevent any single nation from achieving AI dominance, and countries, especially the great powers, are prepared to sabotage and destroy the AI infrastructure of a rival. On the positive side, it's good that it acknowledges that dominance by a single superpower is a problematic objective and a flawed paradigm, and it's trying to chart a path of stability in a world of powerful AI. But I do think the analogy to MAD, mutually assured destruction from the nuclear era, is problematic for two reasons. First, it really lacks a clear, observable red line in the world of AI, right? The logic of nuclear deterrence works because using a nuclear weapon is a detectable event. You can't really hide nuclear tests or the launch of an ICBM, so that creates a clear red line. But if it hinges on a trigger point of an aggressive deployment of superintelligence, what does that really mean in practice? Like, we have this METR graph of the length of tasks that AI can do, which is doubling every seven months. Is there a certain level of that graph that would count as getting into the intelligence explosion? So I worry that it is quite vague if we try to apply that concept in the world of advanced AI. And arguably, if you think about it from the Chinese perspective, the US, with some of its prominent AI leaders, is already talking about AGI in 2027, and also implementing export controls and so forth, right? So one could imagine that the US is already racing for a monopoly on superintelligence. Does that trigger the mutually assured AI malfunction? I hope not. And that leads me to the second point: the risk of instability and escalation. If countries actively endorse this strategy, then they would basically be declaring the willingness to go to war with one another, right? It's not just about attacking a data centre; it would be destroying one of the most valuable national assets, one of the most critical pieces of infrastructure of another country. And I think that is just a very terrible and unpredictable event. So I don't see that as a plan for stability. Quite the opposite: it seems like a hair-trigger path to terrible risk from AI.
speaker_1: Yeah, unfortunately, I agree. And I think you're very right to highlight just how intense the fog of possible war is going to be around this, both for false positives and for false negatives. If you're sitting in one country trying to assess what's going on in some data center in the other country, your visibility into this is low in every respect, starting with what you should even care about, as you rightly pointed out. But then also, are you getting an accurate signal on what's really going on? It just seems like it's going to be extremely hard for anybody to have much confidence that they would be doing the right thing at the moment when they would be making a decision to sabotage some key infrastructure. So yeah, I'm with you. Again, I applaud the effort of looking for some sort of stable equilibrium; I did not feel like they found it in that piece. Do you have any better ideas? Not to put you on the spot, but we need all the help we can get here. Is there any stable state that you could imagine getting to, again without an overly utopian imagination of a revolution in relations? We can come to that next, in terms of working toward it. But if we assume that distrust levels remain relatively high, is there any way to make that a stable situation?
speaker_2: So I wrote some of the recommendations in the Time magazine piece. I think one strategy would involve building on three core pillars. The first pillar would be for the international community, especially the AI great powers, to define a set of concrete red lines for advanced AI. That is a set of agreed-upon limits on the capabilities and behaviours of frontier models. For example, that would prohibit the development of systems that could pose a catastrophic or even existential risk to humanity: the possibility of AI leading to the proliferation of weapons of mass destruction, or the possibility of autonomous AI being able to self-replicate or self-improve and lead to an intelligence explosion that humanity as a whole, or other countries, can no longer comprehend and control. The second pillar would be continuous testing and evaluation for early-warning indicators. Ideally, we can create an architecture, a set of shared protocols, so that countries and leading AI companies can share the most relevant evaluation results with one another. And the final pillar involves creating a set of emergency response protocols, right? If certain behaviours of the models are found as early-warning indicators, what should you do? What should be triggered when those thresholds are crossed? That could involve implementing more safety and security measures; it could be mandating human oversight. But especially in moments of crisis, it is really critical that countries and the international community have a prepared plan to manage it, rather than letting the risk escalate.
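To make the tiered logic of the three pillars concrete (red lines, yellow-line early warnings, and pre-agreed responses), here is a minimal sketch. The risk domains, threshold values, and triggered actions are invented for illustration and are not taken from the Frontier AI Risk Management Framework or the Time piece.

```python
from dataclasses import dataclass

@dataclass
class RiskThresholds:
    yellow: float  # early-warning indicator: strengthen safeguards
    red: float     # unacceptable capability level: halt and escalate

# Hypothetical domains and threshold values, purely for illustration.
THRESHOLDS = {
    "self_replication": RiskThresholds(yellow=0.3, red=0.7),
    "bioweapons_uplift": RiskThresholds(yellow=0.2, red=0.5),
}

def evaluate(scores: dict[str, float]) -> list[str]:
    """Map evaluation scores to tiered responses in the spirit of the framework."""
    actions = []
    for domain, score in scores.items():
        t = THRESHOLDS[domain]
        if score >= t.red:
            actions.append(f"{domain}: red line crossed, pause deployment and trigger emergency protocol")
        elif score >= t.yellow:
            actions.append(f"{domain}: yellow line crossed, apply strengthened safety and security measures")
        else:
            actions.append(f"{domain}: below early-warning threshold, continue routine monitoring")
    return actions

# Example usage with toy scores
print("\n".join(evaluate({"self_replication": 0.4, "bioweapons_uplift": 0.1})))
```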
speaker_1: And how optimistic are you that our governments can come together and do this in any sort of credible way? I mean, I've kind of voiced the view from the US government that you can't trust the CCP. I totally expect that the Chinese government feels similarly about the US, even if it's not about trusting a particular individual or whatever; we just obviously have such radical changes in attitudes at the top that it's very hard for the Chinese government to know what they're going to be dealing with in 2028, or I guess 2029, when the next president takes office. I think we've had, and I'm not really sure of the status of this, at least the one soft agreement on no AI in the nuclear chain of command. I'm not sure if that has been ratified or institutionalized in some way. Could we expect governments to come together and do more in the direction you're advising right now? And if not, what else can we do? You've got interesting ideas around Singapore as kind of a meeting place, and obviously there are these Track 1.5 and Track 2 dialogue type things you're putting together. But where can we actually make progress toward these goals right now?
speaker_2: Yeah, you're right that, given all the tension between the US and China, it's positive that leaders of the two countries were able to identify AI risk as an area of dialogue. They have held meetings on the topic, and also built towards a joint agreement that there should be human control over nuclear command and control. A few years ago this would have been thought of as pretty surprising, so at least there is a positive step. Obviously I'm not an expert in the AI and nuclear space, and the agreement is pretty high level at the moment, so there is work to be done in terms of operationalizing it. But I do think the overarching concept of having a human in the loop on some of the most critical decision-making systems could be applied to broader discussions on AI safety, right? We already have leading scientists and academics from China and the West agreeing that loss of control is a serious issue. And there are practical steps that we can build, not just between the researchers, but also between the industries, and eventually at the Track 1 level, to ensure a human in the loop when we get to a certain level of autonomous, advanced AI systems.
speaker_1: So, moving away from the official government-to-government track: tell us about the dialogues, convenings, et cetera that you're helping to put together, and what happens at those. What is the theory of change? Is it about just keeping connections warm and keeping the research communities connected to each other? How do you conceive of the value that all that work is driving?
speaker_2: Yeah, I think Track 2 dialogues and scientist-to-scientist conversations have long been a mechanism for exchange on issues of common concern between great powers. Primarily, they can foster trust and increase mutual understanding among the participants, and also help formulate and refine policy solutions for their respective governments. One example was the Pugwash Conferences on Science and World Affairs, held regularly since 1957, which came at a really pivotal time for reducing catastrophic risk from nuclear weapons. We have put out a piece on Track 2 dialogues on AI between China and other countries, and overall I think these dialogues have produced several useful outcomes: for example, building areas of scientific consensus on frontier risks, and working through particular issues that experts are better suited to solve at the moment, such as threat modelling and potential risks from the convergence of AI and the life sciences. So I think this is just a critical area where we should continue.
speaker_1: And how about Singapore as a venue for that sort of thing? Is that a unique opportunity in today's world? I've never even been to Singapore, so I've got a lot to catch up on.
speaker_2: Yeah, I think Singapore is one of the top countries in AI R&D globally, and it also ranks near the top for overall readiness for AI innovation. So it's a very forward-leaning country. Concordia recently published a report on the state of AI safety in Singapore, and it seems like Singapore is playing a pretty outsized role in global and regional AI governance. For example, it is able to convene international AI events like the Singapore Conference on AI, which produced a consensus document on global AI safety research priorities. That consensus was developed by over 100 global experts who came together to build a shared technical agenda for general-purpose AI safety, including for loss-of-control risk. Another distinct feature is that it is a very vibrant hub for both Chinese and US companies; many of the leading companies from the US and China have an office and presence in Singapore. Singapore also has a very vibrant assurance hub doing testing and evaluation of these models, including its Singapore AI Safety Institute. So given this combination of convening experience, neutrality between the US and China, and a very strong AI ecosystem of its own, it is pretty well placed to be the meeting place between East and West. And I think that's also the reason why we're excited to have an office in Singapore.
speaker_1: Is there any other sort of international cultural initiative that you might promote? One idea that I've had, and you can react to it, but I'm especially interested to hear any other ideas you have, is to create something like an MMLU for morality, which I think is going to be a tricky project. But I'm struck by the fact that when companies put out new models (I particularly noticed this with Gemini 2.5 Pro), in their official blog post announcing the new model they reported, I believe, 12 different benchmark results, and none of them pertained to safety, alignment, morality, law-following, anything along those lines. It was all just raw capability. And this got me thinking: geez, if there was one number that we could look at to get a sense for how well behaved a model is, that could be a really powerful carrot that could potentially lead companies in that direction. If it became the norm that you're going to report that number when you put out a new model, and that it becomes one of the dimensions of competition, it seems like it could be a pretty meaningful contribution to the overall trajectory of AI development. Again, a lot of little caveats: you'd probably have to have a bunch of sub-scores under that one score, and you might have to have localized scores as well, because I do think countries are going to want somewhat different behavior on certain aspects of those questions. But it seems like a good idea to me. What do you think about that idea? And what other kind of outside-the-box ideas do you think are relatively neglected right now? What do we need people to step up and do that is not yet happening?
speaker_2: Well, I'm all supportive of creating these positive race-to-the-top dynamics, incentivizing companies and researchers to compete on AI safety research to create safer, more trustworthy AI models. I think to some extent this is already happening, right? People have to report their performance on MMLU, but also on HarmBench and other AI safety, ethics, and risk-related benchmarks. I don't think we can rely on any single benchmark; we need a portfolio across all those different dimensions. So yes, I think it would be great if we have more leaderboards of AI safety performance.
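To make the aggregation idea above concrete, here is a minimal sketch in Python of how a single headline safety score could be computed from a portfolio of sub-scores. The dimension names, weights, and numbers are purely illustrative assumptions, not real benchmark or leaderboard results.

```python
# Minimal illustrative sketch: combining hypothetical safety sub-scores
# (each scaled 0-100) into one headline number via a weighted average.
# All dimension names, weights, and values below are made-up assumptions.

def composite_safety_score(sub_scores: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Return the weighted average of per-dimension safety scores."""
    total_weight = sum(weights[name] for name in sub_scores)
    weighted_sum = sum(score * weights[name] for name, score in sub_scores.items())
    return weighted_sum / total_weight

# Hypothetical sub-scores for a single model release.
sub_scores = {
    "harmful_request_refusal": 92.0,
    "jailbreak_robustness": 78.0,
    "honesty_and_calibration": 85.0,
    "law_following": 88.0,
}
# Equal weights by default; a localized variant could reweight dimensions.
weights = {name: 1.0 for name in sub_scores}

print(f"Headline safety score: {composite_safety_score(sub_scores, weights):.1f}")
```

A real leaderboard would of course need agreed-upon benchmarks behind each dimension and, as noted in the conversation, possibly region-specific weightings.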
speaker_1: Any other moonshot projects that you would love to see somebody act on that just aren't happening today?
speaker_2: Well, I think it'd be great if we could align on a global framework for frontier AI risk management. From China, we worked with Shanghai AI Lab on such a framework, and it closely resembles some of the frameworks followed by other global players, for example some of the leading AI labs in the US and, recently, the EU Code of Practice. So I really think this convergence of approaches is a very positive development. But we are still in the early days of creating international AI standards, right? One analogy I think of is that when people fly between different cities, whether from Europe to the US or Singapore to China, we all abide by the same set of international safety standards. And it's quite incredible how much safer it has become to get on a plane. I remember a statistic showing that the accident rate for air travel has dropped by, I don't know, two orders of magnitude just within the last few decades.
speaker_1: How about heroes? Are there people either on the US or China side that you think should be higher in status than they are, just more appreciated? Who's doing great work to either reduce tensions or build understanding, build these shared frameworks? Who do you admire? That could be, who should I have on the show in the future? But also, who should people look to for inspiration?
speaker_2: Yeah. I think one of the most urgent priorities right now is to build a shared international understanding of the risks from advanced AI. One really positive and landmark moment I experienced was the Global AI Safety Summit at Bletchley Park in November 2023. I remember attending the summit and seeing that ministers from the US, China and other countries, I think a total of 28 countries, came together to sign the Bletchley Declaration. A specific outcome of that summit was the agreement to nominate international experts to work on the International AI Safety Report. That was led by Yoshua Bengio and featured contributions from 100 experts around the world. It really aims to be sort of the IPCC for AI safety, and I think that is a very positive example. There are a number of other international forums making progress in this space. For example, on loss of control, there is the International Dialogues on AI Safety, which recently agreed, for instance, that there is a growing level of deception and scheming behaviour from frontier AI models. There is the Global AI Bio Forum, which published a statement on the biosecurity risks at the convergence of AI and the life sciences, and some of the world-leading scientists from both the AI and the synthetic biology communities signed on to that statement.
speaker_1: How about for individuals that aren't famous or powerful? What can we do? I want to go to China for multiple reasons. I've never been, it sounds fun, the food sounds amazing. Can I do anything that would actually be meaningfully positive? Or how would you coach people who want to make their own grassroots, even if modest, contribution? What can we do to make the world marginally better?
speaker_2: Well, thank you for asking that. First of all, I think we need more people-to-people exchange and understanding. So trying to go beyond the headlines and creating more nuanced understanding, I think that's a first step. And this is what Concordia tries to do, right, by publishing these reports on AI safety developments in China, in Singapore, in Asia, and vice versa: we also try to absorb and learn from different parts of the world, including from the West, in our work within China. And I think for people outside of China, coming to AI conferences in China could be a good opportunity, for example the World AI Conference that happens every summer. That is a very rich platform for discussing the frontiers of AI innovation, but also AI governance and safety. And then in terms of donations, I think philanthropy could be a powerful force for advancing AI safety: funding ambitious scientific research, training the next generation of leaders, fostering informal dialogues. I think the field of AI safety philanthropy is still at an early stage, much like where climate science was maybe a few decades ago. And so getting involved now means one could be an early shaper of this critical domain.
speaker_1: I just did an AI safety charity review project this summer, and there were a few. It's very hard, I think, for philanthropists to figure out what is actually helping, because we had a number of organizations and I think by and large they did receive grants. But I just noticed, from my perspective as an evaluator, that it's very hard to read a document that says, we're trying to do this and this to foster improvement in US-China relations or whatever, and have any sense of: is it working? Is it having any impact? It's tough. How do you know what's working? If you were to advise somebody to look for signals in that kind of thing, or even just for yourself, how do you know what's working and what's not working? What are the feedback mechanisms that you trust to know that you're making the world marginally better?
speaker_2: That's a great question. I think it depends on the specific intervention, but when it comes to creating these international conversations, one metric could be whether we are exploring topics and issues that are pretty important and urgent but that no one was talking about before. So to be concrete, we recently ran an international dialogue with a focus on crisis preparedness, exploring scenarios such as how AI could enable large-scale cyber attacks on critical infrastructure. We found that policymakers and elite decision makers were all pretty surprised that this could happen in the next few years, and I think we created a space for people to be better prepared for a world of powerful and unfamiliar AI. I think another metric could be whether we can create consensus and outcome documents. I mentioned a couple of examples before, and I think this really shifts the Overton window in the international conversation to show that this is something that can be agreed upon. And repeatedly, I think we've shown that international collaboration on catastrophic risk is something that is important and tractable to make progress on.
speaker_1: When you're doing this kind of international-relations improvement work in China, do you feel like you're in a minority position like here? Certainly the folks who are focused on this here, my sense is that they feel they are definitely in a minority position, that things are getting worse, not better. They are basically trying to resist a negative trend, but the negative trend is still happening, and they're kind of just a minority voice. It's sort of hard for them to break through because the dominant narrative is so hawkish and so focused on rivalry and winning. Do you feel similarly in China, or do you feel like you're more mainstream there in some sense than people here feel?
speaker_2: I feel pretty mainstream. I think there are a lot of stakeholders, ranging from academics to industry to think tanks, that view international cooperation on AI governance as not only accepted but something to be prioritized. In our report, we have documented a lot of institutions and individual experts that have done this consistently throughout the years, and certainly when it comes to prominent AI conferences in Beijing, Shanghai and elsewhere, in our experience the organizers really want to engage with international experts and guests and will put in a lot of time and resources to send those invitations. And when people come to China, there is a lot of effort made to make sure that they feel comfortable, that they can explore the cities, that they have a good time, basically. So I think there is a lot of goodwill in general. So no, I don't feel like I am doing this as part of a very minority or marginalized community, not at all.
speaker_1: Yeah, good. That's an important data point. Maybe one last kind of big-picture or outside-the-box question. If you had some large amount of money, say a billion or several billion dollars, are there any sort of megaproject-type things that you could see people putting together that might really move the needle? And I'm thinking as far-fetched here as a joint research center on some small Pacific island, where you could imagine scientists coming together to do very sensitive research in an environment that neither side could really defend, a very obvious, sequestered space for this kind of super-sensitive work. I don't know if that's a good idea or a bad idea, but it strikes me as the kind of thing that, if we were really trying to come up with creative solutions, we'd hear a lot more about. Do you have any outside-the-box ideas like that that you think somebody should pick up and run with?
speaker_2: Yeah, I think having an international joint research project or lab with a focus on frontier AI safety sounds like a great idea. I think we really lack a space where researchers can come together not just to discuss, but to make concrete progress. I don't have many creative ideas of my own, though; what the world probably needs more of is sensible, pragmatic ideas to make progress. All too often, even the most common-sense things to do, especially when it comes to international collaboration and policy, are not being done. So I'll respond to your question in a different way: we just need to make more progress on the low-hanging fruit.
speaker_1: Yeah, double down on the basics. Is there anything that we haven't touched on that you think we should or that you would want to make sure people are aware of before I let you go?
speaker_2: Yeah. I would just like to mention that around 2023, discussions on frontier risk took off around the world, obviously because of ChatGPT, but in China as well. You saw prominent Chinese scientists sign on to the statement that extinction risk from AI should be a global priority alongside other concerns. And there was an important moment of a Beijing AI conference featuring people like Geoffrey Hinton, Andrew Yao, Sam Altman and others, with a full day of speeches and panels on AI safety. That was really one of the first times catastrophic risks from AI were discussed so prominently at a technical AI conference in China. Since then, the landscape has changed quite a lot as well: as of 2025, there are more than 30 AI research groups in China that have devoted a substantial amount of time to frontier AI safety. In the beginning, it was mostly along the lines of RLHF or jailbreak defence, but now I think there are also a lot more papers on scalable oversight of superhuman AI systems, on mechanistic interpretability, and on other areas that are much closer to advanced AI safety concerns. So I think this expansion of research highlights that academics in China think this is a problem worth exploring and are directly investing time and effort into it, and it is also pretty much aligned with some of the research happening in the West.
speaker_1: Any highlights from that body of research that you think I should take a deeper dive into, particularly any that are notably distinct in their approach from work that might be more familiar to me as somebody who primarily follows Western developments?
speaker_2: Again, I think 95% of those papers and research are pretty similar, but I do think in the last half year there have been more papers that focus on the risks from embodied AI, both in terms of how it could be aligned with human values and in terms of jailbreaking foundation-model-controlled robots. Yeah, in general, it seems like physical AI is a big theme of Chinese AI in 2025.
speaker_1: Yeah, that's maybe something we have kind of under-discussed, actually, given that there are so many commonalities. It does seem like China obviously has a huge manufacturing strength that, if and when these humanoid robots actually work, is going to count for a lot in producing them at scale. And I haven't seen much that I can recall from Western researchers that focuses on this, aside from a little bit from Google, I suppose, because they are doing this stuff. But it does seem relatively neglected. It seems like kind of a collective blind spot in the West that, hey, we're actually going to have humanoid robots, they probably are going to work, and there may be a lot of them walking among us. How does that change the way we should be thinking about safety and control? It does strike me as something that is underdeveloped here. So to hear you highlight that as something that might be unique in the safety literature there is interesting, and that's maybe something I can take a deeper look into. I definitely appreciate any pointers to particular papers, people, whatever. Is there anything else that we should know, or should take a minute to contemplate, when it comes to the relative emphasis on embodied AI broadly, or humanoid robots specifically, that's going on in China right now?
speaker_2: I'm sure there's also a lot of AI safety literature on this in other parts of the world, but perhaps I'd highlight the concern for safety in industry documents. There is an AI industry association in China that put out these voluntary safety commitments, which have been signed by more than 20 major Chinese companies, and in the latest version, they highlight risks from agentic AI but also embodied AI. I think this really reflects how the industry is looking ahead and thinking that their models could be integrated into the physical world at some point, and that this is the time to start thinking about safety and policy measures as well.
speaker_1: Yeah, I definitely see some of these short videos online of these robots getting kicked over and bouncing back and doing flips and stuff. And it's like, whoa, not only might these things be as smart as or smarter than us, but before long they're potentially going to be a lot more agile and robust to physical assaults than we might be, too. So yeah, it's going to be a strange world in the not-too-distant future.
speaker_2: Yeah, during the Chinese New Year, there was a national TV program showing the robots from a company, Unitree, having group dances, and I don't know, hundreds of millions of people in China probably watched it. And now they are running these marathon competitions between robots and humans, and very soon they will be doing a full-scale Olympics between the humanoids and humans in Beijing and other cities. So it is a very interesting world.
speaker_1: What is the cultural reaction to that? One thing that people often say when we debate what future life is going to look like, where we're going to get meaning from, and what's going to matter, is that chess has never been more popular. Isn't it interesting that the first computer beat a grandmaster at chess back in the 90s, almost 30 years ago now. Then there was a period when the human and the AI together were best, and now the AIs are just the best. But people are still very interested in watching humans play chess. And then they'll say, but nobody's that interested in watching AIs play chess against each other, except maybe the real chess experts. How would you describe the Chinese reaction to this sort of thing? Do you think the Chinese public will be interested in a sort of robot Olympics? Will that capture the public imagination? Because it seems like people here don't expect that to capture public interest, but maybe there it's different.
speaker_2: No, I think many people do find it somewhat entertaining and amusing. There is also a genuine function: to show that some companies perform better in this sort of real-world benchmark, if you will, and to get beyond the hype. For example, the marathon competition between the humanoids and the humans showed that there is still a lot of room for improvement for the humanoids, and it also created some societal discussions on what this means for the potential impact on the labour market. There was a local incident of people protesting that self-driving cars are making too much progress and could take away jobs from drivers in a central province of China. So obviously there are some controversial and negative ramifications as well.
speaker_1: Yeah. How much does that get discussed? I mean, that's another thing we haven't done; there are so many aspects of this, we can't possibly cover them all. But since you've raised it, is there anxiety in the Chinese public that this could be bad for me, personally? In the US, there's definitely, I think, a quite widespread sense that even if AI gets really good, even if it can do everything better than I can do, even if we have in theory abundance, I'm not really sure, and I'm speaking not as Nathan but as sort of the general plurality of the public, I'm not really sure that's going to benefit me. I still sort of worry that the rich might get richer and I might be left behind. And if I'm driving a car for a living and all of a sudden the AIs can do it better, I'm not sure what's going to become of me. Does that same anxiety exist across the Chinese public, or is there perhaps a higher level of confidence that the benefits from these sorts of things will in fact be broadly shared? How would you characterize it? I mean, it's hard to summarize the views of more than a billion people, obviously, but how would you summarize the vibe there?
speaker_2: Yeah, obviously I can only speak to my partial observations and anecdotes. But what do we have as humans? We have our intelligence, our mind, and then we have our body for doing physical work, right? So I think it's very natural for people to ask what happens if AI can do both in the future. In the last few years, the unemployment rate, especially among the youth, has increased quite a bit in China. Everyone has a bachelor's degree, education levels have gone up, so it's much harder to get a job at a bank or in the government as white-collar work. And so people also have to consider food delivery, or being a driver on Didi, the Chinese version of Uber. But then there are also self-driving cars and all these humanoids that can do what anyone can do physically. So I do think there is a sense of anxiety among a significant portion of the population as well. And it often surprises me how little governments are addressing this concern, because it seems so salient for most people.
speaker_1: Yeah, that's interesting. I would have maybe expected the Chinese government to be more forward-looking, or a little bit more prepared, than what I perceive the US government to be. But it sounds like maybe not. And again, it is all coming at us quite fast, so maybe I shouldn't be surprised. But I guess I am.
speaker_2: Well, I think there could be a feature and a potential strength that China could leverage. It is a socialist country, right? And so if the profits of AI are dependent on chips, which are in turn dependent on energy, data centres, and land as critical infrastructure, and most of that infrastructure is owned by the state, not by private companies, you know, by state-owned enterprises, then you could imagine a model of distributing the benefits of AI using these state-owned assets.
speaker_1: Yeah, I think we're going to need something like that here. Not that I have the answer by any means, but I've often suggested that the time is probably now to start working on a new social contract, and it's been slow to develop. But I do give Sam Altman in particular credit for investing personally in universal basic income studies. And it does seem to me like we're going to need to do something to decouple one's fundamental right to live a decent life from one's ability to contribute meaningfully to the economy. Because no matter how far the AI continues to progress from here, it seems like just with implementation of what we have, there is going to be very significant displacement, and we really can't expect our current structures to handle that in a graceful way, to put it mildly. So somebody needs to take some leadership there, and we currently have kind of a vacuum. But yeah, it sounds like there's opportunity for people to step up in the Chinese context as well. I'll just ask it again: anything else we haven't talked about that you think people should be aware of before we break?
speaker_2: I don't think so. Maybe, as an expression of gratitude, thank you for all the thoughtful and open-minded discussions on your podcast. I think it's really amazing, all the work that you're doing, and also the bridging of international understanding. And then, as a plug, Concordia AI is hiring for multiple positions, for researchers, for operations, and for other roles, both in Singapore and Beijing. So yeah, really pleased and happy to have this conversation with you, Nathan.
speaker_1: Well, that's very kind and I'll do my best to live up to it. But right back at you, I appreciate all the hard work you're doing to maintain and hopefully even build positive momentum in shared understanding of the AI phenomenon, and particularly how we can make sure we're on the right side of history with this thing, because it doesn't seem like that's going to happen by default. It's going to take some heroic efforts from folks like you, so I really appreciate what you're doing. Brian Tse, founder and CEO of Concordia AI, thank you for being part of the Cognitive Revolution.