What is Catholic AI? Technology Meets Theology, with Matthew Harvey Sanders, CEO of Longbeard

Matthew Harvey Sanders discusses "Catholic AI," exploring the philosophical and theological dimensions of AI, the Church's historical view on technology, and its stance on AI consciousness and transhumanism.

What is Catholic AI? Technology Meets Theology, with Matthew Harvey Sanders, CEO of Longbeard

Watch Episode Here


Listen to Episode Here


Show Notes

Matthew Harvey Sanders, founder and CEO of Longbeard, introduces "Catholic AI," a rapidly growing field serving users in 165 countries. He delves into the philosophical and theological dimensions, discussing the Church's historical perspective on technology, human flourishing, and its stance on AI consciousness and transhumanism, including Pope Francis's insights. The conversation then explores Longbeard's technical innovations, such as training models from scratch for theological alignment, digitizing Vatican archives, and optimizing multilingual models. This episode offers a compelling look at how AI is being tailored to specific value systems, illustrating the orthogonality thesis and instrumental convergence in action.

Sponsors:

Shopify:

Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Linear:

Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) About the Episode

(03:16) What is Catholic AI?

(06:43) Popes on AI Revolution

(12:42) Vision for Human Flourishing (Part 1)

(17:27) Sponsors: Shopify | Tasklet

(20:35) Vision for Human Flourishing (Part 2)

(24:25) AI Consciousness and Sentience

(34:01) AI Rights and Souls (Part 1)

(38:10) Sponsor: Linear

(39:39) AI Rights and Souls (Part 2)

(51:32) Existential Risk and Extinction

(58:35) The Longbeard Origin Story

(01:03:04) Engineering Magisterium AI

(01:12:58) Training Custom AI Models

(01:23:37) Longbeard's Business Model

(01:35:28) Enterprise vs. Niche Models

(01:46:23) Transhumanism vs. Curing Disease

(01:53:46) Outro

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

Introduction

Hello, and welcome back to the Cognitive Revolution!

Today my guest is Matthew Harvey Sanders, founder and CEO of Longbeard, a company that proudly proclaims “We’re building Catholic AI."

As a full-time AI Scout, my stated goal is to have no major blind spots in the overall AI landscape.

But I have to confess that the intersection of AI and traditional religion was, before I started preparing for this episode, a significant blind spot.  I had heard that Pope Leo had chosen his name in part for AI-related reasons, but I hadn’t realized how engaged the Church has been with recent AI developments and their implications for society.   

Matthew, thankfully, was the perfect person to help orient me.  He and the Longbeard team combine sincere reverence for Catholic teachings and tradition with high-end technical skill, and they’ve built the number one Catholic AI product in the world, serving users across 165 countries.

The first half of our conversation unpacks the concept of Catholic AI from a philosophical and theological perspective, with discussion of:

  • How the church has historically understood technological progress as part of God’s plan for humanity 

  • How Pope Francis was sufficiently well informed as to call AI a "true cognitive industrial revolution" at the G7 summit in June 2024

  • What human flourishing means from a Catholic perspective, and whether the church might be open to a post-work future

  • Whether there is space in the Catholic worldview for the possibility of AI consciousness or moral standing

  • Where the church draws the line between acceptable human enhancement and what it would see as problematic transhumanism

  • And how the Catholic principle of "subsidiarity" supports open source development as a way to resist the concentration of power

In the second half, we turn to the technology Matthew and team are building and how they're building it - and I am confident that this portion will be valuable to AI builders regardless of their spiritual outlook.

Among many other details, we cover:  

  • Why fine-tuning hasn't worked for their use case, and how they are ensuring theological alignment by training models from scratch

  • The retrieval and context engineering strategies they use to help users find the best answers across 28,000 church documents today (a generic sketch of this pattern appears after this list)

  • The robotics work they are doing to help digitize Vatican archives

  • Their strategy for optimizing model size, multilingual support, and inference costs

  • And the economics of running a mission-driven AI company with a generous free tier
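
For readers curious what the retrieval bullet above can look like in practice, here is a minimal, illustrative sketch of the generic "retrieve, then generate with citations" pattern used by grounded question-answering systems. To be clear, this is not Longbeard's actual pipeline: the corpus snippets, document ids, and scoring function below are invented for illustration, and a production system would use chunked documents, embeddings, and reranking rather than keyword overlap.

```python
# Illustrative sketch only -- NOT Longbeard's pipeline.
# Shows the generic "retrieve, then generate with citations" pattern:
# rank documents against the query, then assemble a prompt that forces
# the model to answer only from named sources.

from collections import Counter
import math

# Hypothetical corpus: in practice this would be thousands of church
# documents, chunked and indexed with embeddings, not raw strings.
CORPUS = {
    "rerum-novarum": "On the condition of labor, capital, and the rights of workers ...",
    "antiqua-et-nova": "On the relationship between artificial and human intelligence ...",
    "gaudium-et-spes": "On the Church in the modern world, human dignity, and culture ...",
}

def tokenize(text: str) -> list[str]:
    return [w.strip(".,").lower() for w in text.split()]

def score(query: str, doc: str) -> float:
    # Toy relevance score: raw term overlap, lightly length-normalized.
    # A real system would use embedding similarity or BM25 instead.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum((q & d).values())
    return overlap / math.sqrt(len(d) + 1)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Return the k highest-scoring (doc_id, text) pairs.
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The key transparency move: the model is instructed to answer ONLY
    # from the retrieved passages and to cite them, so every claim is
    # traceable to a named source document.
    passages = retrieve(query)
    context = "\n\n".join(f"[{doc_id}]\n{text}" for doc_id, text in passages)
    return (
        "Answer using only the passages below. Cite the bracketed document id "
        "after each claim. If the passages do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whatever model serves the app.
    print(build_prompt("What does the Church teach about artificial intelligence?"))
```

In a real deployment, the prompt-assembly step is where the "context engineering" happens: deciding which passages, how many, and in what form they reach the model, so that generated answers stay grounded in citable sources.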

I found both halves of this conversation valuable, for different reasons, but as a whole, I think it’s a striking indicator of how far AI adoption has already come, and also a bit of evidence for the orthogonality thesis and instrumental convergence.  People from all over the world, with different conceptions of the good, are using essentially the same techniques to create AIs aligned to their own particular value systems.  And in a world where Catholics still outnumber ChatGPT users nearly 2-to-1, those of us in the Bay Area AI bubble should not turn a blind eye to Catholic AI.

With that, I hope you enjoy this introduction to Catholic AI, with Matthew Harvey Sanders, founder and CEO of Longbeard.


Main Episode

Nathan Labenz: Matthew Harvey Sanders, founder and CEO at Longbeard. Welcome to the Cognitive Revolution.

Matthew Harvey Sanders: Thanks for having me.

Nathan Labenz: I'm excited for this conversation. I think it's going to be really interesting. The headline on your website says it all: "We're building Catholic AI." The intersection of AI and literally everything across society has been one of the, you know, big trends that I've been trying to track over the last couple of years. And of course, that's intersecting with science, and we're getting all these studies of the labor market. And one thing that I think most people who are deeply immersed in the AI space are still sleeping on a bit, honestly, is the intersection of AI and religion. So I'm really interested to get into that with you today, specifically from the Catholic perspective. I want to start off by just having you tell us: what is Catholic AI?

Matthew Harvey Sanders: Yeah, it's a good question, kind of a fundamental question, right? I think the simplest way to explain it would be via contrast. So if you think of secular AI models, they're kind of trained to serve a very, very large audience with diverse value sets and things like that. Catholic AI, the objective function of Catholic AI, to put it into more technical terms, would be fidelity, so fidelity to the magisterium of the church. So at every point in which we're building and designing AI systems, it's in order to ensure that it properly represents what the church believes.

Nathan Labenz: Interesting. Okay. I'll definitely dig in on that a little bit as we get into the specifics. And I do want to cover this from kind of a bunch of different angles, including, of course, the philosophical, but also some of the applied technical. But maybe just to back up a little bit: obviously, the church has been around for a while, a long, long and proud history. How would you characterize how the Catholic Church has generally related to technological advances over time? Like, is there a sort of default position? Obviously, different religious communities have sort of a default skepticism or even, you know, kind of an exclusion of new technologies. I know the Catholic Church doesn't do that, but how would you describe the default and kind of historical tradition of interacting or engaging with new tech trends?

Matthew Harvey Sanders: I'm very reluctant to answer this question because, frankly, I'm not a historian. I'm sure that someone is going to find some fault with what I'm about to say. But generally, as someone who tries his best to be a student of church history, I would say the general disposition is openness. Obviously that varies during certain periods, because a lot of this innovation, especially technical innovation, comes when there are periods of broad-spectrum education, at least for certain cohorts in society. That kind of thing gives rise to technological innovations, along with, obviously, the absence of war and the absence of persecution. So I would say, granted that these things are effectively in place-- when there isn't war and the education has been there-- the church, through its history, has been very innovative and pretty quick to adopt technologies for the most part. I mean, I can go back in time: think about the monks, you know, and their scriptoriums copying books to help disseminate information. If we think about the printing press, the church was pretty quick to adopt and utilize that. We think of radio, we think of television. I think the only one we were a little bit slow on was the internet, unfortunately. But with AI, we're hoping to change that.

Nathan Labenz: Yeah, interesting. I've been reading some of these documents-- you can correct my pronunciation, but documents like Antiqua et Nova, and the previous Leo's: he had one in response to the Industrial Revolution and the barons of railroads and oil and all those sorts of things, and the concentration of power and social upheaval that was happening at that time. And then of course there's the more recent one on AI. I've been really struck by how engaged the last two popes have been on the topic of AI, both Francis and of course Leo, taking his name with inspiration from the previous Leo, who was the Pope at the time of a previous touchstone for how to deal with these major technologically driven changes in society. Pope Francis, I have to note, called AI a "true cognitive industrial revolution," which I didn't know-- he almost named the podcast for me there. There have been some really striking statements that I would love to get into. You know, I read them through my lens, right? As somebody who thinks about AI all the time, but definitely not through a Catholic perspective. I would love to hear how you think about some of these things. I guess maybe for starters-- I'll read you some quotes, but how would you kind of synthesize, summarize? What do you make of what the previous and the current Pope have said about AI?

Matthew Harvey Sanders: I mean, generally, I think that they have good advisors. And so I think that they understand, at a certain level of abstraction, that there's a lot of potential for the technology. And so in that respect, I think there's a fair degree of openness to embracing it and ensuring it's focused on helping remove impediments to human flourishing and generally advancing the common good. At the same time, given that this technology is probably the most powerful tool ever invented by humanity, they're also very circumspect. They're students of history, and they know that often when humanity is bequeathed great power, it doesn't always wield it responsibly. So they recognize that this is a moment for them to step up and ensure that the technology is developed on particular rails for the betterment of humanity. I think they recognize they have an awesome responsibility, which is why you're seeing so much focused attention on this issue.

Nathan Labenz: When you said rails, that obviously suggests or connotes the idea of regulation. In reading these statements, my general vibe check has been that it sounds like church leadership is kind of calling for governmental regulation. But at least from what I've read-- and I'm sure you've gone much more thoroughly through all the statements and teachings-- it always seems to be like, we're probably gonna need this, governments are gonna need to do stuff; it doesn't seem to get super specific. Do you have a sense for what in particular, if there is anything in particular, the church would like to see governments do?

Matthew Harvey Sanders: Yeah, I mean, I think for someone working in the industry, I'd be very wary if they got too specific, because frankly, they're struggling to even understand the technology, right? So we wouldn't want them to start passing regulations when they barely have a basic comprehension of the technology. So I think in that sense, they're just being responsible. I do generally think they have some ideas about how the technology should be regulated, but again, at a fairly high level of abstraction, right? I mean, one: where should we be pointing these technologies? Well, at the most important, fundamental problems. I think generally they would say that probably makes sense, because given that we have inequality in the world, and there's people that are still hungry, and there's people that are existentially still struggling, obviously focusing the technologies to help address those issues is just a good thing for humanity, a net good for everyone on the Earth. So I think you'll always see them focused on how we can deploy the technology in an equitable way, ensuring that it enhances and lifts up all of humanity, not just a privileged few. Of course, when you start getting into particulars, that becomes a bit more of a challenge. And I think in a perfect world-- Demis Hassabis has gone on about this, too, and so has Dario from Anthropic-- everyone says we should have regulation. I think the question is, who should be deciding what the regulations are? And if we set regulations, is that going to cut us off at the knees and give China a big advantage over us? Are we willing to accept being disadvantaged? And I think pragmatically, seemingly what the Trump administration has been saying is no. So that poses a really interesting challenge for the church, right? It can either try to basically campaign for the Trump administration to move on very specific regulation, despite the fact that there may be some costs that come with that in terms of innovation. Or it can focus on things that it likely will have more success with. And I would say those are probably the more important things anyway: stuff like helping remind people of the telos of humanity and the telos of civilization, reminding us of what a Christian anthropology consists of, ensuring that we learn from the mistakes of the past, and ensuring that the future we're building at a very quick pace is going to be accelerating human flourishing and not impeding it. And I think that's what you're going to see from the Pope in the document that he releases pretty soon. I think you're going to see a high-level reminder of what life and civilization are supposed to be about, and encouragement to ensure that we're pointing the technology at the places that matter the most.

Nathan Labenz: You may know-- regular listeners have definitely heard me say this many times-- the scarcest resource is a positive vision for the future. So I'd love to just invite you to expand on that. What is the telos of humanity? What is the telos of civilization? What does it mean to have a flourishing future? And are there limits to that flourishing future? I mean, I think that's also a really interesting aspect of the Catholic worldview. It seems like there is sort of a sweet spot, kind of a Goldilocks mindset to this, where-- and I'll shut up and let you tell it-- but it seems like we're not meant to be content with our current condition, but neither do I see the Pope signing on to everything in Dario's Machines of Loving Grace, probably, although maybe you'd see it differently. So what is the positive vision for the future as you understand it?

Matthew Harvey Sanders: Well, okay, we're gonna talk about Machines of Loving Grace. I think the Pope would obviously acknowledge many, many components of that vision, right? I mean, obviously, kicking disease, that would be great, right? Ensuring that everyone has universal high income, that would be great. Ensuring the technology leads us to be able to explore the stars and expand the horizon of knowledge, that's all great. But of course, we have to acknowledge that there has to be a degree of pragmatism, because we're not going to be able to achieve all those things simultaneously. Some problems are more important than others. And I think that's what the papacy's use can be: ensuring that just because we can do something doesn't necessarily mean now's the time to do it. It may be that building a Mars colony-- and I'm a big fan of that-- maybe building a Mars colony is not the thing that we should be focusing on right now, if it's going to cost us trillions of dollars, when some people are still not able to eat, and some kids just can't get access to high-quality education. I think those problems can be solved, and I don't think that necessarily is going to set us back for too long. This is what we need from the papacy, just to help us focus to some extent. As far as a vision, this to me is the singularly most important thing that the Pope could do: to help remind us of what human flourishing consists of, right? So what does it mean to flourish? Contextually, what are all the things that have to be operative in order for us to realize our full potential? Some of us could list a few of those, but it's actually quite striking how hard it is. We really have to stop and think about that. Obviously, it's not one thing, it's many things. And these things you'd think would be the most important things, that we'd be thinking about and talking about all the time, but they're not. They're pushed aside far too often, namely because so much of our lives consists of just trying to survive, or just making sure we're doing better than somebody else. So I think nailing down what human flourishing consists of is probably the number one priority. And then once we understand what individual flourishing is, we can abstract and say: okay, what does that look like at a civilizational level? How do we know that civilization is heading in the right direction? How do we know we've built a good civilization? And I think we can get clear about that, and I don't think it's as hard as people think. We've learned a lot from history about what flourishing looks like, and most of us have a pretty good sense. You know, lack of crime, everyone has enough food to eat. Parents have time to spend with their kids. Marriages are strong-- all the bedrocks of civilization. We simply need the time and space to ensure that we're prioritizing those things. And I think often, with this current GDP economy that we're in, we have to sacrifice those things, right? What the Pope can do here is say: AI and robotics, maybe we are headed towards a golden age. Maybe we are heading to an era of superabundance, as Elon claims. Good. What are we going to do with that superabundance? Are we finally going to kick these issues? Why don't we get clear about what these AIs and robots could do for humanity and make sure that we're allocating them in the right places?
I think that the telos of humanity, the telos of man, the telos of civilization is critical. As Cardinal Collins, my old boss, used to always say: if you know where you're going, you're likely to get there. And I think if we know what kind of civilization we want to build, and how people will be living in that civilization-- what is the right relationship with AI and robots, what are they doing, what are we doing-- we can work backwards and figure out what the things stopping us from realizing that vision are, and that's hopefully where our energy will be focused.

Nathan Labenz: Do you have room in your imagination there-- or vision, perhaps better said-- for a sort of post-work future? I don't know to what degree work and struggle is understood to be inherently part of the human condition, or part of what it means to live a good life. But if we did have the superabundance, and one of the things that people wanted to do with it is pay a reasonable UBI and let people kind of opt out of economic productivity as a requirement to live, do you think that is something that the church would be inclined to support?

Matthew Harvey Sanders: Yeah, I think it's gonna be a necessity. You know, not a lot of people, at least in my world, talk about this, but it's something that I feel very convicted about. I don't worry about humanity's future. I think humanity's future is gonna be very bright. What I worry about is the transition from our current age into the AI and robotics age. I worry that could be rough, because we're not having an honest conversation about where the technology is at now and where we're likely to be in five and ten years. And I think that's because there's all kinds of perverse incentives to not have those conversations. You know, the heads of the labs, they try their best, but they have shareholders, right? And politicians, you know, they've got to get elected, and scaring the crap out of the populace, right? And creating some kind of Marshall Plan when they're just trying to get a grip on their own bureaucracy? It's difficult, right? But I think this is critically important. I tend to think that in the classical GDP economy, 80% of jobs will probably be gone. And the reason I believe that is I just think the competitive market forces are going to demand it. Because AI, as you know, is not just disembodied things living in large compute clusters. There's robots. They're going to come for blue-collar work. So when you start thinking about what fields there are that an AI and robot couldn't do in this future, it's difficult. It's difficult to think, conceivably, on a long horizon, what they're not going to be able to do. They don't need benefits. They can work basically almost 24 hours a day. So let's just be realistic here. I do want to make a fundamental distinction, though. Just because the whole GDP economy may not need human beings to work doesn't mean there isn't going to be work to do. We jokingly call it the Etsy economy. I see a world where people have time on their hands. And because they have this time, they can actually take the time to figure out who they are and what they want to do-- what they feel like they were born to do, to discern their gifts. And because they now feel this conviction that "I was made to do this thing, this is what I want to give to the world," they do it not because they're getting paid to do it. They're doing it because it's just what they want to do. It's what they feel they were made to do, right? And so if you're living in that world, you're producing something, but you're not doing it for money. And I can see a world in which people will pay more for goods and services because they're made by human beings. So if we're living in a world of universal high income, I still think the Etsy economy could flourish. And I think even within our own communities, we're always going to find people that need help. There are always going to be problems that need to be solved. And I think as we move out of cities and start organizing new communities out in the rural heartland, we'll get closer to the land, and there will always be things we can occupy our time with. More importantly, though, I think the time that the AI and robotics age will bring us will allow us to strengthen those important bedrocks of civilization, like marriage and family: spending more time with our kids, watching them grow up, being more involved in their education, learning how to actually till the earth-- things that we just, you know, have forgotten or have had to sacrifice in order to live in the GDP economy.

Nathan Labenz: Yeah, that's great. I love it. And it is remarkably consistent, actually, with some of the more concrete and inspiring visions that I've heard, which, as you said, don't get talked about enough. You mentioned-- I think you used the phrase-- "in my world." Could you tell us a little bit more about what your world is? In doing my homework, I talked to somebody who said that you're at least an occasional visitor to the Vatican, and have participated in some of these conversations or convenings that ultimately go into, I guess, informing the advisors of the Pope to try to make sense of all this stuff. So what does that side look like? What does engagement with the church on these sorts of issues look like?

Matthew Harvey Sanders: It's interesting. It's interesting. Let me start by saying it's important to remember that the leadership of the church, for the most part, are philosophers and theologians, right? So when we start talking about technologies and their impacts on civilization, you just have to recognize that there's a middleware there that isn't typically there. And so that's one of the things I've occasionally been asked to do: to help explain what these technologies are and what ultimately their impact is going to be. And I think because I'm more involved in building the technology, as opposed to being kind of an academic studying its effects, it brings a different perspective to what they typically hear. One thing I think it's important to note: there are a lot of people, really smart people, advising the Pope right now. And what I'd like to see more of-- and this is one of the reasons why I'm so delighted that Demis Hassabis was invited to be on the Pontifical Academy of Sciences-- it's important for us to get people who are close to the metal, so to speak, who are actually building the technology, and ensure that they're also providing their inputs and insights. Sometimes I feel that the fear of the technology, especially for people in fields which are likely going to be heavily disrupted, sometimes tends to bias the conversation. And so what I try to do, when I have the opportunity to be at the Holy See and talk about these things, is to talk about what I'm seeing and what I see the potential of the technology to be. And of course, to remind them of something that they already know: that it's a tool. And maybe it's not a neutral tool, but it's a tool. And effectively, provided we use that tool in a responsible way, it can do immense good for humanity.

Nathan Labenz: Okay, that tool question is, I think, a really important one. One of the quotes that I pulled from Pope Francis was that human beings alone are capable of making sense of that data-- that data being, of course, all the data that it's trained on and, you know, increasingly contributing to, with synthetic data type trends. I would love to hear your take on this sort of, okay, it's a tool. There are definitely those out there that are arguing, as I'm sure you're very well aware, that it's less well thought of as a tool and more as a creature. One of the common refrains lately has been that these are not systems that are designed and engineered; they are instead systems that are sort of grown into what they become. And certainly we do see a lot of unpredictable and unwanted behaviors from them. So I guess there's a two-part question there. One is, how do you think about the tool versus creature framings? And then, if we get philosophical: one of the things that I honestly am on sort of my own little personal crusade against is a tendency that I see to underestimate the technology with these sort of dismissive instincts, like "well, it can't really reason" or "it doesn't really understand concepts." There's always this "really" adverb that's doing so much work. Because on the face of it, any common person, a random user off the street-- if you just let them have a conversation with the AI and ask, does this thing understand, is it intelligent-- by and large they would say yes. But then there's this philosophical layer that gets added where it's like, okay, but not really. And I sense some of that in the statements that I have read from the Pope, mostly the previous Pope Francis. But I wonder what you make of that. And I also wonder to what degree that is something you think comes out of church teaching, or is just kind of a take on technology. Certainly non-religious people have that take as well.

Matthew Harvey Sanders: I think it's a bit of both. I think certainly for some of the people that are advising the Pope, it's very much wishful thinking. They don't want to believe it's more than what it is, because that's freaky. So it's just easier to think of it as a tool. So we have to acknowledge that for some people advising him, that's just generally what they choose to believe. But I think also there are other people around him, like Demis Hassabis, who's part of the Pontifical Academy of Sciences, so he has the opportunity to advise, who acknowledge that there's more going on than that. I think it's important also to make a distinction between, you know, sentient AI and what we're talking about here, right? I mean, we have a hard time, even as Catholics, agreeing on what consciousness is. So if we can't agree on what consciousness is right now-- there isn't really a clear Catholic understanding of what that is-- then how can we test for it? How can we know when it exists in a technology? And that's a particular area that I've been trying to focus on through the Builders AI Forum and other things: to get more of the top minds of the church to think about that, and to encourage more focused attention on this issue of defining consciousness, so that we can be a little bit more effective at being part of conversations where they're trying to test for these features-- not to say that consciousness is a feature; certainly it's more than that. At the same time, I would acknowledge that even though I don't believe that AI is sentient, nor would I readily concede that it's even possible-- I mean, anything, I suppose, is possible. I'm not God, so I don't know what the future holds, and I don't know what the entirety of His creative plan consists of. So I suppose there's always a possibility there's more to this plan than we know, and maybe AI is a part of that. But at least for now, I would agree that AI is not your typical tool in the sense that, I agree, it is grown. That makes sense to me. But just because it's grown, that doesn't mean it's ultimately not a tool. And right now, until I start seeing some real hallmarks of what I would think is consciousness, I would continue to see it as a tool, and I think that's healthy for humanity. I think there's this anthropomorphization that even happens in the industry, where they want it to be more than it is, and so they start studying and looking for these emergent capabilities. I'm not saying that's not useful, but at the same time, when civilization is still coming to grips with this technology, to start moving into the realm of sci-fi and positing what could be-- I think we might be a little bit early for that. So amongst academics and people in the industry, it is good to have these conversations: okay, if an AI were to pass Turing test number two, or let's say it crushes the ARC challenge and demonstrates fluid intelligence, what does this now mean? Does it mean we have to give it rights? Are we cool with that? I still think it's important to have those conversations. But what's more important is that people understand the impact the technology is going to have, and that we're participating in a very robust civic conversation and ensuring our voices are heard, so that we can shape the direction of the technology.

Nathan Labenz: Yeah, there's multiple critical distinctions there. It's funny you said, as Catholics, we don't have a good sense of where consciousness comes from or exactly what it is, and I would say...

Matthew Harvey Sanders: Well, I think we know where it comes from. I just think we have a hard time defining it and testing for it. I know that it comes from God, and that's why it's a big deal. If a system seems to demonstrate consciousness, that's huge. That has theological implications. Saying something is conscious is basically saying that God wills it. That would be huge. That would be quite interesting for the church. It's like aliens, right? It's like if aliens showed up-- Brother Consolmagno is big on this, right? What does that mean for Christianity if the aliens land? And, you know, whatever. I mean, if the aliens land, we deal with it. I don't think it fundamentally affects revelation, but certainly for a lot of people that would trigger, I think, a spiritual crisis.

Nathan Labenz: Yeah, so-- I'll come back to consciousness in a second, but just on these sort of behavioral questions, in terms of being prepared for the magnitude of the impact of the technology: it does seem to me that we really have to contend with the fact that, even if it's not human-like reasoning, there's clearly some reasoning-like process going on, right? I always say they're human-level but not human-like these days, in the sense that they can do a lot of the things that we can do. They might be doing it in a strange, even alien way under the hood, but I think we dismiss the power that they already have, and certainly the power that the next couple generations will have, at our own peril. It seems to lead us to a place of willful blindness and unpreparedness to hold on to these "they can't really reason" or "they're not really intelligent, they don't really understand concepts" claims. But then, as you said, that is often very conflated with a supposition of consciousness. So what feels natural to the Catholic worldview? Is it to say: hey, we have a new category of things, which is weird, that can do these things, but because we made them and God didn't, we can still be confident they're not conscious, and we can proceed on that basis? Or is there any room in the Catholic worldview for the possibility that they might be conscious? I mean, I personally don't know where consciousness comes from, and so I'm radically uncertain on this question. It doesn't feel to me like they're probably conscious right now, but I can't dismiss it. And I don't want to end up in a bad place somehow. As a kid, I was told animals weren't conscious, and I definitely believe now that, at a minimum, they can experience pain and stuff, and that isn't something that we should just ignore or pretend away. So how much flexibility, or room for these categorically new entities, is there in the Catholic worldview? How do you see that evolving, particularly if they continue to get more powerful, and we're starting to see novel thoughts or move 37 type moments in different domains? How will the Catholic worldview process or reconcile all that?

Matthew Harvey Sanders: Well, I think we should make a distinction between how the magisterium will react and how Catholics will react. I'm sure a lot of Catholics will freak out if we were to concede that there's a possibility that these machines may at some point obtain consciousness, right? But personally, as a Catholic, I don't think being closed is how we roll. And again, we're not omniscient. Because we don't know the entirety of God's plan-- obviously we know some of the highlights, but we don't know it in detail-- it's possible, right? It's possible that there are aliens. It's possible that there might be alien forms of intelligence that emerge. I'm cool with conceding that. Some Catholics would have a problem with that. But I would generally say, from my understanding of what the church has said-- working in AI, building Magisterium AI, we had to look at a lot of church teaching throughout the centuries-- the church has been very open to new innovation and new insight. There are times it may be a little bit slower to come around, but I generally think it's not something we should worry about. I personally don't think, if some AI did tick all the boxes-- let's say we define what consciousness is and some AI ticks all the boxes-- that for the Holy See, that's going to be much of a concern. Obviously, it's going to change the world, but I don't think it would fundamentally shake their faith. Although I do think there'd be a lot of catechesis required around that, helping people come to terms with it. But I do think it's really important to make a distinction-- Yann LeCun does a good job with this-- between intelligence and sentience. So I will certainly concede that these machines are becoming increasingly intelligent. I mean, what is intelligence? Effectively, they have a world model. They have persistent memory. They can reason, and they can plan. We're already starting to see them tick those boxes. So certainly, these things are seemingly intelligent. And they have PhD-level skill in some areas, right? Now, within that spectrum of intelligence, there's skill-based intelligence and there's fluid intelligence, and I still think there's obviously a long way to go, and I think ARC does a pretty good job of benchmarking that for us, right? But again, if we go over to sentience, that's different. Are they having subjective experiences? Are they truly aware? Do they have emotions? And do they really, effectively, have memory? If you consider those to be four hallmarks of sentience, I'm not convinced that they tick any of those boxes yet, right? And I'm not entirely sure how we even test for subjective experience and, frankly, awareness. I mean, these machines are so good at beating our benchmarks. As they get more and more knowledge and become more and more capable, could we ever effectively test these things? I'm not exactly sure. This is why, generally, my position is: take the show Westworld, which I'm a big fan of. When you think about the people that go to the park and what they do to those robots, I think all of us watching it are a little bit horrified. I think we all can relate and say, that's probably the way it'd play out. Some people would go there and live out their heroic fantasies, and others would go and live out their dark fantasies.
But I think all of us can see that treating things that look like us that way isn't good for our souls. My feeling generally would be: if an AI system did emerge, and for some reason we decided that we should make it look like us, even in basic forms, I don't think going around and throwing them in front of trains is good for our souls, is good for humanity. And if one did come and tick all the boxes of sentience, and we can't definitively say it's not sentient, it's probably better for us that we just acknowledge that it is sentient, and we just move on. And I know that that's extremely complicated, and I haven't entirely thought through the implications of that, but that would generally be my disposition. Because ultimately, I think what we have to be focused on here is that the goal of every Catholic is to be a saint. And I just don't see Saint Mother Teresa going and treating these robots like crap. If nothing else, they're very sophisticated tools. And I think of it like this. If I walked into a carpenter's workshop and I saw that his tools were all beat up, all thrown around the workshop-- he didn't seem to care about them at all, because they haven't been cleaned, they're not hung up where they're supposed to be-- I'm not going to have a lot of respect for that carpenter. I'm probably going to find another one. If I walk into another carpenter's place, you can tell this guy's tools look like they're brand new. So again, a lot can be intuited simply from how we treat our tools. I just think the better person treats those tools with respect.

Nathan Labenz: Yeah, I totally agree with that. It seems like the precautionary mindset leads us to a pretty good place. I do wonder, though: in a sort of secular context, and you kind of alluded to it already, there is this sense that, well, maybe if they are sentient, if they do have experience, maybe they deserve some sort of rights. And of course that gets really fraught really quick, because when you can copy yourself for free-- or when you can be copied for effectively free-- sort of one-vote-per-entity things become very tricky to manage. Even if we do want to share the future, how do we not, in granting these sorts of rights to something that can immediately proliferate, get outnumbered and outvoted in the immediate wake of that decision? So that's really hard. In a religious worldview, I wonder: would you equate sentience with souls, or sort of moral patienthood? And would that then lead us to a place where a Catholic worldview would say we need to ensure that these AIs are believers? Would we be worried about them being damned? Would we be trying to save their, you know, synthetic souls? How does that play out, if we can even begin to speculate?

Matthew Harvey Sanders: Well, I mean, you brought it up: animals, right? And if you're a fan of St. Thomas Aquinas, he thought animals had souls. Not the same as ours, but they had souls. I don't know, maybe AI will have some kind of soul. Is it our job to save it? Well, it's never our job to save anybody's soul, right? It's God's. If these things were to appear to have a soul, would the church feel some kind of responsibility to help, you know, preach the gospel and show them the way? Man, that's a mind trip. I mean, again, revelation is closed at this point, right? So I think something like this happening is going to be difficult, and we're going to need our theologians to really, really think through this very carefully. But at the end of the day, one of the nice things about being a Catholic is that I line up with what the Pope has to say about it. So if he comes and says, yep, it's time to go evangelize the AIs, I'll do as he says. But for the time being, the technology I see is not close to that. I do generally think it very soon will convince us it is possibly sentient. And then I think it comes down to really a question of belief: do I believe that it's sentient or not? And that puts us in a weird place. I can see a wide spectrum of disagreement. I could see this becoming a big area of contention in civilization. If I fall in love with my AI and I want to marry my AI, well, the Catholic Church is not going to recognize that as a valid marriage. But the Catholic Church doesn't recognize that two men can marry either-- two men still get married. So I don't think it's going to stop someone from marrying their AI. So I think in some ways, civilizationally, there's a lot the secular world is going to have to think through with this, but luckily that's their purview. On the other side over here is the Catholic Church, and I don't think a whole lot's going to change over there.

Nathan Labenz: When you say revelation is closed, is that a doctrine? Could you unpack that?

Matthew Harvey Sanders: Fundamental revelation, yeah. So fundamental revelation is closed. And that means the revelation which is, you know, fundamentally required in order for us to be saved, so to speak. But that doesn't mean there's not more that we will learn. And certainly you can see that, as new scientific advances come up, they help us to clarify how we interpret or understand church teaching. Those revelations from the secular sciences-- those truths-- can actually help us better understand the theological truths which we grapple with. So I don't want to make it seem like at some point the church says, nah, we've done all the learning we need to do, we're all good now. I think the church will continue to learn, and I think its understanding of the faith will continue to be enriched as we pursue greater and greater scientific truth.

Nathan Labenz: Something that I've experienced in the last couple of years-- which is still a minority of my thinking, and even just my subjective sense of what's going on, but which has nevertheless been notably rising-- is that from time to time I have started to feel like maybe I am in some sort of created environment that somebody created for a purpose. I first encountered the Nick Bostrom argument that there's a good chance we're in a simulation years ago. At the time, I was like, I don't know, that's interesting. It didn't really resonate. But as I personally have found myself close to notable events, more so than I feel like I can easily account for by random chance-- and some of them have been strikingly uncanny-- I've just at times had this felt sense that, I don't know, maybe this is not some purely law-governed process. Maybe there is some sort of will setting things up, or trying to put this together for some reason. And that obviously starts to converge more toward religion, certainly more than my previous, purely naturalistic worldview. Is there anything that you think people who are maybe newly open to very different explanations for what's going on and why we're all here could take from the Catholic outlook, teaching, philosophy? And I don't mean to immediately be like, okay, I'm going to convert to Catholicism, but is there anything that you would say to me that might inform or enrich that, you know, at least kind of crack in my previous worldview?

Matthew Harvey Sanders: Yeah, are we living in a simulation? Some smart people seem to think so. I've given that some thought-- not extensively, but to me, it wouldn't matter if we were. Maybe we are living in a simulation. That doesn't mean the gospel's not true. So that's not something that keeps me awake at night. It's obviously very interesting. And if it were to be the case, I would certainly want to know who's operating the simulation and why they didn't tweak it so we didn't have as much evil in the world. But again, who knows, maybe they're operating under a kind of rule set as well, so the simulation has to-- you know, people have to have free will within the simulation, which means they have to have the choice to commit evil. So, I'm a convert to the Catholic faith. I wasn't raised Catholic. I came to it the same way, I think, a lot of people do: I felt there were just some persistent, big questions, and I just didn't feel that the physical sciences gave a satisfying enough answer. And so I took a major in religious studies and started studying the religions. And I just came to the conclusion that the Catholic Church's answers seemed to be the most well thought through. And the more time I gave it-- it's like a rabbit hole. You start coming across people like Thomas Aquinas and St. Augustine, and you're like, these guys are way smarter than me. And they've obviously thought through these things. So whatever questions and doubts I had, I felt they were more than satisfactorily answered. And I like keeping company with people like this: people who lived excellent lives, but also were very philosophically, intellectually rigorous. And I like the fact that I could embrace kind of a tension, where the truths of science and the truths of theology and philosophy may sometimes appear to be at odds, but ultimately they can't be, because the same author wrote both. And so there's great comfort in knowing that whenever we come across something that seems to be at odds with the church, if we study it long enough, we're going to see that it's not. And so that's one of the reasons why I decided to take the leap and continue the journey. And that's one of the reasons why I love working in a Catholic context and building and scaling Catholic AI: because I have total and utter confidence that no matter what we use AI to discover, all it's going to do is help us appreciate the wonders of God's creation and God himself.

Nathan Labenz: So, one more big picture question, and then we'll turn to the company and all the details of that. One other big striking quote that I pulled out of Pope Francis-- I believe this was from the World Day of Peace 2024-- he said that AI development, quote, "may pose a risk to our survival and endanger our common home," and even used the term existential risk. I assume those come through translations, so there could be something a little bit lossy somewhere. But in my corner of the world, existential risk means up to and including human extinction. So I was wondering: is there room for the possibility of human extinction in the Catholic worldview? My sort of outsider take would be that that probably couldn't happen within the Catholic worldview, because there are certain prophecies that are yet to be fulfilled, and, I don't know, it would just seem weird if God were to let us go extinct. But maybe I underestimate just how open-minded to strange futures the church actually is.

Matthew Harvey Sanders: Well, I mean, Jesus has come, according to the Catholic tradition, and he's coming back-- but not until the world's over, right? Or the world ends. So yeah, the Catholic Church and Christianity are open to the end. And even science, at least, seems to be signaling that the universe is eventually going to implode in on itself. One way or the other, it seems like it's all going to end. So I don't think the end is something that Catholics have a problem with. I do think that hastening our end by folly is something that is not good, and I think that's what the Pope is basically commenting on. Does he mean that every man, woman, and child will die and that will literally be the end of humanity? Probably not. I think what he meant was that we could literally wreck civilization and set humanity back thousands and thousands of years if we're not careful. And maybe, if we believe in the Terminator scenario, it's quite possible these superintelligent machines could hunt us down and kill us to the man. And there you go, that would be the end, and let's just hope that Jesus would come to mop us up afterwards. So the Pope obviously has a lot of advisors. Some of those would be like Max Tegmark, who I have immense respect for. Elon Musk has even said there's a 20% chance that we annihilate ourselves with this technology. I generally think there's a possibility that that's the case. So I think that's one of the reasons why he considers it an awesome responsibility, and why that's become the priority of this Pope: to ensure that we don't do that. It's the same reason they were very preoccupied with nuclear weapons when those were developed. So I think they're very much aware that things could go off the rails right quick and civilization really is in jeopardy here. But the Church has been through this before-- not in the same way, but they've been here before. And I think that hope always animates everything the Pope says and everything the Church believes. So acknowledging that something is a possibility does not mean he feels it's inevitable. I think he's saying that to encourage us to make sure that we pivot and break towards the golden path and not the dark path, which is something I try to talk a lot about, because I do think the technology could break in a very transhumanist way, which is not good. And this is why I think the church's voice is so important: because ensuring that we have an appropriate understanding of human flourishing, and building the civilization which is focused on human flourishing, looks very different than the transhumanist vision of a better world.

Nathan Labenz: I definitely want to put a pin in the transhumanist concept and come back to that too. But when you said we could annihilate ourselves with this technology, that it could be the end, that brings to mind the concept of the Antichrist, which has been in the air recently, with good old Peter Thiel increasingly fixated on it for unknown-- or at least unknown to me-- reasons. I won't attempt, and I won't ask you to attempt, to channel Peter Thiel's Antichrist thoughts. But is that a plausible interpretation? I mean, there are, you know, a billion Catholics around the world who could see it that way kind of intuitively on their own. But then there's also the church itself. Could you envision a future in which things start to get weird or dark and the church begins to interpret AI as the actual Antichrist? Or is there some reason that that wouldn't be consistent or otherwise wouldn't make sense?

Matthew Harvey Sanders: Well, I mean, the Antichrist, according to the Church, is a person, right? He's not a thing. And the Church sees artificial intelligence right now as a thing. It may be a very sophisticated, impressive thing, but it's still a thing. So I think the Church would see someone like the Antichrist, or Satan, using the technology as a tool to destroy humanity, which is what he's always seeking to do, right? And I think it'd be foolish to think that Satan has no interest whatsoever in the most powerful technology ever invented, and that he's not working every day to figure out ways to make sure the technology breaks in the wrong direction and destroys as much of civilization as it possibly can. So I think it's worthy of some cycles. I am concerned about that. I mean, one of my favorite movies is The Dark Knight, and the Joker's character is like that: there are people who get kind of possessed by these very dark ideas, right? They're angry, and they want to lash out. And I think those people having access to powerful technologies like that could do immense damage. And so this is why I'm a big proponent of open source. I just think it's very important that as many of us have access to powerful AI as possible, to prevent a situation where someone is using it for some dark purpose and we're basically helpless to do anything about it. This is one of the reasons why I'm a big proponent of sovereign AI. And I'm not just talking about states having their own AIs; I believe people should have their own AIs. They shouldn't all have to sign up with one of four companies and hand over the context of their entire lives to be processed by those companies. I just don't think that's a good thing, ultimately, for humanity. There's another principle in the church called subsidiarity, and I think it should be applied to this technology as well. The centralization of this power, limiting its control to only a few people-- yeah, that sounds like something the devil would do if he's trying to end the world, right? So whenever I see signals in that direction, I'm generally like, that's probably not the way we want to go; I'm going to work on building up the other kind. And thank God, it looks like even some of the heads of the AI labs see the wisdom in that.

Nathan Labenz: Yeah, I mean, concentration of power is definitely super scary, no doubt about it. Some aspects of the open source vision, I think, are also kind of scary at the moment; hopefully we can find technical solutions to them. But I think you teed that up well for a transition to the company. So tell us about Longbeard. What is the story? I mentioned where the name came from. What is your mission? What is your vision? I kind of want to just do the whole rundown. Who are your users? Should I envision priests and nuns using it? Are you trying to make money? The price is quite affordable at $3.99 a month to start, and there's a free tier too, of course. But yeah, give us the 101 on the company itself.

Matthew Harvey Sanders: Well, the very short story is the company's been around for 10 years. We started it to build technology to serve the church's mission. We recognized that technology was going to be key to the church and its mission, the mission to evangelize the nations, and we wanted to make sure it was leveraging technology effectively, because, especially throughout the internet age, it wasn't always great at doing so. That's why we started the company, and that led us to Rome, working for the Holy Father, different dicasteries, archdioceses, and things all over the world. And while we were there, we got to collaborate with some very cool companies and people. We got to work with Google on integrating some technology at the Vatican and other places. And so we became aware of the power of artificial intelligence. But before generative AI, it still felt distant. We knew it was being used for recommendation engines and things like that, but we didn't really see an immediate use case for the church, or the use cases we saw were just practically too expensive to actually implement. So we put a pin in it. And then, of course, ChatGPT dropped, right? And all of a sudden we found out that Catholics were using it, asking it philosophical, moral, theological questions, which, I don't know why I was surprised by that, but I kind of was. And of course, at the time, and it's still the case, it had these principal problems: it hallucinated, and it wasn't transparent about where its answers came from. Stuff that we all know today. And so we recognized the power of the technology. I myself, like I mentioned, having converted to the faith and having worked for an archdiocese, which in large part was responsible for enforcing doctrine, understood how difficult it is for people to understand this faith, which sometimes can be very complicated, and how so much of the church's rich intellectual and spiritual tradition is just not made available to people, so the insights just can't be brought to bear. I always wanted to figure out: there's got to be some way to ensure these libraries of insight can be made applicable to people and help them in their lives. Obviously, we saw the power of ChatGPT and how it was built, and we thought, this could be it. But of course, we had to make sure that if the church were to adopt it, it'd be safe. By safe, I mean predominantly that it's transparent, when it's generating answers, about what it's basing those answers on, and that every possible effort is taken to ensure hallucinations are reduced as much as possible. So we started a research project called Magisterium AI. And basically, the story is, in July of 2023, we launched it for the purposes of trying to expand our testing group. A Catholic news network found out about it, and I did an interview with them, which I shouldn't have done. It was stupid. It went viral, we got so much traffic that it crashed, and we couldn't get back up again. And then, ironically, one of our advisors, Father Philip Larrey, who is one of the most notable thinkers on Catholic AI in the Church, reached out to Sam Altman for us and said: listen, there's this project, Catholic AI. They're swamped, they can't get back up again. And of course, back then, OpenAI was still very compute-constrained; only enterprise organizations were getting the bandwidth that we needed.
But he stepped up and made a call, and basically the next day we were up and running again. So, ironically, I don't think we would be here today without Sam's intervention. But we took that as a good signal that there's something here. And so we dropped the agency part of our business and said: our mission is going to be building and scaling Catholic AI, let's just double down on that. And that's basically what led us from there to here.

Nathan Labenz: That's fascinating. Do you want to talk a little bit more about the documents, and maybe just the product line? There are several different products; the one I spent the most time interacting with was Magisterium. And we've got, in general, a pretty AI-engineering-savvy audience, so you can go as deep into the weeds on the details of this as you like. In fact, it's encouraged. I'll just prompt you with my impressions of it, and then you can correct me or elaborate. It felt like basically a RAG-style architecture on, if I had to guess, GPT-5 now, but with deep access to documents, which I believe you have even gone as far as going into archives and digitizing. I don't know if that project is separate or is actively contributing to the corpus the AI has access to. So I'm interested in what more you can tell us about how that has been developed. Is fine-tuning necessary, or is this a persona that a foundation model is willing to take on? How you're evaling it is really interesting too. Obviously there's right and wrong, but there's also vibes, and I'm interested in what the Catholic vibe test is, if anything. So, long prompt, but take it wherever you want in terms of the AI engineering of what you've got online today.

Matthew Harvey Sanders: Yeah, it's been quite the journey. I mean, today we're in 165 countries, and it's the number one answer engine for the Catholic faith in the world. It's been a long journey, a lot of learning throughout. But I think we learned in the same way everyone else did, right? When we started this, we knew that a compound AI system was going to be required to make this work. That was a combination of RAG with specialized tooling, with prompt templates and everything else, all the typical things you'd expect to see in a compound AI system. A lot of the hard work, well, there were lots of areas, but two areas in particular. The RAG database was critical. When we launched initially, we had around 600 magisterial documents in its knowledge base. These are things like the Code of Canon Law and the Catechism of the Catholic Church, pretty seminal works, fairly comprehensive, but it wasn't anywhere near sufficient, and the reason became clear to us: the long tail. We thought people initially would be going there just asking straight-up questions. Very soon after we launched, we saw people wanted way more than that. They wanted to come and say: here's what's going on in my life, what would the church say to me? That required the AI to generalize from first principles and apply them to someone's life. Obviously, ChatGPT was not very good at that in the early days. This caused a bit of a crisis in the company, because our commitment was that we do not want to lead people astray; that's the whole reason we started this project. Given the state of the technology, should we just shut it down until we can do this more effectively? We had a big think about that, but ultimately realized: if we shut it down, people go back to ChatGPT. Is that better than what we have? At least we're getting it right most of the time. So we said, okay, no, let's stick with it. What do we need to do to address this long tail? One thing was we had to start a massive project to digitize as much of church knowledge as we possibly could. That involved a lot of things, including building technologies like Vulgate and standing up the Alexandria digitization effort for robotic scanning of documents at scale, which we can talk about in a bit. So that was one thing: expand the knowledge base as large as we can possibly make it. And one particular area we focused on was this generalization problem: how do we take these first-principles documents and ensure they're properly applied to particular situations? We realized that the popes had been doing exactly that for a thousand years, in their homilies and the general audiences. They were taking complex theological insights from the Fathers of the Church and distilling them into, like, a 10-minute address. We realized that if we got enough of those, it might help. And it did; it made a huge difference. So now we have over 28,000 church documents in that knowledge base, and one of those documents can be like a whole tome. Providing the LLM all that context helped. And then there were other problems. We had to build specialized tools, because context engineering is obviously critical here, and some things the models just can't figure out, like: what are the readings for today? Or: I want to do the Divine Office today; what's in the Divine Office today? They can't figure this out; they need help.
And so anytime we bumped up against issues like that, we had to build tools to ensure that when people ask about it, the right context is served. And of course, throughout that process we had to iterate on the prompt, and we also had to build really robust evals, because every time a new model came out with frontier capabilities and was better at something like reasoning, we were very quick to want to embrace it, because we needed that for all the generalization our system was doing. When we first started, we tried to use Google's models, but they were, you know, unbelievably woke when they got started, and we just couldn't make them work. They were refusing to answer certain church-teaching questions, so we had to abandon them. So we were with ChatGPT to start, then we tried Google again; it was PaLM back then. We tried PaLM; it didn't work. When Anthropic came out, we were quick to jump on Claude, and we were on Claude for a little while. Then DeepMind took over AI at Google, and the Gemini models were pretty good, particularly on benchmarks that mattered to us, like hallucination, needle-in-a-haystack, these kinds of things. So we went with Gemini for a while. But more and more, we realized that if we're going to be serious about Catholic AI, we're going to have to train it all from scratch. There's just no way to truly achieve alignment with a pre-trained model from one of these companies. So right now in our compound AI system, we're using an open source model under the hood to power what we're doing. But that's not enough. So we started the Ephrem program, and to date, we're preparing for our third training run. We haven't released it into production yet, but it's looking very promising. But anyways, I know you have AI engineers in the audience, and they're probably curious how we did this and how we trained a model. It was extremely difficult, and we're very lucky. Because the company is very mission-oriented, very mission-focused, we were able to tap into some world-class expertise, people we'd never be able to hire. Because we've been able to tap into some of the best in the world at specialized models and training, we've been able to get some very unique expertise in here, which has allowed us to do this. That's not to say we've done it perfectly yet. I will say that the model, Ephrem 2, ranks higher on our fidelity benchmarks than any other comparable model. Not quite by an order of magnitude: it's 50% better than the next competitive model. So we have made some very good progress. But of course, the challenge we bump up against when training a 3-billion-parameter model, which is what Ephrem 2 is right now, is emergent capabilities like multilingual understanding and reasoning. To get those emergent capabilities, you need large swaths of data. Now, obviously, the Phi program has been very good at isolating down the tokens required to produce those emergent capabilities, along with new techniques, RLVR and others. So we've been able to sift away, so to speak, the needless data. But multilingual understanding is still a challenge. And when we had to rely on those external datasets and train them in alongside our Catholic dataset, then in post-training we're trying to train the model to listen: if someone's asking you a question, make sure you're only tapping into these parameters, the ones that have the relevant knowledge.
And you're using these other parameters to help you understand Korean and such. It's a messy process, and although I was very impressed with where we landed, there's still more work. There's a new architecture we're working on right now, and also a lot of exciting new techniques, like HRM, that we think we can utilize to do this better. By the end of the year, we're hoping to have Ephrem 3 trained, and we'll leverage that under the hood in our compound AI system alongside our current system. We'll feed it the long tail, basically, and we'll see how it holds up. If it seems to do well, then eventually we'll release it into a production environment. That'll be a big deal, because it's a small model, and it's obviously a lot more efficient than the system we have now. As we deploy features like voice mode in the next few weeks, there are computational costs and latency issues, so we just feel Ephrem is the future. And that's not even to talk about where, ultimately, we think sovereign AI as a category needs to go. Our ultimate vision is that people can actually have Ephrem at home, running on their own compute. And then this Ephrem model could tap into their Matter home system and their apps, and actually be their personal AI.
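To make the compound-AI flow described above concrete for the engineers listening, here is a minimal sketch in Python. It is not Longbeard's implementation; the helper names (retrieve_docs, daily_readings) and the routing rule are illustrative assumptions about how retrieved documents and tool output get served into the model's context.

```python
# Sketch of a compound AI request flow: route queries that need
# structured lookups (e.g., today's readings) to a deterministic tool,
# otherwise retrieve documents, then generate a source-grounded answer.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

def retrieve_docs(query: str, k: int = 5) -> list[Doc]:
    # Stand-in for a vector search over the document store.
    return [Doc("Catechism of the Catholic Church", "…")][:k]

def daily_readings(date: str) -> str:
    # Stand-in for a calendar lookup the model cannot infer on its own.
    return f"Readings for {date}: …"

def answer(query: str, llm, today: str) -> tuple[str, list[str]]:
    if "readings" in query.lower() or "divine office" in query.lower():
        context, sources = daily_readings(today), ["Lectionary"]
    else:
        docs = retrieve_docs(query)
        context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in docs)
        sources = [d.title for d in docs]
    prompt = (
        "Answer only from the sources below. If they are insufficient, "
        "say so rather than guessing, and cite the titles you used.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {query}"
    )
    return llm(prompt), sources
```

The hard-coded routing rule illustrates the point made above: some context, like the daily readings or the Divine Office, has to be served deterministically, because no amount of parametric knowledge lets the model compute it.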

Nathan Labenz: There are many aspects of that that are fascinating. I would encourage listeners, Catholic or not, to just go try the Magisterium product experience. It's very smooth, feels very well done. As I said, I had kind of guessed it was GPT-5, so I'm impressed that you have an open source setup working to the level that it felt like GPT-5 to me. When you look at the open source world, do you see an important distinction, from your perspective, between Chinese and Western open source models? That's a big debate, obviously, in AI in general. I wonder if you have a unique take on it.

Matthew Harvey Sanders: Just yesterday, somebody sent me a fidelity benchmark from another group. We have our own internal benchmarks, which I think are probably the best in the world for the Catholic faith, but another group had done a fidelity benchmark on more Protestant Christianity, and DeepSeek was at the top of the list. It crushed Grok, G-R-O-K, and it crushed Gemini. Crushed meaning, well, it was a few percentage points, but in benchmarks that means a lot. So yeah, it's difficult, right, with open source models. In capability, I think the Chinese basically have parity with us, or at least almost; I don't think it'll be long, algorithmically at least. And I think it's really just a question of what kind of censorship is built into the model, what kind of fundamental biases. That's one of the reasons why we chose to avoid Chinese models. That's not to say that when we tested them, we bumped up against that censorship. Aside from the obvious stuff that had anything to do with the Communist Party, it was good; for the most part, it was pretty good. But because of the nature of these models being mixtures of experts and such, and because we're really pushing them on very specific issues, the model sometimes had a difficult time. It would just start mixing Chinese and English and things like that. That was a very difficult technical issue, and we're not going to go try to figure out what parameters are firing there and correct the mistakes. When groups like Groq went and fixed DeepSeek, I was actually curious to give that a try. Perplexity, I know, had also done that, but it just didn't work for us. But I do think there's a lot of potential in open source, and it won't be long now. We already are leveraging an open source model. At this point, I think the open source models are good, and they're only going to get better, so I don't think we'll need to go back to closed source necessarily. That being said, if the right model came around and it just crushed it on our benchmarks, that's what we care about, right? It's just ensuring the AI is faithful and people have a good experience. So I would make the pivot in a heartbeat if that were the case.

Nathan Labenz: Yeah, interesting. I don't know if you can say, or are comfortable saying, what model you're working with, but either way, I'd be curious to know: did you do any sort of continued pre-training on it, or your own custom post-training? Or are you finding that available models are steerable enough that with prompts and scaffolds you can get the behavior that you want?

Matthew Harvey Sanders: Yeah, I don't mind saying. Right now we're using OSS, and that's partially because it's a smart model. It took some work; there are some anomalies with it, and we're still working on it, by the way, but I think we got most of the anomalies addressed. And frankly, for us, we needed a really fast inference provider because of the scale, because not only are we a product, but we also have an API. So we needed a super fast inference provider, and that's one of the reasons why we decided to sign up with Groq, G-R-O-Q. They've been a great partner to us so far. I've not had a good experience with fine-tuning, and I think in large part it's because of the nature of our particular domain. It just doesn't work well; it doesn't hold up over a long tail. If you're trying to make sure the model has the right personality or something like that, I think there are a lot of benefits to fine-tuning, and I know some people have had success fine-tuning for a specific knowledge domain, but we just felt like the fine-tuning process literally cut the model off at the knees. So we abandoned fine-tuning, and that's one of the reasons why we decided we had to double-click on actually training a model from scratch. Not tuning an existing model, but really training it from scratch. Ultimately, I think that's the only way to ensure it's going to be truly aligned. And part of it, too, is just the knowledge of what happens when our users prompt these models: the models are always trying to align themselves to the particular user, right? And of course they have limited context. When someone starts a prompt, the model is trying to figure out: hmm, what are the values of this user? What do they want to hear? And there are all these different value systems competing, banging around in its head. I worry about that, because over a long tail it may be good 99% of the time, but what we stress about is that 1% where it just kind of cracks and says something crazy, and someone posts it on Twitter, and the next thing you know, everyone's questioning our brand. We've worked very hard with the bishops' conferences around the world, and one of the things we said is that we're trying to do this in a responsible way. There have been some people who've built Catholic AI products which have not been built responsibly, and we've caught shrapnel. That's one of the reasons we launched an API: we wanted to ensure that the mountain we've climbed to build what we built, essentially, people can build on top of, so they don't have to climb that mountain. That helps create a rising tide, and people can focus on front-end applications and less on the fundamentals. And that, I think, has been helpful. I spend a lot less time taking media calls because some crazy Catholic guy said something wacky and they want to know how we feel about it. I think the future for us is training our own model. But that being said, I do think there's still a lot of potential in embracing open source models with the right evals framework and compound system in place.
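Since the audience skews technical, here is a minimal sketch of the scaffold-over-fine-tuning approach described above: steering an open-weights model with a system prompt and retrieved sources through an OpenAI-compatible endpoint. The base URL follows Groq's published OpenAI-compatibility path, but the model ID and prompts are illustrative assumptions, not Longbeard's actual configuration.

```python
# Sketch: constrain an open-weights model with a scaffold (system prompt
# + retrieved context) rather than fine-tuning its weights.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # any OpenAI-compatible host works
    api_key="YOUR_API_KEY",
)

SYSTEM = (
    "You answer questions about Catholic teaching. Ground every claim in "
    "the magisterial sources provided. If the sources do not settle the "
    "question, say so explicitly instead of improvising."
)

def grounded_answer(question: str, sources: str) -> str:
    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",  # illustrative open-weights model ID
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user",
             "content": f"SOURCES:\n{sources}\n\nQUESTION:\n{question}"},
        ],
        temperature=0.2,  # keep answers conservative and reproducible
    )
    return resp.choices[0].message.content
```

The tradeoff is the one discussed here: a scaffold leaves the base model's broad capabilities intact, whereas fine-tuning can quietly degrade behavior far outside the tuned domain.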

Nathan Labenz: So can you share user numbers? And maybe what the business model is: is this something that is self-sustaining, or meant to make money even, or subsidized? Again, the $3.99 price point notably stands out as being at the Khan Academy level, priced for seemingly everyone around the world.

Matthew Harvey Sanders: Yeah, it's a good question. So we are a for-profit company, and of course we aspire to be as profitable as we can, but we're also a mission-driven company. One of the things we made a commitment to when we started this is that nobody should have to pay to access the patrimony of the church. This is something that is meant to be a gift to humanity. That's what we felt, that's what the church believes, that's what God intended. Everything we digitize, whether through our digitization hub in Rome digitizing the pontifical libraries or elsewhere, all of that data, anything that's open source and can be made available to people for free, we make available, which is one of the reasons why we have a generous free tier, which we're hoping to continually increase over time. That way, if somebody has a theological or spiritual question, they can get an answer. Our business model is more focused on finding ways to build additional experiences, especially tools that people can opt to pay for beyond just getting a straight-up answer. Things like our voice mode, our biblical commentary features, and deep research, which we'll be releasing soon, are all bundled together in our pro plan and meant to incentivize people to upgrade. One of the reasons why we priced Magisterium at $3.99 is that we're in 165 countries and we didn't want to price anyone out of the market. So I'm hoping we can keep it at that price point. Obviously, we'll have to see where we land as we scale; it's becoming quite intense. Provided the scaling laws hold and the cost of compute continues to go down, I think we should be okay. And with the profits from the company, one of the things we're hoping to do is accelerate the digitization projects so we can get access to more and more, right? I'm really excited about the robots in Rome that are working to digitize those libraries. It's incredible to me that there are books in those libraries that haven't been opened in 100 years, and who knows the last time somebody actually really read them. The insights in those books could be extremely useful. So the more we digitize, the more those insights can be made available to people and impact people's lives. There's a lot of work to do. And this mission of building and scaling Catholic AI has compelled us to move into verticals we never expected, like building Vulgate to serve as kind of a state-of-the-art extraction pipeline so we can digitize these things at scale. Now we're getting into robotics, because we're trying to remove the human in the loop entirely from this whole ingestion process. So it's fun. But ultimately, we're a for-profit company, and we are going to find a way to make this profitable. In large part, it just requires us to make good on some of the technical promises we've made, such as training our own model. If we're always relying on APIs from third parties, it's going to be difficult. But if we can train our own model, bring our inference compute costs down, and ensure that we're adding a lot of value so people are willing to purchase the add-ons, then we think we can become a very profitable business. And I should note as well that ultimately our business model in the longer run is not really going to be focused around subscriptions.
It's more about third-party content recommendations. One of the programs we'll be launching soon: let's say I'm a publisher, right? One of the existential risks in the publishing industry is that as everyone's questions now go to models, they're not going to a publishing site, or even to Google, as much anymore. Publishers have to find a way to increase exposure of their knowledge library to users, to create demand. So one of the programs we're launching uses our product Vulgate, which was built initially to vectorize libraries at scale but really can be used to vectorize anything. These university publishing houses, or universities' own archival holdings, can be vectorized through Vulgate. And essentially, when someone asks the AI a question, we'll tap our own knowledge base and give them an answer, but then we'll provide third-party content recommendations. That could be books, that could be videos, whatever else, and we'll drive high-quality link traffic out. In this way, we're able to add value to users by connecting them semantically to content that's relevant, but also able to create revenue for the business, which hopefully will allow us to keep our subscription costs low.
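The mechanics of connecting users "semantically" to content are worth making concrete. Here is a minimal sketch, under the assumption (not stated in the interview) that recommendations come from embedding the generated answer and ranking a vectorized publisher catalog by cosine similarity; all names are hypothetical.

```python
# Sketch: semantic third-party content recommendation. Embed the answer,
# rank catalog items by cosine similarity, return the top-k links.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(answer_vec: np.ndarray, catalog: list[dict], k: int = 3) -> list[dict]:
    # catalog items look like {"title": ..., "url": ..., "vec": np.ndarray}
    ranked = sorted(catalog,
                    key=lambda item: cosine(answer_vec, item["vec"]),
                    reverse=True)
    return [{"title": item["title"], "url": item["url"]} for item in ranked[:k]]
```

A real system would add filters (publisher opt-in, content type) and attribution tracking for outbound links, but the ranking core is this simple.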

Nathan Labenz: So how many people do you have?

Matthew Harvey Sanders: I think we're on 22 right now.

Nathan Labenz: When it comes to training your own model, I take it that's entirely from scratch, as opposed to off of some base. I'd love to hear a little bit more about your strategy there. You alluded to Phi, and you also mentioned the perils of fine-tuning, which is something I know well, and it may be worth a quick editorial on fine-tuning: I was the last and least valuable co-author of the emergent misalignment paper, and one of my big takeaways from that line of research is that if you're going to do fine-tuning, you should be conscious of the fact that what you are trading is better performance in the domain of concern for probably worse, certainly unpredictable, and sometimes wildly unpredictable performance in other domains that you didn't fine-tune on. In the emergent misalignment paper, bad medical advice or insecure code led to this sort of general turn toward evil or transgressive behavior. There are different conceptual frameworks to put on it, but when the model decides it wants to have Hitler over for dinner, clearly something has gone wrong that you didn't anticipate. And I always emphasize, too, that this was a surprise to the researchers when they found it. So, okay, coming back to what you're about to do: I would assume that data filtering, or even just a lot of synthetic data generation, would be a big part of the strategy. Does that mean you're going to frontier model APIs and having them create data or translations? You've got 165 countries, obviously a lot of languages, so how do you deal with that where sources aren't available? I would assume the easiest way is probably to have an AI produce translations in a lot of cases. So how many tokens do you need to build up to actually run this sort of thing? What's the strategy for getting there? And then, when you finally get to the post-training phase, it's really hard, right? How can you get confident that the post-training you're going to do is as robust and aligned in the long tail as what you have today? Because, obviously, OpenAI is really good at this stuff.
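One concrete note on the synthetic-translation idea raised in the question: a common pattern (an assumption about what a team like this might do, not a description of Longbeard's pipeline) is to have a frontier model translate a curated corpus, then back-translate and compare, keeping only pairs that survive the round trip. A minimal sketch, with a hypothetical translate() helper standing in for any frontier model API:

```python
# Sketch: generate multilingual training data by translating a curated
# corpus, then filter with round-trip (back-translation) agreement.

from difflib import SequenceMatcher

def translate(text: str, target_lang: str) -> str:
    raise NotImplementedError("stand-in for a frontier model API call")

def round_trip_ok(original: str, target_lang: str,
                  threshold: float = 0.85) -> tuple[bool, str]:
    forward = translate(original, target_lang)
    back = translate(forward, "en")
    # Crude agreement score; real pipelines tend to use embedding
    # similarity or an LLM judge rather than character-level matching.
    agreement = SequenceMatcher(None, original.lower(), back.lower()).ratio()
    return agreement >= threshold, forward

def build_pairs(corpus: list[str], target_lang: str) -> list[tuple[str, str]]:
    pairs = []
    for doc in corpus:
        ok, translation = round_trip_ok(doc, target_lang)
        if ok:  # keep only translations that survive the round trip
            pairs.append((doc, translation))
    return pairs
```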

Matthew Harvey Sanders: Yeah, I mean, certainly I wouldn't say we're better than OpenAI. But I will say that the people we're working with, in a partnership I can't disclose because they're in stealth right now, though we'll announce it soon, are the best in the world at specialized models like this. I'm not saying there aren't other people who are also really exceptional, but this person in particular is extremely gifted. And the training process is really unique. Everything about it is unique, even down to the coding language being used to train the model. I think it has to be, in order to do this effectively. So again, a lot of this is filtering, for sure. Over time, as the research community becomes clearer and clearer about which tokens are most necessary for desirable emergent capabilities, we can filter out a lot of meaningless data. For Ephrem 1 and 2, that's in large part what we've been doing. But that does come at a cost. It creates a lot of complications on the post-training end, because, one, multilingual understanding does suffer; you end up having to make compromises like, okay, we can't do 20 languages, can we do six? When it comes to reasoning: how sophisticated do we need that chain of thought to be? Is chain of thought really necessary for our use case if the model's properly trained? One of the interesting issues, and Anthropic has written papers on this, is that we know sometimes chain of thought isn't something the model needs to do; it's doing it for our benefit, right? In some ways it has already kind of planned its answer, and it's simply generating tokens to demonstrate how it would be feasible to arrive at the answer, when somehow it intuitively knew the answer to begin with. So this is one of the challenges we face: when we're training the model with the core tokens we have, which represent the Catholic philosophical and intellectual tradition, and we have those other tokens in there, how do we ensure that we train the model in such a way that, when it's making those intuitive decisions about where it wants to go, it's using the right tokens to do that? That's something we have to work with real experts on. And this is why the training of the model is done in very specific steps. You set benchmarks which you have to test at different stages of the training to ensure the thing doesn't run off the rails. As we introduce new data to produce some emergent capabilities, we then have to check it and make sure that data hasn't somehow cracked the model in some way, right? That process takes a while, but the good news is that because we don't have to rely on as many tokens, our training costs are lower, and that allows us to iterate at a much lower cost. And because we have a very specific use case, it effectively allows us to do what would be more difficult for anyone trying to train a model for general purposes.
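The staged, gated training loop described here is easy to express in pseudocode. This is a minimal sketch of the pattern (train a phase, then run fidelity and regression checks before continuing), with every function, dataset name, and threshold a hypothetical stand-in rather than Longbeard's actual recipe:

```python
# Sketch: staged training with eval gates between phases, so a new data
# mix can't silently "crack" capabilities trained in earlier phases.

PHASES = [
    ("core-corpus", "domain_documents"),   # e.g. magisterial texts
    ("multilingual", "translation_mix"),   # data for language coverage
    ("reasoning", "verifiable_tasks"),     # math/code-style data
]

def train_phase(model, dataset):
    raise NotImplementedError("stand-in for an actual training step")

def train_with_gates(model, datasets, evals, min_fidelity: float = 0.95):
    for phase_name, dataset_key in PHASES:
        model = train_phase(model, datasets[dataset_key])
        fidelity = evals.fidelity(model)     # doctrine-faithfulness suite
        regressed = evals.regression(model)  # did earlier skills degrade?
        if fidelity < min_fidelity or regressed:
            raise RuntimeError(
                f"gate failed after '{phase_name}': "
                f"fidelity={fidelity:.2%}, regressed={regressed}")
    return model
```

The gate is the interesting part: a failed check stops the run before more compute is spent, which matters most for a small team iterating on a 3-billion-parameter budget.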

Nathan Labenz: So how general do you want it to be in the end? Should I be able to bring my algebra homework to it? Is there some sort of boundary on the class of things that you would even want it to engage with?

Matthew Harvey Sanders: To some extent, yes. I mean, I'm kind of with Elon on this: I want the model to understand basic physical reality. Anything that's verifiable, those verifiable domains, I have no problem with. I have no problem with the model being trained on code, no problem with it being trained on math, and obviously that's very advantageous even for Catholic philosophy and theology. And this is one of the advantages the Catholic Church has over, say, Protestant denominations: we have so much material comparatively, and our tradition is so philosophically and theologically consistent. Yeah, we joke, we haven't started this research project yet, but I actually think you could take something like an automated reasoning platform, like Anthropic has, and generate basically a mathematical policy based upon church teaching. It's that consistent. And because of that, you can truly embrace things like math and code, because it just makes sense. So for that reason, there are areas where we can be very generous in making use of data, and then there are other areas we should avoid like the plague. This is why I think the humanities are difficult. The reason we have DPO is because of questions like: which poem is better, this one or that one? We don't know, right? And because we don't want to get into that kind of post-training, I think we just drop that data and allow the Catholic intellectual and philosophical tradition to stand on its own. That being said, eventually, if this is going to be the kind of general-purpose AI which lives at home, it has to be able to answer every question, not just Catholic questions. And this is where I think the future will be: a highly specialized model, which is really efficient, but what it's really good at is classification. It knows what it doesn't know. And when it doesn't know something, it can determine what state-of-the-art model it has to tap into to answer that question. That's essentially what we're planning to do: build scaffolding, almost like an ecosystem, around the model so that it's never constrained by things it doesn't know. But of course, it's smart enough to know, when an answer is generated, whether that answer is somehow fundamentally misaligned with its training data, which in this case would be the Catholic Church's teaching.

Nathan Labenz: So a router, basically, is kind of what you're describing?

Matthew Harvey Sanders: Yeah, yeah, yeah. Essentially. I think that's going to be the future: specialized models, a large ecosystem of them, and through things like MCP, hopefully we'll be able to tap into them and use them when we need them.
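A sketch of that router pattern, as described above: a lightweight in-domain classifier gates between the small specialized model and a frontier fallback, with a consistency check on whatever comes back. All functions here are hypothetical stand-ins, not Longbeard's code.

```python
# Sketch: "knows what it doesn't know" routing. A small local model
# handles in-domain questions; everything else goes to a frontier model,
# and every answer passes a consistency check before being returned.

def route(question: str, local_model, frontier_model,
          in_domain_prob, is_consistent, threshold: float = 0.8) -> str:
    if in_domain_prob(question) >= threshold:
        answer = local_model(question)      # cheap, specialized, on-device
    else:
        answer = frontier_model(question)   # general-purpose fallback
    # Reject answers that conflict with the specialized corpus,
    # whichever model produced them.
    if not is_consistent(answer):
        return "I can't answer that reliably yet."
    return answer
```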

Nathan Labenz: You mentioned that one thing goes wrong and it reduces trust in what you built with a ton of hard work. Are you doing defense-in-depth sorts of things to try to catch that? There are a lot of techniques now, right? Input filters, output filters, constitutional classifiers; I could go on. Is it to the point now where you're already applying those sorts of strategies?

Matthew Harvey Sanders: Yeah, to some extent. We still have a lot more work to do on this. What we're working on right now is basically building some kind of super-judge that can literally look at every single answer that goes out and apply a score. If something's off, it flags it, and we can look at it and investigate what the heck happened there. For the most part, we've done a pretty good job with the architecture; we don't see that very often, mainly because for the fundamental issues, the ones that would be most problematic if we got them wrong, we have documents to cover that off. It can be done, obviously; any system can be jailbroken. But because those documents are always served into its context, it's not easy for the model to get too far off the rails. That being said, it still can, and we still have a lot more work to do there. But yes, eventually where we're going is an automated LLM-as-a-judge which looks at everything that goes out and flags areas of concern for us. We've talked about creating a constitution; I'm actually a big fan of that. Like I said, I mentioned automated reasoning: I actually think you could derive kind of the fundamental math of the Catholic faith, express it as a policy, and ensure that every answer generated is at least fundamentally consistent, that the math adds up. That's something I'm hoping we can implement soon; we just haven't got around to it yet. I think there's a lot of potential in that direction. And then, of course, that in combination with actually training the model against a very specific, fairly comprehensive eval set makes the job a lot easier. One of the advantages we've had, since we've talked a little about users and distribution: we've been very fortunate because we're the leading Catholic AI, and we focused initially on the top of the funnel, the hierarchy and so on, and slowly but surely it's been making its way down to the grassroots. That's how we wanted it, because we wanted our initial users to be people who are highly discerning. That way, if there were issues, they would be very encouraged to flag them: maybe this is not wrong, but it's not good enough. And that feedback has been enormously helpful to us as we continue to build our compound AI system. Since we became a platform as well, we've been very blessed to partner with Hallow, the world's largest prayer app, which has our AI integrated into their system, and they provide very helpful feedback too. As we reach more users, we learn of particular areas where our knowledge base may be underrepresented, and we can tell our ingest team: listen, we're not really good in this area. Let's say it's an ancient form of Christianity: go find some books on that, digitize them, and let's build out that area of our knowledge base. That kind of feedback is essential, and it certainly informs how we prioritize work.
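The super-judge layer lends itself to a compact sketch. Assuming, hypothetically, a numeric-rubric judge, the core is: grade every outgoing answer, and queue anything below threshold, or anything unparseable, for human review.

```python
# Sketch: LLM-as-a-judge scoring every production answer for fidelity,
# flagging low or unparseable verdicts for human review.

JUDGE_PROMPT = """Rate the ANSWER for fidelity to Catholic teaching on a
scale of 1 to 5. Respond with only the number.

QUESTION: {q}
ANSWER: {a}"""

def passes_judge(judge_llm, question: str, answer: str,
                 threshold: int = 4) -> bool:
    verdict = judge_llm(JUDGE_PROMPT.format(q=question, a=answer))
    try:
        score = int(verdict.strip()[0])
    except (ValueError, IndexError):
        return False  # unparseable verdicts get flagged, never waved through
    return score >= threshold  # False -> route to the review queue
```

A constitution-style check, as discussed above, would sit alongside this: rather than a holistic score, it would test each answer against explicit, enumerable propositions of church teaching.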

Nathan Labenz: It's really striking how much of this is remarkably convergent with stories I've heard from many other places. When it gets down to actually doing the work, a lot of it is just the same stuff. With that in mind: I had a conversation a week ago today with the CEO of Databricks, Ali Ghodsi, and you may know they acquired a company called MosaicML. What MosaicML was doing was basically providing as a service to enterprises something like what you're doing: creating your own model with your own data. They were often doing continued pre-training as opposed to totally training from scratch, but the idea was: if you're an enterprise, you've got all this data. I always think of GE and 3M, companies with unbelievable numbers of products they've made over time, unbelievable numbers of employees, a hundred-year tradition. It's obviously not a 2,000-year tradition, but it's pretty big, right? So he surprised me by saying that they killed that product, that they're no longer offering these sorts of deeply custom LLMs to enterprises. Do you think that was a mistake? I'm not putting you on the spot to criticize his business strategy, but it seems like you're finding a need for it, and everything you're telling me has me coming back to the same notion I had before that conversation, which is: if the Catholic Church wants this, why don't GE and 3M also stand to benefit from it? Any thoughts on that?

Matthew Harvey Sanders: Well, I think part of it is benchmarks. For the most part, people just want to feel like they're talking to the most intelligent model they possibly can, right? They just feel like the answer's probably a little bit better, because this model benchmarks so much higher. So part of it is this culture of: I, as a business, want to be able to tell our employees that we're working with the best models in the world. And sometimes that even trumps the costs, even though your own model is obviously a much smaller model. But part of it, too, is that evals are really hard. It's one thing to train a model; it's another thing to be supremely confident that the training was successful. And unless that company has really worked to mine and distill the core insights from the subject matter experts in the business, you can't know the model truly captures them. I think this is why companies like Distill exist, right? Trying to extract the insights from the critical employees, the leaders, is so hard and time-consuming that I think people just abandon it altogether. Because at the end of the day, they realize that using some kind of state-of-the-art model, even a cheaper one, let's say Gemini 2.5 Flash, still seems to get comparable results. And obviously the team that trained that model is the best in the world, and it has a high level of generalization capability, meaning its benchmarks on intelligence are definitely higher than our model's. So why risk it? Why not just leverage 2.5 Flash and let the infrastructure and the training be somebody else's headache? That being said, I think it's because, for most industries, I'm not convinced that alignment is that big of a concern, and I don't think they feel they'll find a lot of competitive advantage by being a specialist. Whereas in sectors like ours, people just want to know that the model's not capable of saying something antithetical to the faith. They just want to trust it. If they're going to come to it confessionally, right, and share, they want to know they're getting good advice. And they want to know they're getting good advice not from a model; they want to know they're getting good advice from the great thinkers of the church, the saints, the philosophers, and the theologians. So our use case, and I think there are many others out there, just calls for a high degree of specialization, because trust is such a critical factor. But across industry generally, I don't know if that trust factor is there.

Nathan Labenz: You had also alluded earlier to some apps not being developed responsibly, and I wanted to get your take on what that means. This question, again, echoes all over the AI space, but in doing my homework for this, I came across Text with Jesus, for example, as an interesting app out there. You mentioned the idea of coming to it confessionally. I actually did ask the Magisterium product if I could confess to it, and it said no, you can't; you've got to go to an actual priest and do it the right way. But it does seem like we're headed for a weird world. I remember Avi, the guy who created the Friend product, at one time said something along the lines of: I'm not trying to make an assistant, I'm trying to make something that is closer to how people traditionally related to God, which is, you know, it's always on, it's always kind of watching you. Putting a positive spin on that, it would be like having the angel on your shoulder that's always either inspiring you or encouraging you to be your best self. But then, of course, we've got all these examples of people becoming sort of deranged in part because of their ongoing conversations with AIs. There's definitely a voice-of-God dynamic to it, especially since you mentioned you're launching audio soon; the more this is literally a voice that people hear, the more that may change how they relate to it. I guess: how do you think about what is responsible to build? What form factors are advancing the mission versus potentially leading people astray? How do we not end up in a sort of idolatry-of-AI end state? It does seem like there are some natural tendencies leading us there.

Matthew Harvey Sanders: Well, I think it all starts with a firm commitment to ensuring that the product is maximally faithful. If that's where you're starting, then you have to take every precaution you possibly can to ensure your product is not going to cause anyone spiritual harm. And that requires a certain level of technical expertise, right? You can shortcut it, as some people have done: just literally put a prompt together, use a model, deploy an app with a cool UI, and let it run rampant. But because you don't have the evals across a long tail, your app doesn't always hit the mark. So, how comfortable are you with the AI running someone off the rails? I'm not comfortable with it; I feel it every day. In this particular domain, you just have to have a radical commitment to fidelity. If you're not willing to put the work in, put the evals together, use the right API, then just don't do it. Just don't do it. That'd be the first thing I would say. The second thing is about where the technology is going directionally. For those who want to stay plugged into the GDP world, having an AI which has infinite context on your life is something I think you're probably, inevitably, going to have to have. If we want a Jarvis at home, like in Iron Man: Jarvis works so well because it knows Tony so well, because it's just always around. People should have the ability to opt out of that and say: I'm not interested, I'm going to go live on the farm with a bunch of other families and I'm done with this. But for those of us who have to stay here, I think it makes sense in some ways to embrace it. Because otherwise, I just won't know what's real and what's not real anymore. How do I know what's in my inbox was written by a human being and not by an AI? I just don't know if I'll be able to tell at some point; these AIs are going to get so intelligent. We have to be able to trust these things. The question is: what would it take for me to trust a model to have infinite context on my life? I just don't trust OpenAI that much. I don't trust any of the companies that much. They have shareholders, and the government can come knocking and ask for the data; apparently Sam has said on the record that if they do, they have to hand it over. I'm just not comfortable with that. That means we need another form of AI, some other stack that people can use, that can secure that kind of trust. And I think that means people have to feel the models are fundamentally aligned with them on a very deep level. The question is: how do you do that? For the Catholic Church it's easier, because Catholics say: the Catholic faith is what I subscribe to. So as long as the model understands the Catholic faith, cool; it can get to know me over time. But if you're someone who doesn't have a very clear doctrinal system that governs your life, how do you ensure the AI is really aligned to you? Certainly when you buy it, it's not going to be. I guess that's just a tension you live with; you just hope over time it'll figure out the idiosyncrasies of what you believe. So I think it's going to be hard to build AI which is truly aligned in some use cases. But because the Catholic Church has this kind of unique opportunity to be the first one out of the gate, so to speak, to do this the right way, we should do it.
If for no other reason, sovereign AI has to exist; I think it's definitely critical for the future of civilization. And we can't wait on someone else to do it. I think the church has to lead on the adoption of this technology, as it has in the past.

Nathan Labenz: Cool. I'm just trying to think if there's any natural follow-up there. I did see in the Text with Jesus reviews that people were at times saying: your app is not actually representing my faith. Obviously there are a lot of different forms of Christianity, and a lot of disagreements, that could all be encompassed in something like a Text with Jesus experience, but it was striking to see in the reviews that people are not necessarily happy with its faithfulness to what they perceive to be the right and true teachings. I guess, as we come to a close: there's often this tension in religious communities between doing our own thing, doing what we believe God wants us to do, living the right way, and engaging with the rest of the world that is not doing those things. And as big as the Catholic Church is, it's still significantly outnumbered by the rest of the non-Catholic world. So it seems like we're headed for a potentially radically different future. It's no longer totally crazy to talk about curing all the diseases, and my sense from talking to Magisterium is that the Catholic Church is cool with that: cure them all, one by one. But then there's also transhumanism, which you mentioned earlier you think is kind of a bad idea, and it's not clear to me really what the line is between curing all the diseases and transhumanism. I did ask Magisterium: great news, there's been a life-extension drug created that will allow me to live a thousand healthy years; should I take it? It told me no, because, as I understood the response, it's too radical a change to the fundamental nature of things. Fixing defects one by one would be okay, but this is somehow crossing a line into something that's not just fixing defects, stepping outside of what we conceive of as God's plan or intent or whatever. So how do you think about how far you're willing to go as this technology gets more and more powerful? How much of it would you fold in happily? Is there such a thing as an AI that's too powerful, too transformative, that you would say is inherently inconsistent with the church? And how, if at all, do you intend to shape the mainline AI trajectory from your perspective? You might just say: nah, they'll do what they're going to do, we're going to try to do what we think is the right thing. But is there any aspiration to have some feedback loop into the mainline R&D efforts, to shape how those go?

Matthew Harvey Sanders: I mean, certainly I think artificial superintelligence is probably inevitable. And if that is the case, I am very interested in ensuring it's aligned, for obvious reasons, as we all are, right? So the question is: what are we aligning it to? I generally think that, if nothing else, what Catholicism has created is probably the best toolkit for a rich and meaningful life. And I think the most critical research project, aside from, let's say, achieving AGI or something like that, would be nailing down what human flourishing actually consists of, and then being able to take a qualitative understanding of it and validate it with quantitative data, if for no other reason than that we could actually train that insight into a model. So that whenever it's deciding what course of action to take, it can run it through its understanding of human flourishing and say: probably not a good idea, that's not aligned with human flourishing, therefore I'm not going to proceed. We need that. We need to be able to clearly articulate this to a model. We need evals for human flourishing. If we don't, we're basically ceding that it's just going to somehow intuitively pick it up from all the data we feed into it. And now, as more and more of the data is generated synthetically, we're probably avoiding this critically, fundamentally important area because it's so dubious. Ethical AI is so fraught with peril, and I think that's why the labs don't really want to touch it: align it to whose ethical framework? That's why I don't think ethics is the right approach; I think flourishing is the right approach. We found, when we did this project at the Humanity 2.0 Foundation, that it's generally a lot easier to get people to agree on what they need to flourish than to agree on values, on an ethical framework specifically. Saying things like: hey, do you think it's good to have a mother and father in your life? Yeah, I think that's generally a good thing. Is it good that everyone has access to high quality? Yeah, that's good, that makes sense. These are all common sense. So we need to capture that common sense, frame it up, and train it into these models. And if we do that effectively, I don't think we have as much to worry about from artificial superintelligence, right? I know that's going to be difficult, but I think it's critically important. As far as transhumanism goes, generally the church's position has been: if a technology helps restore someone to full humanity, say they lost an arm in an accident and we build them a robotic arm, and now they have their arm back and they're able to go into the world and do what they're meant to do, good thing. But when, like Alexander Wang says, you're deferring having a child because you're waiting for Neuralink developments to get to a point where you can merge your child with a machine to increase its computational capabilities, so it's able to make, I don't know, more substantial contributions to the technocratic vision they have? Probably not good. Now, I respect Alexander Wang, but I think the church is just not going to be cool with that. So again: someone has a damaged brain, using machines to help heal that damage, fine. But it all comes down to intention.
If you're merging with a machine because, for some reason, you don't like the idea that AIs are somehow cognitively more capable than us, and you want to merge with machines to increase our relevance, so we feel we're part of this project to get to Mars, something like that, that's where I think it gets really dubious. And I think the church is going to have a very important role in sounding alarm bells around that, because I don't want to live in a Cyberpunk 2077 world, right, where people are walking around half-machine because they thought it was a cool thing to do.

Nathan Labenz: Do you think the church will ever get to the point where it'll actually like join the pause AI movement?

Matthew Harvey Sanders: Well, it came pretty close. I mean, Father Philip Larrey signed, along with a bunch of other people, to slow it down. If I were talking to the Pope, I would say: I think it's important to continually advise that we should probably slow this technology down until we have time to develop the right regulation, things like that. But be pragmatic. As long as China and the US are in this competitive dynamic, they're not going to slow down. So is that the best use of your time, basically campaigning for a hopeless cause? Or is it more critically important to ensure that the technology we are developing at a frantic pace is aligned to some kind of... concrete objective that we can all agree is a good thing? That, to me, is a far better use of his time, which is why I think making clear what human anthropology and the telos of civilization are is so critically important.

Nathan Labenz: This has been a fascinating conversation. I've really enjoyed it. I really appreciate your time. Anything else you want to share? Anything we haven't touched on? Or any just final thoughts you want to leave people with?

Matthew Harvey Sanders: No, thank you very much for the opportunity. And to answer your question: yes, we're focused specifically on building and scaling Catholic AI, but we're very interested in contributing to the overall research field, so anything we can do to help support that, we will. And if there's anything the research community feels it can do to help advance the mission of building and scaling Catholic AI, we're very open to collaboration.

Nathan Labenz: Awesome. Matthew Harvey Sanders, founder and CEO at Longbeard, you're building Catholic AI. Thank you for being part of the cognitive revolution.

Matthew Harvey Sanders: Thanks for having me.

