Software Supernova: Lovable's "Superhuman Full Stack Engineer" to Transform Idea to App in Seconds

In this episode of the Cognitive Revolution, founder Anton Osika and AI engineer Isaak Sundeman from Lovable.dev discuss their AI coding platform, which allows users to describe software in natural language and have it built by AI. They delve into the nuances of using AI for full-stack engineering and the future of human coding. During the episode, they demonstrate building a product comparison app, discussing the integration of AI, backend functionality, and the handling of complex API interactions. They also talk about the challenges and opportunities in UI design, the role of AI in improving user experiences, and the potential future impacts of AI on software development. The conversation touches on the phenomenal growth and scaling of Lovable.dev, their strategies for managing context and dependencies in large codebases, and their vision and roadmap for the future of AI-powered application development.

Check out Lovable here: https://lovable.dev

SPONSORS:
Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance, with costs 50% lower for compute and 80% lower for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive

NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive

Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive


CHAPTERS:
(00:00) Teaser
(00:56) About the Episode
(03:57) Introduction and Guest Welcome
(04:21) Overview of Lovable.dev
(04:53) Building a Product Comparison App
(05:19) Starting the Coding Process
(05:46) Defining the App's Functionality
(08:21) Setting Up the Backend
(13:20) Future of Software and AI (Part 1)
(17:40) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite
(20:20) Future of Software and AI (Part 2)
(23:44) Challenges in AI Development (Part 1)
(32:48) Sponsor: Shopify
(34:07) Challenges in AI Development (Part 2)
(39:08) Integrating External APIs
(54:01) Helping Out and Initial Fixes
(54:18) Progress with Documentation and API Updates
(55:21) AI Analysis and Product Comparison
(57:33) Building AI Applications: Challenges and Solutions
(59:02) Understanding AI Agency and User Experience
(01:03:04) Iterating and Debugging AI Models
(01:12:59) Reverting and Improving AI Implementations
(01:24:27) Scaling and User Growth
(01:29:03) Future Directions and Hiring
(01:30:55) Final Thoughts and Advice
(01:32:06) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...

PRODUCED BY:
https://aipodcast.ing


Full Transcript

Anton Osika: (0:00) Over time, AIs are going to, like, read our minds, basically. They're going to be extremely good at predicting what we want in a given situation. Historically, we have switched overnight after a new model comes out, because we try it out and say, this is a better model. We don't use just one LLM; we do smart routing, and we use models from Gemini, OpenAI, and Anthropic. And soon, I imagine, potentially from DeepSeek. When you're connecting to external APIs, as you said, Nathan, there are often more things that can go wrong. And that's where you need to have a system that's good at debugging itself between the different components that are interacting. The most important thing is that you have a product that predictably works, and works in an intuitive way. And making an agent work intuitively and, like, in a nice way takes a lot of iteration. If you're currently, like, working without AI, then I think you're really disappointing your employer or your customer or your clients.

Nathan Labenz: (0:57) Hello, and welcome back to The Cognitive Revolution. Today, we're simultaneously releasing the first two parts of a series we're calling Software Supernova with the makers of new and stunningly fast-growing full stack AI developer products, Lovable and Bolt. Each episode explores in its own way how AI's rapidly improving coding capabilities are beginning to tangibly transform the software industry by expanding the space of what can be built, changing how professional software developers work, and making it possible for people to create software without ever learning to code. My guests in this episode are Anton Osika and Isaak Sundeman, founder and AI engineer at Lovable, online at lovable.dev, which describes itself as your superhuman full stack engineer and promises to take users from idea to app in seconds. Headquartered in Stockholm, Sweden, Lovable has achieved extraordinary growth since launching in November, reaching $9,000,000 in annual recurring revenue in just their first two months in market, thus becoming one of, if not the, fastest-growing European startups ever. In this hands-on episode, we weave a discussion about Lovable's vision for the future of software, their product philosophy, and some of the opinionated choices they've made to maximize user success rates, including their Supabase integration for database functionality and authentication, their approach to error message handling and debugging, and their extremely novice-friendly user experience for handling API keys, together with a live demo in which we actually use Lovable to build a large language model powered product comparison application. I think the audio version should be fine for most listeners, but if you want to read all the prompts and see the product in action, you can visit our YouTube channel for a version that includes a screen recording. As you'll hear and perhaps see, while we do encounter some friction along the way, in the end, we are able to create a neat little AI app not just once, but twice: the first time iteratively, over a dozen or so interactions, and then on the second go, with a few lessons learned, in just four prompts. All with Claude 3.5 Sonnet, I should note, as we did record this episode just prior to the recent release of o3-mini. The upshot is that today, non-coders with a bit of AI savvy and willingness to retry when needed can create basic full stack applications on their own without writing any code. And of course, new models will only continue to expand the scope of possibility. This represents a massive democratization of software development. And considering that so many of the resulting apps will use AI to do things that traditional software never could, a low cost path to AI transformation for many millions of businesses. As always, if you're finding value in the show, please take a moment to share it with friends, write a review, or reach out via our website, cognitiverevolution.ai. We always welcome your feedback and suggestions. For now, I hope you enjoy this unique look at AI powered software development with Anton Osika and Isaak Sundeman of Lovable, online at lovable.dev. Anton Osika and Isaak Sundeman, founder and AI engineer at lovable.dev, welcome to The Cognitive Revolution.

Anton Osika: (4:07) It's great to be here. Thank you. Thanks, Nathan. I've been listening to your podcast, and I love that you cover everything, and I get smarter every time I listen.

Nathan Labenz: (4:15) Well, thank you. It's very flattering. That is definitely the goal is to learn as much as we can and hopefully be a little smarter about what's going on in AI. So you guys have been on quite an exciting journey lately. Lovable online at lovable.dev is an AI coding agent. I would classify it as - you can tell me if you have a different high level label for it, but it's one of these... Yeah. We put a new

Anton Osika: (4:40) full stack engineer.

Nathan Labenz: (4:41) AI full stack engineer. Yeah. So it's one of these things where you can show up and say, hey, I want a piece of software created for me and just describe what you want in natural language and then have the AI run off and try to build it for you. And so today, what I thought we would do is kind of a little bit of a departure from our usual format, but, you know, have the normal conversation, try to understand what you're building and your vision for the future and, you know, how soon you think human coding becomes irrelevant if that ever is going to happen. And then at the same time, actually, you know, kind of go in the background, partially in the background and actually try to build an app as we go. And I think that'll be a very, you know, informative two-track experience. So I guess, Anton, you and I will be primarily talking, and I think you'll be kind of primarily coding in the background. So I guess maybe let's start with just kind of a little - we'll get you running on the programming, and this will be on the video feed. You know, we can just be kind of following along with you. You can, you know, we can stop and interject anytime and share developments, and then Anton and I will kind of, you know, continue with sort of big picture stuff while you're moving forward. A simple app that I had an idea for that I think we were going to try today is just a product comparison app. Basically, say you've got a couple links to a couple products online and you want to get a good comparison of those two products, feed them both into an app and have the app come back and tell you what matters about this kind of product and how these products compare on that dimension. And I'm sort of thinking of this as an AI driven comparison. Right? So it's not something that would be fully programmatic or formulaic, but where there's a little bit of dynamism in using the AI to determine what sort of comparison even makes sense for a given product. How's that sound?

Isaak Sundeman: (6:33) That sounds good, I think.

Anton Osika: (6:34) I hope so. Like, what type of products do you want to use this for?

Nathan Labenz: (6:39) Well, I think what's so interesting about AI products in general is how open ended they can be and how flexible, you know, they can be. So I guess my initial idea was like any products, you know, a very live example right now this week in my home is that our washing machine has broken and we need to replace it. And so my wife is, like, you know, going on to review sites and the Wirecutter and Consumer Reports and, you know, trying to figure out which one should we get. Obviously, you know, capitalism has provided us with a huge number of different options, but we don't even really know what matters. Right? We've never bought one before. We've never thought about it. If you think about what you do today, if I think about what I do today, I first sort of feel like I have to go out and educate myself generally about a product category. Like, what are the dimensions that matter? But that process of identifying those dimensions is, you know, where the bulk of the cognitive work seems to be going. You know, do I want, for example, a front loader or a top loader in my washing machine was not a question I had considered at all before. I had to first, like, educate myself to even get to the point where I realized that, okay, that is an important dimension of this comparison. So I feel like if the AI was really serving me well, it would flag for me like, okay, here are the main things that people find to be important about these products that you may not even be aware of yet. And then here's how they compare. Right. But sort of creating the comparison framework and then populating it is sort of what I imagine that we would have loved to have had these last couple of days.

Anton Osika: (8:19) Let's see if we can ship an MVP and then iterate. I think that's always the best start. Super simple. Okay. So I guess let's go. I'm thinking the first part is to just get, like, some core data input into the system, then feed that into an AI, and see if we can hook all that up and how long that would take. So one workflow is you say, I want to buy a dishwasher, which one should I compare? But let's help it out more and say, I want to put in a few URLs of these different dishwashers and compare them. Does that sound reasonable?

Nathan Labenz: (8:58) Great question. I don't have a super prescriptive idea in mind of exactly what the user experience should be.

Anton Osika: (9:04) That's a cheaper first version. And there -

Isaak Sundeman: (9:07) are also, like, some of the things that you can think about when you create a product within Lovable - you can think more overarchingly about what tools and APIs we should use. Like, for example, there is this Perplexity API that will automatically search up things for us. So that could perhaps -

Anton Osika: (9:24) That could be super useful. But let's start with, like, constraining as much as possible to see if that works out, and we just get the UI up. So we first type out a prompt where we say: I want the UI to kind of let me put in URLs of products, and then some information about those products should be shown, and the AI should help guide us through what's important to consider - maybe generating a product comparison table. So let's go with this prompt. It says: create a product comparison tool that uses AI. But first we'll create the UI, and the flow is something like: the user puts in product URLs.

Isaak Sundeman: (10:06) The product URLs. Any input fields?

Anton Osika: (10:11) Nathan, do you think this is good? What do you want it to do once you put in the fields?

Isaak Sundeman: (10:17) Should we have, like, a button called analyze or, like, compare?

Anton Osika: (10:19) Yeah. It's kind of about the analyze button.

Nathan Labenz: (10:22) Yeah. Sounds good.

Isaak Sundeman: (10:23) And then we have an analyze button that, when pressed, will scrape the websites and get the data and then call upon - then call GPT for - let's take that later. Let's call it. Yeah. Yeah. Yeah. But let's start with a really nice UI.

Anton Osika: (10:49) So he's typing out the prompt here, and it, like, has a few typos and things. Yeah. And I think that the typos, they don't matter. The AI understands perfectly. But the formatting and the sequencing of what you ask, in what order - that does matter quite a lot. So once you have an application and you want to change it, being very specific in your prompt is very important. And what we're seeing now - feel free to ask any questions, Nathan. I think that's more fun. But what we're seeing now is that it creates a plan for what to do - like, first it plans out the design. And then it generates what we think is the best practices in terms of a web application. Like, most software today is actually web applications. So that's what it's doing. It's React code. And now it's spinning up the first UI version of this. And, yeah, I guess, like, here you can add a few product URLs and click compare products in the - yeah. Now you're getting a Swedish version of washing machines because Google has adapted to where we are right now. But let's pick that

Isaak Sundeman: (12:00) one. Okay. So I guess the idea is we have

Anton Osika: (12:02) to pick the same one, yeah.

Isaak Sundeman: (12:04) Yeah. Wait. Let's see. Maybe something else goes this different one.

Anton Osika: (12:09) Yeah. And this should work for, like, Amazon or any URL. Right? But let's see

Isaak Sundeman: (12:14) And then we would press.

Anton Osika: (12:15) What happens if we press compare?

Isaak Sundeman: (12:16) Yeah. So nothing should happen right now because we haven't actually connected the back end or anything. So, yeah, analyze this feature is coming.

Nathan Labenz: (12:25) I hope that gives you the little heads up.

Isaak Sundeman: (12:27) You need notifications? Yeah. And so you

Anton Osika: (12:32) should - yeah. So let's hook up the data input, and then we need the back end. So the back end is needed when you have something that requires, like, external data, AI features and so on. So the way we set up a back end is that we rely on our wonderful friends over at Supabase, which has, like, a back end as a service that covers all your needs. And then, usually, I guess you would be logged in here, but now - yeah.

Isaak Sundeman: (12:57) Now I just log in here. Click.

Anton Osika: (12:59) One click to get the back end setup.

Isaak Sundeman: (13:02) Yeah. Let me log in real quick.

Anton Osika: (13:04) Do you want us to clarify anything on what's happening there, Nathan?

Nathan Labenz: (13:09) Well, let's maybe start with a, like, big vision and then, you know, we can kind of meet maybe in the middle. We've got the very low level, like, we did our first prompt, we're connecting into Supabase. What do you think is the sort of big picture future of software? Like where is this all headed? Because I feel like we have this sort of like competing narratives right now around the future of software. And I always say my crystal ball gets real foggy more than a few months out. So what do you think is the medium term future of software? You could maybe take that from like a user standpoint, you know, a developer standpoint. Are there still developers? Does anybody just sort of speak software into existence?

Anton Osika: (13:54) My biggest, you know, prediction here is that when I was super young, I started coding and started creating computer games. And that's been a, like, a superpower to understand the world and communicate about technology and to make things come into existence. But much less than 1% of the world's population has that superpower. And with AI, the 99% are going to be empowered to create, solve problems with software and edit and improve software. Everyone is going to be able to use a version of the software that they prefer. That's customized or is improved to them. And that change and how fast it's going to be because AI is much faster than humans at writing code is going to result in some kind of Cambrian explosion of really high quality software and just human creativity is going to be like unleashed at a much larger pace. I think that's going to be the biggest change. And then you can talk about how that affects the current workforce building software. And that's a bit more complex, but I think that's the biggest obvious change in my eyes.

Nathan Labenz: (15:07) I guess one big question I still have about that, though, is like with this example that we have, right? If I imagine a future where AI is sort of continuing to advance and it's like achieving the promises that, you know, are sort of everybody sort of seems to believe that it's about to achieve. It seems like we have lots of reasons to think, you know, capabilities are not done improving. Then will there be like any UI in the comparison process of products in the future? Like I could imagine a much different thing entirely where I just ask the question directly. Right? And maybe this is sort of a fault of like the example app that I proposed, but in today's world we like need these dedicated UIs because we don't have a like general purpose interface to the world's, you know, information, let alone reasoning.

Anton Osika: (15:55) I've been hearing this generative UI thesis a lot on like, oh, we're not going to have software, just going to have AI. And then there's some general interface to all of this AI. I'm confident that that's not going to be the whole answer. And there is a very simple reason for that. And it's that us humans, we prefer when things are predictable and the same and more so that the software that we're - the products that we're interfacing with has a UX that has been tested by many other humans. And it's something that humans can understand easily. So if you have an AI that always generates like a new type of interface with context dependent on where you're at, then it's going to be different every time potentially. And it's going to be different for your grandmother and yourself. So you cannot kind of explain, oh, this is how the software works. I'm pretty sure that we're going to like have standardized, like people are going to use the same software with the same UX always, most of the time, because that's just easier for us humans to get used to, build up the muscle memory of how do I use Slack, how do I use my email client and so on. And the exact UX is something that is super hard to nail. Like it takes so many iterations of making a product lovable. That's why we call it Lovable. It takes so many iterations and now with AI you're going to be able to go through those iterations and really nail like what is the right UI for what this software is supposed to do. So that's why I don't think generative UI is going to come in and just wipe out everything else. But there's definitely going to be much more generative components and parts of the parts of software that is powered by AI that pulls in the right context depending on what you're trying to solve for right now.

Nathan Labenz: (17:43)
Hey. We'll continue our interview in a moment after a word from our sponsors. In business, they say you can have better, cheaper, or faster, but you only get to pick two. But what if you could have all three at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better, in test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with zero commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Nathan Labenz: (18:56)
It is an interesting time for business. Tariff and trade policies are dynamic, supply chains squeezed, and cash flow tighter than ever. If your business can't adapt in real time, you are in a world of hurt. You need total visibility from global shipments to tariff impacts to real time cash flow, and that's NetSuite by Oracle, your AI powered business management suite trusted by over 42,000 businesses. NetSuite is the number one cloud ERP for many reasons. It brings accounting, financial management, inventory, and HR all together into one suite. That gives you one source of truth, giving you visibility and the control you need to make quick decisions. And with real time forecasting, you're peering into the future with actionable data. Plus with AI embedded throughout, you can automate a lot of those everyday tasks, letting your teams stay strategic. NetSuite helps you know what's stuck, what it's costing you, and how to pivot fast. Because in the AI era, there is nothing more important than speed of execution. It's one system, giving you full control and the ability to tame the chaos. That is NetSuite by Oracle. If your revenues are at least in the seven figures, download the free ebook, Navigating Global Trade, 3 Insights for Leaders at netsuite.com/cognitive. That's netsuite.com/cognitive.

Nathan Labenz: (20:21) Yeah. I feel like I agree with that to an extent at least where I'm kind of like part of what we're doing here will have a little bit of a generative UI component to it because we're going to have - I mean, even and even in just in this product comparison idea, there's going to be sort of the dimensions on which a product will be compared are not hard coded. Right? So even just like the table, how many - you know, what are the sort of fields of comparison is like, you know, that sort of generic, you know, dynamic or generated UI writ small. I agree that when it comes to like power software that people spend a lot of time with, you know, your, like, Gmail or whatever, like, I would not want to have a Gmail experience that I log into and it, gets, you know, re - the UI is regenerated every day on the fly. And it's like, well, where was - I know I know where things are in today's world and I want consistency and there's probably just no reason to recreate that all the time. People do want - I hear you on all that stuff. At the same time though, I do have a sort of different vision that competes with this too, where I'm like, you know, if we're imagining a world of abundance in the future and we sort of say, okay, well, what do the people that have the most abundance today do? And, you know, would people in the future state of abundance like to try to do the same or would they do something different? Like, you know, how does Elon Musk buy a washing machine today? I think the answer is he basically says to someone on his team, know, maybe at his level, it doesn't even come to his attention, you know, that the washing machine is broken in the first place. But if it did, he would just say, okay. Find the best one and buy it and install it and let me know when it's done. Right? There's a sort of, I don't even want to think about this. I just want to delegate the entire task end to end, come back when it's installed or maybe somewhat short of that, like, tell me what I should do and then maybe I'll review it and approve and we'll go from there. So I do sort of wonder if my future is mediated by UI at all in some of these scenarios? If I just say to like a general AI, my washing machine broke, find me a couple of good options. Tell me what the pros and cons are and I'll make a call and then you go online and buy them.

Anton Osika: (22:41) Yeah. I mean, can I spin on this on the topic of, like, the AI would know or in the Musk case, his colleagues, whatever, they know what he wants? And that - I mean, that transfer, I think, is the only important thing in the future somehow. Like transferring what the human wants, desires in this very moment to a different system and transferring the information to the user. And over time, AIs are going to like read our minds basically, or they're going to be extremely good at predicting what we want in a given situation. So then those UIs, those elements where you need to say like, oh, I don't care about the front loading, whatever. You don't need to do that because it already knows what you want. On the displaying information to the user, which might be less important as well in the future, there we're going to have more at least some level of standardized UI components. But yeah, I'm not saying this is not going to happen. I'm just saying there's going to be some parts that are going to be listed.

Nathan Labenz: (23:43) One question I've had, I wonder if you have a take on - as I've tried, you know, all sorts of things like this, and I've definitely made it my business to go and try a wide range of products. I've been a longtime fan of Replit. I've used Cursor pretty heavily in recent months building an app. It always seems like the DevOps portion, the ability to actually deploy to a machine and get things working in a way that is exposed to the internet always seems so hard relative to that initial code. Like so many experiences I've had, you could say, oh, give me this UI. You get something pretty decent, you know, that like looks more or less what you had in mind. And then you're like, you know, why is this, you know, I have a dependency that's missing what's going on or this thing is not building or whatever. You know, I'm not sure what's going on with these port forwarding situations. All that stuff seems like very hard for the AI. Do you have a theory for why one is so much more difficult than the other?

Anton Osika: (24:44) Yeah. Totally. I mean, the software engineering gets very hard when you have like different things that have to connect - the more components that have to kind of be wired up and connected to each other in the right way makes it exponentially more error prone. And the way to handle that in like how we approach it as well is to limit those choices as much as possible and say like, okay, no, if you're going to deploy something, you can only use Supabase for your backend functions and for your database. And if you're going to add payment, there's a very clear happy path for that. And if it goes like this, we generate the button for you to go out to Stripe and get your account set up there or just grab your API key, come back, and then the payment is, like, almost guaranteed to work. If you just let an LLM, like a large language model, if you're using Cursor, that's much less opinionated, but still like an amazing tool, then it's going to sometimes generate different pieces of the software that haven't been fine tuned so that they were guaranteed to work really well together. And then at some point it's going to - you're going to run into a problem and the AI is going to have a hard time recovering from that problem.

Nathan Labenz: (26:01) So do you think this is a technology wave that creates, like, consolidation across different technology stacks and potentially, like, core providers? Like, it seems like everybody - I noticed, for example, in their original generation, it was Tailwind CSS. It seems like that's become a, you know, community favorite, if not standard. And it seems like it's kind of a standard in most of the code gen experiences. And I would imagine, like, you know, for payments, I don't know what you're using, but I would expect that, like, Stripe is probably a leading candidate that has, you know, lots of great documentation and so on. Do you see sort of a Schelling point effect where everybody ends up using the same core components?

Anton Osika: (26:45) I think so. I mean, there is already a Schelling point among human developers, where Tailwind is, like, the Schelling point, it seems like now, at least according to most people. And large language models accelerate that because, one, it's much easier to learn the new best practices - as a human, if I don't know Tailwind, it takes a long time for me to learn it, while the LLMs just instantly spit it out and it works. And secondly, the LLMs are much better when they take the very popular choice, because they have much more training data there. So there's some convergence for sure, but I'm also excited that there are going to be technological innovations - like, Tailwind might not be the end all. There might be better ways to style components. Facebook came out with something, like, a few months ago, I think. If this gets adoption, then all the AI tools are going to just, like, overnight maybe switch over to this new Schelling point.

Nathan Labenz: (27:42) Yeah. That's a really interesting possibility. I've been kind of interested in that idea at the level of the AI providers too. You know, are we entering into a race dynamic either between countries perhaps or between leading companies? And what are the incentives there? Is there any way for people to sort of coordinate to proceed, you know, with responsible caution or are they going to sort of constantly face this, got to go faster to beat the other guy incentive? And one of the questions I've been wondering about there is, like, how easy is it for everybody to switch when a new model comes online? And one thing I did notice in Lovable, which is distinct from say a Cursor is like in Cursor, I just have a drop down. I can choose what model I'm using in the background. And so now all the developers get to choose. With Lovable, I don't know what background model I'm using, and so you are choosing.

Anton Osika: (28:46) Mhmm.

Nathan Labenz: (28:46) And I guess I wonder, like, how easy is it for you to switch and which dynamic, like, is better or worse from the standpoint of the developers and the racing dynamics. Like if all the developers can switch, not all of them will, but it's like it can happen, you know, super quickly. You guys would have more of a process, but you'd be switching on behalf of like all your users. Yeah. So, yeah, I'm interested in how you think about model choice and switching.

Anton Osika: (29:09) Yeah. Historically, we have switched overnight after a new model comes out, because we try it out and say, this is a better model. We don't use just one LLM; we do smart routing, and we use models from, like, Gemini, OpenAI, and Anthropic. And soon, I imagine, potentially from DeepSeek. And

Nathan Labenz: (29:27) maybe Kimi. Don't sleep on Kimi.

Anton Osika: (29:30) Yeah. But we do have a model selector for us, like an admin feature. So we can do some selections. But I don't think that's the right user interface for the user, because, like, if you have a selection and you don't have all the context, then you're more likely to make the wrong decision. What we're planning to roll out is that this defaults to being the fastest chain of large language models, and then, if that doesn't work, it goes into showing that in the UI and letting the user have some type of control and a deeper analysis of the problem with larger and slower large language models. So I think that's the right approach - not being specific about which chain of models is running here. That's too much information for the user. Yeah. I mean, we make sure that we can switch out models super fast and know if that's a better or worse user experience.
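
To make that routing idea concrete, here is a minimal sketch of the kind of fallback chain Anton describes - try the fastest model first and escalate on failure. The model names and call functions are illustrative assumptions, not Lovable's actual implementation:

```ts
// Illustrative "fastest chain first, escalate on failure" routing.
// The providers and their call functions are hypothetical placeholders.
type ModelCall = (prompt: string) => Promise<string>;

interface RoutedModel {
  name: string; // e.g. "gemini-flash", "claude-sonnet", "o1" (illustrative)
  call: ModelCall;
}

async function routeWithEscalation(
  prompt: string,
  chain: RoutedModel[],
): Promise<string> {
  for (const model of chain) {
    try {
      // Fast, cheap models sit at the front of the chain; larger, slower
      // ones only run if an earlier model errors out.
      return await model.call(prompt);
    } catch (err) {
      console.warn(`${model.name} failed, escalating:`, err);
    }
  }
  throw new Error("All models in the chain failed");
}
```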

Nathan Labenz: (30:27) In the future you expect to be able to incorporate new models really quickly, but it doesn't sound like the activity on Lovable is going to be like a winner take all contest because you're already subdividing the queries and figuring out where they should go. And so a new model will at most take a bite out of other providers usage. It seems very unlikely from what you're saying that there would be a new model that would just dominate everything and take all of the activity from one day to the next.

Anton Osika: (30:58) Yeah. We'll see, of course. Like, it's all about moving - like, everything is about having a team that just can execute super, super, super fast and move the fastest here. And it might turn out that one model just dominates everyone else in terms of price and performance.

Nathan Labenz: (31:15) DeepSeek does seem like they have a strong contender all of a sudden. Do you have, like, a sort of rundown, you know, a cached thought on, like, how the OpenAIs, Anthropics, and Geminis compare to each other today? Is there, like, a shared understanding on the team of what's best for what?

Anton Osika: (31:34) Yeah. Yeah. Sure. And to your point there, like, DeepSeek is open source, and that makes it much easier to have low level control and train your own model - at least for maybe the majority of fast use cases, just train your own model and use that as the default. I see that happening for us quite soon. The difference is, now, Claude 3.5 is still overall, like, the best one. It's very fast. It's very good at coding. It's good at reasoning, and it works the best most of the time. And then if you look at wanting speed and low costs, then Haiku is actually quite expensive compared to 4o-mini, and the best one that we have switched over to for the smallest calls now is Google's Flash model, which is also, like, really performant and fast. And the final win for OpenAI is that their reasoning models are good when, like, the AI gets stuck and you want to get out of that, take in a lot of information and, kind of in an uncommon situation, reason from first principles - then OpenAI's reasoning models are still the best ones.

Nathan Labenz: (32:48)
Hey. We'll continue our interview in a moment after a word from our sponsors. Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one and the technology can play important roles for you. Pick the wrong one and you might find yourself fighting fires alone. In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in the United States. From household names like Mattel and Gymshark to brands just getting started. With hundreds of ready to use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.

Nathan Labenz: (34:50) Yeah. I feel like that echoes what I've experienced when doing this sort of model choice manually. You know, it always varies a bit. But I would say, if I want to do, like, a new feature on an app - and I'm working on this app that is designed to help people create, even more so than curate, because oftentimes it is synthetic data, you know, that is being used - just really high quality examples for use in few shot prompting or potentially even fine tuning, but some sort of AI automation context. You know, you have some routine task that you want to automate. The first and, honestly, the most critical step most of the time is to get together a small number of examples that demonstrate what you want, you know, and that you can, you know, sort of stand behind and say, yes, this represents a job well done. I've found that not too many of the LLM ops apps really help with that. There's many things that help with, like, monitoring your performance and, you know, logging everything and running standard evals. But just getting to, like, half a dozen or 10 really high quality examples for your particular task, I haven't seen too many things that help with that. So I decided I was going to make one kind of as an exercise.

Anton Osika: (36:00) So to me, like, why do you - what do you use those examples for? Did you compare this vibe check output on different LLMs or yeah. Something else.

Nathan Labenz: (36:09) Yeah. That's part of it. I just find kind of the great examples are kind of the heart of any automation project. Right? You first have to understand what is it that we want. Usually there's a team involved. So like, can we all look at these examples and agree that, you know, this is what we want as a team. Just getting clear on that, you know, that's more of a social challenge than a technical challenge, but it's an important challenge in practice for many automation purposes. And then, yeah, once we have those, like, which model can, even just on a few shot basis, you know, imitate that. Part of what the app does is it also tries to help fill in the reasoning process that converts the inputs to outputs. Obviously, now, you know, reasoning chain of thought type of stuff is like all the rage, but a few months ago and even still, I think people kind of sleep on how important that is because often what they have in their business is the inputs and outputs, you know, whatever they want to automate - responses to customer service tickets or whatever. Right? They have, like, the message received from the customer and then maybe they have the message sent back to the customer by the agent. But what they generally fail to capture and maybe they have like documentation in policy format or something like that, but they very rarely have any sort of chain of thought that connects like, okay, this is what the user sent. And you know, now I'm thinking about the DeepSeek. Okay. The user is asking about this, right? And then working through all that stuff until you finally get to an output. That stuff typically does not exist. So part of what my little app does is tries to help fill that in, you know, saying, okay, well, here's the output and it came from this input. What is the likely chain of thought? Help people iterate on that so that they can see something, you know, because yeah, people have a hard time writing this stuff down. So show them something, then they can react to it and say, like, that is what I do or that's not what I do, and they can iterate toward hopefully a chain of thought that represents the way they really think about it. And then from there, yeah, test out different models, you know, maybe graduate to fine tuning. I'm not planning to commercialize this app because I don't think it's a very monetizable thing. It's like, it's the sort of thing you don't necessarily use all the time, and it stops actually at the point where you have enough examples that you can take it somewhere else. Like, we're not going to - I'm not going to run fine tunings for you or, you know, be your, like, middleware of any sort. It's just like you get to the examples and you can export them to prompt format or a JSONL format or, you know, whatever.

Anton Osika: (38:36) Did you build this app with Lovable?

Nathan Labenz: (38:39) I didn't. No. I've been - it's been a slow burn. So I'm not even sure if you guys had launched. Maybe you can tell the timeline of launch and how you've scaled. I think I started on it before you were an option.

Anton Osika: (38:53) Yes. Yeah. We launched Lovable on November 21st. And that's when we also launched, like, everything we had built out with the back end functionality, which is a huge part of the unlock. And since then, we've just been scaling like absolutely crazy and spent most of the time keeping up with that. But I'd be happy to show you some of those parts and get back to the product comparison tool. Yeah.

Isaak Sundeman: (39:26) So I had to log in to Supabase again. I hit rate limits or not rate limits, but I had too many Supabase projects that I had created. So I had to - yeah. I had to deal with that. Okay. So I pressed the Supabase button now. I connected a Supabase project to our Lovable project. And we can see that we also get this message that's automatically sent to the Lovable AI when we do that. So where are

Anton Osika: (39:49) we at now? We have the UI and we were supposed to try comparing two products and we need to fetch the data from those websites of the products we're comparing somehow. So I think then the right thing is just to ask, like, how do I scrape data from an external URL?

Isaak Sundeman: (40:08) Scrape data from an external URL.

Anton Osika: (40:12) And you don't have chat mode enabled here with us.

Isaak Sundeman: (40:14) But we could enable that.

Anton Osika: (40:16) Yeah. You can use it. You can decide.

Isaak Sundeman: (40:18) Yeah. So we have this chat mode feature, which allows you to chat with the AI as opposed to just telling it what to do - like, if you only want to chat with the AI and you don't want the AI to code, and you kind of want to plan out things before actually writing any code, then you can enable this feature right here in Labs, where we try out new features. If you go back into the project, you will see that we have this little toggle right here. So now we can go on chat only mode, and now we can ask this question, and now the AI will actually give us some guidance. So, how do I scrape data from an external - so maybe you send that.

Nathan Labenz: (40:55) Yeah. A couple interesting things here while it's thinking. One of the questions I had written down, I hadn't realized this feature was there, was what do you think about, like, getting the AI to sort of coax more of what you want out of you? You know, when I first kind of prompted and when you first prompted, it was like, here's what I want, and then it just kind of immediately hauls off and it's like, alright. We're coding. You know? Here we go. And I feel like if you were to, again, sort of compare and contrast this against, like, what would the experience be if I were Elon Musk today. Right? It would be sort of a person answering back to me and be like, hey. Okay. So, yeah, Elon, I got a couple questions for you about what exactly you want before I go start coding this. This seems like a step in that direction, but I'm interested in your thoughts on the sort of potential for a Socratic interaction with -

Anton Osika: (41:43) Yeah. There's a lot of things you learn by using a system like this. And, like, when you become a super user of our tool or other AI tools, you get, like, 10x the value. So getting there the fastest is something that's a big part of building up a product like this. We haven't gotten that far. What you're seeing here is that it suggests: click on this button to go to the Supabase API key, or the edge function logs, for example. It gives you that information contextually. But the next steps are to interject some of these conversations, like, before we do that, let's go through a few of the things to clarify, and then it says, I'm ready to build it now. Does this look good? Does this panel look good? Then we go back to execute. It's not, like, the most critical thing for the product. Like, if you're really good at using it, if you use it a lot, then you can still get all the value without that, I would say. Now this is an AI full stack engineer, but in the future, you're going to be talking more to, like, a CTO or a chief product technology officer. That's how I see it. Or also, like, your chief designer, head of design. And then it should be doing even more of those things, right? Like, what technology choices, like, suggesting things with the product. And I'm super excited about seeing how all of this evolves for us and for others building in the space. What you're seeing now on the prompting side is that he said, let's use FireCrawl. FireCrawl is, like, a super popular way of fetching data from the Internet. And then it says, I'm going to need your FireCrawl API key, go to this URL. And then it opens an input box where you just paste your API key. So you have to - like, we don't manage your FireCrawl billing. FireCrawl is free in the beginning, but then it costs money later on. But you're fully in control as a builder with Lovable that way. And now they're trying to do a scrape request. And, like, when you're building yourself, or when you're building with AI for that sake, you often run into errors. So that's what we're seeing here. There's some kind of errors that you can click try - like, we're seeing some of the error part of the logs here. Now we've got access from Supabase so that we can get the error logs from the back end. So that's going to be launched. That's going to be a game changer for building back end endpoints, which is what we're doing here. But right now, Isaak has to manually open the edge function logs in Supabase and paste the errors from there. And so there's a bit of complexity here.
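
For a sense of what the generated backend wiring might look like, here is a minimal sketch of a Supabase Edge Function (Deno runtime) calling FireCrawl. The request shape assumes FireCrawl's v1 scrape endpoint and is worth verifying against their current docs - as Isaak notes later, API drift is exactly what trips up generated code:

```ts
// Sketch of a Supabase Edge Function that fetches page content via FireCrawl.
// Assumes FireCrawl's v1 scrape endpoint; illustrative, not the exact code
// Lovable generates.
Deno.serve(async (req) => {
  const { url } = await req.json();

  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      // The key the user pasted into Lovable ends up as a function secret.
      Authorization: `Bearer ${Deno.env.get("FIRECRAWL_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url, formats: ["markdown"] }),
  });

  const data = await res.json();
  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
});
```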

Nathan Labenz: (44:21) So is this a new feature - that - because it's something I'm also very much on the lookout for is people building for AI users as opposed to building for human users. And if I'm understanding you correctly, this is maybe an instance of this from Supabase where they, in the past, were kind of like, well, of course, it's going to be a human developer, so they'll just come to our site and look at the logs and understand the logs and whatever. And now you need more programmatic access because you're actually trying to directly feed that into a language - yeah.

Anton Osika: (44:53) I think we've been driving most of Supabase new sign ups, at least in the past. I think that might still be the case. And they are like, wait, we have to start building for exactly what you're saying for programmatic access to everything. And I know that others who are building similar products to Supabase are also like, we're going to be agent first in terms of building for a world where software and all of these things are managed by agents, not by humans.

Nathan Labenz: (45:17) I definitely have this experience too where I find myself being, like, a glorified copy and paster, you know, is sort of a lot of what I'm doing between things. That's also been the case even in just - we kind of got sidetracked from this earlier, but we were talking about, like, different models and what happens when there's an error and OpenAI can, you know, with the o1 series can kind of help reason through things. I found myself quite often using like ChatGPT Pro to, you know, make my plan or like diagnose how should I think about this feature at a high level.

Anton Osika: (45:51) Yep.

Nathan Labenz: (45:51) Have it give me instructions. And then I'm like pasting instructions one by one into another UI, AI UI and having it, you know, like implement the plan step by step. So it is funny how much of the actual watch over the human shoulder - they're spending just, like, pasting, you know, stuff back and forth between systems, and that definitely seems like the sort of thing that's going to get smoothed out.

Anton Osika: (46:15) Like, that's a big part of what you're seeing with Lovable. The hardest part, or the most important part, in the beginning of building a large language model app is context management - knowing, like, do we need information from this source, from this other source, from our knowledge database, and from, like, the history of what the user has done. And if you're very good at managing that context, it becomes much easier for the workhorse of the coding - in our case, Claude - to take the right decisions. And yeah, we have spent a lot of time on that, and that's a reason why it's just more reliable than other tools, at least according to the people that I know who run comparisons among all of them.

Nathan Labenz: (46:57) I did enjoy how you had the - and this maybe also kind of gets to who you're targeting in terms of users and what you're seeing in terms of, you know, the kind of background of people that are using the tool and what level of knowledge they have coming in. But I did appreciate how, along the way, there are these sort of prompts to say, like, I'm not going to tell you to go somewhere else and find the place to do this. Instead, like, you give me the API key here; I'll put it where it needs to go. I thought that was, like, quite nice and definitely a notable step toward, you know, anybody being able to do this sort of thing.

Anton Osika: (47:35) Yeah. But Isaak, do you want to go to the edge function logs and see like what was the exact error - so here it says, I already clicked that one. The - and this is what's not yet in production at least fed into the AI system, but that should, of course, be fed into the AI system. And what does it say exactly? So I think we can just copy paste all of this into the -

Isaak Sundeman: (47:57) Yeah. Let's just do that.

Anton Osika: (47:58) And then go back to the app. So this is what you're saying. Right now, we're doing the copy pasting. This is going to be completely automated, and then we wouldn't have gotten stuck at this point at all.

Isaak Sundeman: (48:08) Yeah. Let's send that. Hopefully, this should -

Anton Osika: (48:13) So when you're connecting to external APIs, it's as you said, Nathan, it's often more things that can go wrong, and that's where you need to have a system that's good at debugging itself between the different components that are interacting. And now we're - with the browser is interacting with Supabase that in turn is interacting with FireCrawl, which is kind of fetching the data for us.

Nathan Labenz: (48:37) One other question I had going back several rounds in the exchange with the AI was, I forget exactly what you typed, but it was like, what's the best way to scrape product information?

Isaak Sundeman: (48:50) Yep.

Nathan Labenz: (48:51) And yeah. So what's the best? How do I scrape data from an external URL? So I found in my general use of AI that I - and I think this is starting to change a little bit with the reasoning models, but certainly pre-reasoning models, I've developed a practice of trying to be super neutral with my language because especially in an area where I don't know super well what the right answer is, I'll often find that unintentionally, I can bias the AI in a particular direction. So for example, there, scrape. Right? Seems like it - with almost all the models up until, like, the sort of o1 series, my use of the word scrape would naturally send the AI down a path of, like, okay. We're scraping. And then in a lot of tools, I've seen actually - I really like the idea of trying to default to the best in class tool, like a FireCrawl so that you're not kind of recreating scrapers from scratch. But I've had many experiences where it'll be like, okay, we're going to write a Python scraper and, you know, then it'll make a sort of bare bones, like, you know, we'll use the requests library to like go get the HTML and then we'll like use Beautiful Soup and, you know, whatever. Next thing you know, you're sort of lost in scraping hell. Yep. And it's like, man, there's way better things out there to do this. So I like the idea, first of all, that you're sort of - seems to be kind of curating in the background. Like, these are the production grade tools that we trust that you can tap into immediately so you're not kind of recreating stuff from scratch. But also wonder if you're - like in more general - do you have like a list of sort of these are the tools that are like on that sort of FireCrawl level that we know and trust and we're going to kind of try to route common needs to those best in class tools?

Anton Osika: (50:52) Yes. We have, like, a list. If you go to lovable.dev/integrations, those are the ones that we default to, and we're adding more - I think maybe not everyone is there. But for emails, we have Resend. For payments, Stripe. For AI, it defaults to OpenAI, or Anthropic if you ask for that. If you want to generate images, that's Runway. And I'm not sure we launched it yet, but we're using Replicate for, like, a whole plethora of other AI APIs. Models galore, for sure. Yeah. And then, yeah, I think there are many people that request us to add theirs - like, people that reach out from large tech companies want us to use them as a default provider, both for the back end parts and for things like payments, for example.

Nathan Labenz: (51:45) That's cool. I think that is really smart. Going back to just kind of the language and sort of the user potentially mistakenly like leading the AI astray. I've started to see with the reasoning models occasionally, they will come back to me and say, I understand what you're trying to do and why you're trying to do it this way, but I actually recommend a different approach. Like in this case, you know, it might say instead of scraping, you should, you know, use a commercial API that can get product information for you or whatever. Do you have any sort of strategy for kind of questioning the assumptions of the user to make sure they're not like going down sort of the wrong path?

Anton Osika: (52:25) That's a good one - yeah, we should have that. Right now we rely on people really knowing their shit, or being fast learners and figuring out errors. Like in this case, I would definitely be like, okay, maybe we have a problem, and I will restart from scratch - because then you can quickly learn how to make this work reliably. And in the future, we of course want to be much more Socratic and be like, okay, this is the situation; I think you're asking for this, but it potentially doesn't really make sense - what do you mean exactly? And that's going to be a big level up for most users. I'm surprised that this didn't work absolutely instantly. What we're seeing is we're getting an error - we're getting bad requests. Most non-developers have a hard time understanding this, but it is possible for a human to go in and say, okay, there's something about the connection here not working. It says, oh, review the API documentation - I don't think we need to do that. It says unrecognized keys. Maybe something changed in their API, but we're getting 400 errors. If I ask it to try to fix, it will use these logs; it will not use our Supabase logs. Right. You need at least two products to compare. Okay, sure, let's do it. It'll just create enough products for comparison - that's clear at least. And while it's running, I'll check the Supabase logs to see if there are any more details there.

Anton Osika: (54:07) And yeah, what you can of course do if you're a developer is spin up a nice UI - there are a lot of best practices that are spun up for you, Stripe and so on. And then at some point maybe you want to edit the code directly - seems like it

Isaak Sundeman: (54:20) was successful.

Anton Osika: (54:22) Unknown product, unknown. I think the problem here is that I'm not scraping washing machines. Do you have any washing machines?

Isaak Sundeman: (54:26) Yeah. I have it in another tab. It's going to -

Anton Osika: (54:30) Let's take two of the same washing machines. Here we go.

Anton Osika: (54:42) So I had to click try to fix there. And it's like, "I'm so happy I could help you out."

Isaak Sundeman: (54:45) You were trying to figure it. Why did it? Okay. That's here.

Anton Osika: (54:48) Amazing. So then what you say is just: it just says "unknown product" - make sure to show the product.

Isaak Sundeman: (54:57) It seems like we're making progress now. So I think it's successfully scraped the links now. And I think like one thing that I did is that I included the documentation on FireCrawl. And that's one thing - these LLMs are not up to date. If, for example, FireCrawl updated their docs or API, Lovable might use the old documentation. And then you might have to include that within the context. Now, in our native integrations, we actually stay on top of all of that stuff. So that will essentially always work. But in the case where we want to configure an arbitrary API and use it, then it will not necessarily work if the API has been updated. Now, like, I guess the Lovable AI doesn't actually know what to display right here, and I think we will have to use OpenAI.
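
A minimal sketch of the workaround Isaak describes (not Lovable's implementation): when a provider's API is newer than the model's training data, paste the current docs into the prompt so the model codes against the real interface rather than a stale one. The model name and prompt wording are illustrative assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateWithDocs(task: string, currentDocs: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // illustrative choice; any capable model works
    messages: [
      {
        role: "system",
        content:
          "Write code against the API documentation the user provides. " +
          "Prefer the documented interface over anything remembered from training.",
      },
      { role: "user", content: `Current API docs:\n${currentDocs}\n\nTask: ${task}` },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```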

Anton Osika: (55:42) No. That's true. That's true. We have to process the response through the AI model. But, I mean, this is very standard. Okay, so you get something back, and you're like, okay, the API works. Now we have to update the UI to handle the API response. And that's what it's doing now, writing the code. Let's see.

Isaak Sundeman: (56:00) Yeah. There we go. There we go.

Anton Osika: (56:01) Yeah. So we're seeing that - we're seeing them. But let's now get an AI analysis of them. And, Nathan, do you have any preferences on what you would want to compare, given where the UI is at right now?

Nathan Labenz: (56:16) Yeah. I think maybe I want like a two-part analysis that first is like, what are the most relevant factors to, you know, satisfaction with this product type? And then present those in sort of a head-to-head way. And they could be, like, key features; they could be common problems. But that front loader, top loader thing is a good example, where I had just never really thought about that before. And then when you start reading the Wirecutter on washing machines, it's like, okay, well, the first thing you're going to need to decide is do you want a front loader or a top loader? And by the way, if you have a top loader, then you definitely can't stack them on each other. So do you have space for two side by side, or do they need to stack? It's sort of this: what should I even be thinking about as the relevant dimensions, and then show me what they actually are.

Isaak Sundeman: (57:14) That was nice.

Anton Osika: (57:15) So let's ask that. I'm saying, let's send all these products to the AI, and it should be in a short format, to give you three important features to consider. And as a recap now, what we have is you can enter products, you can get data, and now we're going to send the product data to an AI model to say, what should we consider? And then as a step after that, probably list the differences among those dimensions as a table or something like that.

Isaak Sundeman: (57:43) Yeah. But that's really interesting, that some of the dimensions of certain products you don't even know, right? Like you didn't even know that was a relevant thing to look for in washing machines. So, yeah, I think that would be a really cool thing, that the dimensions will actually be suggested to us by the AI. And to create that table, we'll want to get a structured response from one of these LLMs, right? And that's totally possible using function calling. And Lovable knows how to use that. So I think that's probably the next step that we have for this response.
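
For reference, here is a hedged sketch of the function-calling pattern Isaak means: you describe a JSON schema as a "tool", force the model to call it, and get back structured data you can render as a table instead of free-form prose. The schema and the tool name are illustrative, not Lovable's actual prompt.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

async function compareProducts(productData: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: `Compare these products:\n${productData}` }],
    tools: [
      {
        type: "function",
        function: {
          name: "report_comparison", // hypothetical tool name
          description: "Report a structured, per-dimension product comparison",
          parameters: {
            type: "object",
            properties: {
              dimensions: {
                type: "array",
                items: {
                  type: "object",
                  properties: {
                    name: { type: "string" }, // e.g. "battery life"
                    perProduct: {
                      type: "array",
                      items: {
                        type: "object",
                        properties: {
                          product: { type: "string" },
                          assessment: { type: "string" },
                        },
                        required: ["product", "assessment"],
                      },
                    },
                  },
                  required: ["name", "perProduct"],
                },
              },
            },
            required: ["dimensions"],
          },
        },
      },
    ],
    // Force the model to answer via the tool, guaranteeing structured output.
    tool_choice: { type: "function", function: { name: "report_comparison" } },
  });
  const call = response.choices[0].message.tool_calls?.[0];
  return call ? JSON.parse(call.function.arguments) : null;
}
```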

Anton Osika: (58:15) And I just want to clarify: for an application like this, I would say a pretty technical person can very reliably build this entire application. For people who are less technical, for a simple application like this one, with some patience you will be able to succeed in building it, but you will run into problems with at least 50% probability, and you'll be frustrated if you're not technical. And with maybe 10% probability, it will take a shit ton of time - you really feel like you're getting stuck. And it's a bit random: in some cases you're lucky, in some cases you're less lucky. So this type of application is something you can definitely build with the current version of Lovable. And this is as bad as it's ever going to be.

Nathan Labenz: (59:12) Yeah. It's funny. How long were you building before you launched two months ago? I think we're talking on exactly the two-month anniversary of your launch.

Anton Osika: (59:18) So we started the company a bit more than a year ago, and then we went through a few different iterations. We went down the agent route, which has some advantages, but we came to some realizations about why agentic approaches usually have very bad UX, and made it much more focused on speed - the fastest possible way to get the result back to the user. I mean, think -

Nathan Labenz: (59:45) people have very different ideas of what an agent is - they mean very different things when they talk about agents. And the way I think about agents, as opposed to, say, intelligent workflows, is that an agent in my mind is something that has at least a certain amount of delegated decision-making authority. Whereas if I make something in a Zapier-type framework, where one step follows the other - maybe some of those steps are AIs, but the prompt is prescribed and everything happens sequentially, one step after another - I would say that might be intelligent, but it would be low agency, if every step is fully planned out. Whereas here, I think this is actually higher agency for the AI than most product experiences, because there's a decent amount of the AI interpreting what you're saying and making dynamic decisions about exactly how it's going to go about it. So how do you understand agency, and what do you mean by it?

Anton Osika: (1:00:53) I mean, agency, I think, is a bit different - it's, like, goal-oriented. But an agent in LLM terms, I think, is: the agent does one action, then it looks at the result, then it does another action and looks at the result - a very open-ended loop. I think in most cases, if you want to do what you're asking for - reasoning steps and so on - you can do that without that very open-ended loop. You can design the chain of LLM calls in an intelligent way. Doing it in an agentic way has the benefit of being more general, but the problem is that it's very unpredictable how long it's going to take. And that unpredictability, from a user standpoint - especially if the system is not 100% reliable - if it's unpredictable and not 100% reliable, that's a very shitty experience. So you want to get as far away as possible from that. And then once you're as fast and as reliable as possible, you can start making it take more than just one step of LLM calls. But let's look back at the product now. We can see the two products, and then compare what we should look at - load capacity, steam cleaning technology. Okay, I don't know if that's a top priority, but that's apparently what the AI says is the priority. So we could do some better prompting at making sure it looks at what's most important. And energy efficiency - that makes sense. Let's take two other products. What would you consider buying apart from a washing machine?
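
A sketch of the distinction Anton is drawing, with a stubbed-out LLM call (the helper and prompts are hypothetical): a designed chain makes a fixed, known number of model calls, while an open-ended agent loop keeps acting until the model decides it is done, which is exactly what makes its latency unpredictable.

```typescript
// Stub standing in for a real completion API; replace with an actual client.
async function callLLM(prompt: string): Promise<string> {
  return `DONE: stubbed answer (${prompt.length} chars of prompt)`;
}

// Designed chain: always exactly two LLM calls, so latency is predictable.
async function designedChain(userRequest: string): Promise<string> {
  const plan = await callLLM(`Plan the steps for: ${userRequest}`);
  return callLLM(`Execute this plan and return the result:\n${plan}`);
}

// Open-ended agent loop: the model looks at each result and decides whether
// to act again. It might finish in 1 iteration or 50 -- the UX problem.
async function agentLoop(userRequest: string, maxSteps = 50): Promise<string> {
  let state = `Goal: ${userRequest}`;
  for (let i = 0; i < maxSteps; i++) {
    const action = await callLLM(`${state}\nNext action, or DONE with the answer:`);
    if (action.startsWith("DONE")) return action;
    state += `\nObservation for "${action}": ...`; // would hold the action's real result
  }
  return "Gave up after maxSteps";
}
```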

Nathan Labenz: (1:02:24) Let's do headphones.

Anton Osika: (1:02:25) Let's do headphones. So we do Bose and JBL here. And now we're still just going to see the like what's good to consider for headphones in this case. But the next step would be for it to list like why you should use X over Y. And I guess we could just write the pros and cons list for both of them for now. We should consider sound quality, battery life, comfort and fit. So for next prompt, I imagine we could write something like, let's prompt the AI system to also list how the two different - all the products compare along these dimensions as a nested bullet list or something like that. Does that make sense for you?

Nathan Labenz: (1:03:09) Yeah. Yeah. I think so.

Isaak Sundeman: (1:03:10) Ask, like, another step then. I would - we -

Anton Osika: (1:03:12) I would do the same step before just -

Nathan Labenz: (1:03:14) for the -

Isaak Sundeman: (1:03:14) Same step. Yeah. Okay. Then, under the list of those dimensions, compare how the two products compare with each other.

Anton Osika: (1:03:28) If not limited to two products, I'd rather drop it down really like - it's the -

Isaak Sundeman: (1:03:33) products with each other based on the features listed. Yeah. Make this happen in the same AI call. Yeah. Okay. Let's send that.

Anton Osika: (1:03:46) Maybe this is going to be a killer application when we're done.

Nathan Labenz: (1:03:51) I guess, I mean, next - but not too far downstream from this - would be starting to pull in customer review highlights as well. My vision for the product is kind of evolving. I'm imagining the first step as like the advisor layer that's coaching you on what you should be thinking about. The second step is like, now here's an objective tale of the tape. And to some extent that's informed by the product pages, which typically have these sort of spec tables - but of course they're all formatted differently, and it's hard to compare even simple things like the size of the washing machine. Wait, how wide was that other one? Is this one wider? And then a third section I could imagine would be: what do past customers have to say about this, and is there anything they're bringing to the fore that the product pages themselves didn't mention? We may or may not get there in this session, but with all three of those things, I think you would have a pretty useful little tool.

Anton Osika: (1:04:55) Yeah. 100%. I think what I'm also excited about here would be the first step is just, I'm looking for a dishwasher and then it pre-fills some of the products to analyze.

Nathan Labenz: (1:05:08) Yeah, yeah, that's cool too.

Anton Osika: (1:05:11) And we got an error now, I don't know, what was that?

Isaak Sundeman: (1:05:14) Yeah, it was - maybe we're out -

Anton Osika: (1:05:16) of quota for our APIs. So I mean, this is, if you're not impatient, you just like click to fix, but I don't know why we would have a new error here. So I would read the logs in this case and be like,

Isaak Sundeman: (1:05:28) if you -

Anton Osika: (1:05:29) show logs up there, we can try to understand why it suddenly had an error. So it had a bad gateway, and that's on FireCrawl's side. If it's a 500-type error, it's not our fault - it's actually FireCrawl. So now it's trying to fix it on our side, but we can't fix that. What we can do is try again.
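
The heuristic Anton applies here - a 4xx status means your own request is wrong, a 5xx means the upstream service failed, so retry rather than rewrite - looks roughly like this generic wrapper. A sketch, not Lovable's code; the backoff constants are arbitrary assumptions.

```typescript
// Retry only on 5xx responses (upstream faults); return 4xx immediately,
// since those indicate a problem in our own request that retrying won't fix.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status < 500) return res; // success, or a 4xx we must fix ourselves
    if (attempt >= maxRetries) return res; // give up and surface the 5xx
    // Upstream fault: wait with exponential backoff, then try again.
    await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
  }
}
```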

Nathan Labenz: (1:05:46) Hopefully, it'll prove transient. Yeah. Going back to the agency thing while we're debugging this, if I understand your understanding of agency correctly, what would make this more agentic but you think is not a great experience is if for example encountering this error then it just took the next step on its own to try to resolve the error.

Anton Osika: (1:06:09) Yep. That is a very reasonable thing to do, right? But it - yeah, there's a few reasons we don't do that. We don't unleash it like that for now.

Nathan Labenz: (1:06:18) Yeah. So tell me what they are because I could - I mean, I feel like having tried a bunch of these different experiences, another one that's obviously gone through cycles of hype and counter hype and whatever over the course of the year is Devin. Yeah. And in preparing for this, I did an experiment with a similar, you know, little project. Right? I loaded up multiple of these sort of coding agents or assistants or whatever creating products. And with Devin, I did have a weird experience where because it just keeps working, right, in the background.

Anton Osika: (1:06:49) Yep.

Nathan Labenz: (1:06:50) I was rotating between them. I would like, look at the state of one of the products, give a next direction like we're doing here, but then I would just tab over to the next one. And when I would get to the Devin tab, I did realize I have no idea what's going on. This thing has been working continuously in the background.

Anton Osika: (1:07:07) Yeah.

Nathan Labenz: (1:07:07) And in some ways that's like very appealing but in other ways when I get there it might be on like iteration 87 and I have no idea like what the current state is. So it's like very hard in that moment to be like, what are you even working on? Like, what's working and what's not working right now? Where are we? And I did find that to be weird. So it kind of led me to a sense. But then here I also do think, as you said, that, you know, certainly be reasonable in some cases to like take that next step or, you know, another thing that we're looking at here a lot is like just literally pasting in the URLs and running it again. And so to some degree, I also think like, especially with, you know, something like Claude computer use starting to be a thing, part of me is like, could I sort of have like limited agency? Like, I don't necessarily want this thing to run forever and run up a bill or, you know, drift off into some state that I have no idea where it's at. But I kind of would like it if it sort of took my one prompt, tried to do it, tried to use the product, you know, and had Claude - yeah.

Isaak Sundeman: (1:08:12) Computer use, like, basically, go. That is coming -

Anton Osika: (1:08:15) The computer use part is coming up. I mean, my point is, many of these things you can already try - I think you saw Isaak just use the selector to choose chat mode? And there is - this is not publicly available, but there's an agent mode in that selector, which is only available if you have an admin account like Isaak has. So these are things that people like us experiment with. The most important thing is that you have a product that predictably works, that works in an intuitive way. And making an agent work intuitively, in a nice way, takes a lot of iteration - a lot of iteration. We haven't made it work in a reliable, nice way yet, but I think we're going to be there very soon. It's one of the things in the coming few weeks that's going to be on top of the roadmap. And what you saw here - so the product works now. If you look, we have the two headphones compared, and it says the key features to consider are sound quality, noise cancellation, battery life, and then it runs a comparison. And here we should continue to iterate to make sure the AI always spits out the comparison in an easily digestible format. Here, I think it says, oh, this one is offering superior sound quality, in a long paragraph. And what you did, Isaak, is you asked the AI, what should we do next? And it says, oh yeah, do visual comparison improvements. We could just pick one of these, paste it in, and ask the AI to do it. What would you say - it's like we're the CEO, we ask the AI product manager, and then we decide what the software engineer should do out of those suggestions. Yeah. I think some of these are pretty good, like reviews. I think it would look really good if

Isaak Sundeman: (1:10:05) we have like a table.

Anton Osika: (1:10:05) Like, a table. Yeah.

Isaak Sundeman: (1:10:06) You guys talked about. And I think I'm going to nudge the Lovable AI now to use function calling from OpenAI just to make sure that we get that formatted response for the table. Because, yeah, in this case, we don't just want, like, a long chat response. We want a format of the response so we can render it in this beautiful digestible way.

Anton Osika: (1:10:26) So yeah. But, yeah, I think that was kind of a good rundown. Now we built this simple AI app, and it ended up giving us a product that can provide value. If we had done this with everything logged in and set up from scratch, I would expect this to take 5 minutes, and I'd be happy to see people do speedruns of something like this and record it online. But, as you can see, there is a lot of iteration. Now plain English is the hottest programming language - it actually works in this case, but it still takes an engineer, a human supervisor, who is the QA at this point. So that's the state of things right now.

Nathan Labenz: (1:11:10) Oh, here we go.

Anton Osika: (1:11:11) Oh, wow.

Isaak Sundeman: (1:11:12) This is really -

Anton Osika: (1:11:12) really, really good. One more prompt and you just nailed it.

Isaak Sundeman: (1:11:14) Yeah. Yeah. Nice. This is nice.

Anton Osika: (1:11:16) Nice. And, unfortunately, the AI is extremely politically correct. It just says everything. I cannot decide. Yeah.

Isaak Sundeman: (1:11:25) We should do this with Bolt and Lovable. See what it says. Should I try? Let's try it. Go for it. Okay. Bolt and Lovable. Let's see if it's going to betray us.

Nathan Labenz: (1:11:42) Betray itself. This is also like a self-awareness. Yeah. Situational awareness test.

Isaak Sundeman: (1:11:49) Yeah. What are the - so

Anton Osika: (1:11:50) this is not our AI answering. This is the -

Nathan Labenz: (1:11:54) But it probably knows, right, in, like, a system prompt or something. It should have some clue as to who it is. Yeah.

Isaak Sundeman: (1:12:00) Okay. Let's see where you get the -

Anton Osika: (1:12:02) images. Lovable. Great.

Nathan Labenz: (1:12:04) Let's see.

Anton Osika: (1:12:04) User interface, deployment options and ease of use. No. Bolt one, two, or - what? But that is not true. Actually, we do have built-in deployment options. As I would say, this was great. And the interfaces are also the same, but that's what you get with AI.

Nathan Labenz: (1:12:22) I can definitely vouch for the fact that the interfaces are quite similar.

Isaak Sundeman: (1:12:25) Yeah. Yeah.

Nathan Labenz: (1:12:27) Okay. Yeah. That is really - that's cool. I mean, first of all, we're an hour into trying this. How many iterations have we been through? 17, it looks like - edit 17. One thing you mentioned earlier that caught my ear, because I've also experienced this a lot while trying to build up my own coding-with-AI best practices, is: commit at every working state, and definitely be prepared and willing to roll back to a previous known good state. And I don't think we ever reverted in this experience - maybe we can scroll through and look at what the 17 steps were that we took. And this is not a Lovable comment; this is more of a me-doing-it-with-Cursor finding. But once I get off the track, I have often found it really hard to get back on track. It seems that the models are generally much better at doing the thing right the first time versus iteratively debugging. I find a lot of times they kind of end up making a mess: they try things over and over again, they get confused, they don't break frame well. So, yeah, any thoughts about when to revert, how to know when to revert? One thing I have found somewhat successful is, when I do revert, I'll sometimes take the error message that it was struggling to fix, grab that error message, go back to the last known state, edit my prompt, and say, by the way, last time we got here and you couldn't fix it, so make sure you avoid that this time. And that often does seem to help. But anyway, I just wonder what else you have experienced. And it's way different, right? Because when people have written the code themselves, they have a lot of attachment to that code, because it represents a lot of their work. Whereas it should be much easier to just throw away whatever a language model has given you over, like, four rounds of prompts. But, yeah, what else have you learned about when to execute a tactical retreat?

Isaak Sundeman: (1:14:36) I think I have some good takes here. I kind of see it as a search tree, right? You have this tree, you start off in the middle of it, and then you can go in different directions. And when you're trying to implement a particular feature, there are various ways of doing that, and sometimes certain features require certain sub-steps. So usually when a non-technical user gets stuck implementing a feature, it's because they have not taken all of the steps in the right order. And that's why we recommend consulting with the chat-only mode. But if you have taken the steps in the incorrect order, then it is a very good idea to revert back. Because then you can, as you mentioned, include the error that you got when you actually got stuck - you have that scar tissue, that intuition of where things went wrong, and you can nudge the AI to go in a different direction. And even if you don't do any nudging, there's still a probability that the AI will go in a different direction just on its own, because of randomness, and then it might work. So it's hard to say exactly in which scenarios you should revert, but if it seems like you went down a spiral and it's hard to get up again - that's a very unofficial way of saying it - then revert. For instance, I think it could have been a good idea to revert in our case when we were struggling with FireCrawl, and then maybe include some of the documentation instead. Yeah, Anton is redoing it now with the -

Anton Osika: (1:16:03) I figured I'd just see if I can do this in four prompts. Yeah. Can I get your API key for FireCrawl?

Isaak Sundeman: (1:16:08) Yeah. Absolutely.

Nathan Labenz: (1:16:09) Do you want to show your screen as you're doing this? You're on a different computer?

Isaak Sundeman: (1:16:15) Yeah. Let's see. Maybe you can screen share.

Nathan Labenz: (1:16:19) Yeah. I think this is really interesting too. I mean, this is sort of humans end up in this spot where it's like, we've been coding this thing for years. We've got all sorts of technical debt. We've got like all sorts of, you know, kind of shortcuts or weird strategies that we use that we kind of wish we would clean up. And this is basically the AI equivalent of that. Right? Like, we've been 17 rounds. We're not exactly sure what weird little micro decisions might have been made. And now you're basically saying, okay. Now I kind of know what I want. Let me go back, take it from the top, and see if I can do, like, a really clean version. Is it - that's basically the thought process you're going through?

Isaak Sundeman: (1:16:57) Yes. Yes.

Anton Osika: (1:16:58) I mean, I think Isaak had, like, the worst luck I've ever seen. So I figured it's like, well, what's the more likely outcome in this case? I mean, sometimes you are unlucky, but this is what I did. I said, okay. Let's add the two URLs, and then let's add FireCrawl scraping with FireCrawl. And now I'm going to ask you to send it to OpenAI. And to answer two things: one, what features should be considered when making a purchase decision? Two, how do the two products compare? So this will

Nathan Labenz: (1:17:37) just be the second edit?

Anton Osika: (1:17:39) That'll be the - yeah. So how did this work, actually? I think we had these two baked into one edit. That's true. Have we

Isaak Sundeman: (1:17:49) connected Supabase on this part?

Anton Osika: (1:17:53) No. Supabase is not connected in this project, but that - I - that's why I wanted to do it, like, absolutely fastest possible way. Yeah. So if I would have connected to Supabase first, it would have put the scraping on Supabase.

Isaak Sundeman: (1:18:06) It's alright because we don't hardcode the keys now. Instead, we have this input field where the user would put in their own keys.

Anton Osika: (1:18:12) Yeah. So we're skipping a step there.

Isaak Sundeman: (1:18:14) Yeah. So that's the OpenAI key. That's the FireCrawl key.

Anton Osika: (1:18:17) So there. So much. So here -

Isaak Sundeman: (1:18:21) And that's very nice. Lovable seems to almost never hardcode keys. So even though we haven't implemented Supabase right now, it still understands that we probably don't want to hardcode the API keys in the front end, right? So instead, it just adds these input fields. And this will probably also be way less error-prone, because now we don't have a complex system, right, before we added -
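
The pattern Isaak is pointing at, sketched as a tiny React component (the names and layout are illustrative, not the generated app's code): the key is entered by the user, lives only in component state, and is attached per request - never hardcoded into the bundle. A server-side secret store such as Supabase remains the better long-term home for keys.

```tsx
import { useState } from "react";

export function ScrapeForm() {
  const [apiKey, setApiKey] = useState(""); // key stays in memory only
  const [url, setUrl] = useState("");

  async function handleScrape() {
    // Endpoint shape follows FireCrawl's v1 docs; treat it as an assumption.
    const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url, formats: ["markdown"] }),
    });
    console.log(await res.json());
  }

  return (
    <div>
      <input
        type="password"
        placeholder="FireCrawl API key"
        value={apiKey}
        onChange={(e) => setApiKey(e.target.value)}
      />
      <input
        placeholder="Product URL"
        value={url}
        onChange={(e) => setUrl(e.target.value)}
      />
      <button onClick={handleScrape}>Scrape</button>
    </div>
  );
}
```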

Nathan Labenz: (1:18:48) Yeah.

Anton Osika: (1:18:49) I don't know. I wish I connected Supabase. I was too quick to ask for FireCrawl, but at least now we're going to show that this works in just a few prompts, I hope. So what it does is that it lets me enter the OpenAI API key here. I'm going to make sure I don't leak the API key, hopefully.

Isaak Sundeman: (1:19:10) It's alright. I can just remove it that way.

Anton Osika: (1:19:12) Okay. Let's go. And if you're a bit technical, you can always look at the network logs here and see if we're getting a response from them. Okay. So here we have the websites. It's going to compare Lovable in this case. So then when I fetch from - I hope it's fine. Here we go. Oh, nice. Yeah. So now we did it actually in three edits. So now that's what I would expect, to be honest. And we compared - let's compare something really helpful.

Anton Osika: (1:19:49) And also it's notoriously hard to scrape, so I hope that works. But this was a three-prompt version of it. And back to your question: how much can we expect this to work? I think if you're good at using reverts and so on, as you were touching on, then for any product - or any internal tool, which is one of our core use cases as well - where it's one main feature, like we're creating one main feature here, it should take a dozen edits or so. If you're adding many features, it takes hundreds of edits, and then it also starts becoming much harder, because you notice that the AI doesn't handle large codebases as well. There are a lot of smart things we're doing to handle large codebases, but it doesn't handle them as well.

Nathan Labenz: (1:20:38) Yeah. That's - I mean, I'd be interested to hear more about that to the degree that you want to share it. I mean, what I have been doing on my own is just taking literally my whole code base. Like, my little app that I've mentioned a couple times is roughly 100,000 tokens. And with AI's help, of course, I had it, you know, just write a little script to put that all into a single file, kind of like your LLMs.txt, except it's not docs. It's literally just the source. And then, you know, paste that into ChatGPT and ask o1 Pro, like, figure out a plan. You know, here's all my code. Like, figure out a plan for whatever. Now, obviously, that's going to run into limits. So I've started to try to first, like, modify my script that creates the single file. And, you know, we don't really need, like, the CSS classes, you know, or we can, like, skip these various things. So it sort of adds a bunch of regular expressions to the script that prints the single file, and that'll save me, like, 20% of the tokens. So then it's, okay. Cool. Now I can, you know, do a couple more features till I hit the limit. But I'm like, obviously, you know, big code bases are, like, way bigger than 100,000 tokens. So what have you learned about managing context? Are you doing, like, dependency tree type of stuff? Yeah. Curious as to what you've found to be successful.
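
A sketch of the kind of script Nathan describes, as a small Node/TypeScript program: walk the repo, concatenate sources into one paste-able file, and strip low-value content (here, className attributes) to save tokens. The paths, extensions, and regexes are illustrative assumptions, not his actual script.

```typescript
import { readdirSync, readFileSync, statSync, writeFileSync } from "node:fs";
import { extname, join } from "node:path";

const INCLUDE = new Set([".ts", ".tsx", ".js", ".jsx", ".html", ".css"]);
const SKIP_DIRS = new Set(["node_modules", ".git", "dist", "build"]);

function collect(dir: string, out: string[]): void {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      if (!SKIP_DIRS.has(name)) collect(path, out);
    } else if (INCLUDE.has(extname(name))) {
      let text = readFileSync(path, "utf8");
      // Token-saving filter: drop className attributes, which the model
      // rarely needs for planning. Add more regexes as you find savings.
      text = text.replace(/\s+className="[^"]*"/g, "");
      out.push(`\n===== ${path} =====\n${text}`);
    }
  }
}

const chunks: string[] = [];
collect("src", chunks); // assumes the app lives under src/
writeFileSync("codebase.txt", chunks.join("\n"));
console.log(`Wrote codebase.txt from ${chunks.length} files`);
```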

Anton Osika: (1:21:57) I mean, there are some things that really give you a lot of bang for the buck. How we do RAG - we do agentic RAG - is one of the key differentiators for why our product is very good even as the codebase grows. I can't go into detail, but just being smart about agentic RAG gets you very far.
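
Anton keeps the details private, so the following is only a generic illustration of what "agentic RAG" over a codebase can mean - instead of stuffing a fixed top-k of retrieved chunks into the prompt, the model iteratively requests the files it thinks it needs. Every name and prompt here is hypothetical; this is not Lovable's method.

```typescript
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI();

async function agenticRetrieve(task: string, fileList: string[]): Promise<string[]> {
  const gathered: string[] = [];
  for (let round = 0; round < 3; round++) { // cap the loop to bound latency
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content:
            `Task: ${task}\nFiles in repo:\n${fileList.join("\n")}\n` +
            `Already fetched: ${gathered.join(", ") || "none"}\n` +
            `Reply with ONE more file path you need, or NONE if you have enough.`,
        },
      ],
    });
    const answer = response.choices[0].message.content?.trim() ?? "NONE";
    if (answer === "NONE" || !fileList.includes(answer)) break;
    gathered.push(answer);
  }
  // Hand back only the files the model chose, to be placed in the coding prompt.
  return gathered.map((path) => `// ${path}\n${readFileSync(path, "utf8")}`);
}
```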

Nathan Labenz: (1:22:22) Okay. Well, that'll be a little - I'll have to go on a little side quest to figure out exactly what that - I

Anton Osika: (1:22:27) mean, you can try to reverse engineer it as well. I'm looking to be more open about this. This is a big part of where it's strong, and we're building out many more areas. But yes, we got this error in the last session here, seeing how the product we built works, where the error was that OpenAI couldn't handle Amazon websites - there's too much text on the Amazon website. So the AI set the max number of characters to 4,000.

Nathan Labenz: (1:22:57) I think my guess is that this is going to not have the information. That's kind of what I ran into when I was messing around with this myself a little bit. If you just truncate, you get a lot of header-script kind of cruft garbage. So I was thinking, you know, how - but that does get tough. That's where I was kind of like, man, hopefully FireCrawl can maybe solve some of that nonsense for me, because -

Anton Osika: (1:23:23) solve that.

Isaak Sundeman: (1:23:24) Yeah. I think it did solve that quite well.

Anton Osika: (1:23:26) Do we get a response here? Is it doing the crawling here? No, I hope not. So it just - like, it picks up the title, it picks up the text in a nicer format. And now I don't know how we're using this different data. Normally, to use our product to its full potential, you often want to paste in the payloads from the API request, because currently, I think, by default those are not fed into the LLM. As you see, there's a lot of data here, but then it would be better at picking this up. I think we're doing something in the background. Why is that? Maybe a little bit for history here.

Isaak Sundeman: (1:24:14) Oh, we got a response.

Anton Osika: (1:24:15) Okay. We got a response. So we have a lot -

Isaak Sundeman: (1:24:18) of in - yep. And I think it's still analyzing. So we're waiting for the other analysis to -

Anton Osika: (1:24:23) So it's not a fast application that has been built here by the AI, but it's been - it gives us - okay.

Isaak Sundeman: (1:24:28) It says it's completed.

Anton Osika: (1:24:29) Oh, no. I refreshed it. Alright. I think I pressed back.

Isaak Sundeman: (1:24:32) Oh, no.

Anton Osika: (1:24:33) I'll start over.

Nathan Labenz: (1:24:36) Okay. Cool. I think this has been really good. It's really interesting. Smart RAG is a takeaway of, you know, something to think about more for sure.

Anton Osika: (1:24:45) Yep. Context management is a superset of smart RAG, and that's why that's really like one of the core pieces of this.

Nathan Labenz: (1:24:53) Just in kind of a couple, you know, wrapping up sort of questions, where are you guys at today? Like, who are your users? You've scaled, like, remarkably fast. I've seen tweets to the effect of, you know, one of the fastest growing, if not the fastest growing, you know, European startup ever maybe. So what's that story like over the last couple months?

Anton Osika: (1:25:12) Yeah. So, I mean, since we launched, we've just been continuing to grow. If you annualize it, I think it was over $1,000,000 in revenue per year, and now we're actually at $9,000,000 - in 8 weeks. So that's faster than any other company launching from Europe; from my research, it's the fastest. And more importantly, we have hundreds of thousands of users. Whenever we post something online, it's full of comments from people who are just blown away by what they can do with our product now. There's a lot of love. Our users are the people who are paying; they're using it every other day. So there's a lot of positive aspects. What we're doing going forward is making this valuable also to teams that collaborate - you can actually see that on my screen that I'm sharing here. This is not a launch, but we're going to make it much easier for people who collaborate to use our product for that. So that's a bit of a snapshot. But what's much more exciting than that is the AI becoming more reliable and being able to do more debugging itself, like you saw here as well. That's in the works.

Nathan Labenz: (1:26:35) So I wanted to ask also about the edit code and publish buttons that I see in the upper right hand corner. Those are maybe the less interesting things from an AI perspective, but they're obviously important for people who want to ship even an internal tool, right? So what does that look like?

Anton Osika: (1:26:51) We're deploying the applications on the edge with Cloudflare, which makes it possible, with one click, to have the app you've built all in production. It's running and it scales really well. So that's how our publish flow works right now. Edit code is actually more interesting than it sounds. You can edit it with any IDE - Cursor, whatever. Here, I'm just opening VS Code in the browser. And then if I change something - let's see here - it will synchronize, so I will instantly see, like, oh, a human edited the code from their favorite IDE. And this is also a way to collaborate with your team. And this is not us - it's GitHub being slow, is what I'm showing on the screen. I mean, there are some more very valuable features that you learn about if you're a super user. One is that if you have a specific API, you can just put its documentation into the knowledge for the project we're editing. And if you want to make changes experimentally - if you're a developer, you're used to something called branching - that's built into the product as well.

Nathan Labenz: (1:28:11) Gotcha. So you've got the basically kind of dual mode of you can do the full developer experience with your IDE. You've got, like, your - obviously, changes tracked on in a git repo here, branching, etcetera. How does your user base break down? I mean, it's probably all happening so fast, you may not even know, but, like, how many people would you say that are using this are developers trying to move faster versus people who are like, I don't really know how to code, but I want to make something.

Anton Osika: (1:28:43) We ask this question - how much coding experience do you have? - and it splits fairly evenly, with roughly 25% in each bucket from no coding experience up to a lot of coding experience. And we are all about empowering the 99% of people who don't know how to code - and it's even more who don't know how to do both the front end and the back end. So there are more users in that bucket, but the people who are technical get much further. They can do much more complicated things, because they understand a bit of debugging, they understand how API calls work, and so on. The theme our favorite users have in common is that they have a very entrepreneurial spirit - they are high agency, is another way I like to put it. So it's often founders, and operators who are running their own business - maybe it's an agency - and they're super quick to understand what's possible with the new technology. Those are our favorite users today.

Nathan Labenz: (1:29:45) What do you think is going to move the needle most over the next couple months? Is it just better models or anything else that you're kind of specifically tracking? And where do you think we get, say over - if you can, you know, see this far into the future, where do you think we get over the course of 2025?

Anton Osika: (1:30:02) Yeah. I mean, for us - you mentioned DevOps and the infrastructure being the big bottleneck. Long term, it's going to be more about the infrastructure packaged in a very nice and quite opinionated way for the best AI models. And more of the smart algorithms that put us ahead of others today are going to matter less, because the large language models are going to advance and keep becoming more intelligent; our tricks on top of that are going to be less important. The thing that does matter for us in the coming months is to add a few more people. We're trying to assemble the team with the highest talent density here in Europe - absolute geniuses - and they are the people creating the product, mostly. What matters for us is making the team work really well together: figuring out the right abstractions, both in the UI and UX and in the infrastructure of the code in the projects being generated, and being smart about getting the most out of the large language models. Figuring that out as a team is really the key to winning with any type of AI product, I think.

Nathan Labenz: (1:31:08) Do you want to put out a call for what you're looking for and where? Are you guys all in Stockholm or is there a -

Anton Osika: (1:31:14) Yes. We're hiring mainly for people who are up for relocating and building in an office - which is much, much more fun - and who want to solve really, really hard problems and be at the absolute epicenter of what AI is able to do right now. And we're paying top of market for the top talent.

Nathan Labenz: (1:31:34) Cool. Well, this has been fascinating. Anything else you guys want to touch on before we break?

Anton Osika: (1:31:40) I think if people haven't tried these tools, then the best thing you can do for your career, and for your friends and so on, is to get your hands very dirty. You're going to learn so much from using these tools, even if you don't have a business application today. If you're currently working without AI, then I think you're really disappointing your employer, or your customers and clients if you're running an agency. So you should get on the train. It's a huge time saver.

Nathan Labenz: (1:32:17) Yeah. Get hands on. That's always my number one advice as well. Cool. This has been really fun. I've enjoyed the peek into the product and I will definitely continue to follow your progress and I don't expect it to slow down. So keep up the great work.

Anton Osika: (1:32:33) Thanks a lot.

Nathan Labenz: (1:32:33) For now, I will say Anton Osika and Isaak Sundeman, founder and AI engineer respectively at Lovable online at lovable.dev. Thank you both for being part of The Cognitive Revolution.

Isaak Sundeman: (1:32:46) Thank you for having us.

Nathan Labenz: (1:32:48) It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.
