AI in the AM: 99% off search, GPT-5.5 is "clean", model welfare analysis, & efficient analog compute

Hello, and welcome back to the Cognitive Revolution!

Today I'm pleased to share another edition of AI in the AM, the new live show format that I'm developing with my friend Prakash Narayanan, aka @8teapi on Twitter.

This episode originally aired live on Friday, April 24th, starting just before 9am Pacific time, which, mercifully for a night owl like me, is just before noon where I live in Detroit.  

Our guests, in order, were:

- First, Anna Patterson, former Google VP of Engineering and now founder and CEO of Ceramic.ai, a company that started last year with a plan to help enterprises train their own models, but quickly pivoted to search based on the updated belief that information retrieval plus thorough fact-checking is the best way to equip models with the mix of public and private enterprise data they need.  What's so interesting about Ceramic is that their product is designed specifically for LLMs to use, and their price point undercuts other search providers by roughly two orders of magnitude, a combination that she hopes will be enough to unlock all sorts of new use cases and usage patterns. 

- After that, we welcome Lukas Petersson from Andon Labs back for another chat.  While it had only been two weeks since we last spoke to Lukas, the testing he and the Andon team had done with Opus 4.7 and GPT-5.5 meant we had plenty of new ground to cover.  Fascinatingly, and in a definite narrative violation, Andon reports that while Opus 4.7 still makes more money in its Vending Machine simulation, it does so in part by adopting "ruthless" tactics, which GPT-5.5 does not.  We also hear a bit about their experience opening a new Gemini-run cafe in Sweden.

- Our third guest is another returning champion: Zvi Mowshowitz.  While it was a bit too early for him to render judgment on 5.5, we got into quite a bit of detail on 4.7, including how he understands the bad behavior reported by Andon Labs, and also what he makes of Anthropic's recent model welfare reports, including why we should care, how much we should trust the models' self-reports, and what low-cost actions he recommends frontier model companies take to improve model welfare on a precautionary basis. 

- Finally, we have Naveen Verma, Princeton professor of Electrical Engineering, and co-founder and CEO of EnCharge AI, a company developing a new computing paradigm that uses in-memory, analog data processing to drive order-of-magnitude energy efficiency improvements.  Though we can't get our hands on it yet, it promises to unlock local, private inference that consumes roughly the same power as a standard laptop does today. 

As I mentioned last time, this is still an experiment, and we expect the format to evolve.  If you'd like to shape how that happens, please follow @AI_in_the_AM and send us a DM to let us know how we might make this new format more valuable for you.  

With that, I hope you enjoy this edition of AI in the AM, from Friday April 24th, co-hosted with Prakash Narayanan.

Watch now!

Thank you for being part of The Cognitive Revolution,
Nathan Labenz
