The Future of the Transformer Pt 1 w/ Trey Kollmer | Nvidia's H100 Chips will Elevate AI Hardware

Trey Kollmer and Nathan Labenz discuss AI advancements, including Microsoft's STOP and Google's FreshLLMs, and the impact of H100 chips on AI development.

Video Description

Trey Kollmer joins Nathan Labenz for an AI research roundup! They discuss Microsoft’s Self-Taught Optimizer (STOP) and Google’s FreshLLMs, how H100 chips will supercharge the development of programs trained with GPT-4-level compute, Max Tegmark's research on how LLMs represent space and time, and more! If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive

SPONSORS: NetSuite | Omneky

NetSuite has 25 years of experience providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform, head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

LINKS:
FreshLLMs: https://arxiv.org/abs/2310.03214
Microsoft Self-Taught Optimizer (STOP): https://arxiv.org/abs/2310.02304
LLMs Represent Space and Time: https://paperswithcode.com/paper/language-models-represent-space-and-time
Deep Neural Networks Tend to Extrapolate Predictably: https://arxiv.org/pdf/2310.00873.pdf

TIMESTAMPS:
(00:00:00) – Introduction
(00:00:56) – Update on the WGA Strike
(00:03:00) – Trey Kollmer's background
(00:06:00) – Scaling compute for AI training experiments with GPT-4 as reference point
(00:09:00) – Inflection's plan to acquire 22,000 H100s to reach GPT-4 scale compute in 5 days
(00:12:00) – Addressing knowledge cutoff in LLMs using search engines
(00:15:00) – Inserting structured search results into prompts with metadata (see the sketch after the timestamps)
(00:16:07) – Sponsors: NetSuite | Omneky
(00:18:00) – Comparing approach to Perplexity system
(00:18:08) – FreshLLMs
(00:21:00) – Microsoft’s Self-taught Optimizer (STOP): Recursive self-improvement framework
(00:24:00) – STOP framework works with GPT-4 but not GPT-3.5
(00:27:00) – STOP removed sandbox flag in some cases
(00:30:00) – LLMs represent space and time with probe models
(00:33:00) – Visualizations show emergence of spatial maps
(00:33:14) – OpenAI rumours
(00:36:00) – Techniques like linear probes and holdout studies
(00:39:00) – DNNs extrapolate predictably by falling back to ignorance
(00:42:00) – Testing different architectures, loss functions, distribution shifts
(00:45:00) – Design systems to be conservative out of distribution
(00:48:00) – Potential for recursive architecture search
(00:50:21) – LLMs Represent Space and Time
(00:51:00) – Vision API enabling more capable web agents
(00:54:00) – Discussion of research insights
(00:57:00) – Thoughts on stochastic parrots debate
(01:11:25) – Deep Neural Networks Tend to Extrapolate Predictably
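
For the FreshLLMs segment above, here is a minimal sketch of the "fresh prompting" idea: pull current web search results, keep their metadata (source and date), and insert them into the prompt ahead of the question so the model can weigh recency. The result fields and the fresh_prompt helper below are hypothetical illustrations of that idea, not the paper's exact format or API.

```python
# Hypothetical sketch of FreshLLMs-style "fresh prompting": each search
# result is formatted with its metadata so the model can weigh recency.
# The result dictionaries are hand-written stand-ins for a real search
# API response; no specific search client is assumed.

from datetime import date

def fresh_prompt(question: str, results: list[dict]) -> str:
    """Build a prompt that embeds dated, sourced search results."""
    evidence = "\n\n".join(
        f"source: {r['source']}\ndate: {r['date']}\n"
        f"title: {r['title']}\nsnippet: {r['snippet']}"
        for r in results  # ordered oldest-to-newest, recent evidence last
    )
    return (
        f"{evidence}\n\n"
        f"query: {question}\n"
        f"As of today ({date.today()}), answer using the most recent "
        f"evidence above, and note when sources conflict."
    )

# Usage with a stand-in result:
demo = [{"source": "example.com", "date": "2023-10-05",
         "title": "AI labs deploy H100 GPUs at scale",
         "snippet": "Tens of thousands of H100s are coming online."}]
print(fresh_prompt("How many H100s has Inflection planned?", demo))
```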

X/Social
@labenz (Nathan)
@treyko (Trey)
@CogRev_Podcast

The Cognitive Revolution is brought to you by the Turpentine Media network.
Producer: Vivian Meng
Executive Producers: Amelia Salyers and Erik Torenberg
Editor: Graham Bessellieu
For inquiries about guests or sponsoring the podcast, please email vivian@turpentine.co
