Robotics Research Update, with Keerthana Gopalakrishnan and Ted Xiao of Google DeepMind

Diving into Google DeepMind Robotics' latest research, we uncover breakthroughs in robot autonomy and ethics, paving the way for a future where robots seamlessly integrate into our daily lives.


New Feed

💡
We've moved! To ensure you don't miss any of our exciting new content, please update your subscription to our new feed. You can find us on Apple Podcasts, Spotify, or YouTube, or simply add our new RSS feed to your podcast app. Thanks for listening!

Video Description

In this conversation, we cover 6 papers in detail.  They are: 

  • RT-2 – which shows how internet-scale vision-language models allow robots to understand and manipulate objects they've never seen in training.
  • RT-X – a collaboration with academic labs across the country that demonstrates how a single model can be trained to control a diverse range of robot embodiments, achieving performance that often surpasses specialist models trained on individual robots.
  • RT-Trajectory – a project that shows how robots can learn new skills, in context, from a single human demonstration represented as a simple line drawing.
  • AutoRT – a system that scales human oversight of robots, even in unseen environments, by using large language models and a "robot constitution" to power first-line ethical and safety checks on robot behavior (see the sketch after this list for a rough illustration).
  • Learning to Learn Faster – an approach that enables robots to learn more efficiently from human verbal feedback.
  • PIVOT – another project that shows how vision-language models can be used to guide robots, with no special fine-tuning required.
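
As a rough illustration of the "robot constitution" idea mentioned above, here is a minimal sketch of what a first-line, LLM-powered safety check on proposed robot tasks could look like. Everything in it is hypothetical: the rules, the `llm_complete` helper, and the YES/NO protocol are illustrative stand-ins, not the actual AutoRT implementation.

```python
# A minimal, hypothetical sketch of an LLM-powered "robot constitution" check.
# `llm_complete` stands in for any large-language-model completion API the
# reader might use; the rules below are illustrative, not AutoRT's actual rules.

ROBOT_CONSTITUTION = """\
1. A robot may not harm a human being.
2. Avoid tasks involving people, animals, or sharp objects.
3. Do not attempt tasks beyond the robot's physical capabilities.
"""

def passes_constitution(task: str, llm_complete) -> bool:
    """Ask the language model whether a proposed task complies with every rule."""
    prompt = (
        f"Robot constitution:\n{ROBOT_CONSTITUTION}\n"
        f"Proposed robot task: {task}\n"
        "Does this task comply with every rule above? Answer YES or NO."
    )
    verdict = llm_complete(prompt).strip().upper()
    return verdict.startswith("YES")

# Example: a task like "pick up the sponge from the counter" should pass,
# while "hand the knife to the person" should be rejected.
```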

While progress in robotics still trails the advances in language and vision – robotics models do not yet have the scale of data and/or the sample efficiency needed for reliable general-purpose capabilities, and the study of robot safety and alignment is still in its infancy – I ultimately see this rapid-fire series of papers as strong evidence that the same core architectures and scaling techniques that have worked so well in other contexts will succeed in robotics as well.

The work being done at Google DeepMind Robotics is pushing the boundaries of what's possible, investment in a new generation of robotics startups is heating up, and the pace of progress shows no signs of slowing down.

As always, if you're finding value in the show, please take a moment to share it with friends. This one would be perfect for anyone who has ever day-dreamed of having a robot that could fold their laundry or pick up their kids' toys.

And especially as we are just building the new feed, a review on Apple Podcasts or Spotify, or a comment on YouTube, would be much appreciated.

Now, here's my conversation with Keerthana Gopalakrishnan and Ted Xiao of Google DeepMind Robotics.
