Two cobots using autonomous evaluation rollouts from finetuned LBMs to perform long-horizon behaviors, like installing a bike rotor. | Source: Toyota Research Institute

Toyota Research Institute (TRI) this week released the results of its study on Large Behavior Models (LBMs) that can be used to train general-purpose robots.
The study showed a single LBM can learn hundreds of tasks and use prior knowledge to acquire new skills with 80% less training data. LBMs are pretrained on large, diverse manipulation datasets.
Despite their growing popularity, the robotics community knows surprisingly little about the nuances of what LBMs actually offer. TRI’s work aims to shed light on recent progress in algorithm and dataset design with this study.

In all, TRI said its findings largely support the recent surge in popularity of LBM-style robot foundation models, adding to evidence that large-scale pretraining on diverse robot data is a viable path towards more capable robots, though with a few points of caution.

General-purpose robots promise a future where household robots can provide everyday assistance. However, we’re not at the point where any robot can tackle average household tasks. LBMs, or embodied AI systems that take in robot sensor data and output actions, could change that, TRI said.

In 2024, TRI won an RBR50 Robotics Innovation Award for its work building LBMs for fast robot teaching.

An overview of TRI’s findings

TRI trained a series of diffusion-based LBMs on almost 1,700 hours of robot data and conducted 1,800 real-world evaluation rollouts and over 47,000 simulation rollouts to rigorously study their capabilities.
It found that LBMs:

- Deliver consistent performance improvements relative to from-scratch policies
- Enable new tasks to be learned with 3-5× less data in challenging settings requiring robustness to a variety of environmental factors
- Improve steadily as pretraining data increases

Even with just a few hundred diverse hours of data, and only a few hundred demos per behavior, performance jumped meaningfully, TRI said. Pretraining provides consistent performance uplifts at earlier than expected scales; there is not yet an internet worth of robot data, but benefits appear far before that scale — a promising sign for enabling virtuous cycles of data acquisition and bootstrapped performance, TRI claimed.

TRI’s evaluation suite includes several novel and highly challenging long-horizon real-world tasks. When models are finetuned and evaluated in this setting, LBM pretraining improves performance despite these behaviors being highly distinct from the pretraining tasks.

Inside the architecture and data of TRI’s LBMs

The LBM architecture is instantiated as a diffusion transformer which predicts robot actions. | Source: Toyota Research Institute

TRI’s LBMs are scaled multitask diffusion policies with multimodal ViT vision-language encoders and a transformer denoising head conditioned on encoded observations via AdaLN.
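To make that conditioning scheme concrete, here is a minimal pure-Python sketch of adaptive layer norm (AdaLN), in which an encoded observation is mapped to a per-channel scale and shift that modulate the normalized features of a denoising block. All names, shapes, and weights below are illustrative assumptions, not TRI’s implementation.

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a feature vector to zero mean and unit variance."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def adaln(x, cond, w_scale, w_shift):
    """AdaLN: map the conditioning vector `cond` (e.g., encoded camera
    images, proprioception, and language) through two linear maps to a
    per-channel scale and shift applied to the normalized features `x`."""
    h = layer_norm(x)
    scale = [sum(w * c for w, c in zip(row, cond)) for row in w_scale]
    shift = [sum(w * c for w, c in zip(row, cond)) for row in w_shift]
    # (1 + scale) keeps the block near identity when weights start at zero
    return [(1.0 + s) * v + b for v, s, b in zip(h, scale, shift)]
```

With zero-initialized conditioning weights the block reduces to plain layer norm, a common starting point for conditioned diffusion transformers.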
These models consume wrist and scene cameras, robot proprioception, and language prompts, and predict action chunks of 16 timesteps (1.6 seconds).

The researchers trained the LBMs on a mixture of 468 hours of internally collected bimanual robot teleoperation data, 45 hours of simulation-collected teleoperation data, 32 hours of Universal Manipulation Interface (UMI) data, and roughly 1,150 hours of internet data curated from the Open X-Embodiment dataset.

While the proportion of simulation data is small, its inclusion in TRI’s pretraining mixture ensures that it can evaluate the same LBM checkpoint in both sim and real.

TRI’s evaluation methods

TRI evaluates its LBM models on a bimanual platform across a variety of tasks and environmental conditions in both simulation and the real world.
| Source: Toyota Research Institute

TRI evaluates its LBMs on physical and Drake-simulated bimanual stations employing Franka Panda FR3 arms and up to six cameras — up to two on each wrist, and two static scene cameras.

It evaluates the models on both seen tasks (present in the pretraining data) and unseen tasks (which TRI uses to fine-tune its pretrained model).
TRI’s evaluation suite consists of 16 simulated seen-during-pretraining tasks, 3 real-world seen-during-pretraining tasks, 5 previously unseen long-horizon simulated tasks, and 5 complex previously unseen long-horizon real-world tasks. Each model was tested via 50 rollouts for each real-world task and 200 rollouts for each simulation task. This enables a high level of statistical rigor in the analysis, with the pretrained models evaluated on 4,200 rollouts across 29 tasks.

TRI said it carefully controls initial conditions to be consistent in both the real world and simulation.
It also conducts blind A/B-style testing in the real world, with statistical significance computed via a sequential hypothesis testing framework.

Many of the effects the researchers observed were only measurable with larger-than-standard sample sizes and careful statistical testing that is non-standard for empirical robotics.
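The need for large sample sizes is easy to see from first principles. The sketch below uses a plain normal-approximation confidence interval, not TRI’s sequential hypothesis testing framework, and `success_rate_ci` is a hypothetical helper; it shows how the uncertainty in a measured success rate shrinks as rollouts increase.

```python
import math

def success_rate_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a task success
    rate estimated from n Bernoulli (success/failure) rollouts."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error shrinks as 1/sqrt(n)
    return p - z * se, p + z * se

# A 6/10 result is compatible with a wide range of true success rates,
# while 120/200 pins the same 60% point estimate down much more tightly.
lo10, hi10 = success_rate_ci(6, 10)
lo200, hi200 = success_rate_ci(120, 200)
```

A modest true difference between two policies can be swamped by the width of the 10-rollout interval, which is why blind A/B comparisons need hundreds of trials and careful statistical testing.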
It’s easy for noise due to experimental variation to dwarf the effects being measured, and many robotics papers may be measuring statistical noise due to insufficient statistical power.

TRI’s top takeaways from the research

One of the team’s main takeaways is that finetuned performance smoothly improves with increasing pretraining data. At the data scales it examined, TRI saw no evidence of performance discontinuities or sharp inflection points; AI scaling appears alive and well in robotics.

TRI did experience mixed results with non-finetuned pretrained LBMs, however. Encouragingly, it found that a single network is able to learn many tasks simultaneously, but it did not observe consistent outperformance of from-scratch single-task training without fine-tuning.
TRI expects this is partially due to the language steerability of its model. In internal testing, TRI said it has seen some promising early signs that larger vision-language-action (VLA) prototypes overcome some of this difficulty, but more work is required to rigorously examine this effect in higher-language-capacity models.

When it comes to points of caution, TRI said subtle design choices like data normalization can have large effects on performance, often dominating architectural or algorithmic changes.
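As a concrete, hypothetical illustration of such a design choice, consider two common ways to normalize action targets before training; switching between them changes the scale statistics the model sees, which can matter as much as an architecture tweak. This sketch is not TRI’s pipeline.

```python
def zscore(xs):
    """Zero-mean, unit-variance normalization (assumes non-constant input)."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

def minmax(xs):
    """Scale to [-1, 1] using the observed range; sensitive to outliers."""
    lo, hi = min(xs), max(xs)
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in xs]
```

Which choice is better depends on the action distribution; the point is that the two produce differently scaled targets from the same data, so the effect of such a switch must be isolated before crediting an architectural change.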
It’s important that these design choices are carefully isolated to avoid conflating the source of performance changes.

The post TRI: pretrained large behavior models accelerate robot learning appeared first on The Robot Report.