Cassie Learns to Walk

Reinforcement learning enables a pair of robotic legs to learn to walk the human way: through trial and error

A team of scientists at the University of California, Berkeley has developed a pair of robotic legs that has been taught to walk using reinforcement learning, the same technique used to train AI systems to perform complex behaviours through repeated trial and error.
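In broad strokes, trial-and-error learning means running an episode, measuring the reward the current behaviour earns, nudging the behaviour, and keeping the nudges that score better. The toy Python sketch below illustrates that loop with a made-up one-dimensional balancing task and simple hill climbing; it is only a schematic of the idea, not the Berkeley team's method, which trains a far more capable policy in a physics simulator.

```python
import random

# Toy stand-in for a physics simulator: the "robot" must keep its state
# near zero. This is NOT Cassie's simulator, just an illustration.
def run_episode(gain, steps=200):
    state, velocity, reward = 0.5, 0.0, 0.0
    for _ in range(steps):
        action = -gain * state            # linear policy: push back toward zero
        velocity += 0.1 * state + action  # unstable dynamics plus control input
        state += velocity
        reward += 1.0 - min(abs(state), 1.0)  # reward for staying balanced
        if abs(state) > 2.0:              # "fell over": end the episode early
            break
    return reward

# Trial and error: perturb the policy, keep the change only if it scores better.
gain, best = 0.0, run_episode(0.0)
for _ in range(500):
    candidate = gain + random.gauss(0.0, 0.05)
    score = run_episode(candidate)
    if score > best:
        gain, best = candidate, score

print(f"learned gain={gain:.3f}, best episode reward={best:.1f}")
```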

The two-legged robot is called Cassie and, as of now, it comprises …well… just two legs and nothing else! Yet that pair of legs can now adroitly perform a wide range of locomotive movements learned from scratch, including walking in a crouched posture and walking while carrying an unexpected load.

Various movements of Cassie in the real world across different scenarios
Image courtesy: https://arxiv.org/pdf/2103.14295.pdf

The seven-member development team has released a paper titled “Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots” describing the innovation in full. Teaching Cassie to walk on its human-sized pair of legs all by itself is a huge achievement in robotics. The approach promises to handle diverse kinds of terrain and to recover whenever the robot stumbles or misaligns itself. However, Zhongyu Li, the paper’s first author, told the press that “…we still have a long way to go to have humanoid robots reliably operate and live in human environments.”

Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. Speaking to MIT Technology Review, Chelsea Finn, an AI and robotics researcher at Stanford University who was not involved in the work, said: “Many of the videos that you see of virtual agents are not at all realistic.” The challenges faced by the Berkeley team were manifold: minor differences between the simulated physics inside a virtual environment and the real physical environment outside can put a self-learning robot completely off-track when it tries to apply what it has learned.

Even a tiny difference in factors, such as how friction works between the robot’s feet and the walking surface, can cause a heavy two-legged robot such as Cassie to lose balance and fall. As the paper candidly admits in its abstract: “Developing robust walking controllers for bipedal robots is a challenging endeavor. Traditional model-based locomotion controllers require simplifying assumptions and careful modelling; any small errors can result in unstable control. To address these challenges for bipedal locomotion, we present a model-free reinforcement learning framework for training robust locomotion policies in simulation, which can then be transferred to a real bipedal Cassie robot.”
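A common defence against this sim-to-real gap is to randomize physical parameters such as friction during simulated training, so the learned policy never overfits to one exact physics model that the real floor may not match. The Python sketch below conveys that general idea only; the simulator class, method names, and parameter range are hypothetical stand-ins, not the interface or values used in the paper.

```python
import random

# Hypothetical simulator handle: set_friction() and train_one_episode() are
# stand-ins for whatever API a real physics engine would expose.
class SimStub:
    def set_friction(self, mu):
        self.mu = mu  # ground-contact friction coefficient for this episode

    def train_one_episode(self, policy):
        # ...run the policy in simulation and update it from the reward...
        pass

def train_with_domain_randomization(sim, policy, episodes=10_000,
                                    friction_range=(0.4, 1.2)):
    """Randomize contact friction every episode so the learned policy
    cannot rely on one exact value the real walking surface may not have."""
    for _ in range(episodes):
        sim.set_friction(random.uniform(*friction_range))
        sim.train_one_episode(policy)

train_with_domain_randomization(SimStub(), policy=None)
```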

All said and done, training a large robot through trial and error in real-world situations is a risky affair. To sidestep those risks, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. The learned behaviour was then transferred to a second virtual environment, SimMechanics, which mirrors real-world physics with a high degree of accuracy but at a cost in running speed. Only when Cassie seemed to walk well virtually was the learned walking model loaded into the actual robot.
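Put schematically, the workflow is a staged gate: train cheaply in a fast simulator, re-check the result in a slow but accurate one, and deploy to hardware only on success. The sketch below captures that control flow; every function name here is hypothetical, standing in for the team's actual tooling.

```python
# Staged sim-to-real workflow, sketched with hypothetical function names.
# Stage 1: learn quickly in a fast, less accurate simulator.
# Stage 2: validate in a slow, high-fidelity simulator (the article names
#          SimMechanics). Stage 3: load the policy onto the real robot.

def train_in_fast_sim():
    """Placeholder: reinforcement learning against a motion database."""
    return {"policy": "walking-controller"}

def validate_in_high_fidelity_sim(policy):
    """Placeholder: replay the policy under accurate (but slow) physics."""
    return True  # pretend the simulated robot walked well

def deploy_to_robot(policy):
    print("loading policy onto the real robot:", policy)

policy = train_in_fast_sim()
if validate_in_high_fidelity_sim(policy):
    deploy_to_robot(policy)       # only reached after the virtual walk succeeds
else:
    policy = train_in_fast_sim()  # otherwise, go back and keep training
```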

The results were stunning. As reported in MIT Technology Review, the real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could navigate rough and slippery terrain, carry unexpected loads, and recover after being pushed without notice. Even when Cassie damaged two motors in its right leg during the testing phase, it was able to adjust its movements to compensate. Edward Johns, who heads the Robot Learning Lab at Imperial College London, readily acknowledged that “[t]his is one of the most successful examples I have seen”.

The development team is eager to add more movements to Cassie's repertoire. “To our knowledge, this paper is the first to develop a diverse and robust bipedal locomotion policy that can walk, turn and squat using parameterized reinforcement learning,” they wrote in their conclusion. “An exciting future direction is to explore how more dynamic and agile behaviors can be learned for Cassie, building on the approach presented in this work.”

Anyone interested can access the full paper at: https://arxiv.org/pdf/2103.14295.pdf
