New Test in the Robotics Syllabi

The ThreeDWorld (TDW) Transport Challenge is set to transform the future of robotics

Whilst most envision humanoid robots the way Hollywood sci-fi movies portray them, truth be told, we’re still quite far away from something as proficient as Sonny in I, Robot. As VentureBeat reports, “this is because many of our intuitive planning and motor skills — things we take for granted — are a lot more complicated than we think. Navigating unknown areas, finding and picking up objects, choosing routes, and planning tasks are complicated feats we only appreciate when we try to turn them into computer programs.”

Hence, developing robots that can sense their surroundings and interact with their environment (the realm of embodied artificial intelligence) remains a far cry from human capabilities. Yet the strides currently being made on this front are nothing short of remarkable.

In a recent development in the field of embodied AI, scientists from MIT, Stanford University and IBM have devised a novel challenge for AI agents, testing their abilities to “find paths, interact with objects, and plan tasks efficiently.” The ThreeDWorld (TDW) Transport Challenge establishes a virtual environment for such tests to be carried out, and will be presented at the upcoming Embodied AI Workshop during the prestigious Conference on Computer Vision and Pattern Recognition (CVPR) in June 2021. Although no current AI techniques are known to come even close to solving the TDW challenge, it is set to open up a new paradigm for embodied AI and robotics research moving forward.

Figure 1: The TDW-Transport Challenge; Source: TechTalks

The TDW-Transport Challenge

At the heart of the TDW-Transport Challenge are several crucial aspects that need to be addressed:

  • Reinforcement Learning (RL) in virtual environments: Usually, RL agents begin with no prior knowledge of their surroundings, learning from a series of actions through positive feedback and rewards. This lies at the core of most robotics applications, and the challenges it poses need to be met head on. One challenge, VentureBeat opines, involves “designing the right set of states, rewards, and actions, which can be very difficult in applications like robotics, where agents face a continuous environment that is affected by complicated factors such as gravity, wind, and physical interactions with other objects.”

Another major challenge involves accruing millions of episodes of training data from real-world sources for robots to learn from. To combat this, researchers have created simulated environments that the self-driving car, gaming and robotics industries can use as part of their training regimes, such as the one proposed in the TDW Transport Challenge, which the authors describe as “a general-purpose virtual world simulation platform supporting both near-photorealistic image rendering, physically based sound rendering, and realistic physical interactions between objects and agents.” Even so, simulation remains a major challenge in its own right, since the exact dynamics of the physical world are difficult to capture.
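The states-actions-rewards loop described above can be sketched with a minimal tabular Q-learning example on a toy one-dimensional "corridor" task. This is only an illustration of the RL mechanics, under entirely made-up names and a made-up environment; the TDW challenge itself involves a rich 3D world far beyond anything tabular methods can handle.

```python
import random

# Toy setup: corridor cells 0..5, goal in cell 5, reward only at the goal.
N_STATES = 6
ACTIONS = (-1, +1)     # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment transition: clamp to the corridor, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(q, state, rng):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(state, a)] == best])

def train(episodes=500, max_steps=200, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # epsilon-greedy: mostly exploit the value table, occasionally explore
            a = rng.choice(ACTIONS) if rng.random() < EPSILON else greedy(q, s, rng)
            nxt, reward, done = step(s, a)
            target = reward + (0.0 if done else GAMMA * max(q[(nxt, b)] for b in ACTIONS))
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = nxt
            if done:
                break
    return q

def rollout(q, max_steps=20, seed=1):
    """Follow the learned greedy policy from the start cell."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(max_steps):
        s, _, done = step(s, greedy(q, s, rng))
        path.append(s)
        if done:
            break
    return path
```

After training, the greedy policy walks straight from cell 0 to the goal; the difficulty VentureBeat describes comes from scaling this same loop to continuous, physics-driven state and action spaces.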

  • Task and Motion Planning (TAMP): Agents taking on the TDW Transport Challenge must not only chart optimal paths, but also face the complex task of changing the state of objects to achieve their goal. VentureBeat adds: “The challenge takes place in a multi-roomed house adorned with furniture, objects, and containers. The reinforcement learning agent views the environment from a first-person perspective and must find one or several objects from the rooms and gather them at a specified destination. The agent is a two-armed robot, so it can only carry two objects at a time. Alternatively, it can use a container to carry several objects and reduce the number of trips it has to make.

At every step, the RL agent can choose one of several actions, such as turning, moving forward, or picking up an object. The agent receives a reward if it accomplishes the transfer task within a limited number of steps.”
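The trade-off between carrying objects by hand and using a container comes down to simple trip arithmetic, sketched below. The capacity parameter is hypothetical; the real challenge scores efficiency within a step budget, and container use involves explicit pick-up and put-in actions.

```python
import math

def trips_without_container(n_objects, hands=2):
    """A two-armed agent moves at most `hands` objects per trip."""
    return math.ceil(n_objects / hands)

def trips_with_container(n_objects, capacity):
    """With a container of the given (hypothetical) capacity,
    each trip moves a full container load."""
    return math.ceil(n_objects / capacity)
```

For example, moving ten objects bare-handed takes `trips_without_container(10)` = 5 trips, while a container holding five objects cuts this to `trips_with_container(10, 5)` = 2, which is why the planning problem rewards finding and using containers.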

  • Abstracting challenges: Certain aspects of the TDW simulated environment are, as expected, abstracted: the number of degrees of freedom in movement (the number of limbs of a humanoid robot, for example), perception of the environment through RGB frames, segmentation maps and depth maps, and the simplification of the state and action spaces by limiting navigation and movements.
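The abstracted interface described above can be pictured as a discrete action set plus a per-frame observation bundle. The names below are illustrative stand-ins, not TDW's actual API, and the frames are tiny placeholders showing only the structure.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """Discretised action set (illustrative names, not TDW's real API)."""
    MOVE_FORWARD = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    PICK_UP = auto()
    PUT_IN_CONTAINER = auto()
    DROP = auto()

@dataclass
class Observation:
    """One first-person frame: the agent perceives the scene through
    rendered images rather than ground-truth world state."""
    rgb: list           # H x W x 3 colour values
    depth: list         # H x W per-pixel distances
    segmentation: list  # H x W object-id labels

def blank_frame(h=2, w=2):
    """Tiny placeholder frame illustrating the structure (real frames are larger)."""
    return Observation(
        rgb=[[[0, 0, 0] for _ in range(w)] for _ in range(h)],
        depth=[[0.0] * w for _ in range(h)],
        segmentation=[[0] * w for _ in range(h)],
    )
```

Limiting the agent to a handful of symbolic actions and image-like observations is what makes the task tractable for current methods while keeping the perception problem realistic.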

According to Chuang Gan, Principal Research Staff Member at the MIT-IBM Watson AI Lab, several challenges remain in spite of these abstractions, such as:

“(i) the synergy between navigation and interaction: The agent cannot move to grasp an object if this object is not in the egocentric view, or if the direct path to it is obstructed;

(ii) physics-aware interaction: Grasping might fail if the agent’s arm cannot reach an object and,

(iii) physics-aware navigation: Collision with obstacles might cause objects to be dropped and significantly impede transport efficiency.”

Figure 2: The TDW-Transport Challenge first-person view; Source:TechTalks

These challenges, coupled with the finding that pure RL-based models are not enough and that “hybrid AI models, where a reinforcement learning agent was combined with a rule-based high-level planner” fare better, all suggest that the TDW-Transport Challenge will prove a seminal step forward for research surrounding embodied AI and assistive robotics.
