A8-[HumanoidSpace] - Details

Humanoid Robot Navigation in Complex Indoor Environments

So far, the problem of autonomous navigation with humanoid robots has mostly been studied in rather simplified environments. In this project, we deploy a humanoid robot in a complex indoor environment containing different ground surfaces, rooms on multiple levels connected by staircases and ramps, as well as objects that have to be manipulated while executing navigation tasks.

Most existing techniques for environment modeling have no means to represent movable and articulated objects or the ways in which they can be manipulated. In our project, we consider the problem of perceiving, representing, and interacting with such objects. We use a mobile humanoid robot equipped with a laser scanner and a camera to perceive the environment. We apply a method to learn kinematic models of articulated objects and develop an approach to learn grasp points for movable objects. These object models are then used to compute actions that allow the humanoid to efficiently manipulate the objects.
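To illustrate the idea of learning a kinematic model of an articulated object, consider a minimal sketch, not the project's actual method: given observed 2D positions of an object part (e.g. a door handle tracked over several openings), we fit both a prismatic (linear) model and a revolute (circular) model and keep the one with the lower residual. The 2D setting, the two joint types, and the least-squares fits are illustrative assumptions.

```python
import numpy as np

def fit_prismatic(points):
    """Fit a line (prismatic joint); return mean squared distance to the line."""
    centered = points - points.mean(axis=0)
    # Principal direction via SVD; residual is perpendicular distance to it.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    proj = np.outer(centered @ direction, direction)
    return float(np.mean(np.sum((centered - proj) ** 2, axis=1)))

def fit_revolute(points):
    """Algebraic (Kasa) circle fit; return mean squared radial residual."""
    x, y = points[:, 0], points[:, 1]
    # x^2 + y^2 = 2*cx*x + 2*cy*y + c  is linear in (cx, cy, c).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    dists = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    return float(np.mean((dists - r) ** 2))

def classify_joint(points):
    """Pick the joint model whose fit explains the observations better."""
    return "prismatic" if fit_prismatic(points) <= fit_revolute(points) else "revolute"
```

For points sampled on a circular arc (a swinging door) the revolute model wins; for points on a line segment (a sliding drawer) the prismatic model wins. A real system would additionally estimate the joint axis and range of motion from the fitted parameters.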

During path planning, we reason about manipulation actions such as opening doors or carrying objects that block the robot's way to areas where they no longer interfere with navigation. In this way, the humanoid is enabled to successfully perform navigation tasks in a complex environment.
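One way to realize such planning is to run a graph search in which manipulation actions appear as edges with their own costs, so the planner trades off walking a detour against, say, opening a door. The sketch below is a minimal illustration under assumed costs and an invented toy map (the node names, actions, and cost values are not from the project):

```python
import heapq

# Toy environment graph (illustrative): each edge is (neighbor, action, cost).
# Walking a free edge is cheap; opening a door or carrying a blocking object
# away costs extra effort.
GRAPH = {
    "hall":      [("door_room", "open_door", 4.0), ("corridor", "walk", 1.0)],
    "corridor":  [("hall", "walk", 1.0), ("storage", "move_object", 6.0)],
    "door_room": [("hall", "open_door", 4.0), ("goal", "walk", 1.0)],
    "storage":   [("corridor", "move_object", 6.0), ("goal", "walk", 8.0)],
}

def plan(start, goal):
    """Dijkstra over combined navigation/manipulation actions.

    Returns (total cost, list of actions along the cheapest plan).
    """
    frontier = [(0.0, start, [])]
    best = {}
    while frontier:
        cost, node, actions = heapq.heappop(frontier)
        if node == goal:
            return cost, actions
        if best.get(node, float("inf")) <= cost:
            continue  # already expanded more cheaply
        best[node] = cost
        for nbr, action, c in GRAPH.get(node, []):
            heapq.heappush(frontier, (cost + c, nbr, actions + [action]))
    return float("inf"), []
```

On this toy map, opening the door (cost 4 + 1) beats carrying the object out of the storage route (cost 1 + 6 + 8), so the planner's cheapest plan includes the "open_door" action.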

Finally, we look into decision-theoretic methods to reason about which perception actions the robot should carry out in order to reduce the uncertainty about its own state or about parts of the environment. This is of utmost importance for achieving robustness under the large uncertainties in sensing and motion execution, especially while carrying objects. The developed techniques are then used to plan efficient navigation actions for the humanoid robot in complex 3D environments containing various objects, and to execute them robustly.
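A common decision-theoretic formulation of such perception-action selection is expected information gain: for each candidate sensing action, compute the expected entropy of the posterior belief and subtract the action's cost, then execute the highest-scoring action. The sketch below illustrates this for a discrete belief (e.g. door open vs. closed); the observation models, accuracies, and costs are assumed for illustration and are not the project's actual values.

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def expected_posterior_entropy(belief, obs_model):
    """obs_model[s][o] = P(observation o | state s).

    Averages the posterior entropy over all possible observations.
    """
    states = range(len(belief))
    h = 0.0
    for o in range(len(obs_model[0])):
        p_o = sum(belief[s] * obs_model[s][o] for s in states)
        if p_o == 0:
            continue
        posterior = [belief[s] * obs_model[s][o] / p_o for s in states]
        h += p_o * entropy(posterior)
    return h

def choose_action(belief, actions):
    """Pick the sensing action maximizing information gain minus cost.

    actions: {name: (obs_model, cost)} -- illustrative structure.
    """
    h0 = entropy(belief)
    def score(item):
        _name, (obs_model, cost) = item
        return (h0 - expected_posterior_entropy(belief, obs_model)) - cost
    return max(actions.items(), key=score)[0]
```

With a uniform belief over two door states, a more accurate but slightly costlier sensing action (e.g. walking closer for a better view) is preferred over a cheap, noisy glance whenever its expected entropy reduction outweighs the extra cost.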