AmbiguousSpace - Overview

3D Perception and Map Building in Unstructured, Highly Ambiguous Environments


Most work in mobile robotics focuses on robots moving in the plane. There are good reasons for this: building floors are planar, wheeled robots cannot move vertically, and the most popular sensor today scans a planar slice of the environment with a laser beam. While a 2D approach may be adequate for indoor service robotics, some worthwhile applications require both 3D motion and 3D perception: extraterrestrial, underwater, mine, or cave exploration, and searching for earthquake victims in the rubble pile of a collapsed building. Here the environment is highly unstructured, requiring the robot to crawl through a three-dimensional tunnel of rubble.

This project aims at using a walking robot to create a 3D representation of such environments and to aggregate this representation into a map. Computer vision will be used to perceive the environment, combined with proprioceptive information about the robot's movement. While vision makes it possible to perceive 3D environmental features, the project's main focus is on integrating these features into a consistent spatial representation by modeling their uncertainty statistically.

Recent progress both in the design of walking robots and in the area of simultaneous localization and mapping (SLAM) makes it possible to address this problem now. The proposed 22-month project will serve as a first study, making this kind of representation available to projects in the second SFB phase. We propose this project as an "additional activity" in order to seize an unforeseen opportunity that has emerged from the recent breakthrough of efficient SLAM algorithms and from Kirchner's entry into the SFB/TR 8 (A6-[ReactiveSpace]). It addresses what can be achieved by combining the 3D spatial representations also pursued in A2-[ThreeDSpace] with a walking robot that can actually operate in a 3D environment. For SFB/TR 8, the opportunity is to complete its representation portfolio with a "missing link" between 3D perception and action.
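The core idea of integrating proprioceptive and visual information by statistical uncertainty modeling can be sketched in one dimension: an odometry-based prediction and a vision-based observation, each treated as a Gaussian estimate, are fused with weights given by their variances, as in a Kalman filter update. This is only an illustrative toy; the function name and all numbers are hypothetical, and a real SLAM system maintains a full 3D state with correlated landmark estimates.

```python
def fuse(pred_mean, pred_var, meas_mean, meas_var):
    """Combine two Gaussian estimates of the same quantity.

    The estimate with the smaller variance receives the larger
    weight; the fused variance is smaller than either input,
    reflecting the information gained by combining both sources.
    """
    k = pred_var / (pred_var + meas_var)   # Kalman gain
    mean = pred_mean + k * (meas_mean - pred_mean)
    var = (1.0 - k) * pred_var
    return mean, var

# Hypothetical example: odometry predicts the robot at x = 2.0 m
# (variance 0.5); a visual observation of a known landmark implies
# x = 2.4 m (variance 0.25). The fused estimate lies closer to the
# more certain visual measurement.
mean, var = fuse(2.0, 0.5, 2.4, 0.25)
```

The same weighting principle, generalized to multivariate Gaussians over robot pose and landmark positions, underlies the SLAM algorithms the project builds on.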
As an additional benefit, the explored techniques could be employed to visually localize devices such as a PDA inside a building, a prerequisite for intelligent-building applications as envisioned by SFB/TR 8.