A5-[ActionSpace] - Overview

Spatial Exploration Based on an Integrated Representation of Sensory Features and Motor Actions


Humans are fast, precise, and efficient explorers of their spatial environment. Exploration and classification take place in an ongoing action-perception cycle based on the continuous interplay of sensory information processing and goal-directed motor actions. However, despite the generally accepted importance of motor actions for perception, almost all representational models of the environment are based exclusively on static spatial descriptions, such as spatial schemata, maps (e.g. grid-based or geometric approaches), route graphs, or qualitative spatial representations (e.g. topological representations). We question this conceptual separation between (dynamic) actions and (static) spatial representations and postulate an inherently sensorimotor representation that comprises sensory as well as motor aspects, often in an inseparable fashion.

Our goal is the development of a hierarchical sensorimotor representation and its use in a biologically inspired system for exploratory self-localization and spatial navigation. The sensorimotor representation is motivated by the human action-perception cycle with its interplay of sensory information processing and goal-directed motor actions. This concept is investigated by (i) theoretical research, (ii) psychological experiments, and (iii) modeling approaches. The latter will be implemented in a mobile agent operating in a VR environment and in a mobile robot.
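The action-perception cycle described above can be illustrated as a simple loop in which sensing and goal-directed acting alternate, with each sensory-motor pair entering the agent's internal representation. The following is a minimal Python sketch under our own assumptions; all class, method, and variable names are illustrative and not part of the project.

```python
# Hypothetical sketch of an action-perception cycle: the agent alternately
# senses its environment and selects a goal-directed motor action from the
# current observation, storing each (observation, action) pair in an
# internal sensorimotor representation. All names are illustrative.

class ActionPerceptionAgent:
    def __init__(self):
        # Internal sensorimotor representation: a history of
        # (sensory observation, motor action) pairs.
        self.representation = []

    def sense(self, environment):
        """Read the current sensory features from the environment."""
        return environment.observe()

    def act(self, observation):
        """Toy policy: move toward the strongest sensory feature."""
        action = max(observation, key=observation.get)
        self.representation.append((observation, action))
        return action


class ToyEnvironment:
    """Minimal environment returning feature strength per direction."""
    def __init__(self):
        self.features = {"north": 0.2, "east": 0.9, "south": 0.1}

    def observe(self):
        return dict(self.features)


agent = ActionPerceptionAgent()
env = ToyEnvironment()
for _ in range(3):          # a few iterations of the cycle
    obs = agent.sense(env)
    action = agent.act(obs)

print(action)                     # -> east (strongest feature)
print(len(agent.representation))  # -> 3 stored sensorimotor pairs
```

The point of the sketch is that perception and action are not separate modules with a one-way data flow: what is stored is the coupled pair, which is the sense in which the representation is sensorimotor rather than purely sensory.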

In our theoretical research we will develop data structures for spatial scene representation that are suited to integrating sensory features and motor actions. We will investigate different levels of sensorimotor representation, from low-level to cognitive and experience-based levels. These concepts will be compared with and related to known spatial representation schemes, including conceptions of mental spatial representation.

In pilot studies with human subjects, we will investigate performance and typical errors in navigational tasks with respect to their dependence on motor actions and the sensorimotor coherence of the input. The results will provide first qualitative information about the role of motor actions in human representation and exploration of spatial configurations.

In the system implementation, we will investigate the integration of the sensorimotor representation with a top-down, knowledge-based strategy for efficient exploratory reasoning. This architecture will be implemented in a simulated spatial exploration system, in a robot head, and in the mobile robot used in A6-[ReactiveSpace]. These systems will be tested with respect to the resulting behavior and recognition capabilities.

The theoretical, behavioral, and system approaches will be pursued in continuous interaction: the empirical results will drive and modify the theoretical work, and vice versa. In the long term, the sensorimotor representation will be extended to include multi-sensory information (auditory, somatosensory, and proprioceptive) and to integrate complex high-level programs for motor actions.
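One natural form for a data structure that integrates sensory features and motor actions is a graph whose nodes hold sensory descriptions of places and whose edges hold the motor actions that connect them, so that neither half can be read off without the other. The sketch below is a hypothetical illustration under that assumption; the identifiers are ours, not the project's.

```python
# Hypothetical sketch of a sensorimotor data structure: a directed graph
# whose nodes store sensory feature descriptions of places and whose
# edges store the motor actions linking them. All names are illustrative.

class SensorimotorGraph:
    def __init__(self):
        self.nodes = {}   # node id -> sensory feature dict
        self.edges = {}   # (src id, dst id) -> motor action label

    def add_place(self, node_id, features):
        """Store the sensory description of a place."""
        self.nodes[node_id] = features

    def add_action(self, src, dst, action):
        """Link two places by the motor action leading from src to dst."""
        self.edges[(src, dst)] = action

    def actions_from(self, node_id):
        """Motor actions available at a place: the motor half of the
        representation is read out relative to a sensory node."""
        return {dst: a for (s, dst), a in self.edges.items() if s == node_id}


g = SensorimotorGraph()
g.add_place("door", {"color": "red", "width": 0.9})
g.add_place("hall", {"color": "gray", "width": 3.0})
g.add_action("door", "hall", "move_forward")

print(g.actions_from("door"))  # -> {'hall': 'move_forward'}
```

Unlike a purely static map, a route through such a structure is recovered as a sequence of motor actions rather than a sequence of coordinates, which is one way to make the representation hierarchical: higher levels can chunk action sequences into composite actions.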