Harris, L. R., Jenkin, M., and Zikovitz, D. C., Vestibular capture of the perceived distance of passive linear self motion. Arch. Ital. Biol., 138: 63-72, 2000.
The relative roles of visual and vestibular cues in determining the perceived distance of passive, linear self motion were assessed. Seventeen subjects were given cues to constant-acceleration motion: either optic flow, physical motion in the dark, or combinations of visual and physical motion. Subjects indicated when they perceived they had traversed a distance that had been previously indicated either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a visual target but was perceptually equivalent to a shorter physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was perceptually equivalent to the physical motion experienced and not the simultaneous visual motion, even when the target was presented visually. We describe this dominance of the physical cues in determining the perceived distance of self motion as "vestibular capture".
Dudek, G., and Jenkin, M., Computational Principles of Mobile Robotics, Cambridge University Press, 2000.
This is a textbook for advanced undergraduate and graduate students in the field of mobile robotics. Emphasising computation and algorithms, the authors address a range of strategies for enabling robots to perform tasks that involve motion and behavior. The book is divided into three major sections: locomotion, sensing, and reasoning. It concentrates on wheeled and legged mobile robots, but discusses a variety of other propulsion systems. Kinematic models are developed for many of the more common locomotive strategies. It presents algorithms for both visual and nonvisual sensor technologies, including sonar, vision, and laser scanners. In the section on reasoning, the authors offer a thorough examination of planning and the issues related to spatial representation. They emphasize the problems of navigation, pose estimation, and autonomous exploration. The book is a comprehensive treatment of the field, offering a discussion of state-of-the-art methods with illustrations of key technologies.
Harris, L. R., Jenkin, M. and Zikovitz, D. C., Visual and non-visual cues in the perception of linear self motion, Experimental Brain Research, 135: 12-21. Copyright Experimental Brain Research.
Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had been previously given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self motion when both visual and physical cues were present was more closely perceptually equivalent to the physical motion experienced rather than the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self motion.
Jenkin, M. and Dudek, G., The Paparazzi Problem, Proc. IEEE/RSJ IROS 2000, Takamatsu, Japan. Copyright IEEE.
Multiple mobile robots, or robot collectives, have been proposed as solutions to various tasks in which distributed sensing and action are required. Here we consider applying a collective of robots to the paparazzi problem: the problem of providing sensor coverage of a target robot. We demonstrate how the computational task of the collective can be formulated as a global energy minimization task over the entire collective and show how individual members of the collective can solve the task in a distributed fashion so that the entire collective meets its goal. This result is then extended to consider unbounded communication delays between members and complete failure of individual members of the collective.
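The abstract's formulation — a global energy over the whole collective, minimized by each member acting on only its own share of the gradient — can be illustrated with a small sketch. This is our own toy energy, not the paper's: each robot holds a standoff distance R from the target while a pairwise repulsion term spreads the members out for coverage.

```python
import math
import random

# Hypothetical sketch (the energy terms and constants are ours, not the
# paper's): the collective minimizes
#   E = sum_i (|p_i - target| - R)^2  +  sum_{i<j} W / |p_i - p_j|^2
# and each robot descends only the gradient of its own terms.

R = 5.0      # desired standoff distance from the target
W = 10.0     # pairwise repulsion weight (spreads the collective out)
STEP = 0.05  # gradient-descent step size

def local_gradient(i, robots, target):
    """Gradient of robot i's share of the global energy w.r.t. its position."""
    xi, yi = robots[i]
    dx, dy = xi - target[0], yi - target[1]
    dist = math.hypot(dx, dy) or 1e-9
    # standoff term: d/dp (dist - R)^2 = 2*(dist - R) * p_hat
    gx = 2.0 * (dist - R) * dx / dist
    gy = 2.0 * (dist - R) * dy / dist
    # repulsion from the other members (needs only neighbors' positions)
    for j, (xj, yj) in enumerate(robots):
        if j == i:
            continue
        rx, ry = xi - xj, yi - yj
        d2 = (rx * rx + ry * ry) or 1e-9
        gx += -2.0 * W * rx / (d2 * d2)
        gy += -2.0 * W * ry / (d2 * d2)
    # clip the gradient norm so early close encounters stay stable
    g = math.hypot(gx, gy)
    if g > 1.0:
        gx, gy = gx / g, gy / g
    return gx, gy

def step_collective(robots, target):
    """One synchronous round: every robot moves against its local gradient."""
    grads = [local_gradient(i, robots, target) for i in range(len(robots))]
    return [(x - STEP * gx, y - STEP * gy)
            for (x, y), (gx, gy) in zip(robots, grads)]

random.seed(0)
target = (0.0, 0.0)
robots = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]
for _ in range(500):
    robots = step_collective(robots, target)
# after many rounds each robot settles near the standoff radius R
```

Because each robot's update uses only its own position, the target, and its neighbors, the descent is fully distributed; the paper's extensions (communication delays, member failure) amount to each robot iterating with stale or missing neighbor information.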
Lang, J. and Jenkin, M. R. M., Active object modeling with VIRTUE, Autonomous Robots, 8:141-159, 2000. Copyright Autonomous Robots.
This paper presents a vision system for the task of actively acquiring and modeling the geometry of an unknown object. Using an active trinocular stereo head (VIRTUE), sensed 3-D line segments are grouped into a polyhedral volumetric model through the aid of a constrained Delaunay triangulation. Partial models and a viewpoint enumeration scheme are used to guide the image acquisition process and to determine "where to look next". Results of the active vision recovery of a number of objects are provided with their associated volumetric and surface errors.
Allison, R., Harris, L., Jenkin, M., Pintile, G., Redlick, F. and Zikovitz, D. C., First steps with a rideable computer, Proc. 2nd IEEE Int. Conf. on Virtual Reality, 2000. Copyright IEEE.
Although technologies such as head-mounted displays and CAVEs can be used to provide large immersive visual displays within small physical spaces, it is difficult to provide virtual environments which are as large physically as they are visually. A fundamental problem is that tracking technologies which work well in a small enclosed environment do not function well over longer distances. Here we describe Trike -- a 'rideable' computer system which can be used to generate and explore large virtual spaces both visually and physically. This paper describes the hardware and software components of the system and a set of experiments which have been performed to investigate how the different perceptual cues that can be provided with the Trike interact within an immersive environment.