2009

  1. Harris, L. R., Jenkin, M., Dyde, R. T., Jenkin, H., and Zacher, J. E. Assessing the perceptual consequences of non-Earth environments. White Paper, 2009-2010 Decadal Survey on Biological and Physical Sciences in Space, National Research Council/National Academy of Sciences USA, 2009.
    This white paper summarizes the need to measure the perceptual consequences of long-term exposure to reduced gravity environments. Such information is essential for establishing astronaut comfort, ensuring optimal operational performance, and designing appropriate living spaces.
  2. Wang, H., Jenkin, M. and Dymond, P. Graph exploration with robot swarms. Int. J. of Intelligent Computing and Cybernetics, 2: 818-845, 2009.
    Purpose - A simultaneous solution to the localization and mapping problem of a graph-like environment by a swarm of robots requires solutions to task coordination and map merging. Here we examine the performance of two different map merging strategies.
  3. Codd-Downey, R. and Jenkin, M. Crime scene robot and sensor simulation. Proc. VRST 2009, Kyoto, Japan, 2009.
    Virtual reality has been proposed as a training regime for a large number of tasks, from surgery rehearsal (cf. [Robb et al. 1996]) to combat simulation (cf. [U.S. Congress, Office of Technology Assessment 1994]) to assisting in basic design (cf. [Fa et al. 1992]). Virtual reality provides a novel and effective training medium for applications in which training "in the real world" is dangerous or expensive. Here we describe the C2SM simulator system, a virtual reality-based training system that provides an accurate simulation of the CBRNE Crime Scene Modeller system (see [Topol et al. 2008]). The training system simulates both the underlying robot platform and the C2SM sensor suite, and allows training on the system to take place without physically deploying the robot or the chemical and radiological agents that might be present. This paper describes the basic structure of the C2SM simulator and the software components that were used to construct it. A copy of the poster is also available.
  4. Jenkin, H., Zacher, J. E., Dyde, R. T., Harris, L. R. and Jenkin, M. R., How do SCUBA divers know which way is up? The influence of buoyancy on orientation judgements, Proc. VSS, 2009.
    One's perception of the direction of up is influenced by a number of cues including the nature of the visual display (visual cues), the orientation of the body (idiotropic cues), and gravity (gravity cues). Normally these cues exist in close agreement, but in unusual environments, including underwater and outer space, these cues may be placed in conflict and some cues may not be available at all. NASA uses underwater training to give astronauts a sense of reduced gravity cues, as being underwater cancels many body cues to orientation while leaving the otolith-transduced cue unaltered. SCUBA divers report re-orientation illusions when underwater, especially when visual cues to orientation are reduced. Here, we investigate how advanced SCUBA divers integrate visual, idiotropic and gravity cues to orientation. Perception of self-orientation was measured using the Oriented Character Test (OCHART, see Dyde et al., 2006). OCHART requires observers to recognize an oriented character (here the letter 'd') as either a 'p' or a 'd' as it is presented in different orientations. Divers viewed the OCHART probe through an underwater window at approximately 4' depth. Each OCHART session consisted of 672 trials: four different visual backgrounds, 24 different character orientations, and seven repetitions. The influence of the body's orientation was manipulated by having divers assume two different orientations while completing these tasks: (1) right-side down and (2) upright. Observers performed the task both in and out of the water. Divers in a right-side-down orientation showed a reduced reliance on visual cues underwater compared to performance on dry land, revealing a decrease in the visual effect on average. This finding is consistent with results from short-duration parabolic flights where a reduced reliance on visual cues is also found (http://journalofvision.org/6/6/183/).
  5. Dyde, R. T., Jenkin, M. R., Jenkin, H. L., Zacher, J. E. and Harris, L. R., The effect of altered gravity states on the perception of orientation, Exp. Brain Res., 194: 647-660, 2009.
    We measured the effect of the orientation of the visual background on the perceptual upright (PU) under different levels of gravity. Brief periods of micro- and hypergravity conditions were created using two series of parabolic flights. Control measures were taken in the laboratory under normal gravity with subjects upright, right side down and supine. Participants viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. Superimposed on the screen was a character the identity of which depended on its orientation. The orientations at which the character was maximally ambiguous were measured and the perceptual upright was defined as halfway between these orientations. The visual background affected the orientation of the PU less when in microgravity than when upright in normal gravity and more when supine than when upright in normal gravity. A weighted vector sum model was used to quantify the relative influence of the orientations of gravity, vision and the body in determining the perceptual upright.
  6. Jenkin, M. and Harris, L. (Eds.) Cortical Mechanisms of Vision, Cambridge University Press, 2009.
    The advent of sensors capable of localizing portions of the brain involved in specific computations has provided significant insights into normal visual information processing and specific neurological conditions. Aided by devices such as fMRI, researchers are now able to construct highly detailed models of how the brain processes specific patterns of visual information. This book brings together some of the strongest thinkers in this field to explore cortical visual information processing and its underlying mechanisms. It is an excellent resource for vision researchers with both biological and computational backgrounds, and is an essential guide for graduate students just starting out in the field.
  7. German, A. and Jenkin, M. Gait synthesis for legged underwater vehicles. Proc. ICAS 2009, Valencia, Spain, 2009.
    Legged autonomous vehicles move by executing patterns of leg-joint angles known as gaits. Synthesizing gaits by hand is a complex and time-consuming task which becomes even more challenging when the vehicle operates underwater. When operating underwater any motion of the limbs applies forces to the vehicle. Underwater gaits must therefore be constructed to mitigate these unwanted forces while meeting the desired gait properties. This paper presents an automatic gait synthesis system for underwater legged vehicles. The system utilizes a simulated annealing engine coupled with a black-box hydrodynamic vehicle model to synthesize the desired gait. The resulting system is used to synthesize gaits for a simulated version of the AQUA amphibious hexapod, although it is general enough to be applied to other legged vehicles.
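    The core loop described in this abstract is a standard simulated-annealing search over a gait parameter vector. The sketch below illustrates that idea only; the parameter vector, toy cost function, and annealing schedule are invented stand-ins (the paper's actual cost comes from a black-box hydrodynamic model of the AQUA vehicle), not the authors' implementation:

```python
import math
import random

def synthesize_gait(cost, initial, step=0.1, temp=1.0, cooling=0.995,
                    iters=2000, seed=0):
    """Simulated-annealing search over a gait parameter vector.

    `cost` is a black-box function scoring a candidate gait (in the paper,
    a hydrodynamic simulation); lower is better.
    """
    rng = random.Random(seed)
    current = list(initial)
    current_cost = cost(current)
    best, best_cost = list(current), current_cost
    for _ in range(iters):
        # Perturb one randomly chosen joint-angle parameter.
        candidate = list(current)
        i = rng.randrange(len(candidate))
        candidate[i] += rng.gauss(0.0, step)
        c = cost(candidate)
        # Accept improvements always; accept worse moves with
        # Boltzmann probability so the search can escape local minima.
        if c < current_cost or rng.random() < math.exp((current_cost - c) / temp):
            current, current_cost = candidate, c
            if c < best_cost:
                best, best_cost = list(candidate), c
        temp *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy stand-in for the hydrodynamic model: penalize squared deviation
# from a fixed target gait (purely illustrative).
target = [0.5, -0.3, 0.8]
cost = lambda g: sum((gi - ti) ** 2 for gi, ti in zip(g, target))
gait, score = synthesize_gait(cost, [0.0, 0.0, 0.0])
```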
  8. Wang, H., Jenkin, M. and Dymond, P. It can be beneficial to be "lazy" when exploring graph-like worlds with multiple robots. Proc. ACSE 2009, Phuket, Thailand, 2009.
    This paper describes a technique that allows mobile robots to explore an unknown graph-like environment and construct a topological map of it. The robots explore in a "lazy" fashion in which identified "hard" tasks are put off to later steps, taking advantage of the fact that certain tasks often become easier as more of the world is known. Experimental validation shows that multiple robots exploring in a lazy fashion can produce a reduction in exploration effort over multiple robots exploring without prioritizing tasks based on expected effort.
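    The benefit of deferring "hard" tasks can be seen in a toy scheduler: if task costs fall as more of the world becomes known, executing cheapest-first beats executing in encounter order. The task set and effort model below are invented for illustration and are not drawn from the paper:

```python
def lazy_explore(tasks, effort):
    """Execute tasks cheapest-first, re-estimating costs as coverage grows.

    effort(task, known) -> cost of `task` when fraction `known` of the
    world has been explored; deferring a hard task lets `known` grow
    and its cost fall, mirroring the 'lazy' strategy.
    """
    remaining = set(tasks)
    done, total = 0, 0.0
    while remaining:
        known = done / len(tasks)
        t = min(remaining, key=lambda u: effort(u, known))  # cheapest first
        total += effort(t, known)
        remaining.discard(t)
        done += 1
    return total

def eager_explore(tasks, effort):
    """Baseline: execute tasks in the order they are encountered."""
    total = 0.0
    for done, t in enumerate(tasks):
        total += effort(t, done / len(tasks))
    return total

# Toy model: tasks 5..9 cost 10 until half the world is known, then 1.
effort = lambda t, known: 10.0 if (t >= 5 and known < 0.5) else 1.0
tasks = [5, 6, 7, 8, 9, 0, 1, 2, 3, 4]  # eager baseline meets hard tasks first
lazy_total = lazy_explore(tasks, effort)    # 10.0: hard tasks deferred until cheap
eager_total = eager_explore(tasks, effort)  # 55.0: hard tasks done while expensive
```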
  9. Wong, S. W. H. and Jenkin, M. Exploiting collision information in probabilistic roadmap planning. Proc. IEEE Int. Conf. on Mechatronics (ICM), Malaga, Spain, 2009.
    This paper develops a novel approach to combining probabilistic motion planners. Rather than trying to develop a single planner that works over a wide range of environments, we develop a strategy for combining different motion planners within a single framework. Specifically, we examine how planners designed for open spaces and those designed for narrow passages can be integrated within a single planning framework. Information that is normally discarded in the planning process is used to identify regions as being potentially 'narrow' or 'cluttered', and we then apply the planner most suited for that region based on this information. Experimental results demonstrate that our approach outperforms the basic PRM approach as well as a Gaussian sampler designed for narrow regions in three test environments.
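    A minimal sketch of the idea of reusing collision outcomes to flag narrow regions, assuming a 2D point robot among axis-aligned rectangular obstacles; the grid-cell classifier, thresholds, and parameters are illustrative assumptions, not the paper's implementation. The resampling step follows the standard Gaussian-sampler idea of keeping a free sample whose Gaussian-perturbed partner collides:

```python
import random

def is_free(p, walls):
    """Collision check for a point robot among axis-aligned rectangles."""
    x, y = p
    return not any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in walls)

def hybrid_sample(walls, n=500, cell=0.25, sigma=0.05, seed=1):
    """Sample PRM nodes, reusing collision outcomes to flag 'narrow' cells.

    Phase 1: uniform sampling over the unit square, recording per-cell
    free/colliding counts (information a basic PRM would discard).
    Phase 2: cells where most samples collided are treated as narrow or
    cluttered and resampled with a Gaussian sampler.
    """
    rng = random.Random(seed)
    stats = {}  # grid cell -> [free count, colliding count]
    nodes = []
    for _ in range(n):
        p = (rng.random(), rng.random())
        s = stats.setdefault((int(p[0] / cell), int(p[1] / cell)), [0, 0])
        if is_free(p, walls):
            s[0] += 1
            nodes.append(p)
        else:
            s[1] += 1
    for (cx, cy), (free, hit) in stats.items():
        if hit > free:  # crude 'narrow/cluttered' flag for this cell
            for _ in range(20):
                p = (rng.uniform(cx * cell, (cx + 1) * cell),
                     rng.uniform(cy * cell, (cy + 1) * cell))
                q = (p[0] + rng.gauss(0.0, sigma), p[1] + rng.gauss(0.0, sigma))
                # Keep a free point whose perturbed partner collides:
                # such points lie near obstacle boundaries, i.e. in passages.
                if is_free(p, walls) and not is_free(q, walls):
                    nodes.append(p)
    return nodes

# Example: two blocks forming a narrow horizontal passage.
walls = [(0.4, 0.0, 0.6, 0.45), (0.4, 0.55, 0.6, 1.0)]
nodes = hybrid_sample(walls)
```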
  10. Yang, J., Dymond, P. and Jenkin, M. Hierarchical probabilistic estimation of robot reachable workspace. Proc. 6th Int. Conf. on Informatics in Control, Automation and Robotics (ICINCO) 2009, Milan, Italy, 2009.
    Estimating a robot's reachable workspace is a fundamental problem in robotics. For simple kinematic chains within an empty environment this computation can be relatively straightforward. For mobile kinematic structures and cluttered environments, the problem becomes more challenging. An efficient probabilistic method for workspace estimation is developed by applying a hierarchical strategy and developing extensions to a probabilistic motion planner. Rather than treating each of the degrees of freedom (DOFs) "equally", a hierarchical representation is used to maximize the volume of the robot's workspace that is identified as reachable for each probe of the environment. Experiments with a simulated mobile manipulator demonstrate that the hierarchical approach is an effective alternative to an estimation process based on a traditional probabilistic planner. Copies of the book can be ordered through Springer.
  11. Harris, L., Jenkin, M., Jenkin, H., Dyde, R., Zacher, J. and Allison, R. S., The unassisted visual system on Earth and in space, J. Vestib. Res. 20: 25-30, 2009.
    Chuck Oman has been a guide and mentor for research in human perception and performance during space exploration for over 25 years. His research has provided a solid foundation for our understanding of how humans cope with the challenges and ambiguities of sensation and perception in space. In many of the environments associated with work in space the human visual system must operate with unusual combinations of visual and other perceptual cues. On Earth physical acceleration cues are normally available to assist the visual system in interpreting static and dynamic visual features. Here we consider two cases where the visual system is not assisted by such cues. Our first experiment examines perceptual stability when the normally available physical cues to linear acceleration are absent. Our second experiment examines perceived orientation when there is no assistance from the physically sensed direction of gravity. In both cases the effectiveness of vision is paradoxically reduced in the absence of physical acceleration cues. The reluctance to rely heavily on vision represents an important human factors challenge to efficient performance in the space environment. A pre-press version of this manuscript is also available.