2005

  1. Jenkin, H. L., Dyde, R. T., Jenkin, M. R. and Harris, L. R., The influence of room structure on the perceived direction of up in immersive visual displays. Proc. ICAT 2005, Christchurch, NZ, 2005.
    VR environments utilize compelling visual displays in an effort to simulate the direction of up. What factors in the visual display contribute to the up direction? In earlier work we examined the effect of a wide-field virtual environment on the perceived up direction under different simulations of tilt (rotation around the naso-occipital axis) [3] and pitch (rotation about the inter-aural axis) [4]. We found that visual cues can be used to manipulate the perceived up direction for both pitch and tilt, but that this effect was limited to relatively small deviations (+/-35 deg) from the gravity- and body-defined up directions. Here we manipulate the nature of the visual display and demonstrate that even simple wire-frame visual displays and random textured surfaces contribute to the direction that users perceive as being 'up'.
  2. Kapralos, B., Jenkin, M., and Milios, E., Acoustical diffraction modeling utilizing the Huygens-Fresnel principle. Proc. IEEE Int. Workshop on Haptic Audio Visual Environments and their Applications (HAVE), Ottawa, 2005.
    This paper describes the application of the Huygens-Fresnel Principle to acoustical diffraction modeling. A theoretical formulation of the optics-based Huygens-Fresnel Principle is presented, followed by a discussion of the modifications necessary to apply the Huygens-Fresnel Principle to acoustical diffraction modeling. Experimental results indicate the method is capable of modeling acoustical diffraction phenomena in a simple and efficient manner, making it attractive for interactive virtual environments.
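    As a rough illustration of the underlying idea (a minimal sketch, not the paper's formulation or code), the Huygens-Fresnel Principle treats each point on a wavefront or aperture as a secondary source, and the diffracted field as the sum of their wavelets:

        # Minimal Huygens-Fresnel sketch: sum spherical wavelets from secondary
        # sources across a slit; obliquity factors are omitted for brevity.
        import numpy as np

        def diffracted_pressure(receiver, sources, wavelength):
            k = 2.0 * np.pi / wavelength               # acoustic wavenumber
            r = np.linalg.norm(sources - receiver, axis=1)
            return np.sum(np.exp(1j * k * r) / r)      # complex pressure sum

        wavelength = 0.343                             # ~1 kHz in air (c = 343 m/s)
        xs = np.arange(-0.5, 0.5, wavelength / 10.0)   # sources across a 1 m slit
        slit = np.stack([xs, np.zeros_like(xs), np.zeros_like(xs)], axis=1)
        print(abs(diffracted_pressure(np.array([0.3, 2.0, 0.0]), slit, wavelength)))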
  3. Dudek, G., Jenkin, M., Prahacs, C., Hogue, A., Sattar, J., Giguère, P., German, A., Liu, H., Saunderson, S., Ripsman, A., Simhon, S., Torres, L.-A., Milios, E., Zhang, P., and Rekleitis, I., A visually guided swimming robot, Proc. IROS 2005, Edmonton, Alberta, 2005.
    We describe recent results obtained with AQUA, a mobile robot capable of swimming, walking and amphibious operation. Designed to rely primarily on visual sensors, the AQUA robot uses vision to navigate underwater using servo-based guidance, and also to obtain high-resolution range scans of its local environment. This paper describes some of the pragmatic and logistic obstacles encountered, and provides an overview of some of the basic capabilities of the vehicle and its associated sensors. Moreover, this paper presents the first ever amphibious transition from walking to swimming.
  4. Borzenko, O., Lesperance, Y., and Jenkin, M., Controlling camera and lights for intelligent image acquisition and merging, SVAR, Brampton, 2005.
    This poster provides an overview of the Light and Camera project including some preliminary results with the intelligent controller.
  5. Xu, W., Lesperance, Y., and Jenkin, M., Edge detection with multiple exposure images, SVAR, Brampton, 2005.
    During spacecraft docking, multiple images of docking targets are captured under widely varying but controlled illumination. How can visual features be extracted from these images in order to localize the docking target? A multi-channel edge detection algorithm is developed to address this problem.
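    One plausible reading of a multi-channel scheme (an illustrative sketch only; the paper's actual algorithm may differ) treats each exposure as a channel, computes per-channel gradients, and keeps the strongest response at each pixel:

        # Hypothetical multi-exposure edge detection: per-channel Sobel
        # gradients, combined by taking the maximum response per pixel.
        import numpy as np
        from scipy import ndimage

        def multi_exposure_edges(stack, thresh=0.2):
            """stack: (n_exposures, H, W) float images in [0, 1]."""
            mags = [np.hypot(ndimage.sobel(im, axis=1), ndimage.sobel(im, axis=0))
                    for im in stack]
            combined = np.max(mags, axis=0)            # strongest response wins
            return combined > thresh * combined.max()  # binary edge map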
  6. Hogue, A., and Jenkin, M., Visual and inertial pose estimation and 3d mapping for AQUA, SVAR, Brampton, 2005.
    This poster provides an overview of the AQUA trinocular environmental recovery process.
  7. Dyde, R., Jenkin, M. and Harris, L., Cues that determine the perceptual upright: visual influences are dominated by high spatial frequencies, J. of Vision, 5: 193a. Proc. VSS, May 6-11, 2005, Sarasota, FL.
    INTRO: The perceived direction of upright - the preferred orientation for polarized objects to be recognized - depends on the relative orientations of the visual background, the body and gravity. The perceptual upright (PU) is distinct from the subjective visual vertical (SVV), which is dominated by the direction of gravity and which predicts the perceived effects of gravity on objects and the observer. The PU is highly sensitive to the orientation of the visual background: that is, the preferred orientation for object recognition is critically influenced by the ambient visual environment. Which spatial frequency range carries the information that most influences the PU?

    METHOD: The PU is measured from the perceived identity of the character p/d. The orientations where one interpretation (p) changes to the other (d) are bisected to indicate the PU. Subjects were tested upright and supine whilst viewing the character against a highly polarized photograph of a natural scene displayed on a laptop computer whose screen was masked to a 42 deg circle viewed at 25 cm through a tube that obscured all peripheral vision. The influence of a tilted background picture was examined as a series of circular Gaussian blurs was applied to it at 2, 4, 8, 16 and 250 pixel widths.

    RESULTS: The influence of the visual background on the PU was initially about equal to that of gravity and about half that of the body. When we blurred the background image, the influence of the visual background on the PU systematically decreased at a rate independent of body posture, though the magnitude of the effect remained reliably higher for supine observers.

    DISCUSSION: The systematic decrease of the influence of the visual environment as it is blurred suggests an important role for higher spatial frequencies and the detail they convey, rather than the overall structure of the scene, in providing cues that determine the perceptual upright.
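    The PU measure itself is simple to compute: the two orientations at which the character's identity flips (p-to-d and d-to-p) are bisected on the circle. A minimal sketch of that bisection, with made-up transition angles:

        # Bisect two transition orientations on the circle to estimate the PU.
        import numpy as np

        def bisect_transitions(theta1_deg, theta2_deg):
            v = np.exp(1j * np.radians([theta1_deg, theta2_deg])).sum()
            return np.degrees(np.angle(v))     # circular bisector of the two

        print(bisect_transitions(100.0, 260.0))  # -> 180.0 for these examples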

  8. Harris, L. R., Dyde, R. T., and Jenkin, M. R., The use of visual and non-visual cues in updating the perceived position of the world during translation. Proc. SPIE Imaging Science and Technology, Vol. 5666, pp. 462-472, 2005.
    During self-motion the perceived positions of objects remain fixed in perceptual space. This requires that their perceived positions are updated relative to the viewer. Here we assess the roles of visual and non-visual information in this spatial updating. To investigate the role of visual cues, observers sat in an enclosed, immersive, virtual environment formed by six rear-projection screens. A simulated room was presented stereographically and shifted relative to the observer. A playing card, whose movement was phase-locked to the room, floated in front of the subject, who judged if the card was displaced more or less than the room. Surprisingly, perceived stability occurred not when the card's movement matched the room's displacement but when perspective alignment was maintained and parallax between the card and the room was removed. The role of the complementary non-visual cues was investigated by physically moving subjects in the dark. Subjects judged whether a floating target was displaced more or less than if it were earth-stable. To be judged as earth-stationary the target had to move in the same direction as the observer: more so if the movement was passive. We conclude that both visual and non-visual cues to self-motion and active involvement in the movement are simultaneously required for veridical spatial updating.
  9. Jaekl, P., Zikovitz, D. C., Jenkin, M. R., Jenkin, H. L., Zacher, J. E., and Harris, L. R., Gravity and perceptual stability during translational movement on earth and in microgravity. Acta Astron., 56: 1033-1040, 2005.
    We measured the amount of visual movement judged consistent with translational head movement under normal and microgravity conditions. Subjects wore a virtual reality helmet in which the ratio of the movement of the world to the movement of the head (visual gain) was variable. Using the method of adjustment under normal gravity ten subjects adjusted the visual gain until the visual world appeared stable during head movements that were either parallel or orthogonal to gravity. Using the method of constant stimuli under normal gravity, seven subjects moved their heads and judged whether the virtual world appeared to move 'with' or 'against' their movement for several visual gains. One subject repeated the constant stimuli judgements in microgravity during parabolic flight. The accuracy of judgements appeared unaffected by the direction or absence of gravity. Only the variability appeared affected by the absence of gravity. These results are discussed in relation to discomfort during head movements in microgravity.
  10. Jenkin, H. L., Dyde, R. T., Zacher, J. E., Zikovitz, D. C., Jenkin, M. R., Allison, R. S., Howard, I. P., and Harris, L. R., The relative role of visual and non-visual cues in determining the perceived direction of up: experiments in parabolic flight. Acta Astron., 56: 1025-1032, 2005.
    In order to measure the perceived direction of "up", subjects judged the three-dimensional shape of disks shaded to be compatible with illumination from particular directions. By finding which shaded disk appeared most convex, we were able to infer the perceived direction of illumination. This provides an indirect measure of subjects' perception of the direction of "up". The different cues contributing to this percept were separated by varying the orientation of the subject and the orientation of the visual background relative to gravity. We also measured the effect of decreasing or increasing gravity by making these shape judgements throughout all the phases of parabolic flight (0G, 2G and 1G during level flight). The perceived up direction was modeled by a simple vector sum of "up" defined by vision, the body and gravity. In this model, the weighting of the visual cue became negligible under microgravity and hypergravity conditions.
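    The vector-sum model is straightforward to state. As a minimal sketch, with illustrative weights rather than the fitted values from the paper:

        # Perceived "up" as a weighted vector sum of visual, body and gravity
        # cues; the weights below are placeholders for illustration only.
        import numpy as np

        def perceived_up(v_vision, v_body, v_gravity, w=(1.0, 2.0, 1.0)):
            s = w[0] * v_vision + w[1] * v_body + w[2] * v_gravity
            return s / np.linalg.norm(s)       # unit vector of perceived up

        # Supine observer (body up along -y) in an upright visual scene:
        print(perceived_up(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, -1.0, 0.0]),
                           np.array([0.0, 0.0, 1.0])))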
  11. Borzenko, O., Lesperance, Y., and Jenkin, M., Controlling camera and lights for intelligent image acquisition and merging, Proc. Canadian Conference on Computer and Robot Vision (CRV), Victoria, 2005.
    Docking craft in space and guiding mining machines are areas that often use remote video cameras equipped with one or more controllable light sources. In these applications, the problem of parameter selection arises: how to choose the best parameters for the camera and lights? Another problem is that a single image often cannot capture the whole scene properly and a composite image needs to be rendered. In this paper, we report on our progress with the CITO Lights and Camera project that addresses the parameter selection and merging problems for such systems. The prototype knowledge-based controller adjusts lighting to iteratively acquire a collection of images of a target. At every stage, an entropy-based merging module combines these images to produce a composite. The result is a final composite image that is optimized for further image processing tasks, such as pose estimation or tracking.
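    The acquire-and-merge cycle can be pictured as a simple loop. The sketch below is hypothetical: capture, set_lights, merge and score stand in for the project's camera, lighting and merging modules, which are not described at this level of detail in the paper.

        # Hypothetical acquire/merge loop: sweep controllable lighting,
        # merge everything captured so far, keep the best-scoring composite.
        def acquire_composite(capture, set_lights, merge, score, intensities):
            images, composites = [], []
            for level in intensities:
                set_lights(level)
                images.append(capture())
                composites.append(merge(images))   # e.g. entropy-based merging
            return max(composites, key=score)      # composite for later tasks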
  12. Hossain, M., and Jenkin, M., Recognizing hand-raising gestures using HMM, Proc. Canadian Conference on Computer and Robot Vision (CRV), Victoria, 2005.
    Automatic attention-seeking gesture recognition is an enabling element of synchronous distance learning. Recognizing attention-seeking gestures is complicated by the temporal nature of the signal that must be recognized and by the similarity between attention-seeking gestures and non-attention-seeking gestures. Here we describe two approaches to the recognition problem that utilize HMMs to learn the class of attention-seeking gestures: an explicit approach that encodes the temporal nature of the gestures within the HMM, and an implicit approach that augments the input token sequence with temporal markers. Experimental results demonstrate that the explicit approach is more accurate.
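    However the temporal structure is encoded, classification with HMMs reduces to scoring a token sequence under each class model and picking the best. A minimal sketch (not the paper's models) using the scaled forward algorithm; the implicit variant above might then amount to remapping each token to a (token, time-bin) pair before scoring:

        # Score a discrete observation sequence under per-class HMMs and pick
        # the most likely class; pi: (S,), A: (S,S), B: (S,V) parameters.
        import numpy as np

        def forward_loglik(obs, pi, A, B):
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()                    # rescale to avoid underflow
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                loglik += np.log(alpha.sum())
                alpha /= alpha.sum()
            return loglik

        def classify(obs, models):                  # models: {label: (pi, A, B)}
            return max(models, key=lambda m: forward_loglik(obs, *models[m]))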
  13. German, A., Jenkin, M. and Lesperance, Y., Entropy-based image merging, Proc. Canadian Conference on Computer and Robot Vision (CRV), Victoria, 2005.
    Spacecraft docking using vision is a challenging task. Not least among the problems encountered is the need to visually localize the docking target. Here we consider the task of adapting the local illumination to assist in this docking. An online approach is developed that combines images obtained under different exposure and lighting conditions into a single image upon which docking decisions can be made. This method is designed to be used within an intelligent controller that automatically adjusts lighting and image acquisition in order to obtain the "best" possible composite view of the target for further image processing.
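    A hypothetical per-pixel reading of entropy-based merging (an illustration of the idea, not necessarily the paper's exact algorithm) scores each source image by local grey-level entropy and builds the composite from the most informative source at each pixel:

        # Entropy-based merging sketch: pick, per pixel, the source image
        # whose local window has the highest grey-level entropy.
        import numpy as np
        from scipy import ndimage

        def local_entropy(img, size=9, bins=32):
            """Approximate local entropy via a windowed histogram per pixel."""
            q = np.clip((img * bins).astype(int), 0, bins - 1)
            ent = np.zeros_like(img)
            for b in range(bins):                   # P(bin) within each window
                p = ndimage.uniform_filter((q == b).astype(float), size)
                ent -= p * np.log2(np.where(p > 0, p, 1.0))
            return ent

        def entropy_merge(stack):
            """stack: (n, H, W) floats in [0, 1]; max-entropy source per pixel."""
            stack = np.asarray(stack)
            ents = np.stack([local_entropy(im) for im in stack])
            best = np.argmax(ents, axis=0)
            return np.take_along_axis(stack, best[None], axis=0)[0]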
  14. Jenkin, M. and Harris, L. (Eds.) Seeing Spatial Form, Oxford University Press, 2005.
    This book, Seeing Spatial Form, is dedicated to David Martin Regan, who has made so many contributions to our understanding of how we see objects. Its chapters bring together some of the world's leading researchers in form vision to explain what we know about distinguishing form. The book includes a CD-ROM, which contains additional demonstrations and colour images that considerably enhance the chapter contents. Seeing Spatial Form will be an invaluable resource for student and professional researchers in vision science, cognitive psychology, and neuroscience. The volume has been reviewed in Nature.
  15. Jaekl, P. M., Jenkin, M. R., and Harris, L. R., Perceiving a stable world during active rotational and translational head movements. Exp. Brain Res., 163: 388-399, 2005.
    When a person moves, the associated visual movement of the environment in the opposite direction is not usually seen as external movement but rather as a changing view of a stable world. We measured the amount of visual motion that can be tolerated as compatible with the perception of moving within a stable world during active, 0.5 Hz, translational and rotational head movement. Head movements were monitored by a fast, mechanical head tracker and the information was used to update a helmet-mounted visual display. A variable gain was introduced between the head tracker and the display. Ten subjects adjusted this gain until the visual display appeared stable during yaw, pitch and roll head rotations and naso-occipital, inter-aural and dorso-ventral translations. Each head movement was tested with movement either orthogonal or parallel to gravity. There was a wide spread of gains accepted as stable (0.8-1.4 for rotation and 1.1-1.8 for translation). The gain most likely to be perceived as stable was greater than geometrically required (1.2 for rotation; 1.4 for translation). For rotational motion, the mean gains were the same for all axes. For translation there was no effect of whether the movement was lateral (mean gain 1.6) or dorso-ventral (mean gain 1.5) and no effect of the orientation of the translation direction relative to gravity. However, translation in the naso-occipital direction was associated with more closely veridical settings (mean gain 1.1) and narrower standard deviations than in other directions. These findings are discussed in terms of visual and non-visual contributions to the perception of an earth-stable environment during active head movement.
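    The gain manipulation at the heart of this paradigm is easy to state: the rendered viewpoint moves by a scaled copy of the tracked head displacement, so a gain of 1.0 is geometrically stable and larger gains exaggerate world motion. A minimal sketch:

        # Scale tracked head displacement before updating the display viewpoint.
        import numpy as np

        def rendered_viewpoint(head_pos, reference_pos, gain):
            return reference_pos + gain * (head_pos - reference_pos)

        head = np.array([0.05, 0.0, 0.0])          # 5 cm lateral head translation
        print(rendered_viewpoint(head, np.zeros(3), 1.4))  # gain judged stable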