2004

  1. Jenkin, H. L., Jenkin, M. R., Dyde, R. T. and Harris, L. R., Shape-from-shading depends on visual, gravitational, and body-orientation cues. Perception, 33: 1453-1461, 2004.
    The perception of shading-defined form results from an interaction between shading cues and the frames of reference within which those cues are interpreted. In the absence of a clear source of illumination, the definition of 'up' becomes critical to deducing the perceived shape from a particular pattern of shading. In our experiments, twelve subjects adjusted the orientation of a planar disc painted with a linear luminance gradient from one side to the other, until the disc appeared maximally convex, that is, until the luminance gradient induced the maximum perception of a three-dimensional shape. The vision, gravity, and body-orientation cues were altered relative to each other. Visual cues were manipulated by the York Tilted Room facility, and body cues were altered by simply lying on one side. The orientation of the disc that appeared maximally convex varied in a systematic fashion with these manipulations. We present a model in which the direction of perceptual 'up' is determined from the sum of three weighted vectors corresponding to the vision, gravity, and body-orientation cues. The model predicts the perceived direction of 'up', contributes to our understanding of how shape-from-shading is deduced, and also predicts the confidence with which the 'up' direction is perceived.
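    A sketch of the weighted-vector model described above; the symbols, and the reading of the resultant's length as the confidence measure, are assumptions about the model's form for illustration, not fitted values from the paper:
        \hat{u} = \frac{w_v \vec{v} + w_g \vec{g} + w_b \vec{b}}{\lVert w_v \vec{v} + w_g \vec{g} + w_b \vec{b} \rVert}, \qquad \text{confidence} \propto \lVert w_v \vec{v} + w_g \vec{g} + w_b \vec{b} \rVert
    where \vec{v}, \vec{g} and \vec{b} are unit vectors along the visually-, gravitationally- and body-defined 'up' directions and w_v, w_g, w_b are their weights.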
  2. Jenkin, H. L., Dyde, R. T., Jenkin, M. R. and Harris, L. R., Pitching up in IVY, Proc. ICAT 2004, Korea, 2004.
    Virtual reality is often used to simulate environments in which the direction of up is not aligned with the normal direction of gravity or the body. What is the effect of such an environment on the perceived direction of up? In earlier work (e.g. [8]) we examined the effect of a wide-field virtual environment on the perceived up direction under different simulations of tilt (rotation around the naso-occipital axis). Here we extend this earlier work by examining the influence of a wide-field virtual environment on the perceived direction of up under different simulations of pitch (rotation around the inter-aural axis). Subjects sat in a virtual room simulated using an immersive projective display system. The room could be pitched about an axis passing through the subjects' heads. Subjects indicated their perceived direction of up by adjusting the orientation of an indicator until it aligned with the perceived direction of gravity. Subjects' judgments indicated that for physically upright subjects the visual display is an important factor in determining the perceived up direction. However, as was found to be the case for roll simulations, this technique for influencing a subject's perceived direction of up is most effective for pitch rotations within approximately +/-35 degrees of true gravitational vertical.
  3. Huang, H., Allison, R. S., and Jenkin, M., Combined head-eye tracking for immersive virtual reality. Proc ICAT 2004, Korea, 2004.
    Real-time gaze tracking is a promising interaction technique for virtual environments. Immersive projection-based virtual reality systems such as the CAVE(TM) allow users a wide range of natural movements. Unfortunately, most head and eye movement measurement techniques are of limited use during free head and body motion. An improved head-eye tracking system is proposed and developed for use in immersive applications with free head motion. The system is based upon a head-mounted video-based eye tracking system and a hybrid ultrasound-inertial head tracking system. The system can measure the point of regard in a scene in real-time during relatively large head movements. The system will serve as a flexible testbed for evaluating novel gaze-contingent interaction techniques in virtual environments. The calibration of the head-eye tracking system is one of the most important issues that need to be addressed. In this paper, a simple view-based calibration method is proposed.
  4. Georgiades, C., German, A., Hogue, A., Liu, H., Prahacs, C., Ripsman, A., Sim, R., Torres, L.-A., Zhang, P., Buehler, M., Dudek, G., Jenkin, M. and Milios, E., AQUA: an aquatic walking robot, Proc. UUVS 2004, Southampton, UK, 2004.
    Traditional ROVs and underwater robots are based on propeller/thruster/control surface designs. Although these traditional mobility mechanisms work well, they limit the operation of the vehicle to the open water. In contrast, the AQUA robot 'swims' using six paddle-like legs. By driving its legs in different gaits, a high level of mobility is obtained in both free-swimming and surface-swimming modes. In addition, the legs can be used to propel the vehicle in a walking gait, permitting the vehicle to walk into the sea and to walk along solid surfaces under water. Currently two different sets of legs are used for swimming and walking gaits, although a common set of legs for both modes of locomotion is in development. The AQUA robot is visually guided. In addition to onboard video capture for teleoperation, a trinocular vision/inertial sensor pod and an acoustic localization system have been developed for the robot. The trinocular vision/inertial sensor pod is used to develop a 3D model of the robot's environment, while the acoustic localization system utilizes a surface-based acoustic array to localize the robot based on an active acoustic source on the vehicle. This talk introduces the AQUA project, including its locomotive, sensing and reasoning strategies. Technical details of the robot will be presented along with results from the robot's sea trials held at the Bellairs Marine Research Institute in January 2004.
  5. Harris, L., Dyde, R., Sadr, S., Jenkin, M. and Jenkin, H., Cross-modal contributions to the perceived direction of "up". International Multisensory Research Forum 2004, June 2-5, Barcelona, 2004.
  6. Georgiades, C., German, A., Hogue, A., Liu, H., Prahacs, C., Ripsman, A., Sim, R., Torres, L.-A., Zhang, P., Buehler, M., Dudek, G., Jenkin, M. and Milios, E., AQUA: an aquatic walking robot, Proc. IROS 2004, Sendai, Japan, 2004.
    This paper describes an underwater walking robotic system being developed under the name AQUA, the goals of the AQUA project, the overall hardware and software design, the basic hardware and sensor packages that have been developed, and some initial experiments. The robot is based on the RHex hexapod robot and uses a suite of sensing technologies, primarily based on computer vision and INS, to allow it to navigate and map clear shallow-water environments. The sensor-based navigation and mapping algorithms are based on the use of both artificial floating visual and acoustic landmarks as well as on naturally occurring underwater landmarks and trinocular stereo.
  7. Kapralos, B., Zikovitz, D., Jenkin, M., and Harris, L., Auditory cues in the perception of self motion, Proc. 116th Convention of the Audio Engineering Society, Berlin, Germany, 2004.
    Despite its potential importance, few studies have methodically examined the role of auditory cues in the perception of self-motion. Here we describe a series of experiments that investigate the relative roles of various combinations of physical motion and decreasing sound source intensity cues in the perception of linear self-motion. Self-motion was simulated using (i) physical motion only, (ii) moving audio-cues only, (iii) decreasing intensity cues, or (iv) physical motion coupled with moving audio-cues. In all conditions an over-estimation of self-motion was observed that varied systematically with the simulated acceleration. Of particular interest was that audio cues combined with physical motion cues resulted in more accurate estimates of self-motion than did either audio or physical motion cues in isolation.
  8. Harris, L. R., Jenkin, M. R., Dyde, R. T., and Jenkin, H., Failure to update spatial location correctly using visual cues alone [Abstract]. J. of Vision, 4: 381a, 2004. http://journalofvision.org/4/8/381.
    As we move through the world we update the perceived position of objects such that they appear to remain stationary and so that we continue to know where they are relative to ourselves. Self-motion information that could potentially be used to do this arises both from the retina and from extra-retinal sources such as the vestibular and somatosensory systems, and efference copy of motor commands. Here we assess whether visual information alone is sufficient to accurately update our view of space. Observers sat in the Immersive Virtual environment at York (IVY) formed by six rear-projection screens that fully enclose the viewer. On each screen the appropriate view of an 8x8x8 foot cubic room was presented stereographically (using shutter goggles) with the correct perspective for the viewer. The virtual room was shifted sinusoidally +/-10 cm up/down, left/right or towards/away from the stationary observer at 0.5 Hz. A virtual playing card was presented floating in front of the observer at one of six distances. The card's movement was locked in phase and direction with the room's movement. The amount of the card's displacement was varied in response to whether observers judged it to be moving further or less far than the virtual room. Its movement was varied by a double staircase routine that gravitated towards the point where the movements of the card and virtual room were judged to be identical. Statistically indistinguishable results were found for up/down, left/right and towards/away motions of the room. In all cases subjects arranged the card's movement not so as to match the actual displacement of the room but instead to maintain its visual direction relative to the visual scene: perspective alignment was maintained at the expense of spatial position. In the absence of non-retinal information about self-motion, observers cannot accurately update their change in viewpoint relative to the environment or the position of an object within it.
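    The staircase mentioned above can be sketched as follows. This is a minimal illustration in Python of an interleaved double-staircase procedure converging on the point of subjective equality; the step size, starting gains and trial count are illustrative assumptions, not the values used in the study:

    import random

    def double_staircase(judge, start_low=0.5, start_high=1.5, step=0.05, trials=60):
        """Interleave two staircases that converge on the gain at which the card's
        motion is judged equal to the room's motion (gain 1.0 = card moves exactly
        as far as the room).  `judge(gain)` returns True when the observer reports
        the card moving further than the room."""
        staircases = [start_low, start_high]   # one ascending and one descending track
        history = []
        for _ in range(trials):
            i = random.randrange(2)            # randomly pick which staircase to run
            gain = staircases[i]
            if judge(gain):                    # judged "further" -> decrease the gain
                staircases[i] -= step
            else:                              # judged "less far" -> increase the gain
                staircases[i] += step
            history.append(gain)
        return sum(history[-20:]) / 20.0       # estimate of the point of subjective equality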
  9. Jenkin, H. L., Dyde, R. T., Jenkin, M. R. and Harris, L. R., The perceived direction of up measured using shape-from-shading in a virtual environment [Abstract]. J. of Vision, 4: 384a, 2004. http://journalofvision.org/4/8/384.
    The direction of up is a fundamental property of visual perception and is created from information arising from several sources including the direction of gravity, orientation of the body and visual polarity cues. Cognitive strategies based on knowledge of natural laws, such as the expected direction of sunlight or the direction in which dropped objects fall, can also contribute. Here we assess the use of a wide-field virtual environment to generate a perceived up direction that differs from that defined by gravity and body orientation. In the absence of competing cues, observers assume that illumination comes from above. A shaded disc is experienced as most convex when its brightest segment is perceived as uppermost and this orientation can therefore be taken as a measure of perceived up. Observers sat in the Immersive Virtual environment at York (IVY) formed by six rear-projection screens that fully surround the viewer. On each screen a view of an 8'x8'x16' room, with the correct perspective for the viewer, was presented stereographically using shutter glasses. The roll angle of this virtual 3D room was varied between +/- 90 degrees from vertical. For each room orientation the observer adjusted the orientation of a flat, virtual, shaded disc that floated in front of them, until it appeared maximally convex. For room orientations within approximately 35 degrees of upright the perceived direction of up was heavily influenced by the orientation of the surrounding virtual room. The pattern of responses within this range could be modeled as the weighted sum of vectors corresponding to the direction of gravity, orientation of the body and the orientation of the visual environment. For larger tilts other factors seemed to influence the observers' judgements. Earlier studies have shown that real rotated and tilted rooms manipulate the perceived direction of up. The IVY wide-field virtual environment has a similar effect for small roll angles (+/-35 degrees) but not for larger angles.
  10. Dyde, R. T., Sadr, S., Jenkin, M. R., Jenkin, H. L. and Harris, L. R., The perceived direction of up measured using a p/d letter probe [Abstract]. J. of Vision, 4: 385a, 2004. http://journalofvision.org/4/8/385.
    Determining the direction of 'up' requires the observer to derive a reference from several visual and non-visual sources including the direction of gravity, the orientation of the body and visual cues. Here we describe a simple probe of the perceived direction of up that requires minimal assumptions about physical laws and which may have important practical applications in the study of reading and in the design of unusual environments. The only visible difference between the letters 'p' and 'd' is the orientation of the character relative to the observer. Subjects were shown the character in one of 18 orientations (ranging from 15 deg to 150 deg and from 210 deg to 330 deg in steps of 15 deg, where 0 corresponds to a screen-upright 'p') and indicated whether they recognized it as a 'p' or a 'd'. The perceived direction of up was calculated as the orientation half way between the two transition points (measured psychometrically) between these interpretations. The visual background was a highly polarized circular photograph (diameter 40 deg) presented in 15 orientations in steps of 22.5 deg. The relationship between body and gravity was dissociated by repeating the experiment with observers in various orientations. Both the orientation of the visual background and the orientation of the subject relative to gravity had a significant effect on the orientation of the transition zones between the 'p' and 'd' interpretations of the character. The corresponding 'up' directions could be predicted from the weighted vector sum of the direction of gravity, orientation of the body and the orientation of the visual environment. A simple letter recognition task was influenced by both the orientation of the visual background and the orientation of the observer. The reference direction of up, to which many perceptions, including letter recognition, are referred, can be predicted from a simple weighted sum of gravity, body orientation and visual orientation.
  11. Hogue, A., Jenkin, M. R. and Allison, R. S., An optical-inertial tracking system for fully-enclosed VR displays, Proc. CRV'2004, London, Ontario, 2004.
    This paper describes a hybrid optical-inertial tracking technology for fully-immersive projective displays. In order to track the operator, the operator wears a 3DOF commercial inertial tracking system coupled with a set of laser diodes arranged in a known configuration. The projection of this laser constellation onto the display walls is tracked visually to compute the 6DOF absolute head pose of the user. The absolute pose is combined with the inertial tracker data using an extended Kalman filter to maintain a robust estimate of position and orientation. This paper describes the basic tracking system including the hardware and software infrastructure.
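    A minimal Python sketch of the optical-inertial fusion described above. For brevity it uses a linear Kalman filter on a one-dimensional position/velocity state rather than the full 6DOF extended Kalman filter; the noise values, state layout and class name are illustrative assumptions, not the paper's implementation:

    import numpy as np

    class PoseFilter:
        def __init__(self, dt=0.01):
            self.x = np.zeros(2)                         # state: [position, velocity]
            self.P = np.eye(2)                           # state covariance
            self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
            self.B = np.array([0.5 * dt * dt, dt])       # acceleration (inertial) input
            self.Q = 1e-3 * np.eye(2)                    # process noise
            self.H = np.array([[1.0, 0.0]])              # optical fix observes position only
            self.R = np.array([[1e-2]])                  # optical measurement noise

        def predict(self, accel):
            """Propagate the state with an inertial acceleration reading."""
            self.x = self.F @ self.x + self.B * accel
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, optical_position):
            """Correct the state with an absolute optical (laser-constellation) fix."""
            y = np.atleast_1d(optical_position) - self.H @ self.x   # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)                # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P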
  12. Kapralos, B., Jenkin, M. and Milios, E. Sonel Mapping: Acoustic Modeling Utilizing an Acoustic Version of Photon Mapping. IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '2004), Ottawa, Canada. October 2-3, 2004.
    Acoustic modeling of even small, simple environments is a complex, computationally expensive task. Sound, just as light, is itself a wave phenomenon. Although there are several key differences between light and sound, there are also several similarities. Given the similarities which exist between sound and light, this work investigates the application of photon mapping (suitably modified) to model environmental acoustics. The resulting acoustic sonel mapping technique can be used to model acoustic environments while accounting for diffuse and specular acoustic reflections as well as refraction and diffraction effects.
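    A highly simplified Python sketch of the sonel-tracing idea described above: sonels are emitted from the source and, at each surface interaction, Russian roulette selects diffuse reflection, specular reflection or absorption, with incoming sonels stored in a map. The scene interface, probabilities and function names are illustrative assumptions, not the paper's implementation:

    import math, random

    P_DIFFUSE, P_SPECULAR = 0.4, 0.3      # assumed material; remaining 0.3 is absorption

    def reflect_specular(direction, normal):
        d = sum(di * ni for di, ni in zip(direction, normal))
        return tuple(di - 2.0 * d * ni for di, ni in zip(direction, normal))

    def reflect_diffuse(normal):
        # Rejection-sample a random direction in the hemisphere around the normal.
        while True:
            v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
            norm2 = sum(c * c for c in v)
            if 0.0 < norm2 <= 1.0 and sum(vi * ni for vi, ni in zip(v, normal)) > 0.0:
                norm = math.sqrt(norm2)
                return tuple(c / norm for c in v)

    def trace_sonel(position, direction, energy, intersect, sonel_map, max_bounces=10):
        """Follow one sonel through the scene; `intersect(position, direction)` is
        assumed to return (hit_point, surface_normal) or None if nothing is hit."""
        for _ in range(max_bounces):
            hit = intersect(position, direction)
            if hit is None:
                return
            position, normal = hit
            sonel_map.append((position, energy))   # record the incoming sonel
            r = random.random()                    # Russian roulette at the surface
            if r < P_DIFFUSE:
                direction = reflect_diffuse(normal)
            elif r < P_DIFFUSE + P_SPECULAR:
                direction = reflect_specular(direction, normal)
            else:
                return                             # sonel absorbed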
  13. Jenkin, M. and Dymond, P., One-time pads for secure communication in ubiquitous computing, Proc. IASTED Advances in Computer Science and Technology, ACST 2004, St. Thomas, Nov. 22-24, 2004.
    Although the Internet can be used to provide high connectivity between parties, it does not always provide strong protection for private communications. Here we describe a strong cryptographic solution to this problem using one-time pads. One-time pads provide cryptographically secure communication and can be implemented using low-power computing devices such as Pocket PCs, Palm-compatible devices, and even devices for ubiquitous computing such as Javacard and iButton. A method for the cryptographically secure transmission of data between remote users is described, and prototype implementations are presented for standard computational platforms, handheld devices and Javacard-enabled devices.
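    A minimal Python sketch of the one-time-pad primitive the paper builds on: the ciphertext is the XOR of the plaintext with an equally long, truly random, never-reused pad. This illustrates the primitive only, not the paper's pad-distribution protocol:

    import secrets

    def make_pad(length):
        """Generate a random pad; each pad must be used for exactly one message."""
        return secrets.token_bytes(length)

    def otp_xor(data, pad):
        """Encrypt or decrypt: XOR with the pad is its own inverse."""
        if len(pad) < len(data):
            raise ValueError("pad must be at least as long as the message")
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"meet at noon"
    pad = make_pad(len(message))
    ciphertext = otp_xor(message, pad)
    assert otp_xor(ciphertext, pad) == message    # decryption recovers the plaintext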
  14. Kapralos, B., Zikovitz, D., Jenkin, M. R. M., and Harris, L. R., Auditory cues in the perception of self-motion for linear translation, Technical Report CS-2004-04, Department of Computer Science and Engineering, York University, Canada, 2004.
    This report describes a series of experiments that investigate the relative roles of physical motion and changing sound source intensity cues in the perception of linear self-motion. Self-motion was simulated using (i) physical motion only, (ii) simulated auditory motion only, (iii) physical motion coupled with auditory motion, or (iv) physically moving audio only. In all conditions an over-estimation of self-motion was observed that varied systematically with the simulated acceleration. Of particular interest was that auditory cues combined with physical motion cues resulted in more accurate estimates of self-motion than did either auditory or physical motion cues in isolation.
  15. Jenkin, M., Dudek, G., Milios, E., Buehler, M. and Prahacs, C., AQUA: An amphibious walking and swimming robot, Proc. RoboNexus 2004, Santa Clara, CA, 2004.
    AQUA takes bioinspired robotics to the next level. By combining capabilities demonstrated by both insects and marine life, the AQUA platform is designed to be at home in both terrestrial and aquatic environments.