Jenkin, H. L., Jenkin, M. R., Dyde, R. T. and Harris, L. R., Shape-from-shading depends on visual, gravitational, and body-orientation cues. Perception, 33: 1453-1461, 2004.
The perception of shading-defined form results from an interaction between shading cues and the frames of reference within which those cues are interpreted. In the absence of a clear source of illumination, the definition of 'up' becomes critical to deducing the perceived shape from a particular pattern of shading. In our experiments, twelve subjects adjusted the orientation of a planar disc painted with a linear luminance gradient from one side to the other, until the disc appeared maximally convex; that is, until the luminance gradient induced the maximum perception of a three-dimensional shape. Vision, gravity, and body-orientation cues were altered relative to one another: visual cues were manipulated using the York Tilted Room facility, and body cues were altered by having subjects lie on one side. The orientation of the disc that appeared maximally convex varied systematically with these manipulations. We present a model in which the direction of perceptual 'up' is determined from the sum of three weighted vectors corresponding to the vision, gravity, and body-orientation cues. The model predicts the perceived direction of 'up', contributes to our understanding of how shape-from-shading is deduced, and also predicts the confidence with which the 'up' direction is perceived.
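A minimal sketch of such a weighted-vector model, in Python, might look like the following; the weights are illustrative placeholders, not the values fitted in the paper.

    import numpy as np

    def perceived_up(vision, gravity, body,
                     w_vision=0.25, w_gravity=0.55, w_body=0.20):
        # Weighted vector-sum model of perceptual 'up': sum the three
        # unit cue vectors with (placeholder) weights and normalize.
        total = (w_vision * np.asarray(vision, float)
                 + w_gravity * np.asarray(gravity, float)
                 + w_body * np.asarray(body, float))
        magnitude = np.linalg.norm(total)   # vector length ~ confidence
        return total / magnitude, magnitude

    # Example: body and gravity upright, visual scene rolled 30 degrees.
    theta = np.radians(30)
    up, confidence = perceived_up(vision=[np.sin(theta), np.cos(theta)],
                                  gravity=[0.0, 1.0],
                                  body=[0.0, 1.0])

The magnitude of the summed vector provides a natural handle on the abstract's final claim: the longer the resultant, the more confidently 'up' is perceived.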
Jenkin, H. L., Dyde, R. T., Jenkin, M. R. and Harris, L. R., Pitching up in IVY, Proc. ICAT 2004, Korea, 2004.
Virtual reality is often used to simulate environments in which the direction of up is not aligned with the normal direction of gravity or the body. What is the effect of such an environment on the perceived direction of up? In earlier work (e.g. [8]) we examined the effect of a wide-field virtual environment on the perceived up direction under different simulations of roll (rotation around the naso-occipital axis). Here we extend this earlier work by examining the influence of a wide-field virtual environment on the perceived direction of up under different simulations of pitch (rotation around the inter-aural axis). Subjects sat in a virtual room simulated using an immersive projective display system. The room could be pitched about an axis passing through the subject's head. Subjects indicated their perceived direction of up by adjusting the orientation of an indicator until it aligned with the perceived direction of gravity. Subjects' judgments indicated that for physically upright subjects the visual display is an important factor in determining the perceived up direction. However, as was found for roll simulations, this technique for influencing a subject's perceived direction of up is most effective for pitch rotations within approximately +/- 35 degrees of true gravitational vertical.
Huang, H., Allison, R. S., and Jenkin, M., Combined head-eye tracking for immersive virtual reality. Proc. ICAT 2004, Korea, 2004.
Real-time gaze tracking is a promising interaction technique for virtual environments. Immersive projection-based virtual reality systems such as the CAVE(TM) allow users a wide range of natural movements. Unfortunately, most head and eye movement measurement techniques are of limited use during free head and body motion. An improved head-eye tracking system is proposed and developed for use in immersive applications with free head motion. The system is based upon a head-mounted video-based eye tracking system and a hybrid ultrasound-inertial head tracking system. The system can measure the point of regard in a scene in real time during relatively large head movements, and will serve as a flexible testbed for evaluating novel gaze-contingent interaction techniques in virtual environments. The calibration of the head-eye tracking system is one of the most important issues that need to be addressed. In this paper, a simple view-based calibration method is proposed.
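The point-of-regard computation described above can be sketched as follows. This is a simplified geometry that assumes the eye origin coincides with the tracked head position; function and parameter names are hypothetical, not taken from the paper.

    import numpy as np

    def point_of_regard(head_R, head_t, gaze_dir_head,
                        wall_point, wall_normal):
        # Rotate the eye-in-head gaze direction into the world frame
        # using the tracked head pose, then intersect the resulting
        # gaze ray with a display wall modeled as a plane.
        o = np.asarray(head_t, float)       # ray origin ~ head position
        d = np.asarray(head_R, float) @ np.asarray(gaze_dir_head, float)
        n = np.asarray(wall_normal, float)
        denom = n @ d
        if abs(denom) < 1e-9:
            return None                     # gaze parallel to the wall
        s = (n @ (np.asarray(wall_point, float) - o)) / denom
        return o + s * d if s > 0 else None # None if the wall is behind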
Georgiades, C., German, A., Hogue, A., Liu, H., Prahacs, C., Ripsman, A., Sim, R., Torres, L.-A., Zhang, P., Buehler, M., Dudek, G., Jenkin, M. and Milios, E., AQUA: an aquatic walking robot, Proc. UUVS 2004, Southampton, UK, 2004.
Traditional ROVs and underwater robots are based on propeller/thruster/control-surface designs. Although these traditional mobility mechanisms work well, they limit the operation of the vehicle to open water. In contrast, the AQUA robot 'swims' using six paddle-like legs. By driving its legs in different gaits, a high level of mobility is obtained in both free-swimming and surface-swimming modes. In addition, the legs can be used to propel the vehicle in a walking gait, permitting the vehicle to walk into the sea and to walk along solid surfaces under water. Currently two different sets of legs are used for the swimming and walking gaits, although a common set of legs for both modes of locomotion is in development.

The AQUA robot is visually guided. In addition to onboard video capture for teleoperation, a trinocular vision/inertial sensor pod and an acoustic localization system have been developed for the robot. The trinocular vision/inertial sensor pod is used to develop a 3D model of the robot's environment, while the acoustic localization system uses a surface-based acoustic array to localize the robot from an active acoustic source on the vehicle.

This talk introduces the AQUA project, including its locomotion, sensing, and reasoning strategies. Technical details of the robot are presented along with results from the robot's sea trials held at the Bellairs Marine Research Institute in January 2004.
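As a generic illustration of how a gait can be selected through per-leg phasing (this is not the actual AQUA controller, whose details are not given in the abstract):

    import numpy as np

    def paddle_angles(t, freq=1.0, amp=np.radians(30),
                      phases=(0, np.pi, 0, np.pi, 0, np.pi)):
        # Each of the six paddles oscillates sinusoidally; the per-leg
        # phase offsets select the gait.  Here alternate legs run half
        # a cycle apart, an alternating-tripod-style pattern.
        return [amp * np.sin(2 * np.pi * freq * t + p) for p in phases]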
Kapralos, B., Zikovitz, D., Jenkin, M., and Harris, L., Auditory cues in the perception of self motion, Proc. 116th Convention of the Audio Engineering Society, Berlin, Germany, 2004.
Despite its potential importance, few studies have methodically examined the role of auditory cues in the perception of self-motion. Here we describe a series of experiments that investigate the relative roles of various combinations of physical motion and decreasing sound-source intensity cues in the perception of linear self-motion. Self-motion was simulated using (i) physical motion only, (ii) moving audio cues only, (iii) decreasing-intensity cues only, or (iv) physical motion coupled with moving audio cues. All conditions produced over-estimates of self-motion that varied systematically with the simulated acceleration. Of particular interest, audio cues combined with physical motion cues resulted in more accurate estimates of self-motion than did either audio or physical motion cues in isolation.
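One way a decreasing-intensity cue of this kind can be generated is via the standard inverse-square law; the sketch below assumes recession from the source at constant acceleration from rest, and its parameters are illustrative rather than the paper's stimulus values.

    import numpy as np

    def intensity_gain(t, d0=1.0, accel=0.5):
        # Under the inverse-square law, receding from a source at
        # constant acceleration makes received intensity fall as
        # (d0 / d(t))**2 relative to the starting distance d0.
        d = d0 + 0.5 * accel * t**2
        return (d0 / d) ** 2

    t = np.linspace(0.0, 4.0, 101)
    gains = intensity_gain(t)   # intensity scale factors over time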
Harris, L. R., Jenkin, M. R., Dyde, R. T., and Jenkin, H., Failure to update spatial location correctly using visual cues alone [Abstract]. J. of Vision, 4: 381a, 2004. http://journalofvision.org/4/8/381.
As we move through the world we update the perceived position of objects so that they appear to remain stationary and so that we continue to know where they are relative to ourselves. Self-motion information that could potentially be used to do this arises both from the retina and from extra-retinal sources such as the vestibular and somatosensory systems and efference copies of motor commands. Here we assess whether visual information alone is sufficient to accurately update our view of space. Observers sat in the Immersive Virtual environment at York (IVY), formed by six rear-projection screens that fully enclose the viewer. On each screen the appropriate view of an 8x8x8 foot cubic room was presented stereographically (using shutter goggles) with the correct perspective for the viewer. The virtual room was shifted sinusoidally +/- 10 cm up/down, left/right, or towards/away from the stationary observer at 0.5 Hz. A virtual playing card was presented floating in front of the observer at one of six distances. The card's movement was locked in phase and direction with the room's movement. The amount of the card's displacement was varied in response to whether observers judged it to be moving farther or less far than the virtual room. Its movement was varied by a double-staircase routine that gravitated towards the point where the movements of the card and the virtual room were judged to be identical. Statistically indistinguishable results were found for up/down, left/right and towards/away motions of the room. In all cases subjects arranged the card's movement not so as to match the actual displacement of the room but instead to maintain its visual direction relative to the visual scene: perspective alignment was maintained at the expense of spatial position. In the absence of non-retinal information about self-motion, observers cannot accurately update their change in viewpoint relative to the environment or the position of an object within it.
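A minimal single-staircase sketch of the kind of adaptive routine described is given below; the actual experiment interleaved two such staircases (a double staircase), and the `judge` callback and parameters here are hypothetical.

    def staircase(judge, start=1.0, step=0.02, reversals_needed=8):
        # One-up/one-down staircase: increase the card's displacement
        # after a 'less far' judgment, decrease it after a 'further'
        # judgment, and average the reversal points as the estimate of
        # the matching displacement.  judge(x) returns True when the
        # observer calls displacement x 'further than the room'.
        x, direction, reversals = start, 0, []
        while len(reversals) < reversals_needed:
            new_dir = -1 if judge(x) else +1
            if direction and new_dir != direction:
                reversals.append(x)         # record a reversal point
            direction = new_dir
            x = max(0.0, x + direction * step)
        return sum(reversals) / len(reversals)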
Jenkin, H. L., Dyde, R. T., Jenkin, M. R. and Harris, L. R., The perceived direction of up measured using shape-from-shading in a virtual environment [Abstract]. J. of Vision, 4: 384a, 2004. http://journalofvision.org/4/8/384.
The direction of up is a fundamental property of visual perception and is created from information arising from several sources, including the direction of gravity, the orientation of the body, and visual polarity cues. Cognitive strategies based on knowledge of natural laws, such as the expected direction of sunlight or the direction in which dropped objects fall, can also contribute. Here we assess the use of a wide-field virtual environment to generate a perceived up direction that differs from that defined by gravity and body orientation. In the absence of competing cues, observers assume that illumination comes from above. A shaded disc is experienced as most convex when its brightest segment is perceived as uppermost, and this orientation can therefore be taken as a measure of perceived up. Observers sat in the Immersive Virtual environment at York (IVY), formed by six rear-projection screens that fully surround the viewer. On each screen a view of an 8'x8'x16' room, with the correct perspective for the viewer, was presented stereographically using shutter glasses. The roll angle of this virtual 3D room was varied between +/- 90 degrees from vertical. For each room orientation the observer adjusted the orientation of a flat, virtual, shaded disc that floated in front of them until it appeared maximally convex. For room orientations within approximately 35 degrees of upright, the perceived direction of up was heavily influenced by the orientation of the surrounding virtual room. The pattern of responses within this range could be modeled as the weighted sum of vectors corresponding to the direction of gravity, the orientation of the body, and the orientation of the visual environment. For larger tilts, other factors seemed to influence the observers' judgments. Earlier studies have shown that real rotated and tilted rooms manipulate the perceived direction of up. The IVY wide-field virtual environment has a similar effect for small roll angles (+/- 35 degrees) but not for larger angles.
Dyde, R. T., Sadr, S., Jenkin, M. R., Jenkin, H. L. and Harris, L. R., The perceived direction of up measured using a p/d letter probe [Abstract]. J. of Vision, 4: 385a, 2004. http://journalofvision.org/4/8/385.
Determining the direction of 'up' requires the observer to derive a reference from several visual and non-visual sources, including the direction of gravity, the orientation of the body, and visual cues. Here we describe a simple probe of the perceived direction of up that requires minimal assumptions about physical laws and which may have important practical applications in the study of reading and in the design of unusual environments. The only visible difference between the letters 'p' and 'd' is the orientation of the character relative to the observer. Subjects were shown the character in one of 18 orientations (ranging from 15 deg to 150 deg and from 210 deg to 330 deg in steps of 15 deg, where 0 deg corresponds to a screen-upright 'p') and indicated whether they recognized it as a 'p' or a 'd'. The perceived direction of up was calculated as the orientation halfway between the two transition points (measured psychometrically) between these interpretations. The visual background was a highly polarized circular photograph (diameter 40 deg) presented in 15 orientations in steps of 22.5 deg. The relationship between body and gravity was dissociated by repeating the experiment with observers in various orientations. Both the orientation of the visual background and the orientation of the subject relative to gravity had a significant effect on the orientation of the transition zones between the 'p' and 'd' interpretations of the character. The corresponding 'up' directions could be predicted from the weighted vector sum of the direction of gravity, the orientation of the body, and the orientation of the visual environment. A simple letter-recognition task was thus influenced by both the orientation of the visual background and the orientation of the observer. The reference direction of up, to which many perceptions, including letter recognition, are referred, can be predicted from a simple weighted sum of gravity, body orientation, and visual orientation.
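The "halfway between the two transition points" computation can be sketched as follows, assuming (as in the abstract) that 0 deg denotes a screen-upright 'p' and that the two transitions bracket the 'p' region on either side of 0 deg.

    def perceived_up_from_transitions(t1_deg, t2_deg):
        # The 'p' interpretation holds from t2 up through 0 deg to t1,
        # so perceived 'up' is the midpoint of that arc.  Expects
        # 0 < t1 < 180 < t2 < 360, matching the tested ranges.
        return ((t1_deg + t2_deg - 360.0) / 2.0) % 360.0

    assert perceived_up_from_transitions(80.0, 280.0) == 0.0    # upright
    assert perceived_up_from_transitions(100.0, 300.0) == 20.0  # tilted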
Hogue, A., Jenkin, M. R. and Allison, R. S., An optical-inertial tracking system for fully-enclosed VR displays, Proc. CRV'2004, London, Ontario, 2004.
This paper describes a hybrid optical-inertial tracking technology for fully-immersive projective displays. To be tracked, the operator wears a 3DOF commercial inertial tracking system coupled with a set of laser diodes arranged in a known configuration. The projection of this laser constellation onto the display walls is tracked visually to compute the 6DOF absolute head pose of the user. The absolute pose is combined with the inertial tracker data using an extended Kalman filter to maintain a robust estimate of position and orientation. This paper describes the basic tracking system, including the hardware and software infrastructure.
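The flavor of the fusion can be shown with a one-dimensional stand-in; the actual system runs a full 6DOF extended Kalman filter, and the noise values below are illustrative only.

    def fuse_step(x, P, rate_inertial, pose_optical, dt, q=1e-3, r=1e-2):
        # One predict/correct cycle of a scalar Kalman filter: the
        # inertial rate drives the prediction, and the absolute pose
        # recovered from the laser constellation supplies the
        # correction.
        x = x + rate_inertial * dt        # predict from inertial data
        P = P + q                         # grow the state uncertainty
        K = P / (P + r)                   # Kalman gain
        x = x + K * (pose_optical - x)    # correct with optical pose
        P = (1.0 - K) * P
        return x, P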
Jenkin, M. and Dymond, P., One-time pads for secure communication in ubiquitous computing, Proc. IASTED Advances in Computer Science and Technology, ACST 2004, St. Thomas, Nov. 22-24, 2004.
Although the Internet can be used to provide high connectivity between parties, it does not always provide strong protection for private communications. Here we describe a strong cryptographic solution to this problem using one-time pads. One-time pads provide cryptographically secure communication and can be implemented using low-power computing devices such as Pocket PCs, Palm-compatible devices, and even ubiquitous-computing devices such as Javacard and iButton. A method for the cryptographically secure transmission of data between remote users is described, and prototype implementations are presented for standard computational platforms, handheld devices, and Javacard-enabled devices.
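The core one-time-pad operation is simple enough to sketch; this shows the generic XOR pad, not the paper's particular protocol or key-distribution scheme.

    import secrets

    def otp_crypt(data: bytes, pad: bytes) -> bytes:
        # One-time-pad encryption/decryption: XOR each byte with a pad
        # byte.  XOR is its own inverse, so the same function decrypts.
        # Security requires a truly random pad at least as long as the
        # message, shared secretly in advance and never reused.
        if len(pad) < len(data):
            raise ValueError("pad must be at least as long as the message")
        return bytes(b ^ k for b, k in zip(data, pad))

    pad = secrets.token_bytes(32)        # distributed out-of-band beforehand
    ciphertext = otp_crypt(b"meet at dawn", pad)
    assert otp_crypt(ciphertext, pad) == b"meet at dawn"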