2006

  1. Kapralos, B., Jenkin, M., and Milios, E., The Sonel Mapping Acoustical Modeling Method. Technical Report CSE-2006-10, Computer Science and Engineering, York University, 2006.
    This work investigates the application of photon mapping to the modeling of environmental acoustics. The resulting sonel mapping technique is a Monte-Carlo-based approach that can be used to model acoustic environments while accounting for diffuse and specular acoustic reflections as well as diffraction effects. This modeling is performed in an efficient manner, in contrast to available deterministic techniques. The sonel mapping approach models many of the subtle interaction effects required for realistic acoustical modeling. (An illustrative sketch of the stochastic reflection step appears after this list.)
  2. Hogue, A., and Jenkin, M., Development of an underwater vision sensor for 3D reef mapping, Proc. IEEE/RSJ IROS, Beijing, China, 2006.
    Coral reef health is an indicator of global climate change, and coral reefs themselves are important for sheltering fish and other aquatic life. Monitoring reefs is a time-consuming and potentially dangerous task, and as a consequence autonomous robotic mapping and surveillance is desired. This paper describes an underwater vision-based sensor to aid in this task. Underwater environments present many challenges for vision-based sensors and robotic vehicles. Lighting is highly variable, optical snow/particulate matter can confound traditional noise models, the environment lacks visual structure, and limited communication between autonomous agents, divers, and surface support exacerbates the potentially dangerous environment. We describe experiments with our multi-camera stereo reconstruction algorithm geared towards coral reef monitoring. The sensor is used to estimate volumetric scene structure while simultaneously estimating sensor ego-motion. Preliminary field trials indicate the utility of the sensor for 3D reef monitoring, and results of a land-based evaluation of the sensor are presented to assess the accuracy of the system.
  3. Kapralos, B., Jenkin, M., and Milios, E., Sonel mapping: a stochastic acoustical modeling system, Proc. ICASSP, Toulouse, France, 2006.
    Modeling the acoustics of an environment is a complex and challenging task. Here we describe the sonel mapping approach to acoustical rendering. Sonel mapping is a Monte-Carlo-based approach to modeling diffuse reflection, specular reflection, absorption, and diffraction effects in an efficient manner. The approach models many of the subtle interaction effects required for realistic acoustical modeling, and it is computationally efficient enough to be used to acoustically model interactive virtual environments.
  4. Xu, W., Jenkin, M., and Lesperance, Y., A multi-channel algorithm for edge detection under varying lighting conditions. Proc. IEEE CVPR, 2006.
    In vision-based autonomous spacecraft docking, multiple views of the scene captured with the same camera and scene geometry are available under different lighting conditions. These "multiple exposure" images must be processed to localize the visual features used to compute the pose of the target objects. This paper describes a robust multi-channel edge detection algorithm that localizes the structure of the target objects from the local gradient distribution computed over these multiple-exposure images. Relative to the use of a single image, this approach reduces the effects of illumination variation, including the effects of shadow edges. Experiments demonstrate that this approach has a lower false detection rate than the average response of the Canny edge detector applied to the individual images separately. (An illustrative sketch of combining gradients across exposures appears after this list.)
  5. Hogue, A., German, A., Zacher, J., and Jenkin, M., Underwater 3D mapping: Experiences and lessons learned. Proc. Canadian Conference on Computer and Robot Vision. 2006.
    This paper provides details on the development of a tool to aid in 3D coral reef mapping, designed to be operated by a single diver and later integrated into an autonomous robot. We discuss issues that influence the development and deployment of underwater sensor technology for 6DOF hand-held and robotic mapping. We describe our current underwater vision-based mapping system and some of our experiences and lessons learned, and discuss how this knowledge is being incorporated into our underwater sensor.
  6. Chopra, A., Obsniuk, M., and Jenkin, M. The Nomad 200 and the Nomad SuperScout: Reverse engineered and resurrected. Proc. Canadian Conference on Computer and Robot Vision. 2006.
    The Nomad 200 and the Nomad SuperScout are among the most popular platforms used for research in robotics. Built in the early 1990s, they were the base of choice for many mobile robotics researchers. Unfortunately, the lack of support and of proper documentation for enhancing the computing power of these robots has meant that they have slowly faded into oblivion. At York University we have four robots from Nomadic Technologies Inc., and rather than allowing our old robots to just rust away, we decided to breathe new life back into them. In this paper we present the techniques we used to resurrect our old Nomads.
  7. Saez, J. M., Hogue, A., Escolano, F., and Jenkin, M., Underwater 3D SLAM through entropy minimization. Proc. IEEE ICRA 2006.
    The aquatic realm is ideal for testing autonomous robotic technology. The challenges presented in this environment are numerous due to the highly dynamic nature of the medium. Applications for underwater robotics include the autonomous inspection of coral reefs, ships, and pipelines, and other environmental assessment programs. In this paper we present current results in using 6DOF Entropy Minimization SLAM (Simultaneous Localization and Mapping) for creating dense 3D visual maps of underwater environments that are suitable for such applications. The proposed SLAM algorithm exploits dense information coming from a stereo system, and performs robust egomotion estimation and global rectification following an optimization approach.
  8. Dyde, R. T., Jenkin, M. R., Jenkin, H. L., Zacher, J. E., and Harris, L. R., The role of visual background orientation on the perceptual upright during microgravity. Proc. VSS 2006, Sarasota, FL. J. of Vision, 6: 183a, 2006.
    The perceptual upright (PU) -- the orientation in which an object is most easily and naturally recognized -- is determined by a combination of the orientation of the body, the visual background, and gravity. PU can be assessed by identifying a character the identity of which depends on its orientation (the Oriented Character Recognition Test: OCHART, Dyde et al. VSS 2004. J. Vision, 4(8), 385a). Using OCHART we measured the influence of the orientation of the visual background on the PU in the fronto-parallel plane under conditions where gravity was irrelevant (when the character was presented orthogonal to gravity, with the subject lying supine); or not present (during exposure to microgravity created during parabolic flight). When supine in 1g the influence of the background on the PU was reliably greater than when the observer was upright in 1g. In microgravity the influence of the background on PU was reliably less than in the equivalent 1g state; curiously a similar reduction relative to the 1g condition was also found during the hyper-gravity phase of parabolic flight. These perceptual changes are consistent with an increase in the use of the body as a reference frame when gravity is changed. The effects of microgravity in the fronto-parallel plane cannot be simulated by simply arranging gravity to be orthogonal to that plane by lying supine.
  9. Harris, L. R., Dyde, R. T., and Jenkin, M. R. M., Where's the floor? Proc. VSS 2006, Sarasota, FL. J. of Vision, 6: 731a, 2006.
    The floor of a room is the surface that is most likely to provide support. What is the contribution of the room's structural features to the perception of which surface this is? Using the Immersive Visual Environment at York (IVY), twelve subjects were placed in three simulated box-like rooms with no features. The rooms had a constant depth and a height-to-width ratio that varied from 1:1 to 1:3, and were presented at different roll orientations in an interleaved manner. The far wall was coloured purple and the other four visible surfaces were randomly assigned one of four colours on each trial; subjects indicated 'the floor' by pressing correspondingly coloured buttons on a game-pad. Each surface was described by its normal vector. The vectors of the chosen surfaces were summed to provide the average orientation of the perceived floor for each room orientation. We tested three models of how people determine the floor. Subjects might choose the surface (1) closest to orthogonal to gravity (flipping point of wall-to-floor at 45 deg), (2) closest to orthogonal to gravity on each side of the room's diagonal (flipping point when the diagonal of the room is vertical), or (3) based on a weighting function dependent on each surface's length and orientation. Contrary to expectations, subjects did not necessarily choose the surface closest to orthogonal to gravity. The weighted-surface model best described the data, with each surface being weighted by its relative length raised to the power 1.25 (r² = 0.9). (An illustrative sketch of this weighting appears after this list.)
  10. Dyde, R. T., Jenkin, M. R. and Harris, L. R. The subjective visual vertical and the perceptual upright. Exp. Brain Res., 173: 612-622, 2006.
    The direction of "up" has traditionally been measured by setting a line (luminous if necessary) to the apparent vertical, a direction known as the "subjective visual vertical" (SVV); however for optimum performance in visual skills including reading and facial recognition, an object must to be seen the "right way up" a separate direction which we have called the "perceptual upright" (PU). In order to measure the PU, we exploited the fact that some symbols rely upon their orientation for recognition. Observers indicated whether the symbol "(a sideways p)" presented in various orientations was identified as either the letter "p" or the letter 'd". The average of the transitions between "p-to-d" and d-to-p" interpretations was taken as the PU. We have labelled this new experimental technique the Oriented CHAracter Recognition Test (OCHART). The SVV was measured by estimating whether a line was rotated clockwise or counter-clockwise relative to gravity. We measured the PU and SVV while manipulating the orientation of the visual background in different observer postures: upright, right side down and (for the PU) supine. When the body, gravity and the visual background were aligned, the SVV and the PU were similar, but as the background orientation and observer posture orientations diverged, the two measures varied markedly. The SVV was closely aligned with the direction of gravity whereas the PU was closely aligned with the body axis. Both probes showed influences of all three cues (body orientation, vision and gravity) and these influences could be predicted from a weighted vectorial sum of the directions indicated by these cues. For the SVV, the ratio was 0.2:0.1:1.0 for the body, visual and gravity cues, respectively. For the PU, the ratio was 2.6:1.2:1.0. In the case of the PU, these same weighting values were also predicted by a measure of the reliability of each cue; however, reliability did not predict the weightings for the SVV. This is the first time that maximum likelihood estimation has been demonstrated in combining information between different reference frames. The OCHART technique provides a new, simple and readily applicable method for investigating the PU which complements the SVV. Our findings suggest that OCHART is particularly suitable for investigating the functioning of visual and non-visual systems and their contributions to the perceived upright of novel environments such as high- and low-g environments, and in patient and ageing populations, as well as for normal observers.
  11. Borzenko, O., Xu, W., Obsniuk, M., Chopra, A., Jasiobedzki, P., Jenkin, M. and Lesperance, Y. Lights and Camera: Intelligently controlled multi-channel pose estimation system. Proc. Int. Conf. on Computer Vision Systems, New York, 2006.
    Guiding the spacecraft docking process requires the use of sensors that estimate the relative position of the two vessels. This task is complicated by the widely variable on-orbit illumination. To combat this, controllable docking cameras are augmented by computer-controlled illuminants. But how should these illumination and capture parameters be controlled and how should the images obtained under different conditions be combined in order to estimate the relative pose of the vessels? We address these issues in the "Lights and Camera" system. Images captured with the same camera and scene geometry but under different lighting conditions are merged, and the resulting edges are used to estimate the target's pose. A high level controller monitors the imaging process and determines the set of images to capture and use for pose estimation. This paper describes the "Lights and Camera" system architecture and initial results of its operation on mockups of space hardware.
  12. Harris, L. R., Jenkin, M., Jenkin, H., Dyde, R. T. and Oman, C. M. Visual cues to the direction of the floor: implications for spacecraft design. Proc. 7th Symposium on the role of the vestibular organs in space exploration, Noordwijk, The Netherlands, 2006.
    The floor of a room is the surface that is most likely to provide support and defines the plane in which limbic head direction cells and place cells code orientation and navigation information. What is the contribution of the room's structural features to the perception of which surface this is? Using the Immersive Visual Environment at York (IVY), twelve subjects were placed in three simulated box-like rooms with no features. The scene was rendered in stereo and was viewed through shutter glasses with a field of approximately 60 x 110 degs. The surfaces that made up the walls, floor and ceiling of the room were each painted a solid colour. The far wall was coloured purple and the other four visible surfaces were randomly assigned one of four colours (red, green, blue or yellow) on each trial. The rooms had a constant depth and a height-to-width ratio that varied from 1:1 to 1:3. The rooms were presented ten times at each of a number of different roll orientations in an interleaved manner. The room was presented for 500 ms, after which it was replaced with a blank screen of equal luminance. Subjects indicated which surface appeared to be 'the floor' by pressing correspondingly coloured buttons on a game-pad.

    To analyse the data, each surface was described by its normal vector. The vectors of the surfaces that were chosen for each room structure and orientation were summed to provide the average orientation of the perceived floor for that room configuration.

    In weightlessness, when subjects experience visual reorientation illusions, the most overt change in perception is that walls, ceilings, and floors change identities. This can happen in 1-G experiments in a tumbling room too, but the phenomenon has never been quantified before.

    We tested three models of how people might determine the floor. Subjects might choose the surface (1) closest to orthogonal to gravity (in which case, when our bare room was tilted at 45 deg, the two surfaces equally close to orthogonal to gravity would be equally likely to be chosen as the floor, independent of other features of the room such as its aspect ratio or depth), (2) closest to orthogonal to gravity on each side of the room's diagonal (in which case, when the room was tilted so that the diagonal was orthogonal to gravity, the two surfaces on either side of it would be equally likely to be chosen as the floor; the orientation of the diagonal depends on the aspect ratio of the room), or (3) based on a weighting function dependent on each surface's area and orientation. Contrary to expectations, subjects did not necessarily choose the surface closest to orthogonal to gravity or to the diagonal. Instead, the weighted-surface model best described the data.

    The perceived direction of the floor seems to depend on the properties of the available surfaces in a predictable manner. Quantifying exactly how these relative weightings are determined, and how they might be influenced, may be consequential in limiting visual reorientation illusions and in providing a stable visual reference for orientation when other cues are not available.

  13. Dyde, R. T., Jenkin, M., and Harris, L. R. Measuring the perceptual upright while manipulating body orientation and the orientation of the visual background relative to gravity. Proc. 7th Symposium on the role of the vestibular organs in space exploration, Noordwijk, The Netherlands, 2006.
    The direction of 'up' has traditionally been measured by setting a line, luminous if necessary, to the subjective visual vertical (SVV) -- i.e. the perceived axis of gravity. It has been found that the relative direction of gravity, the observer's body orientation and the orientation of the ambient visual environment all influence the SVV. An alternative measure of the upward direction can be generated by determining the orientation of a character at which it is most easily recognised. By taking the letters 'p' and 'd', and determining the two orientations at which they are maximally confused, we can infer the orientation at which they are maximally differentiable. This new method -- the oriented character recognition technique (OCHART) -- has been used to measure what we have named the perceptual upright (PU). The technique was applied to 11 observers who were upright, lying in repose and supine while a background picture rich in polarity cues to the visual 'up' was presented in a series of 16 different orientations. When all the cues to upright -- the body orientation, gravity and the visual background -- are aligned, PU and SVV are also aligned. As these three sources of determining up are changed with respect to each other, the effects on PU and SVV are markedly different. In the case of the PU, the effect of manipulating these three contributors to up is closely predicted by a weighted vectorial sum of the directions indicated by each cue, with the direction of the body being the dominant factor and gravity and vision being roughly equally weighted. For the SVV a much more complex and less easily modelled result is found, but one which is dominated by the direction of gravity with residual influences of body and vision. These results suggest that as well as measuring different 'ups', PU and SVV differ in their sensitivity to manipulations of the contributing cues. As such, the PU provides a measure which is more amenable for use as a probe of the function and dysfunction of all the sensory contributors to defining up, for the maintenance of postural balance, and for distortions contingent upon perceptions of relative orientation.
  14. Dyde, R. T., Jenkin, M., Jenkin, H., Zacher, J. and Harris, L. R. The role of visual background orientation on the perceptual upright during parabolic flight. Proc. 7th Symposium on the role of the vestibular organs in space exploration, Noordwijk, The Netherlands, 2006.
    The perceptual upright (PU) -- the orientation in which an object is most easily and naturally recognized -- can be modelled as a combination of the orientation of the body, the visual background, and gravity. PU can be assessed by identifying a character the identity of which depends on its orientation ('The subjective visual vertical and the perceptual upright', (in press), Experimental Brain Research). Under the reduced gravity condition of parabolic flight, asking subjects to indicate their perceived orientation by 'setting a line to the vertical' as, for example, defined by the direction a ball would fall, causes them to complain that they don't have a clear sense of which way things might fall in weightlessness. However since the PU can be assessed by asking subjects whether a character is a 'p' or a 'd', the task remains clear. Using a small observer pool of six participants we measured the influence of the orientation of the visual background on the PU under the low gravity conditions obtained in parabolic flight. Observers were tested in the fronto-parallel plane under conditions where: gravity was irrelevant (when all stimuli were presented orthogonal to gravity, with the subject lying supine); gravity was reduced (during exposure to microgravity created during parabolic flight); or gravity was increased (during the hyper-gravity phase of parabolic flight). When supine in 1g the influence of the background on the PU was reliably greater than when the observer was upright in 1g suggesting a relative increase in the effect of vision on PU in the absence of gravity. However in actual microgravity the influence of the background on PU was reliably less than in the equivalent 1g state; curiously a similar reduction relative to the 1g condition was also found during the hyper-gravity phase of parabolic flight. These perceptual changes are consistent with an increase in the use of the body as a reference frame when gravity is changed but they also suggest that the effects of microgravity cannot be simulated by simply arranging gravity to be orthogonal to that plane through lying supine.
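
Illustrative sketch for the sonel mapping entries (items 1 and 3). This is not the authors' implementation; it only sketches the kind of Russian-roulette decision a Monte-Carlo acoustic tracer can make at each surface interaction, choosing between absorption and a diffuse or specular reflection. All names (Surface, interact, and so on) are hypothetical.

import math
import random

# Hypothetical surface description; the field names are illustrative and are
# not taken from the papers.
class Surface:
    def __init__(self, absorption, diffusion, normal):
        self.absorption = absorption  # fraction of incident energy absorbed, in [0, 1]
        self.diffusion = diffusion    # fraction of reflections that are diffuse, in [0, 1]
        self.normal = normal          # unit surface normal, a 3-tuple of floats

def reflect_specular(direction, normal):
    """Mirror the incoming direction about the surface normal."""
    d = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2.0 * d * ni for di, ni in zip(direction, normal))

def reflect_diffuse(normal):
    """Sample a random outgoing direction in the hemisphere around the normal."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        length = math.sqrt(sum(c * c for c in v))
        if 0.0 < length <= 1.0:
            v = tuple(c / length for c in v)
            if sum(vi * ni for vi, ni in zip(v, normal)) > 0.0:
                return v

def interact(direction, surface):
    """One Russian-roulette surface interaction.

    Returns the outgoing direction, or None if the sound particle is absorbed.
    """
    r = random.random()
    if r < surface.absorption:
        return None                                      # absorbed: the path ends here
    if r < surface.absorption + (1.0 - surface.absorption) * surface.diffusion:
        return reflect_diffuse(surface.normal)           # diffuse bounce
    return reflect_specular(direction, surface.normal)   # specular bounce

# Example: a mostly reflective, half-diffuse floor.
floor = Surface(absorption=0.1, diffusion=0.5, normal=(0.0, 0.0, 1.0))
outgoing = interact((0.7, 0.0, -0.7), floor)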
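
Illustrative sketch for the multi-channel edge detection entry (item 4). One simple way to combine gradient information across multiple-exposure images is to threshold a per-pixel statistic of the gradient magnitudes, on the assumption that structural edges recur across exposures while shadow edges do not. This is a toy stand-in, not the paper's algorithm; the function name and the choice of a median are assumptions.

import numpy as np

def multi_exposure_edges(images, threshold):
    """Combine gradient magnitudes across exposures and threshold the result.

    `images` is a list of 2-D float arrays of identical shape captured with
    the same camera geometry under different lighting.
    """
    magnitudes = []
    for image in images:
        gy, gx = np.gradient(image.astype(float))   # finite-difference gradients
        magnitudes.append(np.hypot(gx, gy))          # per-pixel gradient magnitude
    combined = np.median(np.stack(magnitudes, axis=0), axis=0)
    return combined > threshold                      # boolean edge map

# Usage with synthetic data: three "exposures" of a scene containing one
# vertical structural edge, rendered at different illumination gains.
rng = np.random.default_rng(0)
base = np.zeros((64, 64))
base[:, 32:] = 1.0
exposures = [base * gain + 0.02 * rng.standard_normal(base.shape)
             for gain in (0.5, 1.0, 1.5)]
edges = multi_exposure_edges(exposures, threshold=0.1)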
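
Illustrative sketch for the floor-perception entries (items 9 and 12). A minimal weighted-surface model scores each candidate surface by its relative length raised to the power 1.25 (the exponent reported in item 9) combined with how closely its normal points opposite to gravity. The way length and orientation are combined here is an assumption made for illustration, not the fitted model from the papers.

import math

def weighted_floor_choice(surfaces, exponent=1.25):
    """Pick the perceived floor from (relative_length, normal_angle_deg) pairs.

    The angle is between a surface's normal and gravitational "up".  Each
    surface is scored by its relative length raised to `exponent` times how
    closely its normal points upward; the highest-scoring surface is returned.
    The multiplicative combination is an illustrative assumption.
    """
    best_index, best_score = None, -1.0
    for index, (rel_length, angle_deg) in enumerate(surfaces):
        upwardness = max(0.0, math.cos(math.radians(angle_deg)))
        score = (rel_length ** exponent) * upwardness
        if score > best_score:
            best_index, best_score = index, score
    return best_index

# Example: a 1:3 room rolled by 45 deg.  Both candidate surfaces are equally
# close to orthogonal to gravity, but the longer surface (index 0) is chosen.
print(weighted_floor_choice([(3.0, 45.0), (1.0, 45.0)]))   # -> 0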
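
Illustrative sketch for the perceptual upright entries (items 10 and 13). The weighted vectorial sum of the body, visual, and gravity cue directions can be computed directly; the weight triples below use the ratios reported in item 10 (roughly 0.2:0.1:1.0 for the SVV and 2.6:1.2:1.0 for the PU, ordered body:vision:gravity). The function name and angle convention are assumptions.

import math

def weighted_cue_direction(body_deg, vision_deg, gravity_deg, weights):
    """Weighted vectorial sum of the body, visual and gravity cue directions.

    Angles are in degrees in the fronto-parallel plane; `weights` is a
    (body, vision, gravity) triple.  Returns the direction of the resultant
    vector in degrees.
    """
    x = y = 0.0
    for angle_deg, w in zip((body_deg, vision_deg, gravity_deg), weights):
        x += w * math.cos(math.radians(angle_deg))
        y += w * math.sin(math.radians(angle_deg))
    return math.degrees(math.atan2(y, x))

# Example: observer lying right side down (body at 90 deg) with an upright
# visual background and gravity at 0 deg.  The PU weighting pulls the
# prediction strongly toward the body axis; the SVV weighting stays near gravity.
print(weighted_cue_direction(90.0, 0.0, 0.0, (2.6, 1.2, 1.0)))  # roughly 50 deg
print(weighted_cue_direction(90.0, 0.0, 0.0, (0.2, 0.1, 1.0)))  # roughly 10 deg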