2012

  1. Nakano, D., Lam, J., Kanev, K., Kapralos, B., Collins, K., Hogue, A. and Jenkin, M. A framework for sound localization experiments and automation. Proc. Joint. Int. Conf. on Human-Centred Computer Environments. Aizu-Wakamatsu, Japan, March 2012.
    Table-top computing has been growing slowly in popularity for the last decade and is poised to make in-roads into the consumer market soon, opening up another new market for the games industry. However, before surface computers become widely accepted, many questions with respect to sound production and reception for these devices need to be explored. Here, we describe two experiments that examine sound localization on a horizontal (table-top computer) surface. In the first experiment we collect "ground truth" data regarding physical sound source localization by employing a computer-controlled grid of 25 equally spaced loudspeakers. In the second experiment we investigate virtual sound source localization using a bilinear interpolation amplitude panning method and a modified quadraphonic loudspeaker configuration in which a loudspeaker is positioned at each corner of the surface so that it emanates sound in an "upwards" direction. The results indicate that sound localization of virtual sound sources on a horizontal surface is prone to errors, and this is confirmed by our physical sound source "ground truth" data.
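    Bilinear interpolation amplitude panning of the general kind described above can be sketched as follows; the corner ordering, power normalization, and function name are illustrative assumptions, not the paper's implementation:

```python
import math

def bilinear_pan(x, y, width, height):
    """Amplitude gains for four corner loudspeakers on a width x height
    surface, for a virtual source at (x, y).

    Corner order: (0, 0), (width, 0), (0, height), (width, height).
    Gains are the bilinear interpolation weights of the source
    position, normalized so total power (sum of squared gains) is 1.
    """
    u, v = x / width, y / height
    g = [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]
    norm = math.sqrt(sum(gi * gi for gi in g))
    return [gi / norm for gi in g]
```

    A source at the centre of the surface yields equal gains of 0.5 on all four loudspeakers, while a source at a corner drives that corner's loudspeaker alone.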
  2. Speers, A. and Jenkin, M. Tuning stereo image matching with stereo video sequence processing. Proc. Joint. Int. Conf. on Human-Centred Computer Environments. Aizu-Wakamatsu, Japan, March 2012.
    Algorithms for stereo video image processing typically assume that the various tasks (calibration, static stereo matching, and egomotion estimation) are independent black boxes. In particular, the task of computing disparity estimates is normally performed independently of ongoing egomotion and environmental recovery processes. Can information from these processes be exploited in the notoriously hard problem of disparity field estimation? Here we explore the use of feedback from the environmental model being constructed to the static stereopsis task. A prior estimate of the disparity field is used to seed the stereo matching process within a probabilistic framework. Experimental results on simulated and real data demonstrate the potential of the approach.
  3. Wang, H., Jenkin, M. and Dymond, P. Enhancing exploration in topological worlds with multiple immovable markers. Proc. Canadian Conference on Computer and Robot Vision. Toronto, Ontario. 2012.
    The fundamental problem in robotic exploration and mapping of an unknown environment is answering the question 'have I been here before?', also known as the 'loop closing' problem. One approach to answering this question in embedded topological worlds is to resort to an external marking aid that can help the robot disambiguate places. This paper investigates the power of different marker-based aids in topological exploration. We describe enhanced versions of edge- and vertex-based marker algorithms and demonstrate algorithms with improved bounds on the number of markers and motions required to map an embedded topological environment.
  4. Yang, J., Dymond, P. and Jenkin, M. Reaching analysis of wheelchair users using motion planning methods. Proc. ICOST 2012, Italy. Published in Impact Analysis of Solutions for Chronic Disease Prevention and Management. Lecture Notes in Computer Science (LNCS), 7251: 234-237.
    For an environment to be well suited for wheelchair use, it must not only be sufficiently clear of obstacles that the wheelchair can navigate it, but also be properly designed so that critical devices such as light switches can be reached by the hand of the wheelchair user. Given a goal location, explicitly calculating a path of the wheelchair and the person sitting in it so that they can reach the goal is not a trivial task. In this paper, we augment a Rapidly-exploring Random Tree (RRT) planner with a goal region generation stage that encourages the RRT to grow toward configurations from which the wheelchair user can reach the goal point. The approach is demonstrated in simulated 3D environments.
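    A minimal 2D sketch of an RRT whose sampling is biased toward a goal region; the point-robot state space, parameter values, and helper callables are illustrative assumptions (the paper plans for a full wheelchair-and-user model in 3D):

```python
import math
import random

def rrt(start, sample_free, sample_goal, in_goal,
        step=0.1, goal_bias=0.2, max_iters=5000):
    """Minimal 2D RRT with a goal-region bias: with probability
    `goal_bias`, samples are drawn from the goal region instead of the
    whole free space, encouraging growth toward configurations from
    which the goal is reachable.  Returns a path from `start` into the
    goal region, or None if none was found."""
    parent = {start: None}                       # node -> parent node
    for _ in range(max_iters):
        q = sample_goal() if random.random() < goal_bias else sample_free()
        near = min(parent, key=lambda n: math.dist(n, q))
        d = math.dist(near, q)
        if d < 1e-12:
            continue
        s = min(step, d) / d                     # steer at most `step` toward q
        new = (near[0] + (q[0] - near[0]) * s,
               near[1] + (q[1] - near[1]) * s)
        parent[new] = near
        if in_goal(new):                         # reconstruct path on success
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

    For example, with `sample_goal` always returning a single goal point and `in_goal` testing distance to it, the tree grows from the start toward that point while still exploring the free space.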
  5. Harris, L. R., Herpers, R., Jenkin, M., Allison, R. S., Jenkin, H., Kapralos, B., Scherfgen, D. and Felsner, S. Optic flow and self motion perception: the contribution of different parts of the field. Society for Neuroscience, New Orleans, 2012.
    Background
    Moving in the world generates optic flow on the retina which contains important information about the self motion that created it, providing proprioceptive feedback about heading direction and distance moved. Different parts of the retina are known to be differentially sensitive to optic flow. For example, motion in the nasal retina and the upper fields is known to be more effective at generating optokinetic eye movements than motion in the lower field. The effect of different retinal regions on the perception of self orientation appears to be additive (Dearing and Harris, Vis. Res. 2011, 51: 2205). Here we examined the effectiveness of optic flow in different retinal regions in determining the perception of distance traveled.

    Methods
    Twelve subjects sat on a stationary bicycle in a "CUBE" display which provided a virtual reality presentation of moving in a corridor (8' wide) or over a ground plane from which various sections could be removed. Subjects viewed the flat display dichoptically or wore an eye patch over one eye (monocular condition) and were shown a target simulated at 8, 16, 24 or 32 m. The display was yoked to a head tracker so that absolute depth could be estimated from head motion. The target then disappeared and optic flow compatible with forward motion at 1 or 2 m/s was presented. Subjects indicated when they had "moved" through the target distance. Data were fitted by the Lappe et al. (2007, EBR 180: 35) leaky spatial integrator model, from which spatial integration constants and sensory gains were obtained.

    Results
    Monocularly viewed optic flow appeared to be more effective at producing motion than when both eyes were open. Monocular viewing resulted in substantial errors in which subjects felt they had moved much further than the simulated motion (mean gain 1.3). Monocularly, for movement over the ground plane, optic flow in the upper field was more effective than in the lower field for full field motion (for movement at 1m/s). However, for movement in a closed corridor, and for high velocity conditions, motion in the upper field was only marginally more effective than motion in the lower field. Optic flow on the nasal retina was no more effective than on the temporal retina.

    Discussion
    These results are discussed in terms of the connection between self motion, depth perception and the generation of compensatory eye movements.

  6. Harris, L. R., Herpers, R., Jenkin, M., Allison, R. S., Jenkin, H., Kapralos, B., Scherfgen, D. and Felsner, S. The relative contributions of radial and laminar optic flow to the perception of linear self-motion. J. of Vision, 12: 1-10, 2012.
    When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40 deg (h) x 24 deg (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled. A locally cached version can be found here.
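    The behaviour of a leaky spatial integrator can be illustrated with a small numerical sketch; the particular parameterization below (perceived distance P accumulating as dP/dx = gain - alpha * P until it reaches the target distance) is an assumption in the spirit of the model, not the paper's fitted form:

```python
import math

def response_distance(target, gain, alpha, dx=0.001):
    """Integrate a leaky spatial integrator dP/dx = gain - alpha * P,
    where P is perceived distance traveled and x is actual distance
    traveled, and return the actual distance x at which P first
    reaches `target` (the point where the observer responds)."""
    P, x = 0.0, 0.0
    while P < target:
        if gain - alpha * P <= 1e-9:
            return math.inf        # integrator saturates below the target
        P += (gain - alpha * P) * dx
        x += dx
    return x
```

    With alpha = 0 the model reduces to x = target / gain, so a sensory gain above 1 makes the observer respond before covering the full simulated distance, matching the "arrived too early" pattern described above.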
  7. Wang, H., Jenkin, M. and Dymond, P. Enhancing exploration of topological worlds with an immovable marker. Proc. World Conference on Information Technology. Barcelona, Spain, 2012.
    The fundamental problem in robotic exploration and mapping of an unknown environment is answering the question 'have I been here before?', also known as the 'loop closing' problem. One approach to answering this question in embedded topological worlds is to use an external marking aid that can help the robot disambiguate locations. This paper develops novel techniques that enable a robot to map the world with a single immovable marker. We describe enhanced versions of the single immovable marker algorithm described in [4][5][9] and demonstrate that the algorithm can be greatly improved in terms of the number of motions used by the robot to map the environment. A copy of the poster presented at the conference is available here.
  8. Harris, L. R., Jenkin, M. R. M. and Dyde, R. T. The perception of upright under lunar gravity. J. Gravitational Physiology 19: 9-16.
    The perceived direction of "up" is determined by a weighted sum of the direction of gravity, visual cues to the orientation of the environment, and an internal representation of the orientation of the body known as the idiotropic vector (18). How does the magnitude of gravity contribute to the assignment of the relative weights? In a previous study we showed that under microgravity less weighting is given to the visual cue than under normal gravity (6). Here we measured the weighting assigned to visual cues in determining the perceptual upright during periods of lunar gravity created by parabolic flight. The emphasis placed on the visual cue was not significantly affected by lunar gravity compared to under normal gravity (during level flight). This finding is discussed in terms of multisensory cue weighting and attempts to simulate reduced gravity levels by bedrest in which the influence of gravity along the long axis of the body is reduced by lying supine.
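    The weighted-sum model of the perceptual upright can be sketched in two dimensions; the unit-vector representation and the specific weights used below are illustrative assumptions (in the paper the weights are estimated psychophysically and compared across gravity levels):

```python
import math

def perceived_up(gravity, vision, idiotropic, w_g, w_v, w_i):
    """Perceived 'up' as the normalized weighted vector sum of three
    directional cues (2D unit vectors here for simplicity): the
    direction of gravity, the orientation of the visual environment,
    and the idiotropic (body-axis) vector."""
    x = w_g * gravity[0] + w_v * vision[0] + w_i * idiotropic[0]
    y = w_g * gravity[1] + w_v * vision[1] + w_i * idiotropic[1]
    n = math.hypot(x, y)
    return (x / n, y / n)
```

    Lowering w_v relative to w_g and w_i, as observed under microgravity, shifts the perceived upright away from a tilted visual scene and toward the gravity and body axes.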