2008

  1. Kwietniewski, M., Wilson, S., Topol, A., Gill, S., Gryz, J., Jenkin, M., Jasiobedzki, P. and Ng, H.-K., MED: A multimedia event database for 3d crime scene representation and analysis. Proc. 24th Int. Conf. on Data Engineering, Cancun, Mexico.
    The development of sensors capable of obtaining 3D scans of crime scenes is revolutionizing the ways in which crime scenes can be analyzed, and at the same time is driving the need for sophisticated tools to represent and store this data. Here we describe the design of a multimedia database suitable for representing and reasoning about crime scene data. The representation is grounded in the physical environment that makes up the crime scene and provides mechanisms for representing both traditional (forms-based) data as well as 3D scans and other complex spatial data.
  2. Jenkin, M., Tsotsos, J., Andreopoulos, A., Rotenstein, A., Robinson, M., and Laurence, J., A large-scale touch sensitive display, Proc. INFOS'2008, Cairo, Egypt.
    Large scale displays present a number of challenges in terms of physical construction and software control. This paper describes a software and hardware infrastructure that supports extremely large interactive display surfaces. The resulting device is capable of supporting multiple users interacting with the display surface simultaneously and the display of complex interactive graphics over expansive high resolution displays. A specific implementation measuring 36"x54" with a resolution of 5040x3150 pixels is presented.
  3. Wang, H., Jenkin, M. and Dymond, R., Enhancing exploration in graph-like worlds. Proc. Computer and Robot Vision, 2008. Windsor, ON.
    This paper explores two enhancements that can be made to single and multiple robot exploration in graph-like worlds. One enhancement considers the order in which potential places are explored, and the other exploits local neighbor information to help disambiguate possible locations. Empirical evaluations show that both enhancements can produce a significant reduction in exploration effort, measured as the number of mechanical steps required, over the original exploration algorithms, and that for some environments a reduction of up to 60% in mechanical steps can be achieved.
  4. Gill, S., and Jenkin, M. Polygonal meshing for 3D stereo video sensor data. Proc. Computer and Robot Vision, 2008. Windsor, ON.
    Many visually-guided robotic systems rely on stereo video data streams to obtain surface models of environmental structure. However, stereo video-based scanners are noisier than laser-based scanners and are subject to areas of sparse point information corresponding to textureless or specular surfaces. This complicates the process of constructing polygonal meshes from these point clouds. This paper develops an approach to meshing for stereo video surface reconstruction that addresses these issues and, by exploiting the known egomotion of the sensor, obtains surface normal and texture rendering information. This widens the range of applications to which stereo video reconstruction can be applied and opens the possibility of using stereo video to generate models for real-time rendering applications.
  5. Topol, A., Jenkin, M., Gryz, J., Wilson, S., Kwietniewski, M., Jasiobedzki, P., Ng, H.-K., and Bondy, M. Generating semantic information from 3D scans of crime scenes. Proc. Computer and Robot Vision, 2008. Windsor, ON.
    Recent advancements in laser and visible light sensor technology allow for the collection of photorealistic 3D scans of large scale spaces. This enables the technology to be used in real world applications such as crime scene investigation. The 3D models of the environment obtained with a 3D scanner capture visible surfaces but do not provide semantic information about salient features within the captured scene. Later processing must convert these raw scans into salient scene structure. This paper describes ongoing research into the generation of semantic data from the 3D scan of a crime scene to aid forensic specialists in crime scene investigation and analysis.
  6. Yang, J., Dymond, P. and Jenkin, M. Accessibility assessment via workspace estimation. Int. Journal of Smart Home, 2: 73-90, 2008.
    The process of evaluating a built environment for accessibility is known as "accessibility assessment." Determining accessibility is closely related to the problem of determining possible motions of a specific kinematic structure -- given an environment and a mobile device, how much of the environment is accessible? Given these similarities, here the accessibility assessment process is reformulated as a motion planning problem. Rather than treating each of the degrees of freedom 'equally' while planning, we explore a hierarchical characteristic of all of the degrees of freedom when constructing the roadmap. The approach is demonstrated on simulated environments as well as on a student residence at York University.
  7. Jenkin, M., Hogue, A., German, A., Gill, S., Topol, A. and Wilson, S. Modelling underwater structures, Int. J. Cognitive Informatics and Natural Intelligence, 2: 1-14, 2008.
    For systems to become truly autonomous, it is necessary that they be able to interact with complex real world environments. In this paper we investigate techniques and technologies to address the problem of the acquisition and representation of complex environments such as those found underwater. The underwater environment presents many challenges for robotic sensing including highly variable lighting and the presence of dynamic objects such as fish and suspended particulate matter. The dynamic six-degree-of-freedom nature of the environment presents further challenges due to unpredictable external forces such as current and surge. In order to address the complexities of the underwater environment we have developed a stereo vision-inertial sensing device that has been successfully deployed to reconstruct complex 3D structures in both the aquatic and terrestrial domains. The sensor combines 3D information, obtained using stereo vision, with 3DOF inertial data to construct 3D models of the environment. Semi-automatic tools have been developed to aid in the conversion of these representations into semantically relevant primitives suitable for later processing. Reconstruction and segmentation of underwater structures obtained with the sensor are presented.
  8. Haji-Khamneh, B., Dyde, R. T., Sanderson, J., Jenkin, M. R. M. and Harris, L. R. How long does it take for the visual environment to influence the perceptual upright? Proc. VSS 2008, Naples, FL.
    The perceptual upright (PU) (the orientation in which objects appear 'upright') is influenced by visual and non-visual cues concerning the orientation of an observer. The orientation of the visual background accounts for about 25% of the influence. How long does it take for the perception of upright to form? We used the OCHART method (Dyde et al. 2004 Exp Brain Res. 173: 612) in which subjects identified a character (p/d) the identity of which depended on its orientation. Using a three-field tachistoscope (Ralph Gerbrands, field of view 6.3 degs) subjects viewed the character against a background. Display times were varied from 50-600ms and were immediately followed by a mask. We used the method of constant stimuli with a range of character and background orientations each presented at least six times. From this, we could identify the orientation where the character was most easily identified (PU). There was no effect of the background at the shortest exposure times, even though the subject could comfortably identify the character. There was an increase in the size of the effect with increasing exposure duration with a time constant of about 200ms. Subjects are able to identify the gist of a background with an exposure of only 26ms (Joubert et al. 2007 Vis Res. 47: 3286). However, using information from the visual background to influence character recognition seems to take substantially longer than this. It is possible that different types of orientation cues differ in the time they take to be effective.
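    The ~200 ms time constant reported above suggests a saturating exponential rise of the background's influence with exposure duration. A minimal sketch of that reading follows; the functional form and the amplitude are assumptions for illustration, not values taken from the paper:

```python
import math

def background_effect(t_ms, amplitude=1.0, tau_ms=200.0):
    """Assumed saturating-exponential rise of the visual background's
    influence on the perceptual upright with exposure duration t_ms."""
    return amplitude * (1.0 - math.exp(-t_ms / tau_ms))
```

    Under this reading, at an exposure equal to the time constant (200 ms) the effect reaches about 63% of its asymptotic size, while at the shortest exposures (50 ms) it remains small, consistent with the null effect reported there.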
  9. Jenkin, H., Barnett-Cowan, M., Dyde, R., Jenkin, M., Harris, L. Left/right asymmetries in the contribution of body orientation to the perceptual upright. Proc. VSS 2008, Naples, FL.
    INTRODUCTION: The direction in which objects and characters are most easily recognized, the perceptual upright, has been modelled as a weighted vector sum of the directions defined by the body's long axis (egocentric), gravity, and visible cues (Dyde et al. 2006, Exp. Brain Res.). This model predicts symmetrical responses, such that subjects lying left or right side down relative to gravity should exhibit mirror symmetric patterns of responses. Such symmetry is also expected if torsional eye orientation, dependent upon body orientation relative to gravity or visual orientation relative to the body, is included in the model.
    METHODS: Nineteen subjects drawn from researchers and students at York University participated. The Oriented Character Recognition Test (OCHART - described in Dyde et al. 2006) was administered while subjects viewed several orientations of visual background while either upright, left side down, or right side down relative to gravity. OCHART identifies the perceptual upright using the perceived identity of letters.
    RESULTS: Responses revealed a systematic difference between the response pattern when lying left side down and lying right side down. This asymmetry can be modelled by a leftwise bias in the perceived orientation of the body relative to its actual orientation.
    DISCUSSION: The asymmetry in the effect of body orientation is reminiscent of the left-leaning asymmetry in determining the direction of light coming from above (Mamassian & Goutcher 2001 Cognition 81:B1). The asymmetry might reflect a similar tendency to perceive the body as tilted.
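    The weighted vector-sum model discussed above can be sketched as follows. The weights here are illustrative placeholders rather than the fitted values from Dyde et al.; angles are in degrees, with 0 taken as upright:

```python
import math

def perceptual_upright(body_deg, gravity_deg, vision_deg,
                       w_body=0.4, w_gravity=0.35, w_vision=0.25):
    """Weighted vector sum of the body, gravity, and visual cue
    directions; returns the direction of the resultant in degrees."""
    x = y = 0.0
    for angle, weight in ((body_deg, w_body),
                          (gravity_deg, w_gravity),
                          (vision_deg, w_vision)):
        x += weight * math.cos(math.radians(angle))
        y += weight * math.sin(math.radians(angle))
    return math.degrees(math.atan2(y, x))
```

    In this form, the leftward bias in perceived body orientation reported above would correspond to offsetting body_deg before summing, breaking the left/right mirror symmetry the unbiased model predicts.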
  10. Dudek, G. and Jenkin, M. Inertial sensors, GPS and odometry. In B. Sicillano and O. Khatib (Eds.) Springer Handbook of Robotics, Springer, 2008.
    This chapter examines how certain properties of the world can be exploited in order for a robot or other device to develop a model of its own motion or pose (position and orientation) relative to an external frame of reference. Although this is a critical problem for many autonomous robotic systems, the problem of establishing and maintaining an orientation or position estimate of a mobile agent has a long history in terrestrial navigation.
  11. Kapralos, B., Jenkin, M. and Milios, E. Virtual audio systems. Presence: Teleoperators and Virtual Environments, 17: 527-549, 2008.
    To be immersed in a virtual environment the user must be presented with plausible sensory input, including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space, despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or from a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of various approaches are presented.
  12. Gryz, J., Jasiobedzki, P., Jenkin, M., McDiarmid, C., Bondy, M., Ng, H.-K., Codd-Downey, R., Gill, S., Kwietniewski, M., Topol, A. and Wilson, S. 3D crime scene acquisition, representation and analysis. In K. Franke, S. Petrovic and A. Abraham (Eds.) Computational Forensics, Springer. (in press).
    This chapter describes results from the CBRN Crime Scene Modeller (C2SM) project -- a project whose goal is the development and field evaluation of technologies for real time data acquisition at CBRN crime scenes, and fast recreation of such scenes as virtual environments with access to all of the multi-modal data and heterogeneous evidence associated with the scene. The C2SM project leverages recent results in 3D scene modelling to develop an experimental system that supports 3D crime scene acquisition, representation and analysis. The original publication is available at www.springerlink.com.
  13. Kapralos, B., Jenkin, M. and Milios, E. Sonel Mapping: a probabilistic acoustical modeling method. Journal of Building Acoustics 15: 289-313, 2008.
    Sonel mapping is a Monte-Carlo-based acoustical modeling technique that approximates the acoustics of an environment while accounting for diffuse and specular reflections as well as diffraction effects. Through the use of a probabilistic Russian roulette strategy to determine the type of interaction between a sound and any objects/surfaces it may encounter, sonel mapping avoids the excessively large running times of deterministic techniques. Sonel mapping approximates many of the subtle interaction effects required for realistic acoustical modeling, yet due to its probabilistic nature it can be incorporated into interactive virtual environments, where accuracy is often traded for efficiency. Experimental results demonstrate the efficacy of the approach.
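    The Russian roulette strategy described above can be sketched as follows; the interaction types, their probabilities, and the function name are illustrative assumptions, not values from the paper:

```python
import random

def sonel_interaction(surface, rng=random.random):
    """Russian-roulette selection of how a sonel interacts with a surface.

    `surface` maps interaction types to probabilities summing to <= 1,
    e.g. {'diffuse': 0.3, 'specular': 0.4, 'diffraction': 0.1};
    any remaining probability mass corresponds to absorption,
    which terminates the sonel.
    """
    xi = rng()  # uniform random number in [0, 1)
    cumulative = 0.0
    for kind, probability in surface.items():
        cumulative += probability
        if xi < cumulative:
            return kind
    return 'absorbed'
```

    Because each sonel follows exactly one outcome per interaction rather than spawning separate reflected, diffracted, and absorbed components, the number of propagation paths stays bounded, which is the source of the running-time advantage over deterministic techniques.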
  14. Cowan, B., Shelley, M., Sabri, H., Kapralos, B., Hogue, A., Hogan, M., Jenkin, M., Goldsworthy, S., Rose, L. and Dubrowski, A. Interactive simulation environment for interprofessional education in critical care. Proc. ACM FuturePlay 2008 Conference on the Future of Game Design and Technology, pp. 260-261, Toronto, Canada, 2008.
    Interprofessional education is a pedagogical approach which allows health care practitioners to develop a clear understanding and appreciation of the roles, expertise, and unique contributions of their disciplines as well as those of the other participating health care providers. It also helps build effective team relationships, which are essential for optimal health care delivery. Currently, interprofessional education includes classroom teaching, clinical placements, and practice in simulated environments using both high and low fidelity simulated patients. These strategies are resource intensive and present significant challenges, including the release of team members from clinical responsibilities. Interactive virtual simulation environments, such as serious games, offer a feasible alternative to traditional methods, as multiple team members may participate in the simulation simultaneously regardless of their physical location or time of day. Here we describe an ongoing project that seeks to develop an interactive virtual simulation platform using serious games technology to augment the learning of the skills, knowledge and attitudes requisite in interprofessional education.
  15. Dudek, G., Prahacs, C., Saunderson, S., Giguere, P., Sattar, J., Jenkin, M. Amphibious robotic device. US Patent 7,427,220, 2008.
    A control system for a robotic device maneuverable in at least a liquid medium, the system having at least one visual sensor retrieving an image of the device's environment, an image analyzing module receiving the image, determining the presence of an object of a given type therein and analyzing at least one property of the object, a motion calculator determining a desired motion of the device based on the property, and a controller operating a propulsion system of the device to obtain the desired motion. Also, a legged robotic device having a control system including at least one sensor providing data about an environment of the device, the control system using sensor data to determine a desired motion of the device, determining a corresponding required leg motion of each of the legs to produce the desired motion, and actuating the legs in accordance with the corresponding required leg motion.