1993

  1. Jasiobedzki, P., Jenkin, M., Milios, E., Down, B., Tsotsos, J., and Campbell, T., Imaging and ranging apparatus and aiming method, Canadian Patent 2,105,501, 1993.
    An imaging and ranging apparatus has a sensor unit comprising a video camera in combination with an optical range-finder. Typically the range-finder is an infra-red laser and the camera is sensitive to visible light. The camera and the range-finder are arranged so that the optical axis of the camera is parallel or co-linear with the signal axis of the range-finder, or so that the signal axis of the range-finder passes through the focal point of the camera. Pan and tilt motors are provided for aiming the sensor unit. The apparatus may be used as a robot head or mounted on the end of a manipulator. To aim the apparatus, radiation is imaged from the field of view of the camera. A patch comprising an area of interest within the radiation image is stored, and an adjustment of the pan and tilt motors is estimated in order to relocate the patch to a position within said field of view where the patch is centred about the optical axis. The motors are then adjusted in accordance with the estimate and radiation is again imaged from the field of view. A patch in the position centred about the signal axis is compared with the previously stored patch and, based upon the comparison, it is determined whether further adjustment of the motors is necessary.
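    A minimal sketch of the aiming estimate described above, assuming a pinhole camera with focal length expressed in pixels; the function name, the small-angle geometry, and the numbers are illustrative assumptions, not taken from the patent.

        import math

        def pan_tilt_correction(dx_pixels, dy_pixels, f_pixels):
            """Estimate the pan/tilt adjustment (radians) that relocates a
            patch centred (dx, dy) pixels from the image centre onto the
            optical axis, under a pinhole model with focal length f_pixels."""
            pan = math.atan2(dx_pixels, f_pixels)   # horizontal offset -> pan
            tilt = math.atan2(dy_pixels, f_pixels)  # vertical offset -> tilt
            return pan, tilt

        # Example: a patch 40 px right of and 25 px above centre with an
        # 800 px focal length calls for about 2.9 deg of pan, -1.8 deg of tilt.
        pan, tilt = pan_tilt_correction(40.0, -25.0, 800.0)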
  2. Dudek, G., Jenkin, M., Milios, E., and Wilkes, D., Map validation and self-location in a graph-like world, Proc. 13th Int. Joint Conf. on Artificial Intelligence (IJCAI-93), 1648-1653, Chambéry, France, 1993.
    We present algorithms for the discovery and use of topological maps of an environment by an active agent (such as a person or a mobile robot). We discuss several issues dealing with the use of pre-existing topological maps of graph-like worlds by an autonomous robot and present algorithms, worst-case complexity, and experimental results (for representative real-world examples) for two key problems. The first of these problems is to verify that a given input map is a correct description of the world (the VALIDATION PROBLEM). The second is to determine the robot's physical position on an input map (the SELF-LOCATION PROBLEM). We present algorithms which require O(N^2) and O(N^3) steps, respectively, to validate a map and to locate the robot in it (where N is the number of places in the map).
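    The SELF-LOCATION PROBLEM invites a candidate-elimination treatment. The sketch below is a deliberately simplified illustration, not the paper's algorithm: it prunes position hypotheses only by vertex degree, whereas the paper's methods also exploit the ordering of edges at each place and a portable marker.

        def self_locate(neighbours, moves, observed_degrees):
            """Naive self-location in a graph-like world.

            neighbours: dict mapping each place to the ordered list of
                        adjacent places (edge order as seen by the robot).
            moves: the edge index chosen at each visited place.
            observed_degrees: the degree of each visited place
                              (length len(moves) + 1).
            Returns the set of places consistent with the observations."""
            hypotheses = {v for v in neighbours
                          if len(neighbours[v]) == observed_degrees[0]}
            for step, edge in enumerate(moves):
                surviving = set()
                for v in hypotheses:
                    if edge < len(neighbours[v]):
                        w = neighbours[v][edge]
                        if len(neighbours[w]) == observed_degrees[step + 1]:
                            surviving.add(w)
                hypotheses = surviving
            return hypotheses

        # Example: in the path graph a-b-c, seeing degree 1, taking edge 0,
        # then seeing degree 2 leaves b as the only consistent position.
        g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
        print(self_locate(g, [0], [1, 2]))  # {'b'}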
  3. Dudek, G., Jenkin, M., Milios, E., and Wilkes, D., Robust positioning with a multi-agent robotic system, Proc. IJCAI-93 Workshop on Dynamically Interacting Robots, 118-123, 1993.
    A collection of interacting autonomous robots can define a local coordinate system with respect to one another without reference to environment features. This simplifies tasks requiring robots to occupy or traverse a set of positions in the environment, such as mapping, conveyance and search. We argue for an approach to positioning in which sensing errors remain localized, and dead-reckoning plays no role. This involves a robot-based representation for the environment, in which metric information is used locally to determine the relative positions of neighbouring robots, but the global map is a graph, capturing the neighbour relations among the robots. We show that many tasks can be solved without reference to a global coordinate system, but that global metric maps may be constructed as desired, with small errors in the vicinity of any chosen position.
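    As an illustration of the closing claim, the sketch below (an assumption-laden toy, with the robots' frames taken as mutually aligned, which the paper does not require) builds a global metric map by composing pairwise displacement measurements outward from a chosen anchor robot; positional error accumulates with hop count, so it stays small in the vicinity of the anchor.

        from collections import deque

        def global_coordinates(measurements, anchor):
            """measurements: dict (i, j) -> (dx, dy), robot j's position as
            measured locally by robot i. Returns coordinates of every robot
            reachable through the measurement graph, anchor at the origin."""
            adj = {}
            for (i, j), (dx, dy) in measurements.items():
                adj.setdefault(i, []).append((j, dx, dy))
                adj.setdefault(j, []).append((i, -dx, -dy))
            pos = {anchor: (0.0, 0.0)}
            queue = deque([anchor])
            while queue:
                i = queue.popleft()
                xi, yi = pos[i]
                for j, dx, dy in adj.get(i, []):
                    if j not in pos:           # first path found fixes j
                        pos[j] = (xi + dx, yi + dy)
                        queue.append(j)
            return pos

        # Example: three robots in a line, one metre apart.
        print(global_coordinates({(0, 1): (1.0, 0.0), (1, 2): (1.0, 0.0)}, 0))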
  4. Dudek, G., and Jenkin, M., A multi-level development environment for mobile robotics, Proc. Int. Conf. on Intelligent Autonomous Systems: IAS-3, 542-550, Pittsburgh, PA, 1993.
    Mobile robotic devices combining sensors, actuators, and computers are unique, complex devices which may be difficult to model from an abstract point of view. This paper presents a software development system which builds an abstraction of a robotic environment. This abstraction allows external software to interact either with a simulated robot and environment or with a real robot complete with sensors. The implementation is distributed across a network, and allows software to run on remote hardware, thus taking advantage of any specialized hardware available on the network.
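    A sketch of the kind of abstraction layer the paper describes; the class and method names are hypothetical, but they show how client code written once against the abstraction can drive either a simulated or a physical robot.

        from abc import ABC, abstractmethod

        class Robot(ABC):
            """Common interface: client software sees only this abstraction."""
            @abstractmethod
            def move(self, distance_m: float) -> None: ...

            @abstractmethod
            def sense(self) -> list:
                """Return range readings from the robot's sensors."""

        class SimulatedRobot(Robot):
            """Stand-in for the simulator side; a real-robot class would
            implement the same interface over the network."""
            def __init__(self):
                self.x = 0.0
            def move(self, distance_m):
                self.x += distance_m          # idealized simulated motion
            def sense(self):
                return [10.0 - self.x]        # synthetic wall 10 m away

        def drive_to_wall(robot: Robot):
            """Client code: identical for simulated and real robots."""
            while robot.sense()[0] > 1.0:
                robot.move(0.5)

        drive_to_wall(SimulatedRobot())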
  5. Harris, L., and Jenkin, M. R. M. (Eds.), Spatial Vision in Humans and Robots, Cambridge University Press, 1993.
    Spatial vision is the field of science that deals with the problem of inferring the structure of the world from vision. The problem can be divided into many separate tasks, such as extracting information about three-dimensional objects, or object recognition. This book brings together papers from the 1991 York Conference on Spatial Vision in Humans and Robots. Spatial vision is of interest both to biological researchers, who investigate how the brain solves spatial problems, and to designers of robots and computer vision systems.
  6. Milios, E., Jenkin, M., and Tsotsos, J., Design and performance of TRISH, a binocular robot head with torsional eye movements, Int. J. Pattern Recognition and Artificial Intelligence, 7:51-68, 1993.
    We present the design of a controllable stereo vision head. TRISH (The Toronto IRIS Stereo Head) is a binocular camera mount, consisting of two fixed focal length color cameras with automatic gain control forming a verging stereo pair. TRISH is capable of version (rotation of the eyes about the vertical axis so as to maintain a constant disparity), vergence (rotation of each eye about the vertical axis so as to change the disparity), pan (rotation of the entire head about the vertical axis), and tilt (rotation of each eye about the horizontal axis). One novel characteristic of the design is that each camera can rotate about its own optical axis (torsion). Torsional movement makes it possible to minimize the vertical component of the 2D search which is associated with stereo processing in verging stereo systems.
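    A small sketch of why torsion helps, under standard epipolar-geometry assumptions rather than anything stated in the paper: rotating a camera about its optical axis so that the epipolar direction at the image centre lies along the scan lines reduces stereo matching near fixation to a one-dimensional horizontal search.

        import math

        def torsion_angle(epipolar_dx, epipolar_dy):
            """Rotation about the optical axis (radians) that maps the
            epipolar direction at the image centre onto the scan lines,
            removing the vertical component of the stereo search there."""
            return -math.atan2(epipolar_dy, epipolar_dx)

        # Example: an epipolar line climbing 1 px per 10 px of horizontal
        # travel calls for roughly -5.7 degrees of torsion.
        print(math.degrees(torsion_angle(10.0, 1.0)))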
  7. Nickerson, S. B., Jenkin, M., Milios, E., Down, B., Jasiobedzki, P., Jepson, A., Terzopoulos, D., Tsotsos, J., Wilkes, D., Bains, N., and Tran, K., ARK: Autonomous navigation of a mobile robot in a known environment, Proc. Int. Conf. on Intelligent Autonomous Systems: IAS-3, 288-296, Pittsburgh, PA, 1993.
    This paper gives an overview of the ARK (Autonomous Robot for a Known environment) project. The objective of the project is to build a mobile robot capable of navigating in partially known industrial environments using a variety of sensors, including colour video cameras, laser range finders, sonar, and infrared sensors, but without modification of the environment such as bar codes on the walls, magnetic strips beneath the floor, or active radio beacons. Because an industrial environment lacks the wall structure of an office or lab space, navigation by necessity has to rely on landmark detection, identification, and tracking. Two prototypes are under construction: ARK-1, a university-based mobile robot with off-board computing, and ARK-2, an industrial version of ARK-1 with most processing done on board. The main sensor of the robot consists of a colour camera with a zoom- and focus-controlled lens combined with a laser range finder, both of which are mounted on a pan-tilt unit. Other sensors include sonar, a network of infrared sensors providing protection from elevated protruding objects, and a floor sensor.
  8. Jenkin, M., Milios, E., Jasiobedzki, P., Bains, N. and Tran, K. Global navigation for ARK. Proc. 1993 IEEE/RSJ Int. Conf. on Intel. Robots and Systems (IROS). Yokohama, Japan, 1993.
    ARK (Autonomous Robot for a Known environment) is a visually guided mobile robot being constructed as part of the Precarn project in mobile robotics. ARK operates in a previously mapped environment and navigates with respect to visual landmarks that have been previously located. While the robot moves, it utilizes an active vision sensor to register itself with respect to these landmarks. As landmarks may be scarce in certain regions of the environment, ARK plans paths which minimize both path length and path uncertainty. The global path planner assumes that the robot will use a Kalman filter to integrate landmark information with odometry data to correct path deviations as it moves, and uses this information to choose a path which reduces the expected path deviation.
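    A minimal sketch of the fusion step the planner assumes: a Kalman filter for a 2D position state with additive odometry and direct (identity-model) landmark measurements. The abstract does not specify the filter beyond this level, so the motion model, measurement model, and covariances below are illustrative assumptions.

        import numpy as np

        def kf_predict(x, P, u, Q):
            """Dead-reckoning step: apply odometry u and inflate uncertainty."""
            return x + u, P + Q

        def kf_update(x, P, z, R):
            """Landmark fix: z is a direct measurement of position (H = I)."""
            K = P @ np.linalg.inv(P + R)        # Kalman gain
            x = x + K @ (z - x)                 # correct the estimate
            P = (np.eye(len(x)) - K) @ P        # shrink the covariance
            return x, P

        # One metre of odometry, then a landmark fix observed at (1.05, 0.02).
        x, P = np.zeros(2), 0.01 * np.eye(2)
        x, P = kf_predict(x, P, np.array([1.0, 0.0]), 0.04 * np.eye(2))
        x, P = kf_update(x, P, np.array([1.05, 0.02]), 0.02 * np.eye(2))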
  9. Jenkin, M., Milios, E., Tsotsos, J., and Down, B., A binocular robotic head system with torsional eye movements, Proc. IEEE Int. Conf. on Robotics and Automation, 776-781, 1993.
    The hardware and software designs for TRISH (The Toronto IRIS Stereo Head) are presented. TRISH is a robotically controlled binocular camera mount, consisting of two fixed focal length color cameras with automatic gain control forming a verging stereo pair. TRISH is capable of version (rotation of the eyes about the vertical axis so as to maintain a constant disparity), vergence (rotation of each eye about the vertical axis so as to change the disparity), pan (rotation of the entire head about the vertical axis), and tilt (rotation of each eye about the horizontal axis). Each camera can rotate about its own optical axis (torsion). Torsional movement makes it possible to minimize the vertical component of the two-dimensional search which is associated with stereo processing in verging stereo systems. TRISH also incorporates a real-time video processing subsystem capable of accepting and processing the images generated by the head.