Dudek, G., Jenkin, M., Milios, E. and Wilkes, D., Topological exploration with multiple robots, Proc. ISORA'98, Anchorage, Alaska, 1998.
This paper describes a technique whereby a group of mobile autonomous agents explores an unknown graph-like environment and constructs a topological map of it. The key idea is that the mobile agents start at a common node of the environment, explore independently (using a previously published algorithm) and agree to meet after a specified number of moves to exchange information in order to augment each other's partial map of the world. Just after each exchange, the robots have the same map of the world, which is a superset of each robot's map just before the exchange. The process of harmonizing each other's maps involves a combination of reasoning (for the areas that consist of paths common to all partial maps before the exchange) and physical movements of the robots, to check whether certain nodes in one map are identical to nodes in the rest of the maps.
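As a rough illustration of the rendezvous-and-merge cycle described above, the following Python sketch (all names hypothetical) unions partial topological maps represented as adjacency sets; the independent exploration step and the resolution of ambiguous node identities, which the paper handles by reasoning and physical verification moves, are assumed away here:

    # A minimal sketch of the map exchange at a rendezvous.
    def merge_maps(partial_maps):
        # Union partial topological maps, each a dict mapping a node
        # label to the set of its known neighbours.
        merged = {}
        for pmap in partial_maps:
            for node, neighbours in pmap.items():
                merged.setdefault(node, set()).update(neighbours)
        return merged

    # After the exchange every robot adopts the merged map, which is by
    # construction a superset of its own map before the exchange.
    map_a = {"start": {"n1"}, "n1": {"start", "n2"}}
    map_b = {"start": {"n3"}, "n3": {"start"}}
    shared = merge_maps([map_a, map_b])
    assert all(shared[n] >= m[n] for m in (map_a, map_b) for n in m)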
Harris, L., and Jenkin, M. (Eds.), Vision and Action, Cambridge University Press, 1998.
The visual processes involved in moving, reaching, grasping, and playing sports involve complex interactions. For example, the action of moving the head provides useful cues to help interpret the visual information. Simultaneously, vision can provide important information about the actions and their control, making this an iterative process. This process, and the interactions between vision and action, are the foci of this volume. The book contains contributions from scientists who are leaders in each of the several facets of the subject. Chapters consider simple types of action, such as moving the eyes, head and body, as one would do while looking around or walking, as well as complex actions such as driving a car, catching a ball, or playing ping-pong.
Harris, L., Jenkin, M., and Zikovitz, D., Vestibular cues and virtual environments, IEEE VRAIS'98, 133-138, Atlanta, GA, 1998. Copyright IEEE.
The vast majority of virtual environments concentrate on constructing a realistic visual simulation while ignoring non-visual environmental cues. Although these missing cues can to some extent be ignored by an operator, the lack of appropriate cues may contribute to "cybersickness" and may affect operator performance. Here we examine the role of vestibular cues to self-motion in an operator's sense of self-motion within a virtual environment. We show that the presence of vestibular cues has a very significant effect on an operator's estimate of self-motion. The addition of vestibular cues, however, is not always beneficial.
Jenkin, M., and Dymond, P., A plugin-based privacy scheme for world wide web file distribution, Proc. 31st Hawaii Int. Conf. on System Sciences, 1998.
Existing security mechanisms for serving documents on the World Wide Web typically require use of either an underlying secure transport mechanism (e.g., SSL) or alternate servers, browsers and data streams (e.g., SHTTP). In this paper we introduce a simpler method, using plugins, which provides moderate security for serving private documents within the standard HTTP mechanism and socket layer. This new method operates by providing a security plugin within a standard web-browser environment. It provides a somewhat lower level of functionality and security than these alternatives, but with reduced overhead, especially on the server end, and appears to be very appropriate for serving low-security, non-public documents, files and images over the World Wide Web. The method can be easily adapted to provide other advantages, such as automatic "water-marking" of decoded material with the name of the decoder, and the deployment of content-specific compression algorithms.
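As a deliberately toy sketch of the client-side idea (hypothetical code, not the paper's actual scheme), the following Python stands in for the plugin: the document travels over ordinary HTTP as ciphertext and is decoded locally by a key-holding plugin, which can also stamp the output with the decoder's name. The XOR keystream below is a stand-in, not the cipher the paper uses:

    import hashlib
    from itertools import cycle

    def toy_decode(ciphertext: bytes, key: str) -> bytes:
        # Toy stand-in cipher: XOR against a key-derived stream. Applying
        # it twice with the same key returns the original bytes.
        keystream = cycle(hashlib.sha256(key.encode()).digest())
        return bytes(c ^ k for c, k in zip(ciphertext, keystream))

    def decode_and_watermark(ciphertext: bytes, key: str, decoder: str) -> bytes:
        # The abstract notes the method adapts to "water-marking" decoded
        # material with the decoder's name; here that is an appended tag.
        return toy_decode(ciphertext, key) + f"\n[decoded by {decoder}]".encode()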
Jenkin, M., Elder, J., and Pentile, G., Loosely-coupled telepresence through the panoramic image server, Proc. Vision Interface '98, 249-254, Vancouver, BC, 1998.
While computer vision systems can clearly assist in surveillance tasks, taking the human out of the loop entirely proves difficult or undesirable in many applications. Human operators are needed to detect events missed by automatic methods, to identify false alarms, and to interpret and react appropriately to complex situations. A key challenge in partially automated systems is how best to combine machine algorithms for event detection, analysis and tracking with telepresence systems controlled by one or more human operators. Given the disparity in performance between the human visual system and typical robotic cameras, we argue that direct coupling of human and machine is inappropriate. We propose instead to couple human and machine components indirectly, through a database called the Panoramic Image Server. We show how this loose coupling allows machine and operator surveillance priorities to be resolved while providing a fast and natural telepresence environment for one or more human operators.
Jenkin, M., and Jasiobedzki, P., Computation of Stereo Disparity for Space Materials, IROS'98, Victoria, BC, 1998.
One of the challenges facing computer vision systems used in space is the presence of specular surfaces. Such surfaces lead to several adverse effects when imaged by vision systems, such as the creation of reflected "virtual" images of objects due to specular reflections, specular reflection of light from the sun and other sources, and total reflection of any projected illuminants including laser beams. These effects may lead to incorrect measurements and loss of data, whether through sensor saturation or through inadequate intensity of the returned laser beams in the case of an active illuminant. In addition, the instruments inside space structures such as satellites may be extremely sensitive to active illuminants such as laser beams or radar signals, and thus passive vision systems which rely on either natural or low-power projection systems are preferred over active sensing technologies. An additional advantage of fixed stereo based techniques over scanning rangefinders lies in the fact that fixed stereo systems do not contain moving parts that are prone to failure and expensive to qualify for space and maintain. Fixed stereo based systems use cameras, framegrabbers and computers already qualified for space, but must deal with the issue of specular reflections. Here we consider the task of recovering the local surface structure of highly specular surfaces such as satellites using passive stereopsis, without resorting to the introduction of additional light sources. In particular, we examine the use of stereo cameras to recover surfaces which produce perfectly specular reflection.
Lang, J., and Jenkin, M., Actively building models with VIRTUE, Proc. ACCV'98, Hong Kong, 1998. This paper was awarded the 1997 E. Lyn Kirchner Award for research in Vision Science at York University.
This paper presents the sensing plan of VIRTUE, an active vision system based around a VIRtual TrinocUlar stEreo-head. VIRTUE is used to build polyhedral volumetric models of unknown objects based on recovered 3-D line segments. Partial models and a viewpoint enumeration scheme are used to guide the image acquisition process and to determine "where to look next". Results of the active vision recovery of a number of objects are provided, as well as the volumetric and surface errors associated with the resulting models.
Lesperance, Y., Tam, K., and Jenkin, M., Reactivity in a logic-based robot programming framework, Proc. Cognitive Robotics --- Papers from the 1998 Fall Symposium, 98-105, Orlando, FL, 1998.
A robot must often react to events in its environment and to exceptional conditions by suspending or abandoning its current plan and selecting a new plan that is an appropriate response to the event. This paper describes how high-level controllers for robots that are reactive in this sense can conveniently be implemented in ConGolog, a new logic-based robot/agent programming language. Reactivity is achieved by exploiting ConGolog's prioritized concurrent processes and interrupt facilities. The language also provides nondeterministic constructs that support a form of planning. Program execution relies on a declarative domain theory to model the state of the robot and its environment. The approach is illustrated with a mail delivery application.
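ConGolog itself is a logic-based language, but the control pattern it provides can be illustrated procedurally. The Python sketch below (all names hypothetical, and only an illustration of prioritized interrupts, not of ConGolog's semantics) shows how interrupts preempt a current plan: at each step, the highest-priority interrupt whose condition holds runs its handler; otherwise the plan advances one step.

    def run(plan_steps, interrupts, state):
        # interrupts: list of (condition, handler) pairs, ordered from
        # highest to lowest priority; plan_steps: the current plan,
        # one callable action per step.
        plan = iter(plan_steps)
        while True:
            for condition, handler in interrupts:
                if condition(state):
                    handler(state)          # reactive response preempts the plan
                    break
            else:
                step = next(plan, None)     # no interrupt fired: continue plan
                if step is None:
                    return state
                step(state)

In the paper's mail-delivery example, an interrupt of this kind might react to a newly arrived delivery request by abandoning the current route and selecting a new one.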
Nickerson, B., Jasiobedzki, P., Wilkes, D., Jenkin, M., Milios, E., Tsotsos, J., Jepson, A., and Bains, O. N., The ARK Project: Autonomous robots for known industrial environments, Robotics and Autonomous Systems, 25:83-104, 1998. Copyright Robotics and Autonomous Systems.
The ARK mobile robot project has designed and implemented a series of mobile robots capable of navigating within industrial environments without relying on artificial landmarks or beacons. The ARK robots employ a novel sensor, the laser eye, which combines vision and laser ranging to efficiently locate the robot in a map of its environment. Navigation in walled areas is carried out by matching 2D laser range scans, while navigation in open areas is carried out by visually detecting landmarks and measuring their azimuth, elevation and range with respect to the robot. In addition to solving the core tasks of pose estimation and navigation, the ARK robots address the tasks of sensing for safety and operator interaction.
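As a small illustration of the landmark measurement mentioned above (hypothetical code, not the ARK implementation), the azimuth, elevation and range of a detected landmark can be computed from its position in the robot's frame:

    import math

    def azimuth_elevation_range(dx: float, dy: float, dz: float):
        # Landmark position (dx, dy, dz) relative to the robot, assuming
        # x forward, y left, z up (a convention chosen for this sketch).
        rng = math.sqrt(dx * dx + dy * dy + dz * dz)
        azimuth = math.atan2(dy, dx)                    # bearing in the ground plane
        elevation = math.atan2(dz, math.hypot(dx, dy))  # angle above the ground plane
        return azimuth, elevation, rng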
Tsotsos, J. K., Verghese, G., Dickinson, S., Jenkin, M., Jepson, A., Milios, E., Nuflo, F., Stevenson, S., Black, M., Metaxas, D., Culhane, S., Ye, Y., Mann, R., PLAYBOT: A visually-guided robot to assist physically disabled children in play, Image and Vision Computing, 16:275-292, 1998.
This paper overviews the PLAYBOT project, a long-term, large-scale research program whose goal is to provide a directable robot which may enable physically disabled children to access and manipulate toys. This domain is the first test domain, but there is nothing inherent in the design of PLAYBOT that prohibits its extension to other tasks. The research is guided by several important goals: vision is the primary sensor; vision is task directed; the robot must be able to visually search its environment; object and event recognition are basic capabilities; environments must be natural and dynamic; users and environments are assumed to be unpredictable; task direction and reactivity must be smoothly integrated; and safety is of high importance. The emphasis of the research has been on vision for the robot, since this is the most challenging research aspect and the major bottleneck to the development of intelligent robots. Since the control framework is behavior-based, the visual capabilities of PLAYBOT are described in terms of visual behaviors. Many of the components of PLAYBOT are briefly described and several examples of implemented sub-systems are shown. The paper concludes with a description of the current overall system implementation, and a complete example of PLAYBOT performing a simple task.
Mantegh, I., Jenkin, M. and Goldenberg, A. A., A modular and less complex environment representation algorithm [for mobile robots], Proc. IEEE Int. Symp. on Industrial Electronics, Pretoria, South Africa, 1998.
The purpose of environment representation is to map the external real world of the robot (workspace) and its evolution to an internal data structure usable by the motion planning algorithm. This operation is essential in the development of goal-attaining (complete) motion commands for an autonomous robot. In this paper, the authors present a modular environment representation which can readily be used by a hill-climbing search method to find a goal-attaining path for the robot. Capitalizing on the properties of harmonic potential functions and absorbing Markov chains, this paper presents a new method of environment representation which: (i) is able to map the robot environment to local-minima-free potential fields; (ii) is capable of handling exact geometries so that no geometric approximation is required; (iii) requires less memory for data storage than commonly used methods of environment representation; and (iv) is computationally less complex than the existing methods of representation. The process of environment representation is carried out in two stages, as described in the paper.
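For intuition, a discretized Python sketch of the harmonic-potential idea follows (the paper itself works with exact geometries and absorbing Markov chains rather than a grid; the grid, boundary convention and iteration count here are assumptions of this sketch). Relaxing Laplace's equation with obstacles held at potential 1 and the goal held at 0 yields a field with no local minima away from the goal, so simple hill-climbing descent reaches it:

    import numpy as np

    def harmonic_field(free, goal, iters=5000):
        # free: 2-D bool array, True where the robot may move; the border
        # cells are assumed to be obstacles. goal: (row, col) of the goal.
        u = np.ones_like(free, dtype=float)   # obstacles/boundary fixed at 1
        for _ in range(iters):
            avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
            u[free] = avg[free]               # Jacobi relaxation on free cells
            u[goal] = 0.0                     # the goal is a fixed sink
        return u

    def descend(u, free, start):
        # Hill-climb to the goal by stepping to the lowest-potential
        # neighbour; harmonicity rules out spurious local minima.
        r, c = start
        path = [(r, c)]
        while u[r, c] > 0.0:
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if free[r + dr, c + dc]]
            r, c = min(nbrs, key=lambda p: u[p])
            path.append((r, c))
        return path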