2017

  1. Harris, L. R., Jenkin, M., Jenkin, H., Zacher, J. E. and Dyde, R. T. The effect of long-term exposure to microgravity on the perception of upright. Nature Microgravity, 3:3, 2017.
    Going into space is a disorienting experience. Many studies have looked at sensory functioning in space but the multisensory basis of orientation has not been systematically investigated. Here, we assess how prolonged exposure to microgravity affects the relative weighting of visual, gravity, and idiotropic cues to perceived orientation. We separated visual, body, and gravity (when present) cues to perceived orientation before, during, and after long-term exposure to microgravity during the missions of seven astronauts on the International Space Station (mean duration 168 days), measuring perceived vertical using the subjective visual vertical and the perceptual upright. The relative influence of each cue and the variance of the judgments were measured. Fourteen ground-based control participants performed comparable measurements over a similar period. The variance of astronauts’ subjective visual vertical judgments in the absence of visual cues was significantly larger immediately upon return to earth than before flight. Astronauts’ perceptual upright demonstrated a reduced reliance on visual cues upon arrival on orbit that re-appeared long after returning to earth. For earth-bound controls, the contributions of body, gravity, and vision remained constant throughout the year-long testing period. This is the first multisensory study of orientation behavior in space and the first demonstration of long-term perceptual changes that persist after returning to earth. Astronauts showed a plasticity in the weighting of perceptual cues to orientation that could form the basis for future countermeasures.
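    The relative cue weights referred to above are commonly expressed through a weighted vector-sum model of perceived upright, in which the visual, gravity, and idiotropic (body) cue directions are combined in proportion to their weights. The short Python sketch below illustrates that style of model only; the weight values, function name, and two-dimensional simplification are illustrative assumptions and are not taken from the paper.

      import numpy as np

      def perceived_upright(vision_dir, gravity_dir, body_dir,
                            w_vision, w_gravity, w_body):
          """Combine visual, gravity and idiotropic (body) cue directions as a
          weighted vector sum and return the direction of perceived upright.
          Directions are 2D unit vectors in the frontal plane; the weights are
          the relative influences of each cue (illustrative values only)."""
          combined = (w_vision * np.asarray(vision_dir)
                      + w_gravity * np.asarray(gravity_dir)
                      + w_body * np.asarray(body_dir))
          return combined / np.linalg.norm(combined)

      # Example: visual frame tilted 30 degrees, gravity and body cues upright.
      tilt = np.deg2rad(30)
      up = perceived_upright(vision_dir=[np.sin(tilt), np.cos(tilt)],
                             gravity_dir=[0.0, 1.0], body_dir=[0.0, 1.0],
                             w_vision=0.25, w_gravity=0.40, w_body=0.35)
      print(np.degrees(np.arctan2(up[0], up[1])))  # predicted tilt of perceived upright, in degrees

    Reducing w_vision relative to the other weights, as the astronauts appear to do on arrival on orbit, pulls the predicted perceived upright back toward the body and gravity cues.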
  2. Codd-Downey, R. and Jenkin, M. On the utility of additional sensors in aquatic simultaneous localization and mapping. Proc. IEEE ICRA 2017, Singapore, 2017.
    Simultaneous Localization and Mapping (SLAM) is a key stepping stone on the road to truly autonomous robots. SLAM is of particular importance to robots with large motion estimation problems, such as robots operating on the surface of aquatic GPS-denied environments, where a paucity of local landmarks complicates SLAM and accurate navigation. Visual sensors have proven to be an effective tool for SLAM generally and have wide applicability, but is vision alone enough to solve SLAM in this environment, and how important are other sensors, such as a compass and water column depth, for solving SLAM for an aquatic surface vehicle? Here we show that additional sensors are almost always helpful for improving SLAM performance in this setting, but that a compass is a particularly useful sensor for SLAM for autonomous surface vehicles, suggesting that a compass is a worthwhile investment for such a robot and that compass alternatives should be considered when operating an autonomous vehicle in environments that are both GPS- and compass-denied.
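    The value of an absolute heading sensor can be illustrated with a far simpler model than the SLAM system studied in the paper: dead-reckoned heading uncertainty grows without bound on a landmark-poor water surface, while an occasional compass fix folded in with a scalar Kalman update keeps it bounded. The Python toy below shows only that intuition; the noise values, fix schedule, and scalar formulation are assumptions and not the paper's formulation.

      import numpy as np

      def fuse_compass(theta_est, var_est, theta_meas, var_meas):
          """One scalar Kalman update fusing the dead-reckoned heading with an
          absolute compass measurement (angles in radians)."""
          k = var_est / (var_est + var_meas)                        # Kalman gain
          innovation = np.arctan2(np.sin(theta_meas - theta_est),
                                  np.cos(theta_meas - theta_est))  # wrap angle difference
          return theta_est + k * innovation, (1.0 - k) * var_est

      rng = np.random.default_rng(0)
      theta_true, theta, var = 0.0, 0.0, 0.0
      for step in range(200):
          theta_true += 0.05                                        # true turn increment
          theta += 0.05 + rng.normal(0.0, 0.02)                     # noisy odometry
          var += 0.02 ** 2                                          # drift accumulates
          if step % 20 == 19:                                       # occasional compass fix
              theta, var = fuse_compass(theta, var,
                                        theta_meas=theta_true + rng.normal(0.0, 0.05),
                                        var_meas=0.05 ** 2)
      print(var)  # stays bounded; without the fixes it grows linearly with time

    A similar argument can be made for water column depth as a cue to position, consistent with the finding above that additional sensors almost always help while the compass helps most.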
  3. Hoveidar-Sefid, M. and Jenkin, M. Autonomous trail following. Proc. ICINCO 2017, Madrid, Spain, 2017.
    Following off-road trails is somewhat more complex than following man-made roads. Trails are unstructured and typically lack standard markers that characterize roadways. Nevertheless, trails can provide an effective set of pathways for off-road navigation. Here we approach the problem of trail following by identifying trail-like regions; that is, regions that are locally planar, contiguous with the robot's current plane, and which appear similar to the region in front of the robot. A multi-dimensional representation of the trail ahead is obtained by fusing information from an omnidirectional camera and a 3D LIDAR. A k-means clustering approach based on this multi-dimensional signal is used to identify and follow off-road trails. This information is then used to compute appropriate steering commands for vehicle motion. Results are presented for over 1500 frames of video and laser scans of trails.
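    As a rough illustration of the clustering step described above, the Python sketch below runs k-means on synthetic per-cell features of the kind that might be fused from an omnidirectional camera and a 3D LIDAR (colour, height above the robot's plane, surface roughness) and takes the cluster containing the cells directly ahead of the robot as the trail. The specific features, cluster count, and use of scikit-learn are illustrative assumptions, not details taken from the paper.

      import numpy as np
      from sklearn.cluster import KMeans

      # Synthetic per-cell features: (hue, saturation, height, roughness).
      rng = np.random.default_rng(0)
      trail_cells = rng.normal([0.10, 0.20, 0.02, 0.01], 0.02, size=(200, 4))
      other_cells = rng.normal([0.30, 0.60, 0.15, 0.10], 0.05, size=(200, 4))
      features = np.vstack([trail_cells, other_cells])

      # Cluster the fused multi-dimensional signal into two groups.
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

      # Assume the first rows correspond to cells directly in front of the
      # robot; their dominant cluster is treated as the trail-like region.
      front_label = np.bincount(labels[:20]).argmax()
      print("trail-like cells found:", int(np.sum(labels == front_label)))

    A steering command would then be computed from the layout of the trail-labelled cells, for example by heading toward their centroid.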
  4. Codd-Downey, R., Jenkin, M. and Allison, K. Milton: An open hardware underwater autonomous vehicle. Proc. IEEE ICIA 2017, Macau, 2017.
    Although there are a large number of autonomous robot platforms for ground contact and flying robots, this has not been the case for underwater robotic platforms. This is not due to a lack of interesting applications in the shallow underwater domain (50m depth), but rather to the relative cost of building such platforms. This has recently changed with the development of inexpensive thrusters and other underwater components. Leveraging these components and design principles learned from more expensive remotely operated vehicles, this paper describes Milton, an inexpensive open hardware design for a traditional thruster-based underwater robot. Utilizing commercial off-the-shelf hardware and a ROS infrastructure, Milton and Milton-inspired designs provide an inexpensive platform for autonomous underwater vehicle research.
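    Because Milton exposes a ROS infrastructure, higher-level behaviours can in principle be written as small ROS nodes such as the sketch below, which publishes a velocity command at a fixed rate. The topic name, message type, and command values here are assumptions chosen for illustration; Milton's actual control interface may differ.

      #!/usr/bin/env python
      # Minimal rospy node commanding gentle forward motion and a slow descent.
      import rospy
      from geometry_msgs.msg import Twist

      def main():
          rospy.init_node("demo_controller")
          cmd = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # assumed topic name
          rate = rospy.Rate(10)                                   # 10 Hz command loop
          while not rospy.is_shutdown():
              msg = Twist()
              msg.linear.x = 0.2        # forward thrust (normalized, illustrative)
              msg.linear.z = -0.05      # slow descent
              cmd.publish(msg)
              rate.sleep()

      if __name__ == "__main__":
          main()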
  5. Barneva, R. P., Kanev, K., Kapralos, B., Jenkin, M. and Brimkov, B. Integrating technology-enhanced collaborative surfaces and gamification for the next generation classroom. J. of Educational Technology Systems, 45(3), 309-325, 2017.
    We place collaborative student engagement in a nontraditional perspective by considering a novel, more interactive educational environment and explaining how to employ it to enhance student learning. To this end, we explore modern technological classroom enhancements as well as novel pedagogical techniques which facilitate collaborative learning. In our setup, the traditional blackboard or table is replaced by a digitally enabled interactive surface such as a smart board or a tabletop computer. The information displayed on the digital surface can be further enhanced with augmented reality views through mobile apps on student smartphones. We also discuss ways to enhance the instructional process through elements of game mechanics and outline an experimental implementation. Finally, we discuss an application of the proposed technological and pedagogical methods to human anatomy training.
  6. Al Tarawneh, E. and Jenkin, M. An extensible avatar (EA) toolkit for human robot interaction. ACM Celebration of Women in Computing womENcourage 2017. Barcelona, Spain, 2017.
    A key problem in the development of interactive robotic systems is the lack of common tools and tool chains to support critical aspects of the interaction between the robot and a human. The Extensible Avatar (EA) human-robot interaction toolkit seeks to address this failing. The EA Toolkit consists of two core software components: a generic speech-to-text module that converts utterances from a speaker in proximity to the robot to a standard ROS (Robot Operating System) text message, and a generic text-to-utterance module that utters natural language speech while presenting a realistic 3D avatar whose animation is synchronized with this utterance. Each of these modules is ROS-based and designed to be easily integrated into general robot systems. Details of the two core modules are described below. The speech-to-text module utilizes cloud-based software to perform generic speech-to-text mapping. This provides for continuous and active listening that detects speech in the environment, reduces the surrounding noise, and obtains the spoken words as text, simulating human listening. In addition to performing general speech-to-text translation, the speech-to-text module can be tuned to expected queries/commands from human operators, thus enhancing the expected accuracy of the process and ensuring that the resulting text maps to pre-determined commands for the robot itself. The text-to-speech module combines a standard text-to-speech generation system with a 3D avatar (puppet) whose facial animation is tied to the utterance being generated. Text messages presented to the text-to-speech module are embedded within an XML structure that allows the user to tune the nature of the puppet animation so that different emotional states of the puppet can be simulated. The combination of these two modules enables the avatar representing the robot to appear as if it listens to and recognizes vocal commands given to it. The robot can answer and respond to questions given to it and can be programmed to answer customized questions, such as "take me to the manager". The system described here relies on a number of state-of-the-art software modules. In particular, it relies on cloud-based speech-to-text recognition, a knowledge engine, a text-to-speech engine, a 3D character design program, a 3D animation program, and a lip-syncing plugin for the animation program that extracts the sounds in words, maps them to mouth shapes, and plots them according to duration and occurrence in the text in real time. An expression package controls the animated character's mood and facial expressions. An "idle loop" process animates the avatar puppet between utterances so that the character being rendered is never still but rather interacts with external users even when not being spoken to directly.
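    The pipeline described above, in which the speech-to-text module publishes a ROS text message and an XML-wrapped reply drives the talking avatar, can be pictured with the minimal ROS sketch below. The topic names, XML markup, and canned replies are illustrative assumptions rather than the toolkit's actual interfaces.

      #!/usr/bin/env python
      # Toy bridge between a speech-to-text topic and an avatar utterance topic.
      import rospy
      from std_msgs.msg import String

      def on_recognized_text(msg):
          """Map recognized speech to an XML-wrapped utterance for the avatar."""
          if "manager" in msg.data.lower():
              reply = '<utterance emotion="friendly">Follow me, please.</utterance>'
          else:
              reply = '<utterance emotion="neutral">Could you repeat that?</utterance>'
          avatar_pub.publish(String(data=reply))

      rospy.init_node("ea_toolkit_demo")
      avatar_pub = rospy.Publisher("/ea/utterance", String, queue_size=1)   # assumed topic
      rospy.Subscriber("/ea/recognized_text", String, on_recognized_text)   # assumed topic
      rospy.spin()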
  7. Nguyen, M., Quevedo-Uribe, A., Kapralos, B., Jenkin, M., Kanev, K. and Jaimes, N. An experimental training support framework for eye fundus examination skill development. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, doi: 10.1080/21681163.2017.1376708, 2017.
    The eye fundus examination consists of viewing the back of the eye using specialised ophthalmoscopy equipment and techniques that allow a medical examiner to determine the condition of the eye. Recent technological advances in immersive and interactive technologies are providing tools that can be employed to complement traditional medical training methods and techniques. To overcome some of the issues associated with traditional eye examination approaches, our work is examining the application of consumer-level virtual reality technologies to eye fundus examination. Here, we present a cost-effective virtual reality-based eye fundus examination simulation tool. Results of a preliminary usability study indicate that the virtual simulation tool provides trainees the opportunity to obtain a greater understanding of the physiological changes within the eye in an interactive, immersive, and engaging manner.
  8. Gelsomini, F., Kanev, K., Hung, P., Kapralos, B., Jenkin, M., Barneva, R. and Vienna, M. BYOD Collaborative Kanji learning in Tangible Augmented Reality Setting. Proc. Inter-Academia 2017. Published in D. Luca, L. Sirghi and C. Costin (eds.) Recent Advances in Technology Research and Education, Springer, 2017.
    In this work, we consider the challenges of studying Japanese, both as a mother tongue and as a second language, stemming from the complexity of its writing system, which employs over 2000 ideograms (kanji) and two different alphabets. We discuss a novel educational approach based on computer-assisted collaborative learning that incorporates direct interactions with digitally encoded physical artifacts acting as tangible interface components. The learning experiences are further enhanced by BYOD-based virtual and augmented reality support that engages the tangible interface objects as physical attractors for focused discussions and collaboration.