2024

  1. Dudek, G. and Jenkin, M. Computational Principles of Mobile Robotics. 3rd Edition, Cambridge University Press.
    Now in its third edition, this textbook is a comprehensive introduction to the multidisciplinary field of mobile robotics, which lies at the intersection of artificial intelligence, computational vision, and traditional robotics. Written for advanced undergraduates and graduate students in computer science and engineering, the book covers algorithms for a range of locomotion, sensing, and reasoning strategies. The new edition covers recent advances in robotics and intelligent machines, including human-robot interaction, robot ethics, and the application of advanced AI techniques to end-to-end robot control and to specific computational tasks. The book also provides ROS 2 support for a number of its algorithms and includes a review of critical mathematical material and an extensive list of sample problems. Researchers as well as students in the field of mobile robotics will appreciate this comprehensive treatment of state-of-the-art methods and key technologies.
  2. Jenkin, M., Hogan, F., Siddiqi, K., Tremblay, J.-F., Baghi, B. and Dudek, G. Interacting with a visuotactile countertop. Proc. 4th Int. Conf. on Robotics, Computer Vision and Intelligent Systems (ROBOVIS), Rome, Italy.
    We present the See-Through-your-Skin Display (STS-d), a device that integrates visual and tactile sensing with a surface display to provide an interactive user experience. The STS-d extends the application of visuo-tactile optical sensors to Human-Robot Interaction (HRI) tasks and, more generally, to Human-Computer Interaction (HCI) tasks. A key finding of this paper is that it is possible to display graphics on the reflective membrane of semi-transparent optical tactile sensors without interfering with their sensing capabilities, thus permitting simultaneous sensing and visual display. A proof-of-concept demonstration of the technology is presented in which the STS-d is used to provide an animated countertop that responds to visual and tactile events. We show that the integrated sensor can monitor interactions with the countertop, such as predicting the timing and location of contact with an object or the amount of liquid in a container being placed on it, while displaying visual cues to the user.
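    The sketch below is not the STS-d implementation; it is a heavily simplified illustration, under assumed hardware (an internal camera viewing the sensor membrane and a display rendered with OpenCV), of the simultaneous sense-and-display idea described above: contact is detected as a change in the membrane image relative to a reference frame while an animation keeps being drawn.

        # Hypothetical simultaneous sensing + display loop (not the authors' code).
        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                  # assumed: camera viewing the tactile membrane
        ok, reference = cap.read()                 # membrane image with nothing in contact
        reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, reference)    # membrane deformation shows up as image change
            contact = (diff > 30).mean() > 0.01    # crude contact test; thresholds are arbitrary

            # Draw the "countertop" animation; the colour switches when contact is sensed.
            canvas = np.zeros((480, 640, 3), np.uint8)
            colour = (0, 0, 255) if contact else (0, 255, 0)
            cv2.circle(canvas, (320, 240), 100 + (frame_idx % 40), colour, 3)
            cv2.imshow("countertop display", canvas)
            frame_idx += 1
            if cv2.waitKey(1) == 27:               # Esc quits
                break

        cap.release()
        cv2.destroyAllWindows()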
  3. Jörges, B., Bury, N., McManus, M., Bansal, A., Allison, R. S., Jenkin, M. and Harris, L. R. The effects of long-term exposure to microgravity and body orientation relative to gravity on perceived traveled distance. npj Microgravity, 10:28.
    Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
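    A note on the gain measure above: gain is the ratio of the simulated target distance to the distance actually traveled (under optic flow) when the participant indicated arrival. The short sketch below is purely illustrative, using invented trial values; it is not the authors' analysis code.

        # Illustrative computation of per-trial gains (target distance / traveled distance).
        # The trial values are invented for the example.
        target_distances = [4.0, 6.0, 8.0]   # simulated target distances (m)
        traveled = [3.5, 5.5, 9.0]           # distance traveled when "arrival" was indicated (m)

        gains = [t / d for t, d in zip(target_distances, traveled)]
        mean_gain = sum(gains) / len(gains)
        # A gain above 1 means the participant responded before covering the full target distance.
        print(gains, mean_gain)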
  4. Chandola, D., Altarawneh, E., Jenkin, M. and Papagelis, M. SERC-GCN: Speech emotion recognition in conversation using Graph Convolutional Networks. Proc. Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 76-80, Seoul, Korea.
    Speech emotion recognition (SER) is the task of automatically recognizing emotions expressed in spoken language. Current approaches focus on analyzing isolated speech segments to identify a speaker’s emotional state. Meanwhile, recent text-based emotion recognition methods have effectively shifted towards emotion recognition in conversation (ERC), which considers conversational context. Motivated by this shift, here we propose SERC-GCN, a method for speech emotion recognition in conversation (SERC) that predicts a speaker’s emotional state by incorporating conversational context, speaker interactions, and temporal dependencies between utterances. SERC-GCN is a two-stage method. First, emotional features are extracted from utterance-level speech signals. These features are then used to form conversation graphs on which a graph convolutional network is trained to perform SERC.
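    The following sketch is not the authors' implementation; it is a minimal illustration, using PyTorch Geometric, of the second stage described above: utterance-level feature vectors become the nodes of a conversation graph and a graph convolutional network produces a per-utterance emotion prediction. The feature dimension, number of emotion classes, and graph connectivity are assumptions made for the example.

        # Minimal sketch of a conversation-graph GCN for per-utterance emotion prediction.
        import torch
        import torch.nn.functional as F
        from torch_geometric.nn import GCNConv

        FEATURE_DIM = 128    # assumed size of the utterance-level speech features
        NUM_EMOTIONS = 6     # assumed number of emotion classes

        class ConversationGCN(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.conv1 = GCNConv(FEATURE_DIM, 64)
                self.conv2 = GCNConv(64, NUM_EMOTIONS)

            def forward(self, x, edge_index):
                # x: [num_utterances, FEATURE_DIM] utterance features
                # edge_index: [2, num_edges] links between utterances of a conversation
                h = F.relu(self.conv1(x, edge_index))
                return self.conv2(h, edge_index)   # per-utterance emotion logits

        # Toy conversation of four utterances linked by temporal adjacency (both directions).
        x = torch.randn(4, FEATURE_DIM)
        edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                                   [1, 0, 2, 1, 3, 2]])
        logits = ConversationGCN()(x, edge_index)
        print(logits.argmax(dim=1))   # predicted emotion index for each utterance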