Dudek, G. and Jenkin, M. Computational Principles of Mobile Robotics. 3rd Edition, Cambridge University Press.
Now in its third edition, this textbook is a comprehensive introduction to the multidisciplinary field of mobile robotics, which lies at the intersection of artificial intelligence, computational vision, and traditional robotics. Written for advanced undergraduates and graduate students in computer science and engineering, the book covers algorithms for a range of locomotion, sensing, and reasoning strategies. The new edition includes recent advances in robotics and intelligent machines, including coverage of human-robot interaction, robot ethics, and the application of advanced AI techniques to end-to-end robot control and specific computational tasks. The book also provides ROS 2 support for a number of its algorithms, and includes a review of critical mathematical material and an extensive list of sample problems. Researchers as well as students in the field of mobile robotics will appreciate this comprehensive treatment of state-of-the-art methods and key technologies.
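As an illustration of the kind of ROS 2 support the book describes, here is a minimal Python (rclpy) velocity-publisher sketch; the node name, topic, and rates are illustrative assumptions and are not drawn from the book's examples.

```python
# Minimal ROS 2 (rclpy) node sketch; names and values are illustrative only
# and do not correspond to the book's example code.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class VelocityPublisher(Node):
    def __init__(self):
        super().__init__('velocity_publisher')       # hypothetical node name
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2    # drive forward slowly
        msg.angular.z = 0.0   # no rotation
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = VelocityPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```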
Jenkin, M., Hogan, F., Siddiqi, K., Tremblay, J.-F., Baghi, B. and Dudek, G. Interacting with a visuotactile countertop. 4th Int. Conf. on Robotics, Computer Vision and Intelligent Systems (ROBOVIS), Rome, Italy.
We present the See-Through-your-Skin Display (STS-d), a device that integrates visual and tactile sensing with a surface display to provide an interactive user experience. The STS-d expands the application of visuo-tactile optical sensors to Human-Robot Interaction (HRI) tasks and Human-Computer Interaction (HCI) tasks more generally. A key finding of this paper is that it is possible to display graphics on the reflective membrane of semi-transparent optical tactile sensors without interfering with their sensing capabilities, thus permitting simultaneous sensing and visual display. A proof of concept demonstration of the technology is presented where the STS Visual Display (STS-d) is used to provide an animated countertop that responds to visual and tactile events. We show that the integrated sensor can monitor interactions with the countertop, such as predicting the timing and location of contact with an object, or the amount of liquid in a container being placed on it, while displaying visual cues to the user.
Jörges, B., Bury, N., McManus, M., Bansal, A., Allison, R. S., Jenkin, M. and Harris, L. R. The effects of long-term exposure to microgravity and body orientation relative to gravity on perceived traveled distance. npj Microgravity, 10:28.
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
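As a concrete reading of the gain measure reported above (target distance divided by the distance traveled when the participant indicated arrival), a minimal Python sketch follows; the trial values and function names are hypothetical and are not taken from the study's data.

```python
# Minimal sketch of the gain measure described above:
# gain = simulated target distance / distance traveled when the participant
# indicated arrival. All trial values below are hypothetical.

def travel_gain(target_distance_m: float, traveled_distance_m: float) -> float:
    """Gain > 1 means the participant stopped short of the remembered target."""
    return target_distance_m / traveled_distance_m

trials = [(6.0, 5.2), (9.0, 8.1), (12.0, 11.5)]  # (target, traveled) in metres
gains = [travel_gain(target, traveled) for target, traveled in trials]
mean_gain = sum(gains) / len(gains)
print(f"per-trial gains: {gains}, mean gain: {mean_gain:.2f}")
```

Comparing mean gains across postures (supine vs. sitting) is the kind of contrast the abstract reports.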
Chandola, D., Altarawneh, E., Jenkin, M. and Papagelis, M. SERC-GCN: Speech emotion recognition in conversation using Graph Convolutional Networks. Proc. Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 76-80, Seoul, Korea.
Speech emotion recognition (SER) is the task of automatically recognizing emotions expressed in spoken language. Current approaches focus on analyzing isolated speech segments to identify a speaker’s emotional state. Meanwhile, recent text-based emotion recognition methods have effectively shifted towards emotion recognition in conversation (ERC) that considers conversational context. Motivated by this shift, here we propose SERC-GCN, a method for speech emotion recognition in conversation (SERC) that predicts a speaker’s emotional state by incorporating conversational context, speaker interactions, and temporal dependencies between utterances. SERC-GCN is a two-stage method. First, emotional features of utterance-level speech signals are extracted. Then, these features are used to form conversation graphs that are used to train a graph convolutional network to perform SERC.
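To make the two-stage pipeline above concrete, a hedged sketch follows: pre-extracted utterance-level speech features become node features of a conversation graph whose edges link temporally adjacent utterances, and a small graph convolutional network predicts per-utterance emotion logits. The layer sizes, edge construction, and use of PyTorch Geometric are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a small GCN over a conversation graph in which each
# node is an utterance (with pre-extracted speech emotion features) and edges
# link temporally adjacent utterances. Dimensions and edge rules are assumed.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class ConversationGCN(torch.nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64, n_emotions: int = 6):
        super().__init__()
        self.conv1 = GCNConv(feat_dim, hidden)
        self.conv2 = GCNConv(hidden, n_emotions)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # per-utterance emotion logits

# Hypothetical conversation of 4 utterances, chained by temporal adjacency.
x = torch.randn(4, 128)                      # utterance-level speech features
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
graph = Data(x=x, edge_index=edge_index)
logits = ConversationGCN()(graph.x, graph.edge_index)
print(logits.shape)  # torch.Size([4, 6])
```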
Lin, W., Wu, D. and Jenkin, M. Electric Load Forecasting for Individual Households via Spatial-Temporal Knowledge Distillation. IEEE Trans. on Power Systems. doi: 10.1109/TPWRS.2024.3393926
Short-term load forecasting (STLF) for residential households has become of critical importance for the secure operation of power grids as well as home energy management systems. While machine learning is effective for residential STLF, data and resource limitations hinder individual household predictions operated on local devices. In contrast, utility companies have access to broader sets of data as well as to better computational resources, and thus have the potential to deploy complex forecasting models such as graph neural network-based models to explore the spatial-temporal relationships between households for achieving impressive STLF performance. In this work, we propose an efficient and privacy-conservative knowledge distillation-based STLF framework. This framework can improve the STLF forecasting accuracy of lightweight individual household forecasting models via leveraging the benefits of knowledge distillation and graph neural networks (GNN). Specifically, we distill the knowledge learned from a GNN model pre-trained on utility data sets into individual models without the need to access data sets of other households. Extensive experiments on real-world residential electric load datasets demonstrate the effectiveness of the proposed method.
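A hedged sketch of the distillation step described above is given below: a lightweight per-household forecaster is trained against both its own observed load and the soft predictions of a pre-trained GNN teacher, so no other household's data is needed locally. The model sizes, loss weighting, and variable names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of knowledge distillation for short-term load forecasting:
# the student (a small per-household model) mimics a pre-trained GNN teacher
# while also fitting the household's own load history. All names are hypothetical.
import torch
import torch.nn as nn

class StudentForecaster(nn.Module):
    """Lightweight model mapping a 24-step load window to a next-step forecast."""
    def __init__(self, window: int = 24):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

def distillation_loss(student_pred, teacher_pred, target, alpha: float = 0.5):
    """Blend of fit-to-data and fit-to-teacher terms (alpha is an assumed weight)."""
    mse = nn.functional.mse_loss
    return alpha * mse(student_pred, target) + (1 - alpha) * mse(student_pred, teacher_pred)

student = StudentForecaster()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(16, 24)            # hypothetical batch of household load windows
target = torch.randn(16, 1)        # observed next-step loads
teacher_pred = torch.randn(16, 1)  # stands in for the pre-trained GNN's forecasts

optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_pred, target)
loss.backward()
optimizer.step()
```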
Craig, S., Lavan, S., Altarawneh, E., Chandola, D., Khan, W., Pepler, D. and Jenkin, M. Technology exposure elicits increased acceptance of autonomous robots and avatars. Proc. IEEE RO-MAN 2024, Pasadena, CA.
Science fiction has long promised a future within which robots assist humans in many facets of their daily lives, and robot technology is advancing at a pace which suggests that the necessary technology already exists, or may exist, in the near future. But, once the technology is in place, how accepting will humans be of autonomous machines performing tasks traditionally performed by humans? Are we designing and developing robots that are human-centric? In a study involving 357 undergraduate students, we found that acceptance of robots was dependent upon previous exposure to different forms of technology (i.e., robots, avatars, video games). Men were more likely to have previous exposure to technology, and were therefore more likely to accept robots and avatars in different tasks compared to women. Enhancing the acceptability of robots by both men and women will require an increased exposure to technology, and women may require additional experience with technology to close the technology acceptance gap.
Jörges, B., Bury, N., McManus, M., Bansal, A., Allison, R. S., Jenkin, M. and Harris, L. R. The impact of gravity on perceived object height, npj Microgravity, 10, Article number: 95 (2024).
Altering posture relative to the direction of gravity, or exposure to microgravity, has been shown to affect many aspects of perception, including size perception. Our aims in this study were to investigate whether changes in posture and long-term exposure to microgravity bias the visual perception of object height and to test whether any such biases are accompanied by changes in precision. We also explored the possibility of sex/gender differences. Two cohorts of participants (12 astronauts and 20 controls, 50% women) varied the size of a virtual square in a simulated corridor until it was perceived to match a reference stick held in their hands. Astronauts performed the task before, twice during, and twice after an extended stay onboard the International Space Station. On Earth, they performed the task while sitting upright and while lying supine. Earth-bound controls also completed the task five times with test sessions spaced similarly to the astronauts; to simulate the microgravity sessions on the ISS, they lay supine. In contrast to earlier studies, we found no immediate effect of microgravity exposure on perceived object height. However, astronauts robustly underestimated the height of the square relative to the haptic reference and these estimates were significantly smaller 60 days or more after their return to Earth. No differences were found in the precision of the astronauts’ judgments. Controls underestimated the height of the square when supine relative to sitting in their first test session (simulating Pre-Flight) but not in later sessions. While these results are largely inconsistent with previous results in the literature, a posture-dependent effect of simulated eye height might provide a unifying explanation. We were unable to make any firm statements related to sex/gender differences. We conclude that no countermeasures are required to mitigate the acute effects of microgravity exposure on object height perception. However, space travelers should be warned about late-emerging and potentially long-lasting changes in this perceptual skill.
Jilani, A., Hogan, F., Morissette, C., Dudek, G., Jenkin, M. and Siddiqi, K. Visual-Tactile Inference of 2.5D Object Shape From Marker Texture. IEEE Robotics and Automation Letters, doi: 10.1109/LRA.2024.3518102.
Visual-tactile sensing affords abundant capabilities for contact-rich object manipulation tasks including grasping and placing. Here we introduce a shape-from-texture inspired contact shape estimation approach for visual-tactile sensors equipped with visually distinct membrane markers. Under a perspective projection camera model, measurements related to the change in marker separation upon contact are used to recover surface shape. Our approach allows for shape sensing in real time, without requiring network training or complex assumptions related to lighting, sensor geometry or marker placement. Experiments show that the surface contact shape recovered is qualitatively and quantitatively consistent with those obtained through the use of photometric stereo, the current state of the art for shape recovery in visual-tactile sensors. Importantly, our approach is applicable to a large family of sensors not equipped with photometric stereo hardware, and also to those with semi-transparent membranes. The recovery of surface shape affords new capabilities to these sensors for robotic applications, such as the estimation of contact and slippage in object manipulation tasks [1] and the use of force matching for kinesthetic teaching using multimodal visual-tactile sensing [2].
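As a rough illustration of the perspective-projection reasoning above, the following sketch relates observed marker spacing to local membrane depth: under a pinhole model the projected separation of neighbouring markers scales inversely with their distance from the camera, so a local depth estimate follows from the spacing observed before and during contact. The focal length, marker pitch, and spacing maps below are hypothetical, and this is a simplified stand-in for, not a reproduction of, the authors' method.

```python
# Illustrative sketch only: relate observed marker spacing to local membrane depth
# under a pinhole camera model. For a physical marker pitch P and focal length f,
# the projected spacing is s = f * P / z, so z = f * P / s. Values are hypothetical.
import numpy as np

FOCAL_LENGTH_PX = 600.0   # assumed camera focal length in pixels
MARKER_PITCH_MM = 2.0     # assumed physical spacing between membrane markers

def depth_from_spacing(observed_spacing_px: np.ndarray) -> np.ndarray:
    """Per-marker depth (mm) from the observed spacing to neighbouring markers (px)."""
    return FOCAL_LENGTH_PX * MARKER_PITCH_MM / observed_spacing_px

# Hypothetical spacing maps (pixels) before and during contact: markers pushed
# toward the camera appear farther apart, i.e. they sit at a smaller depth.
rest_spacing = np.full((8, 8), 40.0)
contact_spacing = rest_spacing.copy()
contact_spacing[3:5, 3:5] = 44.0  # locally indented region

depth_change = depth_from_spacing(rest_spacing) - depth_from_spacing(contact_spacing)
print(depth_change.round(2))  # positive values indicate membrane deformation (mm)
```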