2022

    1. Hansen, J., Hogan, F., Rivkin, D., Meger, D. P., Jenkin, M. and Dudek, G. Visuotactile-RL: Learning multimodal manipulation policies with deep reinforcement learning. Proc. IEEE ICRA, Philadelphia, PA.
      Manipulating objects with dexterity requires timely feedback that simultaneously leverages the senses of vision and touch. In this paper, we focus on the problem setting where both visual and tactile sensors provide pixel-level feedback for visuotactile reinforcement learning agents. We investigate the challenges associated with multimodal learning and propose several improvements to existing RL methods, including tactile gating, tactile data augmentation, and visual degradation. When compared with visual-only and tactile-only baselines, our Visuotactile-RL agents showcase (1) significant improvements in contact-rich tasks; (2) improved robustness to visual changes (lighting/camera view) in the workspace; and (3) resilience to physical changes in the task environment (weight/friction of objects).
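      The tactile gating and tactile data augmentation ideas above can be illustrated with a minimal sketch; this is our own construction in Python/NumPy, and the function names, pad size, and contact threshold are hypothetical rather than taken from the paper.

        # Minimal sketch (assumed details): random-shift augmentation for pixel-level
        # tactile observations and a simple gate that suppresses the tactile channel
        # when there appears to be no contact.
        import numpy as np

        def random_shift(img: np.ndarray, pad: int = 4) -> np.ndarray:
            """Randomly shift an HxWxC image by up to `pad` pixels (zero padding)."""
            h, w, c = img.shape
            padded = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=img.dtype)
            padded[pad:pad + h, pad:pad + w] = img
            dy, dx = np.random.randint(0, 2 * pad + 1, size=2)
            return padded[dy:dy + h, dx:dx + w]

        def tactile_gate(tactile_img: np.ndarray, threshold: float = 0.05) -> np.ndarray:
            """Zero the tactile observation when mean activation suggests no contact."""
            return tactile_img if tactile_img.mean() > threshold else np.zeros_like(tactile_img)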
    2. Gleason, D. and Jenkin, M. Nonholonomic robot navigation of mazes using reinforcement learning. Proc. Int. Conf. on Informatics in Control, Automation and Robotics (ICINCO). pg. 369-378. Lisbon, Portugal.
      Developing a navigation function for an unknown environment is a difficult task, made even more challenging when the environment has complex structure and the robot imposes nonholonomic constraints on the problem. Here we pose the problem of navigating an unknown environment as a reinforcement learning task for an Ackermann vehicle. We model environmental complexity using a standard characterization of mazes, and we show that training on complex maze architectures with loops (braid and partial braid mazes) results in an effective policy, but that for a more efficient policy, training on mazes without loops (perfect mazes) is to be preferred. Experimental results obtained in simulation are validated on a real robot operating both indoors and outdoors, assuming good localization and a 2D LIDAR to recover the local structure of the environment.
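      For readers unfamiliar with the nonholonomic constraint mentioned above, the sketch below shows a standard bicycle-model update for an Ackermann-steered vehicle; it is our own illustration, and the wheelbase and time step are assumed values rather than the paper's.

        # Bicycle-model kinematics for an Ackermann vehicle (illustrative only).
        import math

        def ackermann_step(x, y, theta, v, steer, wheelbase=0.3, dt=0.05):
            """Advance the vehicle pose one step; the vehicle cannot translate sideways."""
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += (v / wheelbase) * math.tan(steer) * dt
            return x, y, theta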
    3. Bury, N.-A., Harris, L. R., Jenkin, M., Allison, R. S., Felsner, S. and Herpers, R. From Earth to Space: the Effect of Gravity and Sex on Self-Motion Perception. Proc. From Picture to Reality, from Observer to Agent. Vision Research Conference, York University, June 6-9.
      Humans are inherently multisensory, perceptual agents. We combine vision, audition, touch, vestibular, and proprioceptive cues to navigate and interact with the world, which helps us distinguish self-generated acceleration from the acceleration of gravity. But what happens when there is no gravity? The effectiveness of visual motion in evoking the perception of self-motion has shown both over-/underestimation depending on the task, a contradiction that has been effectively resolved by the Lappe model (Lappe et al., 2007), which predicts both phenomena using only two parameters: gain (output/input) and a spatial decay (perceived distance reduced by a leak factor over distance). Here, we used tasks associated with underestimation (the adjust-target task: AT) and overestimation (the move-to-target task: MTT) to look for changes associated with self-motion under multiple gravity states. 24 participants (12 males, 33.6±7.2yrs.; 12 females, 33.9±6.0yrs.) were tested during parabolic flights that created one period of microgravity (~22s) and two periods of hypergravity (~25s) per parabola. Participants were tested twice on the ground before flight (1 & 2), during level flight (3), in the hypergravity (4) and microgravity (5) phases of the flight, and after the flight (6). For all sessions, participants were tested either lying supine or free-floating. They wore a virtual reality head-mounted display presenting optic flow that elicited perceived forward self-motion along a corridor. We measured their perceived “travel distance” in an egocentrically upright, visual simulation. Participants performed two tasks in separate blocks in each of the 6 sessions: (1) “MTT” in which they moved visually to the remembered position of a previously presented target, and (2) “AT” in which they were first moved visually through a given distance and then adjusted a target’s position to indicate how far they felt they had just traveled. The data were fitted using the Leaky Spatial Integrator model (Lappe et al., 2007) to determine the gain and decay factors for each condition. For the MTT task, participants slightly underestimated traveled distance for most short distances (<18m) and overestimated it for the longer distances (>18m). The results for the AT task showed an unexpected overestimation for each distance. There was a significant difference between distance estimates of the MTT task and the AT task, and a tendency for both tasks to differ between the 1-g conditions (on ground and in level flight) and under both hypergravity and microgravity. In both altered g-levels, participants’ gains were higher than during level flight, with the most convincing effects found in females. Our experiment so far reveals differences between the perceived travel distances in the MTT task and the AT task, which are partly compatible with the predictions of the Leaky Spatial Integrator model of self-motion perception. We also reveal a potentiation of self-motion perception in hyper- and microgravity. This agrees with anecdotal reports of increased effectiveness of optic flow in inducing self-motion in space and reveals a role of gravity in the perception of linear self-motion. However, differences in gravity might change the perception of self-motion for females only.
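      For reference, the Leaky Spatial Integrator fitted above can be summarised as follows (our paraphrase of Lappe et al., 2007, with gain k and spatial decay, or leak, rate \alpha):

        % Perceived distance D grows with gain k per unit of simulated travel x
        % and leaks in proportion to its current value.
        \frac{\mathrm{d}D}{\mathrm{d}x} = k - \alpha\, D, \qquad D(0) = 0
        \;\;\Rightarrow\;\;
        D(x) = \frac{k}{\alpha}\left(1 - e^{-\alpha x}\right)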
    4. Hogan, F. R., Tremblay, J.-F., Baghi, B. H., Jenkin, M., Siddiqi, K. and Dudek, G. Finger-STS: Combined proximity and tactile sensing for robotic manipulation. IEEE Robotics and Automation Letters, 7: 10865-10872.
      This paper introduces and develops novel touch sensing technologies that enable robots to better sense and react to intermittent contact interactions. We present Finger-STS, a robotic finger embodiment of the See-Through-your-Skin (STS) sensor that can capture 1) an “in the hand” visual perspective of an object that is being manipulated and 2) a high resolution tactile imprint of the contact geometry. We demonstrate the value of the sensor on a Bead Maze task. Here the multimodal feedback provided by the Finger-STS is leveraged by a robot to locate a bead visually and to guide it across a wire in response to tactile cues, with no additional sensing or planning required. To achieve this, we introduce a set of relevant visuotactile operations using computer vision-based algorithms. In particular, we sense the proximity of the object relative to the sensor as well as the nature of contact as a high resolution stick/slip vector field tracking the object motion in the finger.
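      The stick/slip vector field described above can be approximated with off-the-shelf dense optical flow; the sketch below is a simplified stand-in of our own (OpenCV Farnebäck flow with a hypothetical slip threshold), not the paper's implementation.

        # Estimate a per-pixel motion field between consecutive grayscale tactile
        # frames and label pixels as "slip" where the motion magnitude is large.
        import cv2
        import numpy as np

        def stick_slip_field(prev_gray: np.ndarray, curr_gray: np.ndarray, slip_thresh: float = 1.0):
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude = np.linalg.norm(flow, axis=2)
            slip_mask = magnitude > slip_thresh  # True where the object is slipping
            return flow, slip_mask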
    5. Wu, D., Xu, Y. T., Jenkin, M., Wang, J., Li, H., Liu, X. and Dudek, G. Short-term Load Forecasting with Deep Boosting Transfer Regression. ICC 2022 - IEEE International Conference on Communications, 5530-5536.
      With the increasing popularity of electric vehicles and the growing trend of working from home, electricity consumption in the residential sector is expected to continue to grow rapidly over the next few years. As a consequence, short-term residential load forecasting is becoming even more vital for the reliability and sustainability of the smart grid. Although deep learning models have shown impressive success in different areas including short-term electric load forecasting, such models require a large amount of training data. For many real-world load forecasting cases, we may not have enough training data to learn a reliable forecasting model. In this paper, we address this challenge through the use of boosting-based transfer learning with multiple sources. We first train a set of deep regression models on source houses that can provide relatively abundant data. We then transfer these learned models via the boosting framework to support data-scarce target houses. The transfer process is selective and customized for each target house to minimize the potential for negative transfer. Experimental results, based on real-world residential data sets, show that the proposed method can significantly improve forecasting accuracy.
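      A much-simplified stand-in for the selective multi-source transfer idea (not the paper's boosting algorithm) is to weight each pre-trained source model by its error on the target house's small training set, down-weighting sources that are likely to transfer poorly:

        # Assumes scikit-learn-style source models with a .predict() method.
        import numpy as np

        def ensemble_transfer(source_models, X_target, y_target):
            """Weight each pre-trained source model by its inverse error on target data."""
            errors = np.array([np.mean((m.predict(X_target) - y_target) ** 2)
                               for m in source_models])
            weights = 1.0 / (errors + 1e-8)
            weights /= weights.sum()
            return lambda X: sum(w * m.predict(X) for w, m in zip(weights, source_models))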
    6. Wu, D., Jenkin, M., Liu, X. and Dudek, G. Active Deep Multi-task Learning for Forecasting Short-Term Loads. ICC 2022 - IEEE International Conference on Communications, 5523-5529.
      With the increasing adoption of renewable energy generation and electric devices, electric load forecasting, especially short-term load forecasting (STLF), is becoming more and more important. The widespread adoption of smart meters makes it possible to utilize complex machine learning models for both aggregated load and single-home residential load forecasting. Similar homes in nearby locations are likely to have similar load consumption patterns and this similarity can be used to improve the overall forecasting performance. However, most current work on load forecasting focuses on a single learning task without exploiting the benefit of joint learning. In this paper, we propose the use of the multi-task learning (MTL) framework with long short-term memory (LSTM) recurrent neural networks for both aggregated and single home STLF. We propose an MTL-based forecasting algorithm for aggregated load forecasting in which single home forecasting is formulated as a single learning task within the MTL framework. This algorithm is extended for single home load forecasting in which load forecasting for a particular home becomes the primary learning task. Experimental results on real-world data sets demonstrate that residential load forecasting for both aggregated load and a single home can be improved within the MTL framework.
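      The MTL formulation above can be sketched as a shared recurrent encoder with one output head per home; the architecture below is an illustrative PyTorch example of our own, not the paper's configuration.

        import torch
        import torch.nn as nn

        class MultiTaskLSTM(nn.Module):
            """Shared LSTM encoder with one forecasting head per home (task)."""
            def __init__(self, n_homes: int, n_features: int = 1, hidden: int = 64):
                super().__init__()
                self.encoder = nn.LSTM(n_features, hidden, batch_first=True)  # shared across tasks
                self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_homes)])

            def forward(self, x: torch.Tensor, home_idx: int) -> torch.Tensor:
                # x: (batch, time, n_features); forecast the next load value for one home.
                _, (h_n, _) = self.encoder(x)
                return self.heads[home_idx](h_n[-1])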
    7. Wu, D., Jenkin, M., Xu, Y. T., Liu, X. and Dudek, G. Attentive load transfer for short-term load forecasting. GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 5285-5291.
      The modern power system is transitioning towards increasing penetration of renewable energy generation and demand from different types of electrical appliances. With this transition, residential load forecasting, especially short-term load forecasting (STLF), is becoming more and more challenging and important. Accurate short-term load forecasting can help improve energy dispatching efficiency and, as a consequence, reduce overall power system operation cost. Most current load forecasting algorithms assume that there is a large amount of training data available upon which to learn a reliable load forecasting model. However, this assumption can be challenging for real-world applications. In this work, we first propose the use of transfer learning and an attention mechanism to improve short-term load forecasting for a target domain with only a limited amount of available data. Furthermore, we extend the proposed method to utilize heterogeneous features which enables the approach to deal with more complex scenarios in the real world. Experimental results using real-world data sets show that the proposed methods can improve forecasting accuracy by a large margin over several existing baselines.
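      A minimal sketch of the attention idea (our construction, with hypothetical shapes): the target's feature vector acts as a query, each source domain supplies a key and a forecast, and a softmax over query-key similarity blends the source forecasts.

        import numpy as np

        def attentive_transfer(query, source_keys, source_preds, temperature=1.0):
            """query: (d,), source_keys: (n, d), source_preds: (n,) -> blended forecast."""
            scores = source_keys @ query / temperature   # similarity of each source to the target
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            return weights @ source_preds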
    8. Kapralos, B., Quevedo, K. C., Collins, K. C., Da Silva, C., Peisachovich, E., Kanev, K., Jenkin, M., Dubrowski, A. and Alam, F. Designing a pseudo-haptics study for virtual anesthesia skills development. IEEE Games, Entertainment, Media Conference (GEM), 1-2.
      Pseudo-haptics refers to the simulation of haptic sensations without the use of haptic interfaces, using, for example, audiovisual feedback and kinesthetic cues. Given the COVID-19 pandemic and the shift to online learning, there has been a recent interest in pseudo-haptics as it can help facilitate psychomotor skills development away from simulation centers and laboratories. Here we present work-in-progress that describes the design of a pseudo-haptics study for virtual anesthesia skills development. We anticipate this work will provide greater insight into pseudo-haptics and its application to anesthesia-based training.
    9. Baghi, B., Konar, A., Hogan, F., Jenkin, M. and Dudek, G. SESNO: Sample efficient social navigation from observation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 9164-9171.
      In this paper, we present the Sample Efficient Social Navigation from Observation (SESNO) algorithm that efficiently learns socially-compliant navigation policies from observations of human trajectories. SESNO is an inverse reinforcement learning (IRL)-based algorithm that learns from human trajectory observations without knowledge of their actions. We improve the sample-efficiency over previous IRL-based methods by introducing a shared experience replay buffer that allows reuse of past trajectory experiences to estimate the policy and the reward. We evaluate SESNO using publicly available pedestrian motion data sets and compare its performance to related baseline methods in the literature. We show that SESNO yields performance superior to existing baselines while dramatically reducing sample complexity, requiring as little as one hundredth of the samples needed by existing baselines.
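      The shared experience replay idea can be sketched as a single buffer that both the reward update and the policy update sample from; this is a minimal construction of our own, not the authors' code.

        import random
        from collections import deque

        class SharedReplayBuffer:
            """Stores past transitions once; reused for both reward and policy estimation."""
            def __init__(self, capacity: int = 100_000):
                self.buffer = deque(maxlen=capacity)

            def add(self, state, action, next_state):
                self.buffer.append((state, action, next_state))

            def sample(self, batch_size: int):
                return random.sample(self.buffer, min(batch_size, len(self.buffer)))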
    10. Bury, N., Harris, L. R., Jenkin, M., Allison, R., Felsner, S. and Herpers, R. The Influence of Gravity on Perceived Travel Distance in Virtual Reality. 64th Conference of Experimental Psychologists (TeaP), University of Cologne, March 20-23, 2022.