2021

    1. Wu, D., Kang, J., Xu, Y. T., Li, H., Li, J., Chen, X., Rivkin, D., Jenkin, M., Lee, T., Park, I., Liu, X. and Dudek, G. Load balancing for communication networks via data-efficient deep reinforcement learning. IEEE Global Communications Conference (GLOBECOM).
      Within a cellular network, load balancing between different cells is of critical importance to network performance and quality of service. Most existing load balancing algorithms are manually designed and tuned rule-based methods where near-optimality is almost impossible to achieve. These rule-based methods are difficult to adapt quickly to traffic changes in real-world environments. Given the success of Reinforcement Learning (RL) algorithms in many application domains, there have been a number of efforts to tackle load balancing for communication systems using RL-based methods. To our knowledge, none of these efforts have addressed the need for data efficiency within the RL framework, which is one of the main obstacles in applying RL to wireless network load balancing. In this paper, we formulate the communication load balancing problem as a Markov Decision Process and propose a data-efficient transfer deep reinforcement learning algorithm to address it. Experimental results show that the proposed method can significantly improve the system performance over other baselines and is more robust to environmental changes.
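      The paper casts load balancing as a Markov Decision Process. As a rough illustration of that framing only (not the authors' algorithm), the Python sketch below defines a toy inter-cell load-balancing environment and trains a plain tabular Q-learning agent on it; the cell model, the handover-shift action set, and the imbalance-based reward are assumptions made for the example, and the paper's data-efficient transfer deep RL method would replace the tabular learner with a neural policy warm-started from a related traffic scenario.

        import numpy as np

        # Toy MDP for inter-cell load balancing (illustrative only; the dynamics
        # and reward are hypothetical, not the authors' network simulator).
        N_CELLS = 4      # one congested cell plus its neighbours
        N_ACTIONS = 3    # shift a handover offset: -1, 0, +1 (toward cell 0)
        BINS = 5         # per-cell load discretisation for tabular Q-learning

        class ToyLoadBalancingEnv:
            """State: per-cell load in [0, 1]. Action: nudge a handover offset,
            moving a small fraction of traffic between cell 0 and its neighbours.
            Reward: negative load imbalance (standard deviation across cells)."""

            def __init__(self, seed=0):
                self.rng = np.random.default_rng(seed)
                self.reset()

            def reset(self):
                self.load = self.rng.uniform(0.2, 0.9, size=N_CELLS)
                return self.load.copy()

            def step(self, action):
                shift = (action - 1) * 0.05                  # -5%, 0, or +5% of traffic
                self.load[0] = np.clip(self.load[0] + shift, 0.0, 1.0)
                self.load[1:] = np.clip(self.load[1:] - shift / (N_CELLS - 1), 0.0, 1.0)
                drift = self.rng.normal(0.0, 0.02, size=N_CELLS)  # random traffic drift
                self.load = np.clip(self.load + drift, 0.0, 1.0)
                return self.load.copy(), -float(np.std(self.load))

        def discretise(load):
            return tuple(np.minimum((load * BINS).astype(int), BINS - 1))

        # Plain epsilon-greedy tabular Q-learning over the discretised loads.
        env, Q = ToyLoadBalancingEnv(), {}
        state = discretise(env.reset())
        eps, alpha, gamma = 0.1, 0.1, 0.95
        for t in range(20000):
            q = Q.setdefault(state, np.zeros(N_ACTIONS))
            a = np.random.randint(N_ACTIONS) if np.random.rand() < eps else int(q.argmax())
            next_load, r = env.step(a)
            next_state = discretise(next_load)
            q_next = Q.setdefault(next_state, np.zeros(N_ACTIONS))
            q[a] += alpha * (r + gamma * q_next.max() - q[a])
            state = next_state
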
    2. Chan, M., Uribe-Quevedo, A., Kapralos, B., Jenkin, M., Kanev, K. and Jaimes, N. A review of virtual reality-based eye examination simulators. Recent Advances in Technologies for Inclusive Well-Being, 83-102.
      Eye fundus examination requires extensive practice to enable the adequate interpretation of the anatomy observed as a flat image seen through the ophthalmoscope, which is a handheld device that allows for the non-invasive examination of the back of the eye. Mastering eye examination with an ophthalmoscope is difficult due to the intricate volumetric anatomy of the eye when seen as a two-dimensional image through the lens of an ophthalmoscope. The lack of eye examination skills in medical practitioners is a cause of concern in today’s medical practice, as misdiagnosis can result in improper or delayed treatment of life-threatening conditions such as glaucoma, high blood pressure, or diabetes, amongst others. Past and current solutions to the problem of ophthalmoscope education have seen the use of pictures, illustrations, videos, cadavers, patients, and volunteers. More recently, simulation has provided a higher-end instrument to safely expose trainees to otherwise impossible conditions for learning purposes. However, the costs associated with purchasing and maintaining modern simulators have led to complications related to their acquisition and availability. These shortcomings in eye examination simulation have led to research focusing on cost-effective tools using a breadth of solutions involving physical and digital simulators ranging from mobile applications to virtual and augmented reality, to makerspace and practical eye models. In this chapter, we review direct ophthalmoscopy simulation models for medical training. We highlight the characteristics, limitations, and advantages presented by modern simulation devices.
    3. Chan, M., Uribe-Quevedo, A., Kapralos, B., Jenkin, M., Jaimes, N. and Kanev, K. Virtual and augmented reality direct ophthalmoscopy tool: A comparison between interaction methods. Multimodal Technologies and Interaction (MTI), 5, 2021.
      Direct ophthalmoscopy (DO) is a medical procedure whereby a health professional, using a direct ophthalmoscope, examines the eye fundus. DO skills are in decline due to the use of interactive diagnostic equipment and insufficient practice with the direct ophthalmoscope. To address the loss of DO skills, physical and computer-based simulators have been developed to offer additional training. Among the computer-based simulations, virtual and augmented reality (VR and AR, respectively) allow simulated immersive and interactive scenarios with eye fundus conditions that are difficult to replicate in the classroom. VR and AR require employing 3D user interfaces (3DUIs) to perform the virtual eye examination. Using a combination of a between-subjects and within-subjects paradigm with two groups of five participants, this paper builds upon a previous preliminary usability study that compared the use of the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens 1 hand gesticulation interaction methods when performing a virtual direct ophthalmoscopy eye examination. The work described in this paper extends our prior work by considering the interactions with the Oculus Quest controller and Oculus Quest hand-tracking system to perform a virtual direct ophthalmoscopy eye examination while allowing us to compare these methods with our prior interaction techniques. Ultimately, this helps us develop a greater understanding of usability effects for virtual DO examinations and virtual reality in general. Although the number of participants was limited, n = 5 for Stage 1 (including the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens hand gesticulations), and n = 13 for Stage 2 (including the Oculus Quest controller and the Oculus Quest hand tracking), given the COVID-19 restrictions, our initial results comparing VR and AR 3D user interactions for direct ophthalmoscopy are consistent with our previous preliminary study where the physical controllers resulted in higher usability scores, while the Oculus Quest’s more accurate hand motion capture resulted in higher usability when compared to the Microsoft HoloLens hand gesticulation.
    4. Gelsomini, F., Hung, P. C. K., Kapralos, B., Uribe-Quevedo, A., Jenkin, M., Tokuhiro, A., Kanev, K., Hosoda, M. and Mimura, H. Specialized CNT-based Sensor Framework for Advanced Motion Tracking. Proc. Hawaii International Conference on System Sciences (HICSS). Kauai, Hawaii. Held Online.
      In this work, we discuss the design and development of an advanced framework for high-fidelity finger motion tracking based on Specialized Carbon Nanotube (CNT) stretchable sensors developed at our research facilities. Earlier versions of the CNT sensors have been employed in the high-fidelity finger motion tracking Data Glove commercialized by Yamaha, Japan. The framework presented in this paper encompasses our continuing research and development of more advanced CNT-based sensors and the implementation of novel high-fidelity motion tracking products based on them. The CNT sensor production and communication framework components are considered in detail and wireless motion tracking experiments with the developed hardware and software components integrated with the Yamaha Data Glove are reported.
    5. Hogan, F. R., Jenkin, M., Rezaei-Shoshtari, S., Girdhar, Y., Meger, D. and Dudek, G. Seeing through your skin: recognizing objects with a novel visuotactile sensor. Proc WACV 2021, Held Online.
      We introduce a new class of vision-based sensor and associated algorithmic processes that combine visual imaging with high-resolution tactile sensing, all in a uniform hardware and computational architecture. We demonstrate the sensor’s efficacy for both multi-modal object recognition and metrology. Object recognition is typically formulated as a unimodal task, but by combining two sensor modalities we show that we can achieve several significant performance improvements. This sensor, named the See-Through-your-Skin sensor (STS), is designed to provide rich multi-modal sensing of contact surfaces. Inspired by recent developments in optical tactile sensing technology, we address a key missing feature of these sensors: the ability to capture a visual perspective of the region beyond the contact surface. Whereas optical tactile sensors are typically opaque, we present a sensor with a semitransparent skin that has the dual capabilities of acting as a tactile sensor and/or as a visual camera depending on its internal lighting conditions. This paper details the design of the sensor, showcases its dual sensing capabilities, and presents a deep learning architecture that fuses vision and touch. We validate the ability of the sensor to classify household objects, recognize fine textures, and infer their physical properties both through numerical simulations and experiments with a smart countertop prototype.
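      As a rough illustration of the kind of vision-plus-touch fusion the abstract describes (not the paper's actual network), the PyTorch sketch below encodes the visual and tactile image streams with two small CNNs and fuses them by concatenation ahead of a classification head; the layer sizes, input resolution, and fusion-by-concatenation strategy are assumptions made for the example.

        import torch
        import torch.nn as nn

        class VisuoTactileFusionNet(nn.Module):
            """Minimal two-stream classifier: one CNN encoder per modality
            (a visual frame and a tactile imprint from an STS-style sensor),
            fused by concatenation before a small classification head."""

            def __init__(self, n_classes=10):
                super().__init__()
                def encoder():
                    return nn.Sequential(
                        nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                        nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32 * 4 * 4 = 512
                    )
                self.visual_enc = encoder()
                self.tactile_enc = encoder()
                self.head = nn.Sequential(
                    nn.Linear(2 * 512, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, visual, tactile):
                z = torch.cat([self.visual_enc(visual), self.tactile_enc(tactile)], dim=1)
                return self.head(z)

        # Usage: one visual image and one tactile image per sample.
        model = VisuoTactileFusionNet(n_classes=10)
        logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
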
    6. Codd-Downey, R., Jenkin, M., Dey, B. B., Zacher, J., Blainey, E. and Andrews, P. Monitoring re-growth of invasive plants using an autonomous surface vessel. Frontiers in Robotics and AI, January 2021.
      Invasive aquatic plant species, and in particular Eurasian Water-Milfoil (EWM), pose a major threat to domestic flora and fauna and can in turn negatively impact local economies. Numerous strategies have been developed to harvest and remove these plant species from the environment. However, it is still an open question which method is best suited to removing a particular invasive species and how different lake conditions affect that choice. One problem common to all harvesting methods is the need to assess the location and degree of infestation in an ongoing manner. This is a difficult and error-prone problem given that the plants grow underwater and significant infestation at depth may not be visible at the surface. Here we detail efforts to monitor EWM infestation and evaluate harvesting methods using an autonomous surface vehicle (ASV). This novel ASV is based around a mono-hull design with two outriggers. Powered by a differential pair of underwater thrusters, the ASV is outfitted with RTK GPS for position estimation and a set of submerged environmental sensors that are used to capture imagery and depth information, including the presence of material suspended in the water column. The ASV is capable of both autonomous operation and tele-operation.
    7. Rezaei-Shoshtari, S., Hogan, F., Jenkin, M., Meger, D. and Dudek, G. Learning intuitive physics with multimodal generative models. Proc. AAAI, held online.
      Predicting the future interaction of objects when they come into contact with their environment is key for autonomous agents to take intelligent and anticipatory actions. This paper presents a perception framework that fuses visual and tactile feedback to make predictions about the expected motion of objects in dynamic scenes. Visual information captures object properties such as 3D shape and location, while tactile information provides critical cues about interaction forces and resulting object motion when it makes contact with the environment. Utilizing a novel See-Through-your-Skin (STS) sensor that provides high resolution multimodal sensing of contact surfaces, our system captures both the visual appearance and the tactile properties of objects. We interpret the dual stream signals from the sensor using a Multimodal Variational Autoencoder (MVAE), allowing us to capture both modalities of contacting objects and to develop a mapping from visual to tactile interaction and vice-versa. Additionally, the perceptual system can be used to infer the outcome of future physical interactions, which we validate through simulated and real-world experiments in which the resting state of an object is predicted from given initial conditions.
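      The abstract describes interpreting the sensor's dual stream with a Multimodal Variational Autoencoder. The sketch below shows that general idea in minimal form, assuming flattened per-modality feature vectors and a product-of-experts fusion of the Gaussian posteriors; the dimensions, encoder/decoder shapes, and fusion rule are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class MVAE(nn.Module):
            """Minimal multimodal VAE: one encoder/decoder per modality (visual and
            tactile feature vectors) sharing a latent z via product-of-experts."""

            def __init__(self, x_dim=256, z_dim=32):
                super().__init__()
                def enc():   # outputs concatenated (mu, logvar)
                    return nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                         nn.Linear(128, 2 * z_dim))
                def dec():
                    return nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                         nn.Linear(128, x_dim))
                self.enc_vis, self.enc_tac = enc(), enc()
                self.dec_vis, self.dec_tac = dec(), dec()

            @staticmethod
            def poe(mus, logvars):
                # Product of Gaussian experts, including a standard-normal prior expert.
                prec = [torch.ones_like(mus[0])] + [torch.exp(-lv) for lv in logvars]
                mu = sum(m * p for m, p in zip([torch.zeros_like(mus[0])] + mus, prec)) / sum(prec)
                var = 1.0 / sum(prec)
                return mu, torch.log(var)

            def forward(self, x_vis=None, x_tac=None):
                mus, logvars = [], []
                if x_vis is not None:
                    mu, lv = self.enc_vis(x_vis).chunk(2, dim=-1)
                    mus.append(mu); logvars.append(lv)
                if x_tac is not None:
                    mu, lv = self.enc_tac(x_tac).chunk(2, dim=-1)
                    mus.append(mu); logvars.append(lv)
                mu, logvar = self.poe(mus, logvars)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
                return self.dec_vis(z), self.dec_tac(z), mu, logvar

        # Cross-modal use: encode only the visual stream and predict the tactile stream.
        model = MVAE()
        _, tactile_pred, _, _ = model(x_vis=torch.randn(4, 256))
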
    8. Jenkin, M., Harris, L. R. and Herpers, R. Long-duration head down bed rest as an analog of microgravity: effects on the perception of upright. 23rd International Academy of Astronautics Humans in Space Conference. April 2021. Moscow, Russia (Held Online).
      Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) has proved a reliable analog. However, information on how HDBR affects sensory processing is lacking. We have previously shown (Harris et al., 2017) that microgravity alters the weighting applied to visual cues in determining the perceptual upright, an effect that lasts long after return. Here, we assessed whether long-duration HDBR has comparable effects. We assessed spatial orientation using the luminous line test (which yields the subjective visual vertical) and the oriented character recognition test (which yields the perceptual upright) before (once), during (three times) and after (twice) 21 days of 6° HDBR in 10 participants. By varying the orientation of the visual background and the relative position of their body and gravity (by having them lie on their backs and on their sides during bed rest, as well as being tested upright before and after HDBR) we were able to assess the relative weightings assigned to the contributions of the body, gravity and visual cues to upright. As with the effects of microgravity exposure, HDBR decreased the weighting of the visual cue relative to the body cue. The weightings returned to pre-bed-rest levels by the second post-bed-rest session. Before HDBR, vision had a measurably greater influence on the perceptual upright when supine compared to upright because a supine posture temporarily removes the influence of gravity along the long axis of the body. This increase gradually decreased during HDBR until no effect of posture could be seen immediately following HDBR. The subjective visual vertical, however, appeared unaffected by HDBR. This is one of the first demonstrations of a perceptual consequence of HDBR and further justifies its use as an analog for space. We conclude that bed rest can be a useful analog for the study of the perception of self-orientation during long-term exposure to microgravity.
    9. Harris, L. R., Jorges, B., Bury, N., McManus, M., Allison, R. and Jenkin, M. The perception of self-motion in microgravity. 23rd International Academy of Astronautics Humans in Space Conference. April 2021. Moscow, Russia (Held Online).
      Moving around in a zero-gravity environment is very different from moving on Earth. The vestibular system in 0g registers only the accelerations associated with movement and no longer has to distinguish them from the acceleration of gravity. How does this affect an astronaut’s perception of space and movement? Here we explore how the perception of self-motion and distance changes during and following long-duration exposure to 0g. Our hypothesis was that absence of gravity cues should lead participants to rely more strongly on visual information in 0g compared to on Earth. We tested a cohort of ISS astronauts five times: before flight, twice during flight (within 6 days of arrival in space and after 3 months in 0g) and twice after flight (within 6 days of re-entry and 2 months after returning). Data collection is on-going, but we have currently tested 8 out of 10 participants. Using Virtual Reality, astronauts performed two tasks. Task 1, the perception of self-motion task, measures how much visual motion is required to create the sensation of moving through a particular distance. Astronauts viewed a target at one of several distances in front of them in a virtual corridor. The target then disappeared, and they experienced visually simulated self-motion along the corridor and pressed a button to indicate when they had reached the position of the remembered target. Task 2 was the perception of distance task. We presented a virtual cube in the same corridor and asked the astronauts to judge whether the cube’s sides were longer or shorter than a reference length they held in their hands. We inferred the distance at which they perceived the target from the size that they chose to match the reference length. Preliminary analysis of the results with Linear Mixed-Effects Modelling suggests that participants did not experience any differences in perceived self-motion on first arriving in space (p = 0.783). After being in space for three months, however, they needed significantly more visual motion (7.5 %) to create the impression they had passed through the target distance (p < 0.001), indicating that visual motion (optic flow) elicited a weaker sense of self-motion than before adapting to the space environment. Astronauts also made size matches that were consistent with underestimating perceived distance in space (on arrival: 26.6 % closer, p < 0.001; after 3 months: 26.3 % closer, p < 0.001) compared to the pre-test on Earth. Our results indicate that prolonged exposure to 0g tends to decrease the effective use of visual information for the perception of travelled distance. This effect cannot be explained in terms of biased distance perception. Knowing that astronauts are likely to misperceive their self-motion and the scale of their environment is critical information for the design of safe operations in space and for readjustment to other gravity levels found on the Moon and Mars.
    10. Girdhar, Y., Rivkin, D., Wu, D., Jenkin, M., Liu, X. and Dudek, G. Optimizing cellular networks via continuously moving base stations on road networks. Proc. IEEE ICRA. Xi'an, China.
      Although existing cellular network base stations are typically immobile, the recent development of small form factor base stations and self-driving cars has enabled the possibility of deploying a team of continuously moving base stations that can reorganize the network infrastructure to adapt to changing network traffic usage patterns. Given such a system of mobile base stations (MBSes) that can freely move on the road, how should their paths be planned in an effort to optimize the experience of the users? This paper addresses this question by modelling the problem as a Markov Decision Process where the actions correspond to the MBSes deciding which direction to go at traffic intersections; states correspond to the position of MBSes; and rewards correspond to minimizing packet loss in the network. A Monte Carlo Tree Search (MCTS)-based anytime algorithm that produces path plans for multiple base stations while optimizing expected packet loss is proposed. Simulated experiments in the city of Verdun, QC, Canada, with varying user equipment (UE) densities and random initial conditions show that the proposed approach consistently outperforms myopic planners, and is able to achieve near-optimal performance.
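      The abstract spells out the MDP structure (actions chosen at intersections, MBS positions as states, packet loss as reward) and an MCTS-based anytime planner. The sketch below shows a generic UCT-style Monte Carlo Tree Search over a hypothetical four-intersection road graph with a demand-based stand-in for the packet-loss reward; it illustrates the planning loop only and is not the paper's simulator or planner.

        import math, random

        ROADS = {  # intersection -> reachable neighbouring intersections (toy graph)
            'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C'],
        }
        DEMAND = {'A': 0.2, 'B': 0.9, 'C': 0.4, 'D': 0.7}   # relative UE traffic per area

        def reward(state):
            # Stand-in for "less packet loss": reward is higher where demand is higher.
            return DEMAND[state]

        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children, self.visits, self.value = {}, 0, 0.0

        def uct_select(node, c=1.4):
            return max(node.children.values(),
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

        def rollout(state, depth=10):
            total = 0.0
            for _ in range(depth):              # random drive through the graph
                state = random.choice(ROADS[state])
                total += reward(state)
            return total / depth

        def mcts(root_state, n_iter=2000):      # anytime: stop whenever a decision is due
            root = Node(root_state)
            for _ in range(n_iter):
                node = root
                while node.children:            # selection
                    node = uct_select(node)
                for nxt in ROADS[node.state]:   # expansion: add all reachable intersections
                    node.children.setdefault(nxt, Node(nxt, node))
                leaf = random.choice(list(node.children.values()))
                value = rollout(leaf.state)     # simulation
                while leaf is not None:         # backpropagation
                    leaf.visits += 1
                    leaf.value += value
                    leaf = leaf.parent
            best = max(root.children.items(), key=lambda kv: kv[1].visits)
            return best[0]                      # next intersection to drive toward

        print(mcts('A'))   # likely 'B', the high-demand neighbour
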
    11. Joerges, B., Bury, N., McManus, M., Allison, R. S., Jenkin, M., and Harris, L. R. Body posture affects the perception of visually simulated self-motion. Proc VSS, held online.
      Perceiving one’s self-motion is a multisensory process involving integrating visual, vestibular and other cues. The perception of self-motion can be elicited by visual cues alone (vection) in a stationary observer. In this case, optic flow information compatible with self-motion may be affected by conflicting vestibular cues signaling that the body is not accelerating. Since vestibular cues are less reliable when lying down (Fernandez & Goldberg, 1976), conflicting vestibular cues might bias the self-motion percept less when lying down than when upright. To test this hypothesis, we immersed 20 participants in a virtual reality hallway environment and presented targets at different distances ahead of them. The targets then disappeared, and participants experienced optic flow simulating constant-acceleration, straight-ahead self-motion. They indicated by a button press when they felt they had reached the position of the previously-viewed target. Participants also performed a task that assessed biases in distance perception. We showed them virtual boxes at different simulated distances. On each trial, they judged if the height of the box was bigger or smaller than a reference ruler held in their hands. Perceived distance can be inferred from biases in perceived size. They performed both tasks sitting upright and lying supine. Participants needed less optic flow (perceived they had travelled further) to perceive they had reached the target’s position when supine than when sitting (by 4.8%, bootstrapped 95% CI=[3.5%;6.4%], determined using Linear Mixed Modelling). Participants also judged objects as larger (compatible with closer) when upright than when supine (by 2.5%, 95% CI=[0.03%;4.6%], as above). The bias in traveled distance thus cannot be reduced to a bias in perceived distance. These results suggest that vestibular cues impact self-motion distance perception, as they do heading judgements (MacNeilage, Banks, DeAngelis & Angelaki, 2010), even when the task could be solved with visual cues alone.
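      The linear-mixed-modelling analysis mentioned above can be sketched with statsmodels; the snippet below fits a fixed effect of posture (supine vs. upright) on a per-trial visual-gain measure with a random intercept per participant. The data, variable names, and effect size here are synthetic placeholders chosen to mirror the reported ~4.8% effect, not the study's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Build a synthetic per-trial dataset: each participant contributes trials
        # in both postures; "gain" stands in for how much optic flow was needed.
        rng = np.random.default_rng(1)
        rows = []
        for participant in range(20):
            subject_offset = rng.normal(0, 0.05)          # random intercept
            for posture in ("upright", "supine"):
                effect = -0.048 if posture == "supine" else 0.0   # ~4.8% less flow when supine
                gains = 1.0 + subject_offset + effect + rng.normal(0, 0.08, 30)
                rows += [{"participant": participant, "posture": posture, "gain": g}
                         for g in gains]
        df = pd.DataFrame(rows)

        # Mixed-effects model: fixed effect of posture, random intercept per participant.
        model = smf.mixedlm("gain ~ posture", df, groups=df["participant"])
        print(model.fit().summary())   # inspect the posture[T.upright] coefficient
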
    12. Harris, L. R., Jenkin, M. and Herpers, R. Long-duration head down bed rest as an analog of microgravity: effects on the static perception of upright. J. Vest. Res. 2021.
      BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking. OBJECTIVE: We previously showed (25) that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects? METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit (25). RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected. CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
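      To make the reported recovery time constant concrete, the sketch below shows one way such a constant can be estimated: fitting an exponential return-to-baseline to the visual/body weighting ratio measured at discrete sessions using scipy's curve_fit. The sample points are synthetic placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        days = np.array([0.0, 2.0, 7.0, 14.0])        # days after the end of bed rest
        ratio = np.array([0.62, 0.70, 0.78, 0.82])    # synthetic visual/body weighting ratio

        def recovery(t, baseline, drop, tau):
            """Exponential return toward baseline with time constant tau (days)."""
            return baseline - drop * np.exp(-t / tau)

        (baseline, drop, tau), _ = curve_fit(recovery, days, ratio, p0=(0.8, 0.2, 7.0))
        print(f"estimated recovery time constant: {tau:.2f} days")
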
    13. Wilcocks, K., Perivolaris, A., Kapralos, B., Uribe-Quevedo, A., Jenkin, M., Kanev, K., Mimura, H., Hosoda, M., Alam, F. and Dubrowski, A. Work-in-Progress: A novel data glove for psychomotor-based virtual medical training. Proc. IEEE Global Engineering Education Conference (EDUCON), pp. 1318-1321.
      Despite its importance in the real world, manual (hand) dexterity is often ignored in medical-based virtual training environments that have traditionally focused on cognitive and affective skills development. Psychomotor (technical) skills, particularly those related to manual dexterity, are fundamental to various medical procedures and ignoring them in virtual-based training tools can lead to a sub-optimal training experience. Here, we present a novel, consumer-level data glove that provides accurate user interactions involving the proximal and medial phalanges, interactions that are relevant in many manual dexterity tasks. We also outline how this novel data glove is being incorporated into an existing serious gaming platform for anesthesia training that currently focuses on cognitive and affective skills development only. The addition of psychomotor skills development through the adoption of simulated tactile feedback will provide a more complete serious gaming training platform.