2018

  1. Herpers, R., Harris, L. R., McManus, M., Hofhammer, T., Noppe, A., Frett, T., Jenkin, M. and Scherfgen, D. The somatogravic illusion during centrifugation: sex differences. A poster presented at the 39th ISGP Meeting and ESA Life Sciences Meeting, Noordwijk, Netherlands, 2018.
    Rotation on a centrifuge offers a unique opportunity to vary the direction of the gravity vector without physical tilt, that is, without co-activation of the semicircular canals during the simulated tilt. The simulated tilt angle is the tilt of the simple vector sum of gravity and the acceleration added by the centrifuge. Perceiving acceleration as tilt is the well-known somatogravic effect [Mach1875, Clark1951]. Sustained linear acceleration together with gravity creates a single gravito-inertial force (GIF). Under normal gravity conditions, sustained linear acceleration in the transverse plane can create an illusion of tilt - the somatogravic illusion - in which the entire GIF is interpreted as corresponding to gravity. However, the magnitude of the effect, i.e., the fraction of the GIF that is interpreted as gravity, has not been well quantified in the sagittal plane. We therefore varied the added acceleration to induce a somatogravic illusion and measured the perceptual effects using a haptic rod to indicate the perceived direction of gravity.
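    The simulated tilt referred to above follows directly from the geometry of the gravito-inertial force; a minimal formulation, assuming centripetal acceleration a_c = ω²r at radius r and rotation rate ω (a sketch of the standard relation, not a formula quoted from the poster):

        % GIF as the vector sum of gravity and the centrifuge's centripetal acceleration
        \[ \vec{f}_{\mathrm{GIF}} = \vec{g} + \vec{a}_c, \qquad a_c = \omega^2 r \]
        % Simulated tilt angle of the GIF away from gravity
        \[ \theta_{\mathrm{sim}} = \arctan\!\left(\frac{a_c}{g}\right) = \arctan\!\left(\frac{\omega^2 r}{g}\right) \]

    For example, a 45-degree simulated tilt requires a_c = g ≈ 9.81 m/s² at the participant's location on the centrifuge.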
  2. Acosta, D., Gu, D., Uribe-Quevedo, A., Kanev, K., Jenkin, M., Kapralos, B. and Jaimes, N. Mobile e-Training Tools for Augmented Reality Eye Fundus Examination, Proc. Int. Conf. on Interactive Mobile Communication, Technologies and Learning, Hamilton, ON, 2018. Proceedings published as Auer, M. and Tsiatsos, T. (eds.) Mobile Technologies and Applications for the Internet of Things: Proceedings of the 12th IMCL, Springer, 2019.
    The eye fundus examination procedure that employs direct fundoscopy involves interpreting the intricate anatomy of the eye when viewed through the lens of an ophthalmoscope. Mastering this procedure is difficult, and it requires extensive training that still employs instructional materials including pictures, illustrations, videos, and most recently interactive computer-generated models. With the goal of adding realism to eye fundus training and overcoming the limitations of traditional media, simulators employing manikin heads can be used. Such simulators utilise interchangeable pictures and embedded displays that allow the visualisation of multiple eye conditions. Modern simulators include immersive technologies such as virtual reality and augmented reality that are providing innovative training opportunities. Unfortunately, current high-end virtual and augmented reality hardware is expensive, leading to isolated experiences best suited for only a single trainee at a time. This paper addresses the question of whether commodity technology-based systems could provide comparable training experiences at a lower cost. With respect to this, we discuss the design and development of two augmented reality eye fundus examination systems on low-cost mobile platforms. We conclude by reporting some preliminary results of the experimental use of the systems, including usability perception feedback and comparisons with the Eyesi simulator.
  3. Codd-Downey, R. and Jenkin, M. Wireless teleoperation of an underwater robot using Li-Fi. Proc. IEEE ICIA, Wuyishan, China, 2018.
    Teleoperation of unmanned underwater vehicles is most commonly facilitated through the use of expensive shielded ethernet cables or high-speed fibre optic cables that are also quite fragile. Wireless underwater communication has thus far been dominated by bulky and expensive acoustic modems. Is it possible to exploit recent advances in visible light communication technology, including Li-Fi, as a replacement for these technologies? Here we describe a small-scale Li-Fi system that can be used to provide short-range tele-operational control of an underwater vehicle. Such control can be provided either from a diver operating in close proximity to the robot or via a communications relay from surface-based support.
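    For illustration only, the sketch below frames simple surge/yaw teleoperation commands over a serial link to a Li-Fi transceiver; the port name, baud rate and packet layout are assumptions and not the protocol described in the paper.

        # Illustrative sketch: framing teleoperation commands for a serial Li-Fi link.
        # Port, baud rate and packet format are assumptions, not the paper's protocol.
        import struct
        import serial  # pyserial

        PORT = "/dev/ttyUSB0"   # hypothetical USB-serial Li-Fi transceiver
        BAUD = 115200

        def make_packet(surge, yaw):
            """Pack two command floats with a sync byte and a simple checksum."""
            body = struct.pack("<ff", surge, yaw)
            return b"\xAA" + body + bytes([sum(body) & 0xFF])

        if __name__ == "__main__":
            with serial.Serial(PORT, BAUD, timeout=0.1) as link:
                # Send a gentle forward command; a real teleoperator would loop on joystick input.
                link.write(make_packet(surge=0.3, yaw=0.0))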
  4. Bikram, B. D. and Jenkin, M. Spherico: rapid prototyping a spherical robot. Proc. IEEE ICIA, Wuyishan, China, 2018.
    Spherical robots are an interesting design that has been studied before, but due to their relative manufacturing complexity there are few such devices commercially available. Given the increased popularity of 3D printers and laser cutters, rapid prototyping has become an integral part of engineering practice. Leveraging these advances, we have developed Spherico, an inexpensive spherical robot that utilizes both laser cutter and 3D printing technology combined with off-the-shelf (OTS) components. The robot is driven by a Raspberry Pi and uses ROS (Robot Operating System) as the underlying software architecture. Using OTS components, rapid prototyping parts and ROS, Spherico takes less than 72 hours to build. The resulting robot is an inexpensive and robust platform for research into spherical robot control.
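    A minimal sketch of the kind of ROS node such a robot might run on its Raspberry Pi, assuming a standard cmd_vel Twist interface; the topic name and the set_motors() helper are illustrative assumptions, not Spherico's actual code.

        #!/usr/bin/env python
        # Illustrative ROS node: maps cmd_vel Twist messages to two drive motors.
        import rospy
        from geometry_msgs.msg import Twist

        def set_motors(left, right):
            # Placeholder for a motor-driver call (e.g., PWM via the Pi's GPIO).
            rospy.loginfo("left=%.2f right=%.2f", left, right)

        def on_cmd_vel(msg):
            # Simple differential mixing of linear and angular velocity commands.
            set_motors(msg.linear.x - msg.angular.z, msg.linear.x + msg.angular.z)

        if __name__ == "__main__":
            rospy.init_node("spherico_base")
            rospy.Subscriber("cmd_vel", Twist, on_cmd_vel)
            rospy.spin()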
  5. Acosta, D., Gu, D., Chan, M., Uribe-Quevedo, A., Kapralos, B., Jenkin, M., Jaimes, N. and Kanev, K. An augmented and mixed reality approach to eye fundus training. Presented at REALMX2018 Realities in Medicine, Toronto, Canada.
    Introduction. Fundoscopic examinations are critical in the diagnosis of life- and sight-threatening diseases, such as hypertension and diabetes mellitus. Although fundoscopic examination is regarded as an invaluable skill for both general practitioners and ophthalmologists, it is still a procedure that requires extensive training to master. The difficulty associated with achieving fundoscopy proficiency includes the lack of training time associated with current simulation and other tools. Additionally, patient-based and older manikin-based training add further challenges, as the view of the fundus is only available to the examiner, leading to oral explanations supported by photos, illustrations, videos, multimedia, and most recently mobile applications, virtual reality, and augmented reality tools. However, such complementary tools lack realistic examiner-patient interactions; photographs, preferred by students because of their high resolution and ease of manipulation, eliminate any examinee interactions. The need for realism and safer medical practices has resulted in the development of manikin simulators that allow trainees to be exposed to numerous scenarios while presenting learning objectives for the development of cognitive and psychomotor skills. However, manikin simulation requires a steep investment with regard to acquisition, maintenance, facilities, training, and curriculum development, which may limit access for medical educational institutions. The purpose of this research is to develop and study a complementary consumer-level tool for eye examination employing virtual and augmented reality, to elicit deliberate practice in the development of skills necessary to perform fundoscopic examinations. Our work builds upon simulation, role-playing, and games as tools to determine the most effective method of allowing trainees to develop fundoscopic examination skills in a safe and effective practice environment. Methods. To develop the virtual and augmented reality training scenario, we first conducted an analysis and characterization of the fundus examination; more specifically, we examined the interactions taking place to identify the inputs and outputs that govern our system. Based on this information, we developed a virtual eye examination scenario that employs virtual and augmented reality, allowing a user to approach a virtual patient while manipulating a 3D-printed or a digital version of the fundoscope to examine the eye fundus and identify basic anatomical landmarks. To develop the virtual scenarios, we employed the Unity3D game engine, whereas the assets were created using 3D authoring and character creation tools such as Autodesk Maya and Adobe Fuse. For the virtual interactions, a Microsoft HoloLens headset allowed unconstrained movement and interaction while wearing the headset. Users are able to use the HoloLens' "left-click" gesture input, which requires the user to hold their hand in the air while pointing at the object of interest and tapping the index finger and thumb together, to interact with virtual elements such as an ophthalmoscope, the red reflex, and the various quadrants of the eye fundus. The augmented reality interactions used a 3D-printed fundoscope that allowed the user to interact with the eye tool by wearing a mobile virtual reality headset while looking at a marker placed on the printed fundoscope in front of the headset.
The 3D-printed fundoscope was designed to host a wireless-communication Arduino board with potentiometers to adjust light and lenses, wirelessly sending the data to the mobile device. To understand the feasibility of both approaches as potential practice tools, we gathered usability opinions from game developers and a medical partner regarding the HoloLens and mobile augmented reality usability. Discussion. The use of virtual and augmented reality elicits curiosity and interest amongst users, resulting in engaging experiences. However, given the work-in-progress nature of our current endeavors, there is still much work and research to be completed. During an informal survey of our virtual fundoscopy tool, participants expressed that the tool was easy to understand. The main concern related to the HoloLens interactions, which made the tool difficult to use, primarily due to the limited field of view and the gesture input system. The augmented reality approach, in contrast, was found to be more intuitive as it allowed interaction with the 3D-printed fundoscope and touch-based inputs when not wearing the virtual reality headset. Although we have only gathered informal usability perceptions, which allow us to fine-tune the interactions and simulation flow, all participants have expressed interest in our approach, highlighting potential applications in other fields. Future work will focus on the development of a framework that allows cross-platform eye examination so that users with different computers can be part of the group examination experience.
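    Purely as an illustration of the kind of data link described above, the sketch below receives potentiometer readings (light and lens settings) from the fundoscope's wireless board on the mobile side; the transport (UDP), port and message format are assumptions, as the abstract does not specify them.

        # Illustrative receiver for the fundoscope's light/lens settings.
        # UDP transport, port number and JSON message format are assumptions.
        import json
        import socket

        PORT = 5005  # hypothetical port used by the fundoscope's wireless board

        def listen():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("0.0.0.0", PORT))
            while True:
                data, _ = sock.recvfrom(1024)
                reading = json.loads(data)        # e.g. {"light": 0.7, "lens": -2}
                print(reading["light"], reading["lens"])

        if __name__ == "__main__":
            listen()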
  6. Harris, L., McManus, M., Hofhammer, T., Hoppe, A., Frett, T., Herpers, R. and Jenkin, M. The somatogravic illusion during centrifugation: sex differences. Poster presented at the International Society for Gravitational Physiology and European Space Agency Life Sciences Meeting, Noordwijk, The Netherlands, 2018.
    The vestibular system is sensitive to the direction of any acceleration acting on the head. A sustained linear acceleration in the presence of gravity creates a single gravito-inertial force (GIF) which the brain then has to resolve into the contributing components. Under normal gravity conditions, sustained linear acceleration in the transverse plane can create an illusion of tilt - the somatogravic illusion - in which the entire GIF is interpreted as corresponding to gravity. But what is the magnitude of this effect? Here we measure the magnitude of the somatogravic illusion during prolonged backwards acceleration created by centrifugation and compare this to the perception of comparable physical tilt. Ten participants (5 females) sat in an upright chair facing outwards on a centrifuge in the dark. They judged whether a rod, mounted on a servo-controlled motor that rotated in the parasagittal plane, was tilted forwards or backwards relative to gravity while the GIF was tilted backwards by 22.5 degrees or 45 degrees by the addition of centripetal acceleration. The orientation of the rod was varied using an adaptive psychophysical staircase that homed in on the perceived direction of gravity. Without exception, males showed a substantial somatogravic illusion with a gain (perceived tilt over simulated tilt) of 0.46, whereas 4 of 5 females did not experience the somatogravic illusion and continued to identify the direction of gravity correctly despite the tilt of the GIF. A trend for a similar sex asymmetry was found during physical tilt (males: gain = 1.2, females: gain = 0.49). The low gain in females suggests a restraining influence of a strong idiothetic vector acting as a prior indicating that gravity is continuously aligned with the body. The results indicate that gender should be taken into account when assessing balance or perceived orientation in situations where the direction of gravity may not align with the body. Examples of such situations include changes of body orientation during normal changes of posture, responses to imposed tilt (such as when bumped into while walking), and conditions of prolonged acceleration such as when driving or piloting an aircraft.
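    The adaptive staircase mentioned above can be sketched generically as follows; this is a simple 1-up/1-down procedure with a simulated observer, not the authors' actual implementation, and the step sizes and reversal counts are assumptions.

        # Generic 1-up/1-down adaptive staircase converging on the rod angle
        # a (simulated) participant judges as aligned with gravity.
        import random

        def run_staircase(perceived_gravity_deg, start_deg=20.0, step_deg=4.0, n_reversals=8):
            angle, step = start_deg, step_deg
            reversals, last = [], None
            while len(reversals) < n_reversals:
                # Simulated observer: "forwards" if the rod is tilted forwards
                # of the perceived gravity direction, plus response noise.
                response = (angle + random.gauss(0.0, 1.0)) > perceived_gravity_deg
                angle += -step if response else step    # step toward the opposite response
                if last is not None and response != last:
                    reversals.append(angle)
                    step = max(step / 2.0, 0.5)         # shrink the step at each reversal
                last = response
            return sum(reversals) / len(reversals)      # estimated perceived gravity direction

        if __name__ == "__main__":
            print(run_staircase(perceived_gravity_deg=10.3))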
  7. Harris, L., Felsner, S., Jenkin, M., Herpers, R., Noppe, A., Frett, T. and Scherfgen, D. Gender bias in the influence of gravity on perception. Poster presented at the Vision Sciences Society 18th Annual Meeting, St. Pete Beach, Florida, 2018.
    Females are influenced more than males by visual cues during many spatial orientation tasks, but females rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted plus or minus 112 degrees while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5g at ear-level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART). OCHART uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was lower in females than in males (t = -18.48, p < 0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%) while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1g) in this and other studies (e.g., Barnett-Cowan et al. 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight allocated by females to vision in simulated low-gravity conditions compared to when upright under normal gravity may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case and at which point the perceptual change happens requires further research.
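    The vision/gravity/body weightings reported above are commonly interpreted through a weighted vector-sum model of the perceptual upright; a minimal sketch of that computation is given below, with the weights and cue directions invented purely for illustration.

        # Illustrative weighted vector-sum model of the perceptual upright (PU):
        # the PU is the direction of the sum of the vision, gravity and body cue
        # vectors scaled by their relative weights. Values below are invented.
        import math

        def perceptual_upright(cues):
            """cues: iterable of (weight, direction_deg); returns PU direction in degrees."""
            x = sum(w * math.cos(math.radians(d)) for w, d in cues)
            y = sum(w * math.sin(math.radians(d)) for w, d in cues)
            return math.degrees(math.atan2(y, x))

        if __name__ == "__main__":
            cues = [(0.10, 112.0),  # vision: scene tilted 112 degrees
                    (0.30, 0.0),    # gravity: simulated g along the body axis
                    (0.60, 0.0)]    # body: long axis of the body
            print(perceptual_upright(cues))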
  8. Codd-Downey, R. and Jenkin, M. LightByte: Communicating wirelessly with an underwater robot using light. International Conference on Informatics in Control, Automation and Robotics, Porto, Portugal, 2018.
    Communication with and control of underwater autonomous vehicles is complicated by the nature of the water medium, which absorbs radio waves over short distances and which introduces severe limitations on the bandwidth of sound-based technologies. Given the limitations of acoustic and radio frequency (RF) communication underwater, light-based communication has also been used. Light-based communication is also emerging as an effective strategy for terrestrial communication. Can the emerging Light Fidelity (Li-Fi) communication standard be exploited underwater to enable devices in close proximity to communicate by light? This paper describes the development of the LightByte Li-Fi modem for underwater use and the experimental evaluation of its performance both terrestrially and underwater.
  9. Hoveidar-Sefid, M. and Jenkin, M. Autonomous trail following using a pre-trained Deep Neural Network. International Conference on Informatics in Control, Automation and Robotics, Porto, Portugal, 2018.
    Trails are unstructured and typically lack the standard markers that characterize roadways; nevertheless, trails can provide an effective set of pathways for off-road navigation. Here we approach the problem of trail following by identifying the deviation of the robot from the heading angle of the trail through the refinement of a pre-trained Inception-V3 [Szegedy et al., 2016a] Convolutional Neural Network (CNN) trained on the ImageNet dataset [Deng et al., 2009]. A differential system is developed that uses a pair of cameras, directed to the left and to the right, each providing input to its own CNN; together these estimate the deviation of the robot with respect to the trail direction. The resulting networks have been successfully tested on over 1 km of different trail types (asphalt, concrete, dirt and gravel).
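    A rough sketch of the kind of pre-trained-network refinement described above, using torchvision's Inception-V3 with its classifier replaced by a small heading-deviation head; the number of classes, frozen layers and optimiser settings are assumptions rather than the paper's configuration.

        # Illustrative refinement of a pre-trained Inception-V3 for trail heading estimation.
        # Class count, frozen backbone and optimiser settings are assumptions.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_HEADING_CLASSES = 3  # e.g., trail veers left / centred / veers right (assumed)

        model = models.inception_v3(pretrained=True, aux_logits=False)
        for p in model.parameters():                 # freeze the ImageNet-trained backbone
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, NUM_HEADING_CLASSES)  # new trainable head

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

        def train_step(images, labels):
            """images: (N, 3, 299, 299) float tensor; labels: (N,) heading-class indices."""
            model.train()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()

    In a differential arrangement such as the one described, the outputs of the left- and right-camera networks could then be combined, for example by differencing their estimated deviations, to produce a steering command.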