2020

  1. Altarawneh, E., Jenkin, M. and MacKenzie, S. An extensible cloud based avatar: implementation and evaluation. In Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation, A. Brooks, S. Brahman, B. Kapralos, A. Nakajima, J. Tyerman and L. C. Jain (eds.), Springer, 2020.
    A common issue in human-robot interaction is that a naive user expects an intelligent human-like conversational experience. Recent advances have enabled such experiences through cloud-based infrastructure; however, this is not currently possible on most mobile robots due to the need to access cloud-based (remote) AI technology. Here we describe a toolkit that supports interactive avatars using cloud-based resources for human-robot interaction. The toolkit deals with communication and rendering latency through parallelization and mechanisms that obscure delays. This technology can be used to put an interactive face on a mobile robot. But does an animated face on a robot actually make the interaction more effective or useful? To answer this question, we conducted a user study comparing human-robot interaction using text, audio, a realistic avatar, and a simplistic cartoon avatar. Although response time was longer for both avatar interfaces (due to increased computation and communication), this had no significant effect on participant satisfaction with the avatar-based interfaces. When asked about general preferences, more participants preferred the audio interface over the text interface, the avatar interfaces over the audio interface, and the realistic avatar interface over the cartoon avatar interface.
  2. Tarawneh, E. and Jenkin, M. System and method for rendering of an animated avatar. US Patent 10,580,187 B2, March 3, 2020.
    There are provided systems and methods for rendering of an animated avatar. An embodiment of the method includes: determining a first rendering time of a first clip as approximately equivalent to a predetermined acceptable rendering latency, with a first playing time of the first clip determined as approximately the first rendering time multiplied by a multiplicative factor; rendering the first clip; determining a subsequent rendering time for each of one or more subsequent clips, where each subsequent rendering time is determined to be approximately equivalent to the predetermined acceptable rendering latency plus the total playing time of the preceding clips, and each subsequent playing time is determined to be approximately the rendering time of the respective subsequent clip multiplied by the multiplicative factor; and rendering the one or more subsequent clips.
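    The timing rule described in the abstract admits a simple worked sketch. The following Python snippet is an illustrative assumption of how the clip budgets could be computed; the function and variable names are hypothetical and this is not the patented implementation.

    # Illustrative sketch of the timing rule in the abstract above; names and
    # structure are assumptions, not the patented implementation.
    def schedule_clips(num_clips, acceptable_latency, multiplicative_factor):
        """Return (rendering_time, playing_time) budgets for a sequence of clips."""
        schedule = []
        total_playing_time = 0.0
        for _ in range(num_clips):
            # The first clip's rendering budget is the acceptable latency; each
            # later clip may render while the earlier clips play, so its budget
            # is the acceptable latency plus the accumulated playing time.
            rendering_time = acceptable_latency + total_playing_time
            playing_time = rendering_time * multiplicative_factor
            schedule.append((rendering_time, playing_time))
            total_playing_time += playing_time
        return schedule

    # Example: 1 s acceptable latency, each clip playing 1.5x longer than its
    # rendering budget, so playback of earlier clips masks rendering latency.
    for i, (r, p) in enumerate(schedule_clips(4, 1.0, 1.5), start=1):
        print(f"clip {i}: render within {r:.2f} s, plays for {p:.2f} s")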
  3. Bir Dey, B. and Jenkin, M. Design and construction of the DragonBall. Proc. ROMANSY 2020, Sapporo, Japan. September 2020.
    Spherical robots provide a number of advantages over their wheeled counterparts, but they also present a number of challenges and complexities. Chief among these are issues related to locomotive strategies and to sensor placement and processing given the rolling nature of the device. Here we describe DragonBall, a visually tele-operated spherical robot. The DragonBall uses a geared wheel to move the center of mass of the vehicle, coupled with a torque wheel to change direction. Wide-angle cameras mounted on the robot's horizontal axis provide a 360 degree view of the space around the robot and are used to simulate a traditional pan-tilt-zoom camera mounted on the vehicle for visual tele-operation. The resulting vehicle is well suited for deployment in contaminated environments for which vehicle remediation is a key operational requirement.
  4. Friedman, N., Goedicke, D., Zhang, V., Rivkin, D., Jenkin, M., Degutyte, Z., Astell, A., Liu, X. and Dudek, G. Capturing attention with wind. Proc. Workshop on Integrating Multidisciplinary Approaches to Advanced Physical Human-Robot Interaction. Held in conjunction with IEEE ICRA 2020, Paris, France. May 31, 2020.
    Having a robot interact with people in a shared environment is complex. Running into humans is unacceptable, and loud audio warnings are inappropriate. Visual signalling may be appropriate but is only effective if the humans are looking at, or attending to, the robot vehicle. Are there effective and socially acceptable mechanisms that a robot can exploit to capture the attention of humans in a shared environment? Here we explore the potential of using controlled blasts of wind (haptic air) to capture attention in a socially acceptable manner.
  5. Adjindji, A., Kuo, C., Mikal, G., Harris, L. R. and Jenkin, M. Vestibular damage assessment and therapy using virtual reality. Proc. 7th Int. Conf. on Augmented Reality, Virtual Reality and Computer Graphics. Lecce, Italy, September 2020.
    Vestibular damage can be very debilitating, requiring ongoing assessment and rehabilitation to return sufferers to normal function. Rehabilitation can require an extended period of therapy during which patients engage in repetitive and often boring tasks to recover as much normal vestibular function as possible. Making these tasks more engaging while at the same time obtaining quantitative participation data is critical for a positive patient outcome. Here we describe the conversion of vestibular therapy tasks into virtual reality and the technology that enables their deployment in both directly- and remotely-supervised vestibular rehabilitation. This infrastructure is currently being evaluated in tests within a clinical setting.
  6. Hogan, F. R., Rezaei-Shoshtari, S., Jenkin, M., Girdhar, Y., Meger, D. and Dudek, G. Seeing Through Your Skin: A Novel Visuo-Tactile Sensor for Robotic Manipulation. Proc. Visual Learning and Reasoning for Robotic Manipulation Workshop (VLRRM) at RSS 2020, Corvallis, Oregon, July 2020.
    This work describes the development of a novel tactile sensor, the Semitransparent Tactile Sensor (STS), designed to enable reactive and robust manipulation skills. The design, inspired by recent developments in optical tactile sensing technology, addresses a key missing feature of these sensors: the ability to capture an "in the hand" perspective prior to and during the contact interaction. Whereas optical tactile sensors are typically opaque and obscure the view of the object at the critical moment prior to manipulator-object contact, we present a sensor that has the dual capabilities of acting as a tactile sensor and as a visual camera. This paper details the design and fabrication of the sensor, showcases its dual sensing capabilities, and introduces a simulated environment of the sensor within the PyBullet simulator.
  7. Chan, M., Uribe-Quevedo, A., Kapralos, B., Jaimes, N., Jenkin, M. and Kanev, K. A Preliminary Usability Comparison of Augmented and Virtual Reality User Interactions for Direct Ophthalmoscopy. Proc. IEEE Serious Games and Applications for Health (SeGAH) 2020. Vancouver, August 2020.
    Direct ophthalmoscopy, or fundoscopy, is a routine examination whereby a health professional examines the eye fundus using an ophthalmoscope. Despite advances in eye examination tools, there is a growing concern regarding a decline of fundoscopy skills. Immersive technologies, virtual and augmented reality in particular, are capable of providing interactive, engaging, and safe training scenarios, showing promise as complementary training tools. However, current virtual fundoscopy training solutions typically fail to provide an effective training experience. In this paper, we present the results of a preliminary study conducted to examine three approaches to simulating the direct ophthalmoscope as part of training, each using different virtual and augmented reality user inputs. Preliminary results suggest that the operation of a physical controller that maps finger movement to direct ophthalmoscopy operation allows for more usable interactions and lower cognitive load than hand-tracking gestures, which are limited to pinching.
  8. Friedman, N., Goedicke, D., Zhang, V., Rivkin, D., Jenkin, M., Degutyte, Z., Astell, A., Liu, X. and Dudek, G. Out of my way! Exploring different modalities for robots to ask people to move out of the way. Proc. Workshop on Active Vision and Perception in Human(-Robot) Collaboration. Held in conjunction with the 29th IEEE Int. Conf. on Robot and Human Interactive Communication. Held online. September 2020.
    To navigate politely through social spaces, a mobile robot needs to communicate successfully with human bystanders. What is the best way for a robot to attract attention in a socially acceptable manner to communicate its intent to others in a shared space? Through a series of in-the-wild experiments, we measured the social appropriateness and effectiveness of different modalities for robots to communicate to people their intended movement, using combinations of visual text, audio and haptic cues. Using multiple modalities to draw attention and declare intent helps robots to communicate acceptably and effectively. We recommend that in social settings, robots should use multiple modalities to ask people to get out of the way. Additionally, we observe that blowing air at people is a particularly suitable way of attracting attention.
  9. Altarawneh, E., Jenkin, M. and MacKenzie, I. S. Is Putting a Face on a Robot Worthwhile? Proc. Workshop on Active Vision and Perception in Human(-Robot) Collaboration. Held in conjunction with the 29th IEEE Int. Conf. on Robot and Human Interactive Communication. Held online. September 2020.
    Putting an animated face on an interactive robot is great fun but does it actually make the interaction more effective or more useful? To answer these questions, human-robot interactions using text, audio, a realistic avatar, and a simplistic cartoon avatar were compared in a user study with 24 participants. Participants expressed a high level of satisfaction with the accuracy and speed of all the interfaces used. Although the response time was longer for both the cartoon and realistic avatar interfaces (due to their increased computational cost), this had no effect on participant satisfaction. Participants found the avatar interfaces more fun to use than the traditional text- and audio-based interfaces, but there was no significant difference between the two avatar-based interfaces. Putting a face on a robot may make a robot more fun to interact with, and the face may not have to be that realistic.
  10. Hogue, A. and Jenkin, M. Active Stereo Vision. In Computer Vision: A Reference Guide, K. Ikeuchi (ed.), Springer.
    Active stereo vision utilizes multiple cameras for 3D reconstruction, gaze control, measurement, tracking, and surveillance. Active stereo vision is to be contrasted with passive or dynamic stereo vision in that passive systems treat stereo imagery as a series of independent static images while active and dynamic systems employ temporal constraints to integrate stereo measurements over time. Active systems utilize feedback from the image streams to manipulate camera parameters, illuminants, or robotic motion controllers in real time.
  11. Jenkin, M. Evolution of robotic heads. In Computer Vision: A Reference Guide, K. Ikeuchi (ed.), Springer.
    Robotic heads are actively controlled camera platforms, typically designed to mimic the head and camera (eye) motions associated with humans. Early designs were typically built to study the role of eye and head motion in active vision systems. Later designs are also used in the study of human-robot interaction.
  12. Zhang, V., Friedman, N., Goedicke, D., Rivkin, D., Jenkin, M., Liu, X. and Dudek, G. The answer is blowing in the wind: directed air flow for socially-acceptable human-robot interaction. Proc. Int. Conf. on Robotics, Computer Vision and Intelligent Systems (ROBOVIS) 2020, Held Online.
    A key problem for a robot moving within a social environment is the need to capture the attention of other people using the space. In most use cases, this capture of attention needs to be accomplished in a socially acceptable manner without loud noises or physical contact. Although there are many communication mechanisms that might be used to signal the need for a person's attention, one particular modality that has received little interest from the robotics community is the use of controlled air as a haptic signal. Recent work has demonstrated that controlled air can provide a useful signal in the social robot domain, but what is the best mechanism to provide this signal? Here, we evaluate a number of different mechanisms that can provide this attention-seeking communication. We demonstrate that many different simple haptic air delivery systems can be effective and show that air-on and air-off haptic events have very similar time courses using these delivery systems.
  13. Bury, N.-A., Jenkin, M. R., Allison, R. S. and Harris, L. R. (2020). Perceiving jittering self-motion in a field of lollipops from ages 4 to 95. PLoS ONE, 15(10): e0241087.
    An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.