Kapralos, B., Kanev, K., and Jenkin, M. Advanced sound integration for toy-based computing. In P. Hung (Ed.), Mobile Services for Toy Computing, pp. 107-128, Springer, 2015.
Despite the growing awareness regarding the importance of sound in the human-computer interface and the potential interaction opportunities it can afford, sound, and spatial sound in particular, is typically ignored or neglected in interactive applications including video games and toys. Although spatialized sound can provide an added dimension for such devices, one of the reasons that it is often overlooked is the complexity involved in its generation. Spatialized sound generation is not trivial, particularly when considering mobile devices and toys with their limited computational capabilities, single miniature loudspeaker, and limited battery power. This chapter provides an overview of sound and spatial sound for use in human-computer interfaces with a particular emphasis on their use in mobile devices and toys. A brief review outlining several sound-based mobile applications, toys, and spatial sound generation techniques is provided. The problems and limitations associated with sound capture and output on mobile devices are discussed along with an overview of potential solutions to these problems. The chapter concludes with an overview of several novel applications for sound on mobile devices and toys.
Kanev, K., Oido, I., Hung, P., Kapralos, B. and Jenkin, M. Case study: approaching the learning of Kanji through augmented toys in Japan. In P. Hung (Ed.), Mobile Services for Toy Computing, pp. 175-192, Springer, 2015.
Aside from their use in recreation, toys and toy technologies can also be employed in enhanced learning and education. The merging of augmented reality with traditional toys can lead, for example, to unique and engaging educational experiences providing opportunities for focused learning and more advanced knowledge dissemination. In this respect, the educational perspectives of various toys and toy technologies are considered in this work, and related instructional features and learning functionalities are presented and discussed. A research initiative led by the authors that integrates traditional toys with novel augmented reality technologies to support the learning of kanji characters, a time-consuming and often difficult task, is reported and discussed.
Lam, J., Kapralos, B., Kanev, K., Collins, K., Hogue, A. and Jenkin, M. Sound localization on a horizontal surface: virtual and real sound source localization. Virtual Reality, 19 (3-4), 213-222, 2015.
As the technology improves and costs decrease, tabletop computers, with their inherent ability to promote collaboration amongst users, are gaining in popularity. Their use in virtual reality-based applications including virtual training environments and gaming where multi-user interactions are common is poised to grow. However, before tabletop computers become widely accepted, there are many questions with respect to spatial sound production and reception for these devices that need to be addressed. Previous work (Lam et al. in ACM Comput Entertain 12(2):4:1-4:19, 2014) has seen the development of loudspeaker-based amplitude panning spatial sound techniques to spatialize a sound to a position on a plane just above a tabletop computer's (horizontal) surface. Although it has been established that the localization of these virtual sources is prone to error, there is a lack of ground truth (reference) data with which to compare these earlier results. Here, we present the results of an experiment that measured sound localization of an actual sound source on a horizontal surface, thus providing such ground truth data. This ground truth data were then compared with the results of previous amplitude panning-based spatial sound techniques for tabletop computing displays. Preliminary results reveal that no substantial differences exist between previous amplitude panning results and the ground truth data reported here, indicating that amplitude panning is a viable spatial sound technique for tabletop computing and horizontal displays in general.
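The amplitude panning referred to in this abstract can be illustrated with a minimal constant-power (tangent-law) sketch for the simplest two-loudspeaker case; the function name and the symmetric loudspeaker placement are illustrative assumptions, not the multi-loudspeaker tabletop configuration used in the paper:

```python
import math

def stereo_pan_gains(theta_deg, spread_deg=30.0):
    """Constant-power gains for a virtual source at azimuth theta_deg
    between two loudspeakers placed at +/- spread_deg (tangent law:
    (gL - gR)/(gL + gR) = tan(theta)/tan(spread), with gL^2 + gR^2 = 1)."""
    t = math.tan(math.radians(theta_deg)) / math.tan(math.radians(spread_deg))
    norm = math.sqrt(2.0 * (1.0 + t * t))
    g_left = (1.0 + t) / norm
    g_right = (1.0 - t) / norm
    return g_left, g_right
```

A source panned to the centre (theta = 0) yields equal gains of 1/sqrt(2), while a source panned fully to one loudspeaker yields a gain of 1 on that channel and 0 on the other, keeping total radiated power constant.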
Jenkin, M. and Dymond, P. An infection algorithm for leader election: experimental results for a chain. Proc. IEEE Int. Conf. on Information and Automation (ICIA). Lijiang, China, 2015.
A variant of the population protocol model is used in [1] to describe a probabilistic algorithm for leader election (choosing one of the agents to have special authority) in a collection of autonomous numbered agents, with only simple, pairwise interactions allowed between them. Earlier results have shown that leader election for a collection of numbered agents can be accomplished via a probabilistic infection algorithm. The algorithm uses no external or global timers to decide when the election is completed. Here we consider agents distributed through different locations in space. We consider agents as being located on various nodes of a graph and allow only those agents at the same node of the graph to interact. Experimental results using various sizes of chain graph [2] illustrate that the probabilistic algorithm presented in [1], [3] can be successfully applied in this generalized setting.
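A common baseline for infection-style leader election in the population protocol model can be sketched as follows; this is a simplified illustration in which every agent starts as a candidate and one candidate demotes the other on interaction, and it deliberately omits the agent numbering and the chain-graph locality restriction studied in the paper:

```python
import random

def elect_leader(n, seed=0):
    """Simulate leader election by pairwise interactions: all n agents
    start as candidates; when two candidates interact, one is demoted.
    Returns (index of the surviving leader, number of interactions)."""
    rng = random.Random(seed)
    leader = [True] * n
    steps = 0
    while sum(leader) > 1:
        a, b = rng.sample(range(n), 2)  # uniformly random interacting pair
        steps += 1
        if leader[a] and leader[b]:
            leader[b] = False  # one candidate demotes the other
    return leader.index(True), steps
```

Since n - 1 demotions are required and each interaction produces at most one, convergence always takes at least n - 1 interactions; the expected number grows as interactions between two remaining candidates become rarer.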
Mojiri Forooshani, P. and Jenkin, M. Sensor coverage with a heterogeneous fleet of surface vessels. Proc. IEEE Int. Conf. on Information and Automation (ICIA), 571-576, Lijiang, China, 2015.
Sensor coverage of large areas is a problem that occurs in a variety of different environments from terrestrial to aerial to aquatic. Here we consider the aquatic version of the problem. Given a known aquatic environment and a collection of aquatic surface vehicles with known kinematic and dynamic constraints, how can a fleet of vehicles be deployed to provide sensor coverage of the surface of the body of water? Rather than considering this problem in general, here we consider the problem given a specific fleet consisting of one very well equipped robot capable of global localization aided by a number of smaller, less well equipped devices that rely on the main robot for localization and thus must operate in close proximity to the main robot. A boustrophedon decomposition algorithm is developed that incorporates the motion, sensing and communication constraints imposed by the autonomous fleet. The approach is demonstrated in simulation using a real aquatic environment with portions of the approach demonstrated using a fleet of real robots operating outdoors.
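The boustrophedon (back-and-forth, lawnmower-style) coverage underlying the decomposition can be sketched for a single rectangular cell; the function name, the rectangular region, and the uniform swath width are illustrative assumptions, and the paper's decomposition of irregular environments and multi-vehicle constraints are not modelled here:

```python
def boustrophedon_path(width, height, swath):
    """Generate back-and-forth waypoints covering a width x height
    rectangular cell, with lanes spaced one sensor swath apart and
    each lane centred within its swath."""
    path = []
    y = swath / 2.0
    left_to_right = True
    while y < height:
        if left_to_right:
            path.append((0.0, y))
            path.append((width, y))
        else:
            path.append((width, y))
            path.append((0.0, y))
        left_to_right = not left_to_right  # reverse direction each lane
        y += swath
    return path
```

For a 10 x 4 cell with a swath of 1, this produces four lanes (eight waypoints) alternating direction, the pattern a single well-localized vehicle would follow while its companions station-keep nearby.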
Dubrowski, A., Kapralos, B., Kanev, K. and Jenkin, M. Interprofessional critical care training: interactive virtual learning environments and simulations. Proc. 6th Int. Conf. on Information, Intelligence, Systems and Applications. Corfu, Greece, 2015.
Interprofessional critical care training (ICCT) is an important activity that helps develop and formalize an understanding of the roles, expertise, and unique contributions attributed to members of multi-disciplinary teams in critical situations. Such training is of particular importance for teams that operate under tight time constraints in highly stressful conditions, such as those found in medicine. Here we describe our first steps towards developing a virtual learning environment (simulation) specifically aimed at ICCT for pediatric critical care teams. Our virtual learning environment employs a tabletop computing platform with novel image-based sensing technologies to enable collaboration and interaction amongst a group of trainees while promoting a learner-centric approach where the simulation is tailored specifically to the needs of each trainee.
Codd-Downey, R. and Jenkin, M. RCON: dynamic mobile interfaces for command and control of ROS-enabled robots. Proc. 12th Int. Conf. on Informatics in Control, Automation and Robotics. Colmar, France, 2015.

The development of effective user interfaces for an autonomous system can be quite difficult, especially for devices that are to be operated in the field where access to standard computer platforms may be difficult or impossible. One approach in this type of environment is to utilize tablet or phone devices, which when coupled with an appropriate tool such as ROSBridge can be used to connect with standard robot middleware. This has proven to be a successful approach for devices with mature user interface requirements but may require significant software development for experimental systems. Here we describe RCON, a software tool that allows user interfaces on iOS devices to be configured on the device itself, in real time, in response to changes in the robot software infrastructure or the needs of the operator. The system is described in detail along with the accompanying communication framework and the process of building a user interface for a simple autonomous device.
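The ROSBridge connection mentioned in the abstract carries JSON messages over a websocket; a minimal sketch of composing such messages is shown below. The two operations ("subscribe" and "publish") follow the public rosbridge v2 protocol, while the topic names and payload shown are illustrative examples, and RCON's own interface-configuration messages are not reproduced here:

```python
import json

def rosbridge_subscribe(topic, msg_type):
    """Compose a rosbridge-protocol 'subscribe' request for a topic."""
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def rosbridge_publish(topic, msg):
    """Compose a rosbridge-protocol 'publish' message carrying a
    JSON-encoded ROS message payload."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Example: a mobile UI widget commanding a velocity to a robot.
cmd = rosbridge_publish("/cmd_vel", {"linear": {"x": 0.5}, "angular": {"z": 0.0}})
```

A dynamically configured UI element on the device would emit messages of exactly this shape over the websocket, so adding or removing widgets requires no change to the robot-side middleware.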