A Comparative Study of Menus in Virtual Reality Environments

Andrés J. Santos, Telmo Zarraonandia, Paloma Diaz, Ignacio Aedo
Despite the increasing popularity of Virtual Reality (VR) technology, the designers of these types of environments still lack the necessary support for designing effective and usable interfaces for their creations. This work aims to support VR designers by comparing the efficiency (task completion time and error rate) and usability of four menu configurations that differ in geometry (radial and linear) and position in the world (non-diegetic and spatial). We present the results of two experiments suggesting that, for both non-diegetic and spatial menus, task completion time is shorter for radial menus than for linear menus. For error rate and usability, no statistically significant differences between radial and linear menus were found, regardless of their position in the VR environment.

A Tangible Interactive Space Odyssey to Support Children Learning of Computer Programming

Javier Marco Rubio, Clara Bonillo Fernandez, Eva Cerezo

In this paper we present StarLoop, a Tangible Programming Language designed to support the learning of computer programming concepts by middle-school children. StarLoop has been designed as a game to be played in an Interactive Space by up to four middle-school children. The game uses the Interactive Space's four tabletop devices as well as image projection on its walls. A first test with children has been carried out; it was quite successful and no major usability problems were encountered.

Bring Your Own Device into Multi-device Ecologies

Kerstin Blumenstein, Martin Kaltenbrunner, Markus Seidl, Laura Breban, Niklas Thür, Wolfgang Aigner
Almost every visitor brings their own mobile device (e.g., smartphone or tablet) to the museum. Although many museums include interactive exhibits (e.g., multi-touch tables), the visitors' own devices are rarely used as part of a device ecology. Currently, there is no suitable infrastructure to seamlessly link different devices in museums. Our approach is to integrate the visitor's own device into a multi-device ecology (MDE) in the museum to enhance the visitor's exhibition experience. Thus, we present a technical concept for setting up such MDEs that integrates the well-established TUIO framework for multi-touch interaction on and between devices.
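
TUIO rides on OSC (UDP port 3333 by default), so a device joining such an ecology could listen for touch cursor events roughly as in the following sketch. This is our own minimal illustration using the python-osc package, restricted to the 2D cursor profile; it is not the authors' implementation.

    # Minimal TUIO listener sketch: TUIO rides on OSC, so a plain OSC
    # server on the default TUIO port (3333) receives cursor events.
    # Requires the python-osc package; profile handling is simplified.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_cursor(address, *args):
        # TUIO 1.1 /tuio/2Dcur bundles carry "alive", "set" and "fseq"
        # messages; "set" includes a session id and normalized x, y.
        if args and args[0] == "set":
            session_id, x, y = args[1], args[2], args[3]
            print(f"cursor {session_id}: ({x:.3f}, {y:.3f})")

    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dcur", on_cursor)
    BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()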

Computational Foresight: Forecasting Human Body Motion in Real-time for Reducing Delays in Interactive System

Yuuki Horiuchi, Yasutoshi Makino, Hiroyuki Shinoda

In this paper, we propose a machine learning-based system named “Computational Foresight” that can forecast human body motion 0.5 seconds before the actual motion in real time. This forecasting system can be used to estimate human gestures in advance of the actual action, reducing delays in interactive systems. In addition, the system can be applied to instructing sports actions properly, preventing elderly people from falling, and so on. The proposed system detects 25 human body joints and uses these data as the input dataset for machine learning. We created a five-layer neural network to estimate human body motion in real time. In our experiment, we recorded subjects' jump motions as training data. In our evaluation, the prototype system forecast the center of gravity of the whole body 0.5 s in advance with an accuracy of 7.9 cm.
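
As a rough illustration of the kind of model described (our own sketch, not the authors' code), the following builds a five-layer fully connected network in PyTorch that maps a short window of 25 tracked joints to a predicted center of gravity; the window length and layer sizes are assumptions.

    # Sketch of a 5-layer network forecasting the body's center of
    # gravity 0.5 s ahead from a window of skeleton frames. Assumed
    # sizes: 10 past frames x 25 joints x 3 coordinates in, (x, y, z) out.
    import torch
    import torch.nn as nn

    FRAMES, JOINTS, DIMS = 10, 25, 3

    model = nn.Sequential(
        nn.Linear(FRAMES * JOINTS * DIMS, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, DIMS),  # predicted center of gravity (x, y, z)
    )

    window = torch.randn(1, FRAMES * JOINTS * DIMS)  # stand-in for tracking data
    predicted_cog = model(window)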

Effects of the Display Angle and Physical Size on Large Touch Displays in the Work Place

Craig Anslow, William Wong
Large displays and touch screens, including multi-display setups, are becoming ubiquitous within the workplace. There is limited evidence on what configurations and arrangements of the display screens are most effective for data analysis. We conducted two user studies to understand the effectiveness of the display angle, physical size, resolution, and touch precision for data analysis activities. Our results indicated that touch interaction for data analysis while sitting at a workstation was most effective with medium-sized screens at 27″, high-precision touch accuracy (rather than 4K resolution), and a display angle tilted at 30°. The results from our studies can guide other researchers and developers who want to integrate large touch display screens into their workplace environments.

Evaluation of Flick Gestures on Multitouch Tabletop Surfaces

Manuela Uhr, Joachim Nitschke, Jingxin Zhang, Paul Lubos, Frank Steinicke
This work investigates the influence of different mapping functions between the user's flick gesture and the animation of the flicked object. The flick gesture, in which the user quickly swipes a finger across the screen and lifts the finger without slowing down, is a popular interaction technique on multitouch displays, e.g., for navigating digital maps. While flick operations are well established on small mobile touch screens, the exact implementation of this technique on large multitouch tabletops needs to be adjusted via several parameters, especially flick velocity and inertia duration. We performed a preliminary experiment to explore users' flick behavior on large multitouch tabletops, focusing on the time until the motion of a flicked object stops. The results indicate that participants' flick behavior on large multitouch screens is highly diverse.
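
A common family of such mapping functions, and purely an assumption on our part, decelerates the object from its release velocity by exponential friction; the inertia duration then follows from the friction coefficient and stop threshold, as in this minimal sketch.

    # Minimal sketch of a flick inertia mapping: the object keeps the
    # finger's release velocity and decelerates by exponential friction.
    # The friction coefficient and stop threshold are assumed parameters.
    def simulate_flick(position, velocity, friction=2.0, dt=1/60, stop_speed=5.0):
        """Yield object positions (px) each frame until motion stops."""
        while abs(velocity) > stop_speed:          # px/s threshold for "stopped"
            position += velocity * dt
            velocity *= (1.0 - friction * dt)      # exponential-style decay
            yield position

    # Example: a flick released at 1200 px/s
    for p in simulate_flick(position=0.0, velocity=1200.0):
        pass  # drive the animation / record the inertia duration here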

Gesture Typing on Virtual Tabletop: Effect of Input Dimensions on Performance

Antoine Loriette, Sebastian Stein, Roderick Murray-Smith, John H Williamson
The combination of tabletop interaction with gesture typing presents interaction potential for situationally or physically impaired users. In this work, we use depth cameras to create touch surfaces on regular tabletops. We describe our prototype system and report on a supervised learning approach to fingertip touch classification. We follow with a gesture typing study that compares our system with a control tablet scenario and explores the influence of the input size and aspect ratio of the virtual surface on text input performance. We show that novice users perform with the same error rate at half the input rate with our system compared to the control condition, that an input size between A5 and A4 presents the best tradeoff between performance and user preference, and that users' indirect tracking ability seems to be the overall performance-limiting factor.
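
The touch classification step could, in principle, look like the following scikit-learn sketch, in which a supervised classifier labels fingertip candidates extracted from depth data as touching or hovering; the features, classifier choice, and toy training data are our assumptions, not the authors' pipeline.

    # Hypothetical fingertip touch classifier: given per-fingertip features
    # extracted from a depth camera (e.g., height above the table plane,
    # local depth variance, contact area), predict touch vs. hover.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # X: one row per fingertip candidate; y: 1 = touching, 0 = hovering.
    X_train = np.array([[2.0, 0.3, 40.0],    # mm above plane, variance, px area
                        [15.0, 1.2, 12.0],
                        [1.5, 0.2, 55.0],
                        [25.0, 2.0, 8.0]])
    y_train = np.array([1, 0, 1, 0])

    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print(clf.predict([[3.0, 0.4, 38.0]]))   # -> [1], classified as a touch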

HoloFacility: Get in Touch with Machines at Trade Fairs using Holograms

Mandy Korzetz, Romina Kühn, Maria Gohlke, Uwe Aßmann

Exhibiting and presenting real facilities, machines, or related software products to visitors at trade fairs can be very challenging due to space constraints, installation costs, or the difficulty of explaining complex data processing. Addressing these challenges, we introduce HoloFacility, a set of Augmented Reality (AR) applications for Microsoft HoloLens. They offer an immersive and interactive experience through in-air gestures. Visitors can explore hidden information and functionality that would otherwise remain invisible. Thus, HoloFacility supports understanding of the underlying software systems. It also allows visitors to control certain features of real and virtual facilities. We implemented three use cases with different levels of augmentation and connected them to manufacturing execution and facility management systems: HoloCoffee, HoloMachines and HoloRobot. Through demonstrations at trade fairs, we received positive feedback on UX aspects of HoloFacility and first impressions of using the HoloLens as explanation support.

Improving the Feasibility of Ultrasonic Hand Tracking Wearables

Jess McIntosh, Mike Fraser
Wearable devices for activity tracking and gesture recognition have expanded rapidly in recent years. One technique that has shown great potential for this is ultrasonic imaging. This technique has been shown to have advantages over other techniques in accuracy, surface area, placement and, importantly, continuous finger angle estimation. However, ultrasonic imaging suffers from two issues: first and foremost, the propagation of ultrasound into flesh suffers greatly without a suitable coupling medium; secondly, the complexity of the driving circuitry for medical-grade imaging currently renders a wearable version infeasible. This paper aims to address these two problems by finding a rigid coupling medium that lasts for significantly longer periods of time, and by devising a new sensor configuration that reduces device complexity while still retaining the benefits of the technique. Furthermore, a comparison between high- and low-frequency systems reveals that different devices can be created with this technique for better resolution or greater convenience, respectively.

Investigating Communication Grounding in Cross-Surface Interaction

Leila Homaeian, Nippun Goyal, James R Wallace, Stacey D. Scott
This work investigates how two different cross-surface interaction techniques support communication “grounding” during a collaborative sensemaking task. Grounding is the process of establishing mutual knowledge, beliefs, or assumptions during conversation to effectively communicate, and is essential for collaborative analysis. Our study found several specific design features of the studied cross-surface interfaces that either supported or hindered the grounding process. For instance, one technique provided “flexible ownership” of widgets on a shared tabletop that controlled the data connection between the tabletop and users’ personal tablets. This feature enabled cooperative interaction strategies known to facilitate the grounding process. We discuss these results, and other cross-surface design features that impact grounding.

Learner versus System Control in Augmented Lab Experiments

Susanne Karsten, Daniel Jörg, Eva Hornecker
We present a user study of a mock-up for a learning environment on electro-mobility, based on tracking of physical interactions and projected augmentation. We discuss observations and interviews with participants who were led through a task scenario. Our insights highlight user needs in an educational context. There is generally high acceptance of augmented reality in experimentation environments. On the other hand, some essential points regarding user guidance and the system concept are critical for practical experimental education in schools and universities. We describe the most important decision areas for further development. Frequently, these concern questions about degrees of freedom - on the part of users as well as of the system.

Objective Meaning: Presentation Mediation in an Interactive Installation

Sarah Storteboom, Alice Thudt, Søren Knudsen, Sheelagh Carpendale

We explore the presentation technique of visual abstraction as a form of mediation to manage content generated by the public in order to maintain a respectful discourse. We identify technological and social mediation as two dimensions within the space of content mediation, and discuss different solutions based on related work in public interactive displays and art installations. We further discuss a novel approach to technological mediation by describing our interactive artwork Objective Meaning – an installation that invites the audience to express themselves through anonymous text messages. The design of this system mediates discourse by visually abstracting the presentation of messages on a display by breaking messages apart into decontextualized words. We briefly discuss the public response during a one-month deployment of the installation in a library setting.
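
As a toy illustration of this abstraction step (our own sketch, not the installation's code), the display pipeline could split incoming messages into shuffled, decontextualized words:

    # Illustrative message abstraction: incoming texts are broken apart
    # into individual words and shuffled, so single messages lose their
    # original context before being shown on the display.
    import random

    def decontextualize(messages):
        words = [word for message in messages for word in message.split()]
        random.shuffle(words)
        return words

    print(decontextualize(["hello from the library", "books are great"]))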

ProDesk: An Interactive Ubiquitous Desktop Surface

Benjamin Wingert, Isabel Schöllhorn, Matthias Bues

We describe ProDesk, a ubiquitous, projection-based touch surface system that extends the screen space of today's desktop computer workplaces to the whole surface of an ordinary office desk, along with highly accurate multi-touch interaction. The objective of this system is to offer users more flexibility in tasks in which large amounts of complex information have to be handled. We propose a novel user interface concept that extends normal WIMP functionality, and a software architecture that implements this UI concept. We present evaluation results of the system from technical benchmarks and a user study.

PULSE: Sonifying Data to Motivate Physical Activity in Outdoor Spaces

Oliver James Halstead, Mark Lochrie, Jack Davenport
This work in progress details the conception of a new interdisciplinary design intended to support the physical activity of running by combining the sonification of real-time personal health statistics with information about the surrounding environment. Applying datasets sourced from wearable technologies, we designed a model that generates new, original musical material from detailed figures pertaining to exertion. The resulting music presents an opportunity to build on insights gained with participants in order to create a real-time mobile application - Pulse - that sonifies both a user's movements through a place and the spaces within it.
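
A sonification model of this kind typically scales sensor readings into musical parameters. The following minimal sketch, whose ranges and mappings are entirely our assumptions, maps heart rate and pace to tempo and pitch:

    # Hypothetical sonification mapping: scale wearable readings into
    # musical parameters. Ranges are illustrative, not from the paper.
    def scale(value, in_lo, in_hi, out_lo, out_hi):
        t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
        return out_lo + t * (out_hi - out_lo)

    def sonify(heart_rate_bpm, pace_min_per_km):
        tempo = scale(heart_rate_bpm, 60, 180, 80, 160)   # beats per minute
        pitch = scale(pace_min_per_km, 8.0, 3.0, 48, 72)  # MIDI note number
        return tempo, round(pitch)

    print(sonify(heart_rate_bpm=140, pace_min_per_km=5.0))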

TiltPass: Using Device Tilts as an Authentication Method

Amir Esmaeil Sarabadani Tafreshi, Sara Claudia Sarabadani Tafreshi, Amirehsan Sarabadani Tafreshi

With the exponential growth of mobile device use, new authentication methods are required to better protect personal information. The frequent use of these devices in our daily lives forces users to compromise between a higher level of security and more comfortable access. Traditional password/PIN-based methods are subject to a number of limitations, in particular shoulder-surfing attacks. This problem becomes more pronounced when users tend toward simpler (i.e., shorter) PINs in favor of more comfortable access. This paper presents tilting of the device as a new authentication method. We compared this new approach with the traditional approach of entering a PIN in two user studies. Our results show that this new method is intuitive, enjoyable, easy to use, and more secure than the traditional PIN-based method.
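
One plausible realization (our own illustration, not necessarily TiltPass itself) discretizes accelerometer/gyroscope readings into tilt directions and compares the entered sequence against a stored secret:

    # Illustrative tilt-sequence check: each authentication step is the
    # dominant tilt direction sampled from the device's motion sensors.
    # The four-direction encoding and comparison rule are assumptions.
    def classify_tilt(pitch, roll, threshold=15.0):
        """Map a (pitch, roll) reading in degrees to a discrete direction."""
        if pitch > threshold:  return "forward"
        if pitch < -threshold: return "back"
        if roll > threshold:   return "right"
        if roll < -threshold:  return "left"
        return None  # device held flat, no input

    def authenticate(readings, secret):
        entered = [d for d in (classify_tilt(p, r) for p, r in readings) if d]
        return entered == secret

    # Example: secret "forward, left, right" entered as three tilts
    readings = [(25.0, 2.0), (1.0, -22.0), (0.5, 30.0)]
    print(authenticate(readings, ["forward", "left", "right"]))  # True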

Towards a Mixed-Reality Interface for Mind-Mapping

Philippe Giraudeau, Martin Hachet
In this work, we explored an approach based on the hybridisation of physical and digital content for mind-mapping activities at schools. Based on the literature in the fields of cognitive science and HCI, we designed a mixed-reality (MR) interface called Reality-Map. We conducted a pilot study with 11 participants suggesting that learning about and manipulating information about the brain and its cognitive functions could be improved by such an MR interface compared to a traditional WIMP interface.

Towards Touch-Based Medical Image Diagnosis Annotation

Francisco Maria Galamba Ferrari Calisto, Jacinto Nascimento, Alfredo Ferreira, Daniel Gonçalves

A fundamental step in medical diagnosis for patient follow-up relies on the ability of radiologists to perform reliable diagnoses from acquired images. The diagnosis strongly depends on visual inspection of the shape of the lesions and on registering their evolution through time. As datasets increase in size, such visual evaluation becomes harder. For this reason, it is crucial to introduce easy-to-use interfaces that help radiologists not only perform a reliable visual inspection but, more importantly, delineate the lesions efficiently. In this paper, we present a study on integrating such interfaces in a real-world scenario. More specifically, we explore radiologists' receptivity to the current touch environment solution. The advantages of touch are threefold: (i) time performance is superior to that of traditional use, (ii) control is more intuitive and, (iii) the user interface delivers more annotation information per action in less time. We conclude from our studies that the path towards touch-based medical image diagnosis annotation includes overcoming the current reluctance of radiologists, who resist change, to use these systems. A solution to finger occlusion must also be devised.

What Are We Missing? Adding Eye-Tracking to the HoloLens to Improve Gaze Estimation Accuracy

Hidde van der Meulen, Andrew Kun, Orit Shaer
The Microsoft HoloLens keeps track of its location and rotation relative to the environment but lacks the ability to capture eye gaze data. We assess a novel method to extend the HoloLens with a head-mounted eye-tracker. Using a combination of eye gaze data and head rotation, we compared gaze behavior between real and virtual objects. Results indicate that eye-tracking plays an important role in accurately determining a user's gaze for real objects, in contrast to virtual objects.
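
Fusing the two signals amounts to transforming the eye tracker's head-relative gaze direction into world coordinates using the headset's rotation, as in this small vector sketch; the rotation representation and axis conventions are assumptions.

    # Sketch of fusing head rotation with eye-tracker output: the eye
    # tracker gives a gaze direction in the head's frame; the headset
    # pose gives head rotation, so the world gaze ray is their product.
    # Uses scipy's rotation utilities; axis conventions are assumed.
    import numpy as np
    from scipy.spatial.transform import Rotation

    head_rotation = Rotation.from_euler("xyz", [5.0, 30.0, 0.0], degrees=True)
    gaze_in_head = np.array([0.1, -0.05, 1.0])          # from the eye tracker
    gaze_in_head /= np.linalg.norm(gaze_in_head)

    gaze_in_world = head_rotation.apply(gaze_in_head)   # world-space gaze ray
    print(gaze_in_world)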