The conference's mobile apps are now available for download on Android and iOS.
The web version of the app can be found here.
The ISS'17 schedule is also available on CONFER.

The conference proceedings are now available in the ACM Digital Library.

8:00 - 9:00

9:00 - 12:30

Workshops & Tutorials

9:00 - 12:30: Analyzing Qualitative Data (Tutorial)
9:00 - 12:30: Printed electronics (Tutorial)

12:30 - 14:00

Lunch Break

14:00 - 17:30

Workshops & Tutorials

14:00 - 17:30: Multisensory experiences (Tutorial)
14:00 - 17:30: The Disappearing Tabletop: Social and Technical Challenges for Cross-Surface Collaboration (Workshop)

17:30 - 19:30

18:00 - 22:00

Welcome & Joint Demo and Poster Reception (ISS and SUI)

8:00 - 9:00

Registration and Coffee

9:00 - 9:15

Greetings from the Chairs

10:15 - 10:35

Coffee Break

10:35 - 12:15

Session 1: Highlights of SUI

Chair: Raimund Dachselt (TU Dresden, Germany)

Stylo and Handifact: Modulating Haptic Perception through Visualisations for Posture Training in Augmented Reality


Nicholas Katzakis, Jonathan Tong, Oscar Javier Ariza Nunez, Lihan Chen, Gudrun Klinker, Brigitte Roeder, Frank Steinicke

12:15 - 13:30

Lunch with Demos and Posters (buffet lunch is provided)

13:30 - 14:45

Panel 1: Funding Research Projects in Human-Computer Interaction

Panelists: Susanne Bødker, Aarhus University, Denmark
Giulio Jacucci, University of Helsinki, Finland
Marianna Obrist, University of Sussex, UK
Albrecht Schmidt, LMU Munich, Germany
Orit Shaer, Wellesley College, USA
Chair: Albrecht Schmidt, LMU Munich, Germany

14:45 - 15:05

Coffee Break

15:05 - 16:45

Session 2: Touch Interaction and Beyond 

Chair: Jens Grubert (Coburg University, Germany)

Hand Contact Shape Recognition for Posture-Based Tabletop Widgets and Interaction


Fabrice Matulic, Daniel Vogel, Raimund Dachselt

Tabletop interaction can be enriched by considering whole hands as input instead of only fingertips. We describe a generalised, reproducible computer vision algorithm to recognise hand contact shapes, with support for arm rejection, as well as dynamic properties like finger movement and hover. A controlled experiment shows the algorithm can detect seven different contact shapes with roughly 91% average accuracy. The effect of long sleeves and non-user-specific templates is also explored. The algorithm is used to trigger, parameterise, and dynamically control menu and tool widgets, and the usability of a subset of these is qualitatively evaluated in a realistic application. Based on our findings, we formulate a number of design recommendations for hand shape-based interaction.

CircGR: Interactive Multi-Touch Gesture Recognition using Circular Measurements


Ruben Balcazar, Francisco Ortega, Katherine Tarre, Armando Barreto, Mark Weiss, Naphtali D. Rishe

CircGR is a multi-touch non-symbolic gesture recognition algorithm that utilizes circular statistics to implement linearithmic (O(n lg n)) template-based matching. CircGR provides a solution for gesture designers, allowing them to build complex multi-touch gestures with high-confidence accuracy. We demonstrate the algorithm and describe a user study with 60 subjects and over 12,000 gestures collected for an original gesture set of 36. The accuracy is over 99%, with a Matthews correlation coefficient of 0.95. In addition, early gesture detection was successful in CircGR.
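As an illustration of the kind of circular measurement such a recognizer can build on (not the actual CircGR features), the sketch below computes the circular mean and resultant length of the segment directions in a single touch trace; the function name and input format are hypothetical.

```python
# Hedged sketch: circular statistics over the directions of consecutive touch samples.
import math

def circular_features(points):
    """points: list of (x, y) touch samples from one trace; needs >= 2 samples."""
    if len(points) < 2:
        raise ValueError("need at least two touch samples")
    angles = [math.atan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    s = sum(math.sin(a) for a in angles) / len(angles)
    c = sum(math.cos(a) for a in angles) / len(angles)
    mean_direction = math.atan2(s, c)    # circular mean of segment directions
    resultant_length = math.hypot(s, c)  # ~1.0 for a straight stroke, ~0 for dispersed
    return mean_direction, resultant_length
```

A template matcher could compare per-trace features of this kind between an observed gesture and stored templates.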

Mirror, Mirror on the Wall: Attracting Passers-by to Public Touch Displays With User Representations


Eva Lösch, Florian Alt, Michael Koch

In this paper, we investigate how effectively user representations convey interactivity and foster interaction on large information touch displays. This research is motivated by the fact that user representations have been shown to be very efficient in playful applications that support mid-air interaction. At the same time, little is known about the effects of applying this approach to settings with a different primary mode of interaction, e.g. touch. It is also unclear how the playfulness of user representations influences the interest of users in the displayed information. To close this gap, we combine a touch display with screens showing life-sized video representations of passers-by. In a deployment, we compare different spatial arrangements to understand how passers-by are attracted and enticed to interact, how they explore the application, and how they socially behave. Findings reveal that (a) opposing displays foster interaction, but (b) may also reduce interaction at the main display; (c) a large intersection between focus and nimbus helps to notice interactivity; (d) using playful elements at information displays is not counterproductive; (e) mixed interaction modalities are hard to understand.

Gesture Elicitation Study on How to Opt-in & Opt-out from Interactions with Public Displays


Isabel Benavente, Nicolai Marquardt

Public interactive displays with gesture-recognizing cameras enable new forms of interaction. However, such systems often do not yet allow passers-by a choice to engage voluntarily or disengage from an interaction. To address this issue, this paper explores how people could use different kinds of gestures or voice commands to explicitly opt in to or opt out of interactions with public installations. We report the results of a gesture elicitation study with 16 participants, generating gestures within five gesture types for both a commercial and an entertainment scenario. We present a categorization and themes of the 430 proposed gestures, and agreement scores showing higher consensus for torso gestures and for opting out with the face/head. Furthermore, patterns indicate that participants often chose non-verbal representations of opposing pairs such as ‘close and open’ when proposing gestures. Quantitative results showed an overall preference for hand and arm gestures, and generally a higher acceptance for gestural interaction in the entertainment setting.

Measuring Readiness-to-Hand through Differences in Attention to the Task vs. Attention to the Tool


Ayman Alzayat, Mark Hancock, Miguel Nacenta

New interaction techniques, like multi-touch, tangible interaction, and mid-air gestures, often promise to be more intuitive and natural; however, there is little work on how to measure these constructs. One way is to leverage the phenomenon of tool embodiment—when a tool becomes an extension of one’s body, attention shifts to the task at hand, rather than the tool itself. In this work, we constructed a framework to measure tool embodiment by incorporating philosophical and psychological concepts. We applied this framework to design and conduct a study that uses attention to measure readiness-to-hand with both a physical tool and a virtual tool. We introduce a novel task where participants use a tool to rotate an object, while simultaneously responding to visual stimuli both near their hand and near the task. Our results showed that participants paid more attention to the task than to both kinds of tool. We also discuss how this evaluation framework can be used to investigate whether novel interaction techniques allow for this kind of tool embodiment.

16:45 - 17:45

Posters and Demo Madness

8:00 - 9:00

Registration and Coffee

9:00 - 10:45

Session 3: Exploring Spaces: 3D Interaction

Chair: Dimitar Valkov (University of Münster, Germany)

Summon and Select: Rapid Interaction with Interface Controls in Mid-air


Aakar Gupta, Thomas Pietrzak, Cleon Yau, Nicolas Roussel, Ravin Balakrishnan

Current freehand interactions with large displays rely on point & select as the dominant paradigm. However, constant in-air hand movement for pointer navigation quickly leads to hand fatigue. We introduce summon & select, a new model for freehand interaction where, instead of navigating to the control, the user summons it into focus and then manipulates it. Summon & select solves the problems of constant pointer navigation, the need for precise selection, and out-of-bounds gestures that plague point & select. We describe the design, conduct two studies to evaluate it, and compare it against point & select in a multi-button selection study. The results show that summon & select is significantly faster and has lower physical and mental demand than point & select.

Investigating the Use of Spatial Interaction for 3D Data Visualization on Mobile Devices


Wolfgang Büschel, Patrick Reipschläger, Ricardo Langner, Raimund Dachselt

Three-dimensional visualizations employing traditional input and output technologies have well-known limitations. Immersive technologies, natural interaction techniques, and recent developments in data physicalization may help to overcome these issues. In this context, we are specifically interested in the usage of spatial interaction with mobile devices for improved 3D visualizations. To contribute to a better understanding of this interaction style, we implemented example visualizations on a spatially-tracked tablet and investigated their usage and potential. In this paper, we report on a qualitative study comparing spatial interaction with in-place 3D visualizations to classic touch interaction regarding typical visualization tasks: navigation of unknown datasets, comparison of individual data objects, and the understanding and memorization of structures in the data. We identify several distinct usage patterns and derive recommendations for using spatial interaction in 3D data visualization.

Superiority of a Handheld Perspective-Coupled Display in Isomorphic Docking Performances


Thibault Louis, François Bérard

Six degrees of freedom docking is one of the most fundamental tasks when interacting with 3D virtual worlds. We investigated docking performance with isomorphic interactions that directly relate the 6-dof pose of the input device to that of the object controlled. In particular, we studied a Handheld Perspective-Coupled Display (HPCD), a novel form of interactive system where the display itself is handheld and used as the input device. It was compared to an opaque HMD and to a standard indirect flat display used with either a sphere or an articulated arm as the input device. A novel computation of an Index of Difficulty was introduced to measure the efficiency of each interaction. We observed superior performance with the HPCD compared with the other interactions, by a large margin (17% better than the closest interaction).

Robotic Assembly of Haptic Proxy Objects for Tangible Interaction and Virtual Reality


Yiwei Zhao, Lawrence H Kim, Ye Wang, Mathieu Le Goc, Sean Follmer

Passive haptic proxy objects allow for rich tangible interaction, and this is especially true in VR applications. However, this requires users to have many physical objects at hand. Our paper proposes robotic assembly, at run time, of low-resolution haptic proxies for tangible interaction and virtual reality. These assembled physical proxy objects are composed of magnetically attached blocks which are assembled by a small multi-robot system, specifically Zooids. We explore the design of the basic building blocks and illustrate two approaches to assembling physical proxies: using multi-robot systems to (1) self-assemble into structures and (2) assemble 2.5D structures with passive blocks of various heights. The success rate and completion time are evaluated for both approaches. Finally, we demonstrate the potential of assembled proxy objects for tangible interaction and virtual reality through a set of demonstrations.

Desktop VR using a Mirror Metaphor for Natural User Interface


Santawat Thanyadit, Ting-Chuen Pong

The main objective of this research work is to create a desktop VR environment which enables users to interact naturally with virtual objects positioned both in front and behind the screen. We propose a mirror metaphor that simulates a physical stereoscopic screen with the properties of a mirror. In addition to allowing users to interact with virtual objects positioned in front of the stereoscopic screen using their virtual hands, the virtual hands can be transferred inside the virtual mirror to interact with objects behind the screen. When the virtual hands are operating inside the virtual mirror, they are transformed like the reflection in a real mirror. This effectively doubles the interactable space and creates an interactive space that could facilitate collaborative tasks. Our user study shows that users could interact through the mirror approach as effectively as similar interaction techniques, hence demonstrating that the mirror technique is a viable interface in certain VR setups.

Fast Lossless Depth Image Compression


Andy Wilson

A lossless image compression technique for 16-bit single channel images typical of depth cameras such as Microsoft Kinect is presented. The proposed “RVL” algorithm achieves compression rates similar to or better than existing lossless techniques, yet is much faster. Furthermore, the algorithm’s implementation can be very simple; a prototype implementation of less than one hundred lines of C is provided. The algorithm’s balance of speed and compression makes it especially useful in interactive applications of multiple depth cameras on local area networks. RVL is compared to a variety of existing lossless techniques, and demonstrated in a network of eight Kinect v2 cameras.
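For readers unfamiliar with this class of codecs, the following is a minimal Python sketch in the same spirit (run-length coding of zero pixels plus delta and variable-length coding of valid pixels); it is a simplified stand-in, not the published RVL implementation, and all names are hypothetical.

```python
# Hedged sketch: lossless coding of a 16-bit depth frame via zero runs,
# deltas of valid pixels, and zig-zag/varint packing.

def _put_varint(n, out):
    u = (n << 1) ^ (n >> 31)           # zig-zag map signed delta to unsigned
    while u >= 0x80:
        out.append((u & 0x7F) | 0x80)  # 7 data bits per byte, high bit = "more"
        u >>= 7
    out.append(u)

def compress_depth(pixels):
    """pixels: iterable of 16-bit depth values in row-major order; returns bytes."""
    pixels = list(pixels)
    out, prev, i = bytearray(), 0, 0
    while i < len(pixels):
        zeros = 0                      # run of zero (invalid) pixels
        while i < len(pixels) and pixels[i] == 0:
            zeros, i = zeros + 1, i + 1
        _put_varint(zeros, out)
        start = i                      # following run of valid pixels
        while i < len(pixels) and pixels[i] != 0:
            i += 1
        _put_varint(i - start, out)
        for p in pixels[start:i]:
            _put_varint(p - prev, out)  # delta against the previous valid pixel
            prev = p
    return bytes(out)
```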

10:45 - 11:00

Coffee Break

11:00 - 12:15

Panel 2: Future Directions for Interactive Spaces and Surfaces

Panelists: Yvonne Rogers, UCL, UK
Stacey Scott, University of Guelph, Canada
Jürgen Steimle, Saarland University, Germany
Sriram Subramanian, University of Sussex, UK
Andy Wilson, Microsoft Research, USA
Chair: Orit Shaer, Wellesley College, USA

12:15 - 12:45

12:45 - 14:00

Lunch with Demos and Posters (buffet lunch is provided)

14:00 - 15:40

Session 4: From Wall Displays to Liquid Matter 

Chair: Stacey Scott (University of Guelph, Canada)

MeetAlive: Room-Scale Omni-Directional Display System for Multi-User Content and Control Sharing


Andreas Rene Fender, Hrvoje Benko, Andy Wilson

MeetAlive combines multiple depth cameras and projectors to create a room-scale omni-directional display surface designed to support collaborative face-to-face group meetings. With MeetAlive, all participants may simultaneously display and share content from their personal laptop wirelessly anywhere in the room. MeetAlive gives each participant complete control over displayed content in the room. This is achieved by a perspective corrected mouse cursor that transcends the boundary of the laptop screen to position, resize, and edit their own and others’ shared content. MeetAlive includes features to replicate content views to ensure that all participants may see the actions of other participants even as they are seated around a conference table. We report on observing six groups of three participants who worked on a collaborative task with minimal assistance. Participants’ feedback highlighted the value of MeetAlive features for multi-user engagement in meetings involving brainstorming and content creation.

Using Variable Movement Resistance Sliders for Remote Discrete Input


Lars Lischke, Paweł W. Woźniak, Sven Mayer, Andreas Preikschat, Morten Fjeld

Despite the proliferation of screens in everyday environments, providing values to remote displays for exploring complex data sets is still challenging. Enhanced input for remote screens can increase their utility and enable the construction of rich data-driven environments. Here, we investigate the opportunities provided by a variable movement resistance slider (VMRS), based on a motorized slide potentiometer. These devices are often used in professional soundboards as an effective way to provide discrete input. We designed, built and evaluated a remote input device using a VMRS that facilitates choosing a number on a discrete scale. By comparing our prototype to a traditional slide potentiometer and a software slider, we determined that for conditions where users are not looking at the slider, VMRS can offer significantly better performance and accuracy. Our findings contribute to the understanding of discrete input and enable building new interaction scenarios for large display environments.

Sunny Day Display: Mid-air Image Formed by Solar Light


Naoya Koizumi

We propose a mid-air imaging technique that is visible under sunlight and that passively reacts to light conditions in a bright space. Optical imaging is used to form a mid-air image through the reflection and refraction of a light source. It seamlessly connects a virtual world and the real world by superimposing visual images onto the real world. Previous research introduced light-emitting displays as a light source; however, attenuation of the brightness in a strong light environment presents a problem. We designed a mid-air imaging optical system that captures ambient light using a transparent LCD (liquid crystal display) and a diffuser. We built a prototype to confirm our design principles in sunlight and evaluated several diffusers. Our contribution is three-fold. First, we confirmed the principle of the mid-air imaging optical system in sunlight. Second, we chose an appropriate diffuser through an evaluation. Third, we proposed a practical design which can remove disturbing light for outdoor use.

ThermoTouch: A New Scalable Hardware Design for Thermal Displays


Sven Kratz, Tony Dunnigan

ThermoTouch is a new type of thermo-haptic display device. It provides a visual display with a grid of thermal pixels that can provide hot or cold haptic feedback. Unlike previous devices, our proposed design uses liquid cooling and resistive heating to output thermal feedback. We describe the hardware and software design of ThermoTouch. Technical measurements on our prototype indicate that ThermoTouch has thermal output properties comparable to Peltier elements, which have been used extensively as thermal transducers in previous works. Our measurements of ThermoTouch's per-area power consumption and its low hardware cost per thermal pixel indicate that our technology improves scalability to large-scale thermal displays over technologies used in previous systems. As an example application of ThermoTouch, we describe an editing, automatic keyframe generation and playback system for video with an additional thermo-haptic feedback channel. Lastly, we describe technical design considerations for creating large-scale thermal displays using ThermoTouch technology.

Programmable Liquid Matter: 2D Shape Deformation of Highly Conductive Liquid Metals in a Dynamic Electric Field


Yutaka Tokuda, Jose Luis Berna Moya, Gianluca Memoli, Timothy Neate, Deepak Ranjan Sahoo, Simon Robinson, Jennifer Pearson, Matt Jones, Sriram Subramanian

In this paper, we present a method which allows for the dynamic 2D transformation of liquid matter and present unique organic animations based on spatio-temporally controlled electric fields. We deploy an EGaIn (gallium-indium eutectic alloy) liquid metal which features electric conductivity and a high dynamic range of surface tension when supplied with varying voltage. By dynamically controlling multiple arrays of electrodes, we found it is possible to manipulate a liquid metal into a fine-grained desired shape. To demonstrate our proof-of-concept, we present a 7x7 electrode array prototype system with an integrated liquid metal tracking system and a simple GUI. Taking advantage of the high conductivity of liquid metal, we introduce a shape-changing, reconfigurable smart circuit as an example of unique applications. We discuss the system's constraints and the overarching challenges of controlling liquid metal, such as splitting, self-electrode interference, and finger instability problems. Finally, we reflect on the broader vision of this project and discuss our work in the context of the wider scope of programmable materials.

15:40 - 16:00

Coffee Break

16:00 - 17:40

Session 5: Feel, Taste, Smell, Splash! 

Chair: Paloma Díaz (Universidad Carlos III de Madrid, Spain)

Interactive FUrniTURE -- Evaluation of Smart Interactive Textile Interfaces for Home Environments


Philipp Brauner, Julia van Heek, Nur Al-huda Hamdan, Jan Borchers, Martina Ziefle

Ubiquitous computing strives to reach the calm computing state where sensors and actuators disappear from the foreground of our surroundings into the fabric of everyday objects. Despite the great progress in embedded technology, artificial interfaces, such as remote controls and touch screens, remain the dominant media for interacting with smart everyday objects. Motivated by recent advancements in smart textile technologies, we investigate the usability and acceptance of fabric-based controllers in the smart home environment. In this article we describe the development and evaluation of three textile interfaces for controlling a motorized recliner armchair in a living room setting. The core of this contribution is the empirical study with twenty participants that contrasted the user experience of three textile-based interaction techniques to a standard remote control. Despite the slightly lower reliability of the textile interfaces, their overall acceptance was higher. The study shows that the hedonic quality and attractiveness of textile interfaces have higher impact on user acceptance compared to pragmatic qualities, such as efficiency, fluidity of interaction, and reliability. Attractiveness profits from the direct and nearly invisible integration of the interaction device into textile objects such as furniture.

TastyFloats: A Contactless Food Delivery System


Chi Thanh Vi, Asier Marzo, Damien Ablart, Gianluca Memoli, Sriram Subramanian, Bruce Drinkwater, Marianna Obrist

We present two realizations of TastyFloats, a novel system that uses acoustic levitation to deliver food morsels to the users’ tongue. To explore TastyFloats’ associated design framework, we first address the technical challenges to successfully levitate and deliver different types of foods on the tongue. We then conduct a user study, assessing the effect of acoustic levitation on users’ taste perception, comparing three basic taste stimuli (i.e., sweet, bitter and umami) and three volume sizes of droplets (5µL, 10µL and 20µL). Our results show that users perceive sweet and umami easily, even in minimal quantities, whereas bitter is the least detectable taste, despite its typical association with an unpleasant taste experience. Our results are a first step towards the creation of new culinary experiences and innovative gustatory interfaces.

OSpace: Towards a Systematic Exploration of Olfactory Interaction Spaces


Dmitrijs Dmitrenko, Emanuela Maggioni, Marianna Obrist

When designing olfactory interfaces, HCI researchers and practitioners have to carefully consider a number of issues related to the scent delivery, detection, and lingering. These are just a few of the problems to deal with. We present OSpace - an approach for designing, building, and exploring an olfactory interaction space. Our paper is the first to explore in detail not only the scent-delivery parameters but also the air extraction issues. We conducted a user study to demonstrate how the scent detection/lingering times can be acquired under different air extraction conditions, and how the impact of scent type, dilution, and intensity can be investigated. Results show that with our setup, the scents can be perceived by the user within ten seconds and it takes less than nine seconds for the scents to disappear, both when the extraction is on and off. We discuss the practical application of these results for HCI.

RapTapBath: A User Interface System by Tapping on a Bathtub Edge Utilizing Embedded Acoustic Sensors


Tomoyuki Sumida, Shigeyuki Hirai, Daiki Ito, Ryosuke Kawakatsu

We present RapTapBath: a human-computer interface system that converts an existing bathtub into a controller that recognizes hand-tapped tones and patterns on the bathtub's edge. The system utilizes piezoelectric sensors embedded in the bathtub's edge to analyze the acoustic signals of tapped sounds, and projects a menu onto the tub's edge using a projector installed above the tub. Tap locations are detected by measuring differences in signal propagation times across the piezoelectric sensors. Tap tones are identified by their spectrum patterns using non-negative matrix factorization. Tap patterns are detected via tapping rates, whose probabilities are calculated over a fixed duration. This paper describes the tapping user interface's (UI) events and their specific detection methods and signal processing techniques. We also present an experimental performance evaluation and propose applications for spending bath time with this system. Finally, we describe the limitations of current tap UI events and the implications for the interaction design of RapTapBath.

Creepy Tracker Toolkit for Context-aware Interfaces


Maurício Sousa, Daniel Mendes, Rafael Kuffner dos Anjos, Daniel Medeiros, Alfredo Ferreira, Alberto Raposo, João Madeiras Pereira, Joaquim Jorge

Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information regarding people and devices. However, when developing novel user experiences, researchers are left to build foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also keeps the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, implemented representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences.

19:00 - 22:00

Conference Dinner


Location: Palm Court Restaurant, Brighton Palace Pier, Madeira Drive, Brighton

9:15 - 10:55

Session 6: Mobile Interaction: Type, Touch & Bend 

Chair: Fabrice Matulic (Preferred Networks, Japan)

Typing on a Smartwatch for Smart Glasses


Sunggeun Ahn, Seongkook Heo, Geehyuk Lee

While smart glasses make information more accessible in mobile scenarios, entering text on these devices is still difficult. In this paper, we suggest using a smartwatch as an indirect input device (not requiring visual attention) for smart glasses text entry. With the watch-glasses combination, users do not need to lift the arm to touch the glasses, nor do they need to carry a special external input device. To prove the feasibility of the suggested combination, we implemented two text entry methods: a modified version of SwipeBoard, which we adapted for the suggested combination, and HoldBoard, which we newly designed and implemented specifically for the suggested combination. We evaluated the performance of the two text entry methods through two user studies and showed that they are faster than prior art for smart glasses text entry in a seated condition. A further study showed that they are also competitive with prior art in a walking condition.

Designing Touch Gestures Using the Space around the Smartwatch as Continuous Input Space


Jaehyun Han, Sunggeun Ahn, Keunwoo Park, Geehyuk Lee

Small touchscreen interfaces such as a smartwatch have usability problems due to the small screen. One solution to these problems is to utilize the space around the smartwatch as continuous input space for the touchscreen interface. We defined four steps for a gesture that starts on the touchscreen and continues in the air. The goal of this definition was to bring the experience of large touchscreen devices into smartwatch usage. We compared design options for the four steps and made decisions for the options based on the results of four user experiments. We expect that gestures designed based on these decisions will be both easy to learn and robust.

Estimating the Finger Orientation on Capacitive Touchscreens Using Convolutional Neural Networks


Sven Mayer, Huy Viet Le, Niels Henze

In recent years, touchscreens have become the most common input device for a wide range of computers. While touchscreens are truly pervasive, commercial devices reduce the richness of touch input to two-dimensional positions on the screen. Recent work proposed interaction techniques to extend the richness of the input vocabulary using the finger orientation. Approaches for determining a finger's orientation using off-the-shelf capacitive touchscreens proposed in previous work already enable compelling use cases. However, the low estimation accuracy limits the usability and restricts the usage of finger orientation to non-precise input. With this paper, we provide a ground truth data set for capacitive touchscreens recorded with a high-precision motion capture system. Using this data set, we show that a convolutional neural network can outperform approaches proposed in previous work. Instead of relying on hand-crafted features, we trained the model on the raw capacitive images. Thereby we reduce the pitch error by 9.8% and the yaw error by 45.7%.
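As a rough illustration of the model class (not the authors' architecture, data set, or training procedure), a compact convolutional network regressing pitch and yaw from a raw capacitive image crop could look like the hypothetical PyTorch sketch below; a faithful implementation would also need a circular-aware loss for yaw.

```python
# Hedged sketch: small CNN mapping a 1-channel capacitive crop to (pitch, yaw).
import torch
import torch.nn as nn

class FingerOrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)   # outputs: (pitch, yaw) estimates

    def forward(self, x):                      # x: (batch, 1, H, W) raw capacitive crop
        return self.head(self.features(x).flatten(1))

# Hypothetical usage: a batch of eight 15x27 capacitive crops.
net = FingerOrientationNet()
pred = net(torch.rand(8, 1, 15, 27))           # shape (8, 2)
```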

PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units


Huy Viet Le, Valentin Schwind, Philipp Göttlich, Niels Henze

Touchscreens are the dominant input mechanism for a variety of devices. One of the main limitations of touchscreens is the latency to receive input, refresh, and respond. This latency is easily perceivable and reduces users' performance. Previous work proposed to reduce latency by extrapolating finger movements to identify future movements - albeit with limited success. In this paper, we propose PredicTouch, a system that improves this extrapolation using inertial measurement units (IMUs). We combine IMU data with users' touch trajectories to train a multi-layer feedforward neural network that predicts future trajectories. We found that this hybrid approach (software: prediction, and hardware: IMU) can significantly reduce the prediction error, reducing latency effects. We show that using a wrist-worn IMU increases the throughput by 15% for finger input and 17% for a stylus.

Effects of Bend Gesture Training on Learnability and Memorability in a Mobile Game


Elias Fares, Victor Cheung, Audrey Girouard

Bend gestures can be used as a form of around-device interaction to address usability issues in touchscreen mobile devices. Yet, it is unclear whether bend gestures can be easily learned and memorized as a control scheme for games. To answer this, we built a novel deformable smartphone case that detects bend gestures at its corners and sides, and created PaperNinja, a mobile game that uses bends as input. We conducted a study comparing the effect of three pre-game training levels on learnability and memorability: no training, training of the bend gestures only, and training of both the bend gestures and their in-game action mapping. We found that including the gesture mapping positively impacted initial learning (faster completion time and fewer gestures performed), but had a similar outcome to no training on memorability, while training the gestures without their mapping led to a negative outcome. Our findings suggest that players can learn bend gestures by discovery and that training is not essential.

Release, Don’t Wait! Reliable Force Input Confirmation with Quick Release


Christian Corsten, Simon Voelker, Jan Borchers

Modern smartphones, like the iPhone 7, feature touchscreens with co-located force sensing. This makes touch input more expressive, e.g., by enabling single-finger continuous zooming when coupling zoom levels to force intensity. Often, however, the user wants to select and confirm a particular force value, say, to lock a certain zoom level. The most common confirmation techniques are Dwell Time (DT) and Quick Release (QR). While DT has been shown to be reliable, it slows the interaction, as the user must typically wait for 1 s before her selection is confirmed. Conversely, QR is fast but reported to be less reliable, although no reference reports how to actually detect and implement it. In this paper, we set out to challenge the low reliability of QR: we collected user data to (1) report how it can be implemented and (2) show that it is as reliable as DT (97.6% vs. 97.2% success). Since QR was also the faster technique and more preferred by users, we recommend it over DT for force confirmation on modern smartphones.
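Although the paper reports its own detection method, the hedged sketch below shows one plausible way a quick release could be detected: confirm the value held just before the force drops by a large fraction of its recent peak within a short window. The class name, window length, and drop ratio are assumptions, not values from the paper.

```python
# Hedged sketch: confirm a force selection when the force falls sharply
# within a short look-back window (quick release), instead of dwelling.
from collections import deque
import time

class QuickReleaseDetector:
    def __init__(self, window_s=0.05, drop_ratio=0.5):
        self.window_s = window_s      # look-back window in seconds (assumed)
        self.drop_ratio = drop_ratio  # fraction of the peak that must be shed (assumed)
        self.samples = deque()        # (timestamp, force) pairs

    def update(self, force, now=None):
        """Feed one force sample; returns the confirmed force value, or None."""
        now = time.monotonic() if now is None else now
        self.samples.append((now, force))
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()    # keep only samples inside the window
        peak = max(f for _, f in self.samples)
        if peak > 0 and force <= peak * (1 - self.drop_ratio):
            return peak               # confirm the level selected just before release
        return None
```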

10:55 - 11:15

Coffee Break

11:15 - 12:30

Posters and Demos

12:30 - 13:45

Lunch and Town Hall Meeting with Demos and Posters

13:45 - 15:25

Session 7: In the Wild: Education, Sports, and Beyond

Chair: Craig Anslow (Victoria University of Wellington, New Zealand)

In the Footsteps of Henri Tudor: Creating Batteries on a Tangible Interactive Workbench


Valerie Maquil, Christian Moll, João Martins

This paper describes the design and implementation of BatSim, a tangible user interface for playful discovery of different methods of creating batteries. BatSim combines tangible interactions with augmented reality in an interactive workbench to support museum visitors in physically performing the different steps of the procedures and viewing the consequences on embedded screens. In this paper we describe the rationale of our design solution as well as how it could be realized in three iterations, progressively focusing on 1) the spatial setting, 2) the model and interactions and 3) the form and feedback. Based on our gained insights, we discuss the importance of combining multiple prototyping methods to take into account the different facets of tangible interaction design.

FireFlies2: Interactive Tangible Pixels to enable Distributed Cognition in Classroom Technologies


David Verweij, Saskia Bakker, Berry Eggen

Continuous developments in the field of Human-Computer Interaction (HCI) are resulting in an omnipresence of digital technologies in our everyday lives, which is also visible in the presence of supportive technologies in education. These technologies, e.g. tablets and computers, usually require focused attention to be operated, which hinders teachers from appropriating them while teaching. Peripheral interactive systems, which do not require focused attention, could play a role in relieving teachers’ cognitive load, such that mental resources are freed to focus on other teaching tasks. This paper presents an exploratory study on enabling such cognitive offloading through peripheral interaction in the classroom. We present the design and a seven-week field deployment of FireFlies2, interactive tangible pixels which are distributed over the classroom. Our findings show that FireFlies2 supported cognitive processes of teachers and pupils in a number of scenarios.

ClimbVis: Investigating In-situ Visualizations for Understanding Climbing Movements by Demonstration


Felix Kosmalla, Florian Daiber, Frederik Wiehr, Antonio Krüger

Rock climbing involves complex movements and therefore requires guidance when acquiring a new technique. The classic approach is mimicking the movements of a more experienced climber. However, the trainee has to remember every nuance of the climb, since the sequence of movements cannot be performed in parallel to the experienced climber. As a solution to this problem, we present a video recording and replay system for climbing. The replay component allows for different in-situ video feedback methods. We investigated the video feedback component of the system by studying two example visualization techniques, i.e. a life-sized in-place projection and a real-time third-person view of the climber, augmented by a video showing a successful ascent. The latter is presented to the user on both Google Glass and a projected display. The results indicate that a life-sized projection was perceived as easiest to follow, while most of the climbers had problems with the context switches between the augmented video and the climbing wall. These findings can aid in the design of assistance systems that teach complex movements.

Run&Tap: Investigation of On-Body Tapping for Runners


Nur Al-huda Hamdan, Ravi Kanth Kosuru, Christian Corsten, Jan Borchers

Devices like smartphones, smartwatches, and fitness trackers enable runners to control music, query fitness parameters such as heart rate and speed, or be guided by coaching apps. But while these devices are portable, interacting with them during running is difficult: they usually have small buttons or touchscreens which force the user to slow down to interact with them properly. On-body tapping is an interaction technique that allows users to trigger actions by tapping at different body locations eyes-free. This paper investigates on-body tapping as a potential input technique for runners. We conducted a user study to evaluate where and how accurately runners can tap on their body. We motion-captured participants while tapping locations on their body and running on a treadmill at different speeds. Results show that a uniform layout of five targets per arm and two targets on the abdomen achieved a 96% accuracy rate. We present a set of design implications to inform the design of on-body interfaces for runners.

Towards Around-Device Interaction using Corneal Imaging


Daniel Schneider, Jens Grubert

Around-device interaction techniques aim at extending the input space using various sensing modalities on mobile and wearable devices. In this paper, we present our work towards extending the input area of mobile devices using front-facing, device-centered cameras that capture reflections in the human eye. As current-generation mobile devices lack high-resolution front-facing cameras, we study the feasibility of around-device interaction using corneal reflective imaging based on a high-resolution camera. We present a workflow, a technical prototype, and an evaluation, including a migration path from high-resolution to low-resolution imagers. Our study indicates that, under optimal conditions, a spatial sensing resolution of 5 cm in the vicinity of a mobile phone is possible.

15:25 - 15:45

Coffee Break