9:00 - 9:15
Greetings from the Chairs
9:15 - 10:15
Opening Keynote (Shared with SUI)
10:15 - 10:35
10:35 - 12:15
Chairs: Diego Martinez Plasencia (University of Sussex, UK) and Raimund Dachselt (TU Dresden, Germany)
Yu Ishikawa, Buntarou Shizuki, Junichi Hoshino
YuanYuan Qian, Robert J. Teather
Nicholas Katzakis, Jonathan Tong, Oscar Javier Ariza Nunez, Lihan Chen, Gudrun Klinker, Brigitte Roeder, Frank Steinicke
12:15 - 13:30
Lunch Break (buffet lunch provided)
13:30 - 14:45
Panelists: Susanne Boedker, Aarhus University, Denmark
Marianna Obrist, University of Sussex, UK
Albrecht Schmidt, LMU Munich, Germany
Giulio Jacucci, University of Helsinki, Finland
14:45 - 15:05
15:05 - 16:45
Chair: Jens Grubert (Coburg University, Germany)
Maurício Sousa, Daniel Mendes, Rafael Kuffner dos Anjos, Daniel Medeiros, Alfredo Ferreira, Alberto Raposo, João Madeiras Pereira, Joaquim Jorge
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings.
Such systems use multiple sensors to gather information regarding people and devices.
However, when developing novel user experiences, researchers are left to build the foundation code needed to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas.
We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras.
It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner.
It also tracks the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations that combine depth and color data.
Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices.
Furthermore, representative scenarios implemented with the toolkit show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences.
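To make the sensor-selection idea in this abstract concrete, the sketch below shows one plausible way a toolkit could pick, for each tracked person, the depth camera with the clearest view. This is not Creepy Tracker's actual API: the class names, the occlusion test, and the distance-based scoring are assumptions for illustration only; a real system would ray-cast against the captured scene.

import math
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    position: tuple                      # (x, y, z) in room coordinates

def best_sensor_for(person_xyz, sensors, is_occluded):
    """Return the closest sensor with an unobstructed view of the person.

    is_occluded(sensor, person_xyz) stands in for a scene-dependent visibility
    test; if every sensor is occluded, fall back to the closest one overall.
    """
    visible = [s for s in sensors if not is_occluded(s, person_xyz)]
    candidates = visible or list(sensors)
    return min(candidates, key=lambda s: math.dist(s.position, person_xyz))

# Usage with two assumed ceiling-mounted sensors and no occluders:
sensors = [Sensor("kinect-a", (0.0, 2.5, 0.0)), Sensor("kinect-b", (4.0, 2.5, 3.0))]
print(best_sensor_for((1.0, 1.0, 1.0), sensors, lambda s, p: False).name)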
Ayman Alzayat, Mark Hancock, Miguel Nacenta
New interaction techniques, like multi-touch, tangible interaction, and mid-air gestures, often promise to be more intuitive and natural; however, there is little work on how to measure these constructs. One way is to leverage the phenomenon of tool embodiment: when a tool becomes an extension of one’s body, attention shifts to the task at hand rather than the tool itself. In this work, we constructed a framework to measure tool embodiment by incorporating philosophical and psychological concepts. We applied this framework to design and conduct a study that uses attention to measure readiness-to-hand with both a physical tool and a virtual tool. We introduce a novel task in which participants use a tool to rotate an object while simultaneously responding to visual stimuli both near their hand and near the task. Our results showed that participants paid more attention to the task than to either kind of tool. We also discuss how this evaluation framework can be used to investigate whether novel interaction techniques allow for this kind of tool embodiment.
Valerie Maquil, Christian Moll, João Martins
This paper describes the design and implementation of BatSim, a tangible user interface for playful discovery of different methods of creating batteries. BatSim combines tangible interactions with augmented reality in an interactive workbench to support museum visitors in physically performing the different steps of the procedures and viewing the consequences on embedded screens. In this paper we describe the rationale of our design solution as well as how it could be realized in three iterations, progressively focusing on 1) the spatial setting, 2) the model and interactions, and 3) the form and feedback. Based on the insights gained, we discuss the importance of combining multiple prototyping methods to take into account the different facets of tangible interaction design.
Christian Corsten, Simon Voelker, Jan Borchers
Modern smartphones, like the iPhone 7, feature touchscreens with co-located force sensing. This makes touch input more expressive, e.g., by enabling single-finger continuous zooming when coupling zoom levels to force intensity. Often, however, the user wants to select and confirm a particular force value, say, to lock a certain zoom level. The most common confirmation techniques are Dwell Time (DT) and Quick Release (QR). While DT has been shown to be reliable, it slows the interaction, as the user must typically wait for 1 s before her selection is confirmed. Conversely, QR is fast but reported to be less reliable, although no reference reports how to actually detect and implement it. In this paper, we set out to challenge the low reliability of QR: We collected user data to (1) report how it can be implemented and (2) show that it is as reliable as DT (97.6% vs. 97.2% success). Since QR was also the faster technique and the one preferred by users, we recommend it over DT for force confirmation on modern smartphones.
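As a rough illustration of what a Quick Release confirmation might look like in code, the sketch below assumes a stream of (timestamp, force) samples from the touchscreen and confirms the last held force value when the finger lifts quickly. The lift threshold and release window are illustrative guesses, not the parameters reported in the paper.

def detect_quick_release(samples, lift_threshold=0.05, max_release_ms=100):
    """Return the force value confirmed via QR, or None if no valid release.

    The value held just before the finger lifts is taken as the selection,
    but only if the drop below `lift_threshold` happens within
    `max_release_ms` of that last pressed sample (a "quick" release).
    """
    last_pressed = None                      # (time, force) while still pressing
    for t, force in samples:
        if force >= lift_threshold:
            last_pressed = (t, force)
        elif last_pressed is not None:
            press_time, press_force = last_pressed
            return press_force if t - press_time <= max_release_ms else None
    return None

# Usage: force ramps up to 0.8, then drops to near zero within 50 ms.
samples = [(0, 0.2), (100, 0.5), (200, 0.8), (250, 0.01)]
print(detect_quick_release(samples))         # -> 0.8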
Andreas Rene Fender, Hrvoje Benko, Andy Wilson
MeetAlive combines multiple depth cameras and projectors to create a room-scale omni-directional display surface designed to support collaborative face-to-face group meetings. With MeetAlive, all participants may simultaneously display and share content from their personal laptops wirelessly anywhere in the room. MeetAlive gives each participant complete control over the content displayed in the room. This is achieved by a perspective-corrected mouse cursor that transcends the boundary of the laptop screen to position, resize, and edit their own and others’ shared content. MeetAlive includes features for replicating content views so that all participants can see the actions of other participants even while seated around a conference table. We report on observing six groups of three participants who worked on a collaborative task with minimal assistance. Participants’ feedback highlighted the value of MeetAlive features for multi-user engagement in meetings involving brainstorming and content creation.
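The cursor hand-off described in this abstract can be pictured with the simplified sketch below: when the local cursor leaves the laptop screen, its position is mapped into the shared room-display coordinate system. This is not MeetAlive's implementation; the 3x3 homography `laptop_to_room` is assumed to come from calibrating the projector and depth-camera setup, and the identity matrix in the usage line is only a stand-in.

import numpy as np

def to_room_coords(x, y, laptop_to_room):
    """Map a laptop-screen point into room-surface coordinates via a homography."""
    px, py, pw = laptop_to_room @ np.array([x, y, 1.0])
    return px / pw, py / pw

def route_cursor(x, y, screen_w, screen_h, laptop_to_room):
    """Keep the cursor local while it is on the laptop screen; otherwise project it."""
    if 0 <= x < screen_w and 0 <= y < screen_h:
        return ("laptop", (x, y))
    return ("room", to_room_coords(x, y, laptop_to_room))

# Usage: a cursor dragged past the right edge of a 1920x1080 laptop screen.
print(route_cursor(2050, 400, 1920, 1080, np.eye(3)))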
16:45 - 17:45
Posters and Demo Madness