9:00 - 9:15
Greetings from the Chairs
9:15 - 10:15
Opening Keynote (Shared with SUI)
10:15 - 10:35
Coffee Break
10:35 - 12:15
Chair: Raimund Dachselt (TU Dresden, Germany)
Yu Ishikawa, Buntarou Shizuki, Junichi Hoshino
YuanYuan Qian, Robert J. Teather
Nicholas Katzakis, Jonathan Tong, Oscar Javier Ariza Núñez, Lihan Chen, Gudrun Klinker, Brigitte Röder, Frank Steinicke
12:15 - 13:30
Lunch with Demos and Posters (buffet lunch provided)
13:30 - 14:45
Panelists: Susanne Bødker, Aarhus University, Denmark
Giulio Jacucci, University of Helsinki, Finland
Marianna Obrist, University of Sussex, UK
Albrecht Schmidt, LMU Munich, Germany
Orit Shaer, Wellesley College, USA
Chair: Albrecht Schmidt, LMU Munich, Germany
14:45 - 15:05
Coffee Break
15:05 - 16:45
Chair: Jens Grubert (Coburg University, Germany)
Fabrice Matulic, Daniel Vogel, Raimund Dachselt
Tabletop interaction can be enriched by considering whole hands as input instead of only fingertips. We describe a generalised, reproducible computer vision algorithm to recognise hand contact shapes, with support for arm rejection, as well as dynamic properties like finger movement and hover. A controlled experiment shows the algorithm can detect seven different contact shapes with roughly 91% average accuracy. The effects of long sleeves and non-user-specific templates are also explored. The algorithm is used to trigger, parameterise, and dynamically control menu and tool widgets, and the usability of a subset of these is qualitatively evaluated in a realistic application. Based on our findings, we formulate a number of design recommendations for hand-shape-based interaction.
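The abstract does not detail the recognition pipeline, so the sketch below only illustrates the general idea of template-based contact-shape matching, here via OpenCV's Hu-moment shape comparison; the function names, template store, and matching method are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: classifying a hand contact shape from a
# binarised contact image by matching against stored template contours.
# Not the paper's algorithm; it just shows the template-matching idea.
import cv2

def largest_contour(binary_img):
    # In the paper's setting this would be the hand-contact blob after
    # arm rejection; here we simply take the largest outer contour.
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def classify_contact(binary_img, templates):
    # templates: dict mapping a shape label (e.g. "fist", "flat hand")
    # to a previously recorded template contour.
    contour = largest_contour(binary_img)
    if contour is None:
        return None
    # cv2.matchShapes compares Hu-moment invariants; lower = more similar.
    scores = {label: cv2.matchShapes(contour, tmpl,
                                     cv2.CONTOURS_MATCH_I1, 0.0)
              for label, tmpl in templates.items()}
    return min(scores, key=scores.get)
```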
Ruben Balcazar, Francisco Ortega, Katherine Tarre, Armando Barreto, Mark Weiss, Naphtali D. Rishe
CircGR is a non-symbolic multi-touch gesture recognition algorithm that uses circular statistics to implement linearithmic (O(n lg n)) template-based matching. CircGR offers gesture designers a way to build complex multi-touch gestures that are recognised with high confidence. We demonstrate the algorithm and describe a user study with 60 subjects and over 12,000 gestures collected for an original set of 36 gestures. Accuracy exceeds 99%, with a Matthews correlation coefficient of 0.95. CircGR also supports early gesture detection.
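For background, the basic circular-statistics descriptors that such an approach builds on are easy to compute; the sketch below (hypothetical helper names, not the published CircGR code) shows the mean direction and mean resultant length of a set of touch-movement angles, the kind of rotation-tolerant summaries a template matcher can compare.

```python
# Background sketch: basic circular statistics of the kind a
# CircGR-style recogniser derives descriptors from. Hypothetical
# helpers, not the published implementation.
import numpy as np

def circular_mean(angles_rad):
    # Mean direction: the angle of the average unit vector.
    return np.arctan2(np.mean(np.sin(angles_rad)),
                      np.mean(np.cos(angles_rad)))

def mean_resultant_length(angles_rad):
    # R in [0, 1]: how concentrated the angles are around the mean
    # direction (1 = all identical, 0 = uniformly spread).
    return np.hypot(np.mean(np.sin(angles_rad)),
                    np.mean(np.cos(angles_rad)))
```

The abstract does not spell out where the O(n lg n) cost arises; a sorting step over such per-trace descriptors during template matching would be one plausible source of a linearithmic bound.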
Eva Lösch, Florian Alt, Michael Koch
In this paper, we investigate how effectively user representations convey interactivity and foster interaction on large touch-based information displays. This research is motivated by the fact that user representations have been shown to be very effective in playful applications that support mid-air interaction. At the same time, little is known about the effects of applying this approach to settings with a different primary mode of interaction, e.g. touch. It is also unclear how the playfulness of user representations influences users' interest in the displayed information. To close this gap, we combine a touch display with screens showing life-sized video representations of passers-by. In a deployment, we compare different spatial arrangements to understand how passers-by are attracted and enticed to interact, how they explore the application, and how they behave socially. Findings reveal that (a) opposing displays foster interaction, but (b) may also reduce interaction at the main display; (c) a large intersection between focus and nimbus helps users notice interactivity; (d) playful elements at information displays are not counterproductive; and (e) mixed interaction modalities are hard to understand.
Isabel Benavente, Nicolai Marquardt
Public interactive displays with gesture-recognizing cameras enable new forms of interaction. However, such systems often do not yet give passers-by the choice to voluntarily engage in, or disengage from, an interaction. To address this issue, this paper explores how people could use different kinds of gestures or voice commands to explicitly opt in to or opt out of interactions with public installations. We report the results of a gesture elicitation study with 16 participants, who generated gestures within five gesture types for both a commercial and an entertainment scenario. We present a categorization and themes of the 430 proposed gestures, along with agreement scores showing higher consensus for torso gestures and for opting out with the face/head. Furthermore, patterns indicate that participants often chose non-verbal representations of opposing pairs, such as ‘close and open’, when proposing gestures. Quantitative results showed an overall preference for hand and arm gestures, and generally a higher acceptance for gestural interaction in the entertainment setting.
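The abstract does not say which agreement measure was used; a common choice in elicitation studies is Vatavu and Wobbrock's agreement rate, sketched below for a single referent (the data layout is an assumption for illustration).

```python
# Illustrative only: Vatavu & Wobbrock's agreement rate (AR) for one
# referent, a common consensus measure in gesture elicitation studies.
# The paper's exact scoring method is not stated in the abstract.
from collections import Counter

def agreement_rate(proposals):
    # proposals: one gesture label per participant for a single
    # referent, e.g. ["wave", "wave", "nod", "wave"].
    n = len(proposals)
    if n < 2:
        return 0.0
    group_sizes = Counter(proposals).values()
    # Probability that two distinct participants proposed the same gesture.
    return sum(k * (k - 1) for k in group_sizes) / (n * (n - 1))

# agreement_rate(["wave", "wave", "nod", "wave"]) -> 6 / 12 = 0.5
```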
Ayman Alzayat, Mark Hancock, Miguel Nacenta
New interaction techniques, like multi-touch, tangible interaction, and mid-air gestures, often promise to be more intuitive and natural; however, there is little work on how to measure these constructs. One way is to leverage the phenomenon of tool embodiment: when a tool becomes an extension of one's body, attention shifts to the task at hand rather than to the tool itself. In this work, we constructed a framework for measuring tool embodiment that incorporates philosophical and psychological concepts. We applied this framework to design and conduct a study that uses attention to measure readiness-to-hand with both a physical tool and a virtual tool. We introduce a novel task in which participants use a tool to rotate an object while simultaneously responding to visual stimuli both near their hand and near the task. Our results showed that participants paid more attention to the task than to either kind of tool. We also discuss how this evaluation framework can be used to investigate whether novel interaction techniques allow for this kind of tool embodiment.
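As an illustration of the attention measure described here (not the authors' analysis code), per-participant reaction times to probes near the hand versus near the task could be compared as follows; the data layout and choice of test are assumptions.

```python
# Illustrative analysis sketch, not the authors' code: compare reaction
# times (ms) to visual probes appearing near the hand/tool vs. near the
# task, per participant, as a proxy for where attention is allocated.
import numpy as np
from scipy import stats

def compare_probe_attention(rt_near_hand, rt_near_task):
    # Each argument: one mean reaction time per participant, paired.
    hand = np.asarray(rt_near_hand, dtype=float)
    task = np.asarray(rt_near_task, dtype=float)
    t, p = stats.ttest_rel(hand, task)  # paired-samples t-test
    # Slower responses to probes near the tool would suggest attention
    # rests on the task rather than the tool (readiness-to-hand).
    return {"mean_rt_hand": hand.mean(), "mean_rt_task": task.mean(),
            "t": float(t), "p": float(p)}
```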
16:45 - 17:45
Posters and Demo Madness