Adaptive Workspace using MovemenTable

Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura

We present the concept of a dynamically adaptive workspace using MovemenTable, an autonomously moving interactive digital table. In this demo, we introduce two interaction scenarios: arranging a workspace anywhere, and a floating interactive surface on a large floor screen. In the first scenario, MovemenTable's self-actuated movement automatically positions the interactive digital table anywhere in a room to keep the user's workspace efficient and optimal. The second scenario is a more advanced one involving collaborative use of a large floor screen system, where MovemenTable acts as a floating interactive surface that allows local exploration of the global information shown on the floor screen.

Ambient Notifications with Shape Changing Circuits in Peripheral Locations

Lee Jones, John C McClelland, Phonesavahnt Thongsouksanoumane, Audrey Girouard

Calm technologies help us avoid distraction by embedding notifications in our surroundings as peripheral updates. However, users also lose out on the passive awareness that comes from more overt notifications. In our paper, we present an initial study setup using shape-changing circuits as notifications. We compare near and far peripheral locations to determine the optimal location for these notifications by assigning a primary task of answering arithmetic questions and a secondary task of responding to bend notifications. Our demonstration will show the set-up of our study to encourage discussion of possible applications of shape-changing notifications in peripheral locations.

CAPP – Capacitive Passive Programmable Tangibles

Valentina Burjan, Kirstin Kohler, Cristin Volz

Tangible systems and examples of their use have been explored for two decades; nevertheless, there are only a few manufacturers. Most systems are camera-based, expensive and in many cases no longer supported. This prevents researchers as well as practitioners from further implementing and supporting scenarios that are promising for tangible interaction (such as sales settings, museums or educational environments). The goal of our work is to provide instructions for makers to build a programmable tangible for use on capacitive displays. This allows everyone to turn an arbitrary multi-touch display into a tangible user interface. Our solution simulates touch points and is powered and controlled by a microcontroller. It is controlled through a wireless interface, which allows the ID of the tangible to be changed dynamically during use.
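
As a hypothetical sketch of the ID-encoding idea (not the authors' firmware; the electrode layout and the set_electrode driver below are invented for illustration), a tangible ID can be mapped to the subset of touch-point electrodes that the microcontroller activates:

# Hypothetical sketch: encode a tangible ID as a pattern of simulated touch points.
# The electrode footprint and set_electrode() are stand-ins, not the real hardware API.
ELECTRODES = [(-15, -15), (15, -15), (-15, 15), (15, 15), (0, 0)]   # mm offsets from centre

def pattern_for_id(tangible_id, n_electrodes=len(ELECTRODES)):
    """Map an integer ID to the electrodes that should simulate a touch (one bit per electrode)."""
    return [bool(tangible_id >> i & 1) for i in range(n_electrodes)]

def on_wireless_command(new_id, set_electrode):
    """Called when a new ID arrives over the wireless interface; re-drives all electrodes."""
    for i, active in enumerate(pattern_for_id(new_id)):
        set_electrode(i, active)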

CircGR: Interactive Multi-Touch Gesture Recognition using Circular Measurements

Ruben Balcazar, Francisco Ortega, Katherine Tarre, Armando Barreto, Mark Weiss, Naphtali D. Rishe

CircGR is a multi-touch, non-symbolic gesture recognition algorithm that uses circular statistics to implement linearithmic (O(n lg n)) template-based matching. CircGR offers gesture designers a way to build complex multi-touch gestures with high-confidence accuracy. We demonstrate the algorithm and describe a user study with 60 subjects and over 12,000 gestures collected for an original gesture set of 36. Accuracy exceeds 99%, with a Matthews correlation coefficient of 0.95. In addition, early gesture detection was also successful in CircGR.
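
To give a flavour of the circular measurements such an approach builds on (a minimal sketch, not the published CircGR algorithm), the circular mean direction and mean resultant length of a set of touch-movement angles can be computed as follows:

import math

def circular_mean_and_resultant(angles):
    """Circular mean (radians) and mean resultant length R in [0, 1] for a list of angles."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    mean = math.atan2(s, c)              # circular mean direction
    r = math.hypot(s, c) / len(angles)   # R near 1 means the angles are tightly concentrated
    return mean, r

# Example: directions of travel of the touch points in a roughly rightward swipe.
mu, r = circular_mean_and_resultant([0.10, 0.15, 0.05, 0.12])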

Desktop VR using a Mirror Metaphor for Natural User Interface

Santawat Thanyadit, Ting-Chuen Pong

The main objective of this research work is to create a desktop VR environment which enables users to interact naturally with virtual objects positioned both in front and behind the screen. We propose a mirror metaphor that simulates a physical stereoscopic screen with the properties of a mirror. In addition to allowing users to interact with virtual objects positioned in front of the stereoscopic screen using their virtual hands, the virtual hands can be transferred inside the virtual mirror to interact with objects behind the screen. When the virtual hands are operating inside the virtual mirror, they are transformed like the reflection in a real mirror. This effectively doubles the interactable space and creates an interactive space that could facilitate collaborative tasks. Our user study shows that users could interact through the mirror approach as effectively as similar interaction techniques, hence demonstrating that the mirror technique is a viable interface in certain VR setups.

Distant Pointing User Interfaces based on 3D Hand Pointing Recognition

Yutaka Endo, Dai Fujita, Takashi Komuro

In this paper, we propose a system that realizes remote control of a computer with small hand gestures. Distant pointing is realized using a 3D hand pointing recognition algorithm that obtains the position and direction of the pointing hand. We show the effectiveness of the system by constructing three types of user interfaces that take into account the accuracy of distant pointing in the current system. We created a tile layout interface for rough selection operations, a pie menu interface for detailed operations, and a viewer interface for document browsing.
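
As a simplified illustration of the geometry involved (a sketch only, not the authors' recognition algorithm), the pointed-at location can be obtained by intersecting the recovered pointing ray with the screen plane:

import numpy as np

def intersect_ray_with_screen(hand_pos, direction, screen_origin, screen_normal):
    """Return the 3D point where the pointing ray hits the screen plane, or None."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    n = np.asarray(screen_normal, dtype=float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-6:    # ray is parallel to the screen
        return None
    t = np.dot(n, np.asarray(screen_origin, dtype=float) - np.asarray(hand_pos, dtype=float)) / denom
    return None if t < 0 else np.asarray(hand_pos, dtype=float) + t * d

# Example: hand 2 m in front of a screen lying in the z = 0 plane, pointing slightly downwards.
hit = intersect_ray_with_screen([0.1, 1.2, 2.0], [0.0, -0.1, -1.0], [0, 0, 0], [0, 0, 1])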

Famous Deaths

Marcel van Brakel

As you are shoved into one of the four mortuary freezers that make up the Famous Deaths installation, you enter a reconstruction of the final moments of famous celebrities by means of sound and scent. You smell Princess Diana's fatal destiny, John F. Kennedy's tragedy, Kaddafi's desperate flight into a sewer and Whitney Houston's fatal sob. We all know the images of that open car slowly driving through the streets of Dallas, the President happily waving to the crowd. And then, those few fatal shots. What must it have been like to be near that car? Famous Deaths is an innovative form of documentary storytelling in which you experience these intimate moments as a smell tableau from a first-person perspective. You smell an autumn wind, the grass, the leather car seats, Jackie Kennedy's perfume, exhaust fumes mingled with the somewhat musty scent of that limousine, and then suddenly the penetrating scent of blood, brains and gunpowder drilling its way into your nostrils.

Floating Widgets: Interaction with Acoustically-Levitated Widgets

Euan Freeman, Ross Anderson, Carl Andersson, Julie Williamson, Stephen Brewster

Acoustic levitation enables new types of human-computer interface, where the content that users interact with is made up of small objects held in mid-air. We show that acoustically-levitated objects can form mid-air widgets that respond to interaction. Users can interact with them using in-air hand gestures, and sound and widget movement provide feedback about the interaction.

FootStriker - A Wearable EMS-based Foot Strike Assistant for Running

Florian Daiber, Frederik Wiehr, Felix Kosmalla, Antonio Krüger

Today, ambitious amateur athletes often do not have access to professional coaching but still invest great effort in becoming faster runners. Apart from a pure increase in quantitative training load, a change of running technique, e.g. transitioning from heel striking to mid-/forefoot running, can be highly effective and usually prevents knee-related injuries. In this demo, we present a self-contained wearable that detects heel striking while running using a pressure-sensitive insole. Heel striking is corrected in real time towards mid-/forefoot running by applying electrical muscle stimulation (EMS) to the calf muscle. We further discuss potential scenarios for EMS-based training in interactive spaces. The device will be worn and demonstrated by the presenter and, if possible, can also be tested directly by conference attendees.
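
The detection side can be thought of as simple thresholding on the insole readings; the sketch below is a rough illustration under assumed sensor and EMS interfaces (read_insole, trigger_ems and the threshold value are invented stand-ins), not the authors' implementation:

HEEL_THRESHOLD = 0.6   # assumed normalized pressure; would be tuned per runner in practice

def is_heel_strike(heel_pressure, forefoot_pressure, threshold=HEEL_THRESHOLD):
    """Assume a heel strike when the heel loads strongly and more than the forefoot."""
    return heel_pressure > threshold and heel_pressure > forefoot_pressure

def control_loop(read_insole, trigger_ems):
    """read_insole() -> (heel, forefoot) pressures; trigger_ems(duration_ms) drives the stimulator."""
    while True:
        heel, forefoot = read_insole()
        if is_heel_strike(heel, forefoot):
            # Stimulate the calf so the next foot strike shifts towards the mid-/forefoot.
            trigger_ems(duration_ms=150)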

GooseBumps: Towards Sensory Substitution Using Servo Motors

Hrishi Olickel, Parag Bhatnagar, Aaron Ong, Simon Tangi Perrault

GooseBumps is a wearable device that enables users to obtain spatial information through haptic feedback delivered by servo motors. By rotating the motors, spatial information on a single axis can be encoded and processed by the user in real time. In this demonstration, participants will be trained to use the system to play a simple driving game similar to Out Run [2] while blindfolded. Our system thus provides sensory substitution, where visual information (such as car position and road orientation) is encoded as haptic information. GooseBumps has multiple applications, from virtual reality to assisting visually impaired people. This demo, in a reduced form, was previously presented at the 2016 HacknRoll competition, where it was awarded first prize.
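
As a minimal sketch of the single-axis encoding (illustrative only; the value ranges are assumptions, not the actual GooseBumps mapping), a lateral offset such as the car's position on the road can be mapped linearly to a servo angle:

def position_to_servo_angle(offset, max_offset=1.0, min_angle=0.0, max_angle=180.0):
    """Map a lateral offset in [-max_offset, max_offset] to a servo angle in degrees."""
    offset = max(-max_offset, min(max_offset, offset))       # clamp to the valid range
    normalized = (offset + max_offset) / (2 * max_offset)    # 0 (far left) .. 1 (far right)
    return min_angle + normalized * (max_angle - min_angle)

# Example: the car drifts slightly right of the road centre.
angle = position_to_servo_angle(0.25)   # -> 112.5 degrees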

Haptics and Directional Audio Using Acoustic Metasurfaces

Louis Jackowski-Ashley, Gianluca Memoli, Mihai Caleap, Nicolas Slack, Bruce Drinkwater, Sriram Subramanian

The ability to control acoustic fields offers many possible applications in loudspeaker design, ultrasound imaging, medical therapy, and acoustic levitation. Sound waves are currently shaped using phased array systems, but the complex electronics required are expensive and hinder widespread use. Here we show how to control, direct, and manipulate sound using 2-dimensional, planar, acoustic metasurfaces that require only one driving signal. This offers advantages in ease of use and versatility over currently available phased arrays. We demonstrate the creation of a haptic sensation and the steering of a beam produced by a parametric speaker. This simple, yet highly effective, method of creating single-beam manipulators could be introduced in medical or manufacturing applications.
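
For intuition only (a back-of-the-envelope sketch, not the authors' metasurface design, where the delays are encoded in the surface geometry rather than computed at run time), the per-element phase delay needed to steer a beam by an angle theta follows the standard phased-array relation phi(x) = -(2*pi/lambda) * x * sin(theta):

import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def steering_phases(n_elements, pitch_m, frequency_hz, steer_deg):
    """Phase delay (radians, wrapped to 2*pi) per element to steer a beam off the surface normal."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    k = 2 * math.pi / wavelength
    return [(-k * i * pitch_m * math.sin(math.radians(steer_deg))) % (2 * math.pi)
            for i in range(n_elements)]

# Example: a 16-element row of an ultrasonic surface at 40 kHz, steered 20 degrees.
phases = steering_phases(16, pitch_m=0.005, frequency_hz=40_000, steer_deg=20)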

Illuminated Interactive Paper with Multiple Input Modalities for Form Filling Applications

Konstantin Klamka, Wolfgang Büschel, Raimund Dachselt

In this paper, we demonstrate IllumiPaper: a system that provides new forms of paper-integrated visual feedback and enables multiple input channels to enhance digital paper applications. We aim to take advantage of traditional form sheets, including their haptic qualities, simplicity, and archivability, while simultaneously integrating rich digital functionalities such as dynamic status queries, real-time notifications, and visual feedback for widget controls. Our approach builds on emerging, novel paper-based technologies. We describe a fabrication process that allows us to directly integrate segment-based displays, touch and flex sensors, as well as digital pen input on the paper itself. With our fully functional research platform, we demonstrate an interactive prototype for an industrial form-filling maintenance application for servicing computer networks, covering a wide range of typical paper-related tasks.

iVoLVER: a Visual Language for Constructing Visualizations from In-the-Wild Data

Miguel Nacenta, Gonzalo Mendez

iVoLVER, the Interactive Visual Language for Visualization Extraction and Reconstruction, is a web-based pen-and-touch interface that graphically supports the construction of interactive visualizations. iVoLVER is designed to enable the extraction of data from different types of artifacts (e.g., pictures of the real world) and the use of those data to generate new, original representations. People can create visualizations from data that is not structured in traditional formats, without the need for textual programming or sitting at a desk. This demonstration shows how iVoLVER visualizations are constructed and also demonstrates the possible uses of iVoLVER in several contexts.

LeviSpace: Augmenting the Space above Displays with Levitated Particles

Asier Marzo, Sriram Subramanian, Bruce Drinkwater

The screens of our laptops or the white surface of a projector screen are common examples of displays that we use in our daily lives to visualize information. Here, we propose to use levitated particles above these displays to enrich the presented information. By means of acoustic or magnetic levitation, we show that it is possible to move a particle in mid-air in the space above these displays. We present different configurations that are feasible with current technology, as well as use cases. In this demo, visitors will be able to explore and interact with various levitators integrated with traditional displays.

Mid-Air Haptics for Supernatural Experiences in VR

UltraHaptics

From the moment we can reach out, touch is key to how we explore the world around us. But what if it could transport us to a different world? A world of magic that we can feel with our bare hands. In this demo we bring the sense of touch to VR spaces, like never before, enabling users to cast spells, summon powers, and actually feel what it’s like to be a wizard. All without the need of a wand.

Paper for E-Paper: Towards Paper Like Tangible Experience using E-Paper

Gavin Bailey, Deepak Ranjan Sahoo, Matt Jones

Our work presents a method to use paper as an input device while reading on a mobile device, where the user turns a physical page in the real world in order to turn a page in the digital world. Our goal in this work is to replicate the feedback and affordances one would receive from a printed book on a mobile device, where, to fully replicate the reading experience, the user would need to turn pages as they would naturally with a printed book. Through a small study we discovered a number of ways that pages are often turned, and these techniques became vital to the project. We describe a prototype device that uses paper as an input device, with transparent electrodes and bend sensors embedded in its pages, so that the turning and bending of pages can be digitally detected and addressed. The prototype is able to detect the page turns and bends made by the user, as well as the state of each page. We go on to discuss how this device could be used as a general input device, using the web as an example.
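
A rough sketch of how per-page sensing might be interpreted (illustrative only; the sensor values, threshold and state names below are assumptions rather than the prototype's actual logic):

BEND_THRESHOLD = 0.3   # assumed normalized bend-sensor value indicating a page being folded over

def classify_page(bend_value, contact_closed, threshold=BEND_THRESHOLD):
    """Classify one page's state from its bend-sensor and electrode-contact readings."""
    if contact_closed:
        return "flat"       # page lying against its neighbour
    if bend_value > threshold:
        return "turning"    # page lifted and bent enough to count as a turn
    return "lifted"

def book_state(readings):
    """readings: list of (bend_value, contact_closed) tuples, one per page, front to back."""
    return [classify_page(bend, contact) for bend, contact in readings]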

PaperNinja: Using Bend Gestures as Around Device Interaction for Mobile Games

Elias Fares, Victor Cheung, Audrey Girouard

Bend gestures can be used as a form of Around Device Interaction to address usability issues in touchscreen mobile devices. Yet, it is unclear whether bend gestures can be easily learned and memorized as a control scheme for games. To answer this, we built a novel deformable smartphone case that detects bend gestures at its corners and sides, and created PaperNinja, a mobile game that uses bends as input. We conducted a study comparing the effect of three pre-game training levels on learnability and memorability: no training, training of the bend gestures only, and training of both the bend gestures and their in-game action mapping. We found that including the gesture-action mapping positively impacted initial learning (faster completion times and fewer gestures performed), but had a similar outcome to no training on memorability, while gestures-without-mapping led to a negative outcome. Our findings suggest that players can learn bend gestures by discovery and that training is not essential.

Printerface: Screen Printed Electroluminescent Touch Interface

Charles Tyson Van de Zande

Printerface proposes a visual design technique to create customized, interactive icons that can be applied to paper or textile substrates. The icons are screen printed with electroluminescent (EL) and conductive inks. Research in the field of printed electronics traditionally uses either dot-matrix displays or multiple capacitive buttons for interaction. Current low-resolution interfaces limit customizability and user interaction. The Printerface demo suggests a new method that gives a designer visual freedom over low-resolution displays. The graphical nature of the screen printing process allows a series of communicative icons to follow a defined visual language. Special steps were taken during the design process to create icons that illuminate from a single layout of segments. A microcontroller illuminates specified segments of EL to display each icon of a music playback interface: play, pause, back, and skip (figure 1a). Each electrode of the display additionally acts as a capacitive sensor to interpret user interaction. The capabilities of this technology enable new notification and interaction possibilities for low-resolution, flexible interfaces.
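
As an illustrative sketch of driving such a display (the segment map, segment count and driver/sensor functions below are invented, not the Printerface hardware API), each icon corresponds to a fixed subset of EL segments, and the same electrodes are polled for capacitive touches:

# Hypothetical map from playback icons to the EL segments that form them.
ICON_SEGMENTS = {
    "play":  {0, 1, 2},
    "pause": {3, 4},
    "back":  {5, 6, 7},
    "skip":  {8, 9, 10},
}
N_SEGMENTS = 11

def show_icon(icon, set_segment):
    """Light only the segments belonging to one icon; set_segment(i, on) stands in for the EL driver."""
    active = ICON_SEGMENTS[icon]
    for seg in range(N_SEGMENTS):
        set_segment(seg, seg in active)

def touched_icon(read_capacitance, threshold=50):
    """Return the icon whose electrodes report a capacitance change above the threshold, if any."""
    for icon, segments in ICON_SEGMENTS.items():
        if any(read_capacitance(seg) > threshold for seg in segments):
            return icon
    return None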

Programmable Liquid Matter: 2D Shape Drawing of Liquid Metals by Dynamic Electric Field

Yutaka Tokuda, Jose Luis Berna Moya, Gianluca Memoli, Timothy Neate, Deepak Ranjan Sahoo, Simon Robinson, Jennifer Pearson, Matt Jones, Sriram Subramanian

We present programmable liquid matter that can dynamically transform its 2D shape into a variety of forms and present unique organic animations based on spatio-temporally controlled electric fields. We use EGaIn (a eutectic gallium-indium alloy) liquid metal as our smart liquid material, since it features superior electrical conductivity despite its liquid state and exhibits a high dynamic range of surface tension and 2D area, controlled by the strength and polarity of the applied voltage. Our liquid metal shape and motion control algorithms, based on dynamically patterned electric fields, realize path-tracing organic animation. We demonstrate an interactive 7x7 electrode array control system with a computer-vision-based GUI that enables novice users to physically draw alphabet letters and 2D shapes through the unique animatronics of liquid metals.
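
To illustrate the path-tracing idea at the level of the electrode grid (a toy sketch under assumed units and function names, not the authors' control algorithm), a drawn stroke can be quantized into a sequence of electrode activations that pulls the droplet along the path:

GRID_SIZE = 7   # the demo uses a 7x7 electrode array

def stroke_to_electrodes(points, cell_size_mm=10.0):
    """Quantize a drawn stroke of (x, y) positions in mm into a de-duplicated (row, col) sequence."""
    sequence = []
    for x, y in points:
        col = min(GRID_SIZE - 1, max(0, int(x // cell_size_mm)))
        row = min(GRID_SIZE - 1, max(0, int(y // cell_size_mm)))
        if not sequence or sequence[-1] != (row, col):
            sequence.append((row, col))
    return sequence

def animate(sequence, energize, dwell_s=0.2):
    """Energize one electrode at a time; energize(row, col) stands in for the voltage-switching hardware."""
    import time
    for row, col in sequence:
        energize(row, col)
        time.sleep(dwell_s)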

Proposal of Product Navigation Interface and Evaluation of Purchasing Motivation

Kazuki Osamura

We propose a spatial augmented reality system, "Touch de YEBISU Navi", that stimulates customers' willingness to purchase. The system builds on the knowledge that the purchase rate rises when a customer picks up a product, and provides guidance so that content about a product does not advance unless the customer picks it up. Furthermore, in order to lengthen the time a customer holds a product, the system can change the content according to the position of the product in the customer's hand. To evaluate the effectiveness of the system, a demonstration experiment was conducted at the YEBISU Memorial Museum Shop on 25th and 26th February 2017. As a result, we report that the degree of interest in the products improved and that the effect of stimulating purchasing willingness was shown.

SensArt Demo: A Multisensory Prototype for Engaging with Visual Art

Daniella Briotto Faustino, Sandra Gabriele, Rami Ibrahim, Anna-Lena Theus, Audrey Girouard

Typically, visits to modern art galleries or museums are characterized as visual experiences supported by text-based information describing the works of art. Our goal was to investigate the potential of providing a fuller and richer experience while viewing visual art by appealing to the senses beyond sight. We designed SensArt, a multisensory experience whereby someone viewing a painting received a translation of the art through a headset with music and a belt programmed with vibration patterns and changes in temperature.

Smart Home Control using Motion Matching and Smart Watches

David Verweij, Augusto Esteves, Saskia Bakker, Vassilis-Javed Khan

This paper presents a prototype of a smart home control system operated through motion matching input. In motion matching, targets move continuously in a singular and pre-defined path; users interact with these targets by tracking their movement for a short period of time. Our prototype captures user input through the motion sensors embedded in off-the-shelf smartwatches while users track the moving targets with their arms and hands. The wearable nature of the tracking system makes our prototype ideal for interaction with numerous devices in a smart home.
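
A simplified sketch of the motion-matching principle (illustrative only; the correlation measure, threshold and data layout are assumptions, not the prototype's implementation): the target whose on-screen trajectory correlates best with the recent wrist motion is selected.

import numpy as np

def best_matching_target(wrist_trace, target_traces, threshold=0.8):
    """Return the target whose motion correlates best with the wrist motion, or None.

    wrist_trace: recent samples of one wrist-motion component (e.g. angular velocity).
    target_traces: dict mapping target names to equally long sample traces of their motion.
    """
    best, best_r = None, threshold
    for name, trace in target_traces.items():
        r = np.corrcoef(wrist_trace, trace)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best

# Example: the wrist follows the lamp target far more closely than the blinds target.
t = np.linspace(0, 2 * np.pi, 100)
selected = best_matching_target(np.sin(t), {"lamp": np.sin(t), "blinds": np.cos(t)})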

Synaestheatre: Sonification of Vision using Synaesthetic Associations

Giles Hamilton-Fletcher, Jamie Ward

Technologies that harness their users' unconscious multisensory associations are faster to learn, more aesthetically appealing, and less cognitively demanding to use. These factors become increasingly important for applications that aim to convert large quantities of information from one sense into another. Relatedly, sensory substitution devices (SSDs) seek to convert visual information into sound in order to provide assistance for the visually impaired, as well as to provide new opportunities for multisensory art and sensory augmentation. Here we present the Synaestheatre, an SSD that turns 3D space, size, shape, and colour information into patterns of spatially distributed sounds varying in pitch, persistence and timbre, all updated in real time. The Synaestheatre's sonification method is informed by a combination of natural hearing processes and multisensory associations found in synaesthesia and, unconsciously, in the wider population. In combination, these produce easy-to-learn, responsive, and aesthetically appealing soundscapes. Here we show that novice users can rely on their intuitions to track the location, size, colour, and movement of visual objects through sound alone. Our demo showcases this through users listening to soundscape changes as arrangements of coloured objects are manipulated in front of the Synaestheatre's camera.
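
As a loose illustration of this kind of visual-to-sound mapping (the specific mappings and ranges below are invented for illustration and are not the Synaestheatre's actual scheme), one detected object might be sonified as follows:

def sonify_object(azimuth_deg, depth_m, height_norm, hue_deg):
    """Map one detected object to illustrative sound parameters.

    azimuth -> stereo pan, depth -> loudness, height -> pitch, hue -> timbre family.
    """
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))          # -1 = far left, +1 = far right
    loudness = max(0.0, min(1.0, 1.0 / (1.0 + depth_m)))   # nearer objects sound louder
    pitch_hz = 220.0 * (2.0 ** (2.0 * height_norm))        # two octaves across the image height
    timbre = int(hue_deg // 60) % 6                        # six timbre families around the hue circle
    return {"pan": pan, "loudness": loudness, "pitch_hz": pitch_hz, "timbre": timbre}

# Example: a red object slightly to the right, one metre away, near the top of the frame.
params = sonify_object(azimuth_deg=20, depth_m=1.0, height_norm=0.9, hue_deg=0)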

TastyFloats: A Contactless Food Delivery System

Chi Thanh Vi, Asier Marzo, Damien Ablart, Gianluca Memoli, Sriram Subramanian, Bruce Drinkwater, Marianna Obrist

We present two realizations of TastyFloats, a novel system that uses acoustic levitation to deliver food morsels to the user's tongue. To explore TastyFloats' associated design framework, we first address the technical challenges of successfully levitating and delivering different types of food onto the tongue. We then conduct a user study assessing the effect of acoustic levitation on users' taste perception, comparing three basic taste stimuli (i.e., sweet, bitter and umami) and three droplet volumes (5µL, 10µL and 20µL). Our results show that users perceive sweet and umami easily, even in minimal quantities, whereas bitter is the least detectable taste, despite its typical association with an unpleasant taste experience. Our results are a first step towards the creation of new culinary experiences and innovative gustatory interfaces.

Typing on a Smartwatch for Smart Glasses

Sunggeun Ahn, Seongkook Heo, Geehyuk Lee

While smart glasses make information more accessible in mobile scenarios, entering text on these devices is still difficult. In this paper, we suggest using a smartwatch as an indirect input device (not requiring visual attention) for smart glasses text entry. With the watch-glasses combination, users neither need to lift the arm to touch the glasses nor carry a special external input device. To prove the feasibility of the suggested combination, we implemented two text entry methods: a modified version of SwipeBoard, which we adapted for the suggested combination, and HoldBoard, which we newly designed and implemented specifically for it. We evaluated the performance of the two text entry methods through two user studies and show that they are faster than the prior art for smart glasses text entry in a seated condition. A further study showed that they are also competitive with the prior art in a walking condition.