ISS 2022

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Number ISS, November 20–23, 2022, Wellington, New Zealand

ISS – Journal Issue


Frontmatter

Title Page


Editorial Message
It is our great pleasure to welcome you to this issue of the Proceedings of the ACM on Human-Computer Interaction, the third to focus on contributions from the Interactive Surfaces and Spaces (ISS) research community. Interactive surfaces and spaces increasingly pervade our everyday lives, appearing in various sizes, shapes, and application contexts, and offering a rich variety of ways to interact. This diverse research community explores the design, development, and use of new and emerging interactive surface technologies and interactive spaces.

Papers

Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
Daniel Immanuel Fink, Johannes Zagermann, Harald Reiterer, and Hans-Christian Jetter
(University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany)
Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited because physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The core idea is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration, such as spatial awareness and spatial referencing, by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduces attributes of co-located collaboration such as spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR.

Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction
Hanae Rateau, Edward Lank, and Zhe Liu
(University of Waterloo, Waterloo, Canada; Inria, Lille, France; Huawei, Markham, Canada)
The proliferation of smart wearables means that designers can now explore novel ways for end-users to use devices in combination. In this paper, we explore the gestural input enabled by smart earbuds coupled with a proximal smartwatch. Through an elicitation study, we identify a consensus set of gestures and a taxonomy of the types of gestures participants create. In a follow-on study conducted on Amazon's Mechanical Turk, we explore the social acceptability of gestures enabled by watch+earbud gesture capture. While elicited gestures continue to be simple, discrete, in-context actions, we find that elicited input is frequently abstract, varies in size and duration, and is split almost equally between on-body, proximal, and more distant actions. Together, our results provide guidelines for on-body, near-ear, and in-air input using earbuds and a smartwatch to support gesture capture.

Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek
(Ryerson University, Toronto, Canada)
In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks.

Theoretically-Defined vs. User-Defined Squeeze Gestures
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu
(Université Catholique de Louvain, Louvain-la-Neuve, Belgium; University of Kinshasa, Kinshasa, Congo)
This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object, defining a multi-dimensional taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study yielding a set of N=32 participants × 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (systematically exploring a design space of potential squeeze gestures) and ended with an empirical analysis (conducting a gesture elicitation study afterward): intersecting the results from these two sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects.

Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality
Futian Zhang, Keiko Katsuragawa, and Edward Lank
(University of Waterloo, Waterloo, Canada; National Research Council, Waterloo, Canada; University of Lille, Lille, France)
Pointing is an elementary interaction in virtual and augmented reality environments, and, to effectively support selection, techniques must deal with the challenges of occlusion and depth specification. Most previous techniques require two explicit steps to handle occlusion. In this paper, we propose Conductor, an intuitive plane-ray intersection-based 3D pointing technique in which users leverage bimanual input to control a ray and an intersecting plane. Conductor allows users to adjust the cursor distance on the ray with the non-dominant hand while pointing with the dominant hand. We evaluate Conductor against RayCursor, a state-of-the-art VR pointing technique, and show that Conductor outperforms RayCursor for selection tasks. Given our results, we argue that bimanual selection techniques merit additional exploration to support object selection and placement within virtual environments.

UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint
Amani Alkayyali, Yasha Iravantchi, Jaylin Herskovitz, and Alanson P. Sample
(University of Michigan, Ann Arbor, USA)
Pervasive and interactive displays promise to present our digital content seamlessly throughout our environment. However, traditional display technologies do not scale to room-wide applications due to high per-unit-area costs and the need for constant wired power and data infrastructure. This research proposes the use of photochromic paint as a display medium. Applying the paint to any surface or object creates ultra-low-cost displays, which change color when exposed to specific wavelengths of light. We develop new paint formulations that enable wide-area application of photochromic material, along with a specially modified wide-area laser projector and depth camera that can draw custom images and create on-demand, room-wide user interfaces on photochromic-enabled surfaces. System parameters such as light intensity, material activation time, and user readability are examined to optimize the display. Results show that images and user interfaces can last up to 16 minutes and can be updated indefinitely. Finally, usage scenarios such as displaying static and dynamic images, ephemeral notifications, and the creation of on-demand interfaces, such as light switches and music controllers, are demonstrated and explored. Ultimately, the UbiChromics system demonstrates the possibility of extending digital content to all painted surfaces.

HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone
Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura
(Tohoku University, Sendai, Japan)
We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that works with a single naturally held smartphone, without any sensors or markers installed in the environment. Our technique simultaneously employs the smartphone's front and rear cameras: the front camera estimates the user's gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization against a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user's 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique's positional accuracy relative to the gaze target under four conditions, based on combinations of use with and without the depth sensor and calibration. The results show that our calibration method reduced the mean absolute error of the gaze point by 27%, with an error of 0.53 m when using the depth sensor. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications such as a gaze-based guidance application for museums.
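
The calibration idea lends itself to a compact illustration. The sketch below is hypothetical code, not the authors' implementation: the function names, the small-angle vector averaging, and the use of numpy are all assumptions. It estimates a constant per-user offset between the estimated head direction and the true gaze direction from a few fixations on known targets, then applies it to later estimates:

```python
# Illustrative sketch of a per-user head-to-gaze offset calibration.
# The averaging scheme and all names are assumptions for exposition.
import numpy as np

def calibrate_offset(head_dirs: np.ndarray, target_dirs: np.ndarray) -> np.ndarray:
    """Average direction delta over calibration samples.

    head_dirs, target_dirs: (n, 3) arrays of unit vectors captured while
    the user fixates known targets.
    """
    delta = target_dirs - head_dirs   # small-angle approximation of the offset
    return delta.mean(axis=0)

def apply_offset(head_dir: np.ndarray, offset: np.ndarray) -> np.ndarray:
    gaze = head_dir + offset          # corrected gaze direction
    return gaze / np.linalg.norm(gaze)  # renormalize to a unit vector
```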

Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara
(University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore)
This research aimed to investigate how children with autism interact with rich audio and visual augmented reality (AR) tabletop games. Based on an in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in the games were rewarding and played a critical role in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions, with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research.

TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone
Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura
(Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK)
We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input, combining two force directions (pressure force and shear force) and two force-applied surfaces (touch surface and back surface), in a single device. Force detection is achieved using the smartphone's built-in magnetometer (supplemented by the accelerometer and gyroscope) to estimate the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate fundamental user performance with our interface and demonstrated that input was detected as intended with an average success rate of 97.4% across all four input types. We further conducted an ideation workshop with people involved in human-computer interaction (N=12) to explore possible applications of the interface, obtaining 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and can improve intuitiveness through elastic feedback.

Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif
(University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada)
This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, Hover & Select) in a Fitts' law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error-prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users' spatial awareness. In particular, Push augmented with Hover & Select feedback is comparable to Tap. Moreover, participants perceive the selection methods as faster, more accurate, and more physically and cognitively comfortable with haptic feedback.
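
For readers unfamiliar with the methodology, Fitts' law experiments of this kind conventionally model movement time by target distance and width and report throughput as a combined speed-accuracy measure. The Shannon formulation below is the standard one used in such evaluations, not a result of this paper:

```latex
% Shannon formulation of Fitts' law: movement time MT as a function of
% target distance D and width W, with empirical constants a and b.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
% Throughput combines speed and accuracy via effective measures
% (W_e is derived from the observed endpoint spread):
TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D_e}{W_e} + 1\right)
```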

A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?
Jordan Aiko Deja, Sven Mayer, Klen Čopič Pucihar, and Matjaž Kljun
(University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines; LMU Munich, Munich, Germany)
Humans have been developing and playing musical instruments for millennia, and with technological advancements, instruments have become ever more sophisticated. In recent decades, computer-supported innovations have also been introduced in hardware design, usability, and aesthetics. One of the most commonly digitally augmented instruments is the piano. Besides electronic keyboards, several prototypes have augmented pianos with projections providing various levels of interactivity on and around the keyboard in order to support piano players. However, it is still unclear whether these solutions support the learning process. In this paper, we present a systematic review of augmented piano prototypes focused on instrument learning, organized around four themes derived from interviews with piano experts to better understand the problems of teaching the piano. These themes are (i) synchronised movement and body posture, (ii) sight-reading, (iii) ensuring motivation, and (iv) encouraging improvisation. We found that prototypes are saturated on the synchronisation theme, while opportunities remain in the sight-reading, motivation, and improvisation themes. We conclude by presenting recommendations for augmenting piano systems to enrich the piano learning experience, as well as possible directions to expand knowledge in the area.

Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper
Cuauhtli Campos, Matjaž Kljun, and Klen Čopič Pucihar
(University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia)
Its ubiquitous nature, versatility, and durability have enabled paper to maintain its importance in the digital age. It is therefore not surprising that there have been numerous attempts to combine paper with digital content. One way to do so is to place paper on a horizontal interactive display (e.g., a tabletop or tablet). The paper thus becomes "the screen" on which the digital content is viewed, yet it also acts as a barrier that degrades the quality of the perceived image. This research addresses this problem by proposing and evaluating a novel paper display concept called Dynamic Pinhole Paper. The concept is based on perforating the paper (to decrease its opacity) and moving digital content beneath the perforated area (to increase the resolution). To evaluate this novel concept, we fabricated the pinhole paper and implemented the software needed to run multiple user studies exploring the concept's viability, optimal movement trajectory (amount, direction, and velocity), and the effect of perforation on printing, writing, and reading. The results show that moving the digital content is a highly effective strategy for improving the resolution perceived through the perforation, and that the optimal velocity is independent of trajectory direction (e.g., horizontal or circular) and amount of movement. Results also show the concept is viable on off-the-shelf hardware and that it is possible to write and print on perforated paper.

XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling
(University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA)
Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration; however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit and also provides a set of complementary visual authoring tools that let developers preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments.

Eliciting User-Defined Touch and Mid-air Gestures for Co-located Mobile Gaming
Chloe Ng and Nicolai Marquardt
(University College London, London, UK)
Many interaction techniques have been developed to best support mobile gaming, but the resulting gestures and techniques might not always match user behaviour or preferences. To inform this design space of gesture input for co-located mobile gaming, we present insights from a gesture elicitation study for touch and mid-air input, focusing specifically on board and card games due to the materiality of game artefacts and the rich interaction between players. We obtained touch and mid-air gesture proposals for 11 game tasks from 12 dyads and gained insights into user preferences. We contribute a classification and analysis of 622 elicited gestures (showing more collaborative gestures in the mid-air modality), a resulting consensus gesture set, and agreement rates showing higher consensus for touch gestures. Furthermore, we identified interaction patterns, such as the benefits of situational awareness, social etiquette, gestures fostering interaction between players, and the roles of gestures in providing fun, excitement, and suspense to the game, which can inform future games and gesture design.
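
The agreement rates mentioned here are conventionally computed with Vatavu and Wobbrock's agreement rate measure; assuming this study follows that standard formula, for a referent r whose proposal set P is partitioned into groups of identical proposals P_i:

```latex
% Agreement rate for a referent r (Vatavu & Wobbrock's standard measure):
% P is the set of all proposals elicited for r, and the P_i are its groups
% of identical proposals.
AR(r) = \frac{|P|}{|P|-1} \sum_{P_i \subseteq P} \left(\frac{|P_i|}{|P|}\right)^{2} - \frac{1}{|P|-1}
```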

Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing
Laxmi Pandey and Ahmed Sabbir Arif
(University of California, Merced, USA)
We investigate silent speech as a hands-free selection method for eye-gaze pointing. We first propose a stripped-down image-based model that can recognize a small number of silent commands almost as fast as state-of-the-art speech recognition models. We then compare it with other hands-free selection methods (dwell, speech) in a Fitts' law study. Results revealed that speech and silent speech are comparable in throughput and selection time, but the latter is significantly more accurate than the other methods. A follow-up study revealed that target selection around the center of a display is significantly faster and more accurate, while selection around the top corners and the bottom is slower and more error-prone. We then present a method for selecting menu items with eye-gaze and silent speech. A study revealed that it significantly reduces task completion time and error rate.

Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister
(University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK)
In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale but might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application that combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, show how far we can already get, and identify the challenges that remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research.

TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone
Ghazal Zand, Yuan Ren, and Ahmed Sabbir Arif
(University of California, Merced, USA)
Mobile clients for telepresence robots are cluttered with interactive elements that either leave little room for the camera feeds or occlude them. Many do not provide meaningful feedback on the robot's state, and most require the use of both hands. This makes maneuvering telepresence robots with mobile devices difficult. TiltWalker enables controlling a telepresence robot with one hand using tilt gestures on a smartphone. In a series of studies, we first justify the use of a Web platform, determine how far and how fast users can tilt without compromising comfort or the legibility of the display content, and identify a velocity-based function well-suited for control-display mapping. We refine TiltWalker based on the findings of these studies, then compare it with a default method in the final study. Results revealed that TiltWalker is significantly faster and more accurate than the default method. Moreover, participants preferred TiltWalker's interaction methods and graphical feedback significantly over those of the default method.
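
To make "velocity-based function" concrete, the sketch below shows one plausible shape for such a control-display mapping. The dead zone, saturation angle, maximum speed, and quadratic easing are illustrative assumptions, not TiltWalker's actual parameters:

```python
# Hypothetical velocity-based transfer function mapping phone tilt to robot speed.
# All constants are assumed values for illustration.

def tilt_to_velocity(tilt_deg: float,
                     dead_zone_deg: float = 5.0,
                     max_tilt_deg: float = 30.0,
                     max_velocity: float = 0.6) -> float:
    """Map a tilt angle (degrees) to a signed robot velocity (m/s)."""
    magnitude = abs(tilt_deg)
    if magnitude < dead_zone_deg:
        return 0.0  # ignore hand tremor near the neutral pose
    # Normalize to [0, 1] between the dead zone and the saturation angle.
    t = (min(magnitude, max_tilt_deg) - dead_zone_deg) / (max_tilt_deg - dead_zone_deg)
    # Quadratic easing: fine control at small tilts, speed at large tilts.
    velocity = max_velocity * t ** 2
    return velocity if tilt_deg >= 0 else -velocity

if __name__ == "__main__":
    for angle in (2.0, 10.0, 20.0, 35.0, -15.0):
        print(f"{angle:+.0f} deg -> {tilt_to_velocity(angle):+.3f} m/s")
```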

LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals
Cuauhtli Campos, Matjaž Kljun, Jakub Sandak, and Klen Čopič Pucihar
(University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia; InnoRenew CoE, Isola, Slovenia)
Despite the drive to digitise learning, paper still holds a prominent role within educational settings. While computational devices have several advantages over paper (e.g., changing and showing content based on user interaction and needs), their prolonged or incorrect usage can hinder educational achievements. In this paper, we combine the interactivity of computational devices with paper whilst reducing the use of technology to a minimum. To this end, we developed and evaluated a novel back-print illumination paper display called LightMeUp, in which different information printed on the back side of the paper becomes visible when the paper is placed on an interactive display and back-illuminated with a particular colour. To develop this novel display, we first built a display simulator that enables the simulation of various spectral characteristics of the elements used in the system (i.e., light sources such as tablet computers, paper types, and printing inks). Using our simulator, we designed various use-case prototypes that demonstrate the capabilities and feasibility of the proposed system. With our simulator and the presented use cases, educators and educational content designers can easily design multi-stable interactive visuals using readily available paper, printers, and touch displays.

Investigating the Use of AR Glasses for Content Annotation on Mobile Devices
Francesco Riccardo Di Gioia, Eugenie Brasier, Emmanuel Pietriga, and Caroline Appert
(Université Paris-Saclay, Orsay, France; CNRS, Orsay, France; Inria, Orsay, France)
Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting and to showcase how it could support real-world use cases.

VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos
Cheng Yao Wang, Qian Zhou, George Fitzmaurice, and Fraser Anderson
(Cornell University, Ithaca, USA; Autodesk Research, Toronto, Canada)
We present VideoPoseVR, a video-based animation authoring workflow that uses online videos to author character animations in VR. It leverages state-of-the-art deep learning approaches to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import the videos, search the dataset, modify the motion timeline, and combine multiple motions from videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach as well as gather initial feedback on the prototype. The study results suggest that VideoPoseVR was easy for novice users to learn and enabled rapid prototyping for applications such as entertainment, skills training, and crowd simulations.

Effects of Display Layout on Spatial Memory for Immersive Environments
Jiazhou Liu, Arnaud Prouzeau, Barrett Ens, and Tim Dwyer
(Monash University, Melbourne, Australia; Inria, Bordeaux, France)
In immersive environments, positioning data visualisations around the user in a wraparound layout has been advocated as advantageous over the flat arrangements more typical of traditional screens. However, other than limiting the distance users must walk, there is no clear design rationale behind this common practice, and little research on the impact of wraparound layouts on visualisation tasks. The ability to remember the spatial location of elements of visualisations within the display space is crucial to support visual analytical tasks, especially those that require users to shift their focus or perform comparisons. This ability is influenced by the user's spatial memory, but how spatial memory is affected by different display layouts remains unclear. In this paper, we report two user studies evaluating the effects of three layouts with varying degrees of curvature around the user (flat-wall, semicircular-wraparound, and circular-wraparound) on a visuo-spatial memory task in a virtual environment. The results show that participants are able to recall spatial patterns with greater accuracy and report more positive subjective ratings with the flat layout than with the circular-wraparound layout. While we did not find significant performance differences between the flat and semicircular-wraparound layouts, participants overwhelmingly preferred the semicircular-wraparound layout, suggesting it is a good compromise between the two extremes of display curvature.

Reducing the Latency of Touch Tracking on Ad-hoc Surfaces
Neil Xu Fan and Robert Xiao
(University of British Columbia, Vancouver, Canada)
Touch sensing on ad-hoc surfaces has the potential to transform everyday surfaces in the environment (desks, tables, and walls) into tactile, touch-interactive surfaces, creating large, comfortable interactive spaces without the cost of large touch sensors. Depth sensors are a promising way to provide touch sensing on arbitrary surfaces, but past systems have suffered from high latency and poor touch detection accuracy. We apply a novel state-machine-based approach to analyzing touch events, combined with a machine-learning approach that predictively classifies touch events from depth data with lower latency and higher touch accuracy than previous approaches. Our system reduces end-to-end touch latency to under 70 ms, comparable to conventional capacitive touchscreens. Additionally, we open-source our dataset of over 30,000 touch events recorded in depth, infrared, and RGB for the benefit of future researchers.
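
As a generic illustration of the state-machine part of such a pipeline (a minimal sketch with assumed thresholds, not the authors' implementation, which additionally uses machine-learning classification), hysteresis between separate touch-down and touch-up distance thresholds keeps depth noise from producing spurious events:

```python
# Minimal sketch of a hysteresis-based touch state machine for depth sensing.
# States and threshold values are illustrative assumptions.

HOVER, TOUCH = "hover", "touch"

TOUCH_DOWN_MM = 5.0   # finger closer than this to the surface counts as contact
TOUCH_UP_MM = 10.0    # finger must retreat past this before we call it a release

class TouchTracker:
    def __init__(self):
        self.state = HOVER

    def update(self, finger_surface_distance_mm: float):
        """Feed one frame's fingertip-to-surface distance; return an event or None."""
        if self.state == HOVER and finger_surface_distance_mm < TOUCH_DOWN_MM:
            self.state = TOUCH
            return "touch_down"
        if self.state == TOUCH and finger_surface_distance_mm > TOUCH_UP_MM:
            self.state = HOVER
            return "touch_up"
        return None

tracker = TouchTracker()
for d in (12.0, 8.0, 4.0, 3.0, 9.0, 11.0):  # simulated per-frame distances (mm)
    event = tracker.update(d)
    if event:
        print(event)  # -> touch_down, then touch_up
```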

Extended Mid-air Ultrasound Haptics for Virtual Reality
Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla
(LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany)
Mid-air ultrasound haptics allows bare-hand tactile stimulation; however, its constrained workspace makes it unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array that follows the user's hand. We used a 6-DoF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three evaluations. First, we performed a technical system evaluation, showcasing the feasibility of such a system. Next, we conducted three psychophysical experiments, showing with high likelihood that the motion does not affect the user's perception. Lastly, we explored seven use cases that showcase our system's potential in a user study. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. With our system, we thus contribute to general mid-air haptic feedback on a large scale.

Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results
Hiroki Usuba, Shota Yamanaka, Junichi Sato, and Homei Miyashita
(Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan)
We propose a method that predicts the success rate of pointing at 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, saving costs for researchers and practitioners. We verified the method through two experiments: one laboratory-based and one crowdsourced. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases prediction accuracy. In the crowdsourced experiment, the method scored better than using 2D task results. Thus, we recommend that researchers choose the appropriate method depending on the situation.
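
One plausible formalization of how 1D results can predict 2D success (an illustrative assumption; the paper's exact model may differ) treats horizontal and vertical touch errors as independent, so the probability of hitting a W × H rectangle is the product of the per-axis success rates measured in the two 1D tasks:

```latex
% P_W: success rate observed in the 1D vertical-bar task (horizontal error only),
% P_H: success rate observed in the 1D horizontal-bar task (vertical error only).
% Assuming the two error components are independent:
P_{2D}(W, H) \approx P_W(W) \cdot P_H(H)
```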

Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais
(Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France)
The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but particularly on the special education system. Indeed, in the case of people with visual impairments (VI), the regular tools, which rely heavily on images and videos, were no longer usable. This situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties this population encounters when learning remotely, or on the current practices of special education teachers. Such a lack of understanding limits the development of well-adapted remote teaching systems. In this paper, we report an online survey of the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers who faced teaching issues during the lockdown period. We followed an iterative design process in which we designed successive low-fidelity prototypes to drive successive focus groups. We contribute an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI.

NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe
(City University of Hong Kong, Hong Kong, China; Microsoft Research, Redmond, USA; University of Maryland, College Park, USA; NAVER AI Lab, Seongnam, South Korea)
Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data.

AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli
(City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA)
3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touchscreens. We present AngleCAD, a set of novel interaction techniques that allows users to view and navigate a 3D space through folded screens and to modify 3D objects using the physical support of the touchscreens and their folding angles. The design of these techniques was inspired by woodworking practices: surface-based operations allow users to cut, snap, and taper objects directly with the touchscreen, and to extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience.

Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander
(University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK)
One of the most fundamental interactions, pointing, is well understood on flat surfaces. However, our understanding of pointing performance on tangible surfaces with physical targets remains limited for Tangible User Interfaces (TUIs). We investigate the effect of a target's physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r² = 0.954). Analysis shows that movement direction and height should be included as parameters in this model to generalize it to 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need further investigation to understand how performance with tangible objects is affected.

The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths
Shota Yamanaka, Hiroki Usuba, Wolfgang Stuerzlinger, and Homei Miyashita
(Yahoo, Tokyo, Japan; Yahoo, Chiyoda-ku, Japan; Simon Fraser University, Vancouver, Canada; Meiji University, Tokyo, Japan)
Models of lassoing time to select multiple square icons exist, but realistic lasso tasks also typically involve encircling non-rectangular objects. Thus, it is unclear if we can apply existing models to such conditions where, e.g., the width of the path that users want to steer through changes dynamically or step-wise. In this work, we conducted two experiments where the objects were non-rectangular, with path widths that narrowed or widened, smoothly or step-wise. The results showed that the baseline models for pen-steering movements (the steering and crossing law models) fitted the timing data well, but also that segmenting width-changing areas led to significant improvements. Our work enables the modeling of novel UIs requiring continuous strokes, e.g., for grouping icons.
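
For context, the baseline steering law referenced here models steering time through a path as proportional to an index of difficulty integrating the inverse path width; segmenting a width-varying path then amounts to summing per-segment difficulties. This is the standard Accot-Zhai formulation, not the paper's refined model:

```latex
% Steering law: time T to steer through path C whose width W(s) may vary
% with arc length s; a and b are empirical constants.
T = a + b \int_{C} \frac{ds}{W(s)}
% Segmented form: split C into n segments C_i wherever the width profile changes.
T = a + b \sum_{i=1}^{n} \int_{C_i} \frac{ds}{W_i(s)}
```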

Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops
Gary Perelman, Emmanuel Dubois, Alice Probst, and Marcos Serrano
(University of Toulouse, Toulouse, France)
See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such contexts, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such a combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper, we conduct a study of visual acquisition in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front of and to the side of the tabletop). To further study the possible visual perception of the tabletop content outside the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e., content only visible inside the HMD's field of view, or using the tabletop itself (tabletop screen). Twelve participants performed visual acquisition tasks involving the horizontal and vertical displays. We measured the time to perform the task, head movements, the portions of the displays visible in the HMD's field of view, physical fatigue, and user preference. Our results show that it is faster to acquire virtual targets on the front display than on the side. Results also reveal that using the virtual overlay on the tabletop slows down visual acquisition compared to using the tabletop screen, showing that users exploit peripheral visual perception of the tabletop content. We were also able to quantify when and to what extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e., by looking under the HMD. These results lead to design recommendations for more efficient, comfortable, and integrated interfaces combining a tabletop with surrounding vertical virtual displays.

SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders
Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger
(LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland)
Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose our most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or be unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view as well as physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks, which are impacted by technical affinity, device types, device ownership, and the tangibility of assessments.

ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit
Sebastian S. Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch
(LMU Munich, Munich, Germany; TU Darmstadt, Darmstadt, Germany; Humboldt University of Berlin, Berlin, Germany)
Exploring and interacting with electronics is challenging as the internal processes of components are not visible. Further barriers to engagement with electronics include fear of injury and hardware damage. In response, Augmented Reality (AR) applications address those challenges to make internal processes and the functionality of circuits visible. However, current apps are either limited to abstract low-fidelity applications or entirely virtual environments. We present ElectronicsAR, a tangible high-fidelity AR electronics kit with scaled hardware components representing the shape of real electronics. Our evaluation with 24 participants showed that users were more efficient and more effective at naming components, as well as building and debugging circuits. We discuss our findings in the context of ElectronicsAR's unique characteristics that we contrast with related work. Based on this, we discuss opportunities for future research to design functional mobile AR applications that meet the needs of beginners and experts.

Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas
(Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia)
When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes, or images. However, when people collaborate remotely using desktop interfaces, they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment with more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement, and awareness. Overall, the VR and desktop interfaces resulted in similar speed, accuracy, and social presence ratings, but in VR we observed more conversations, more interaction with objects, and more equal contributions to the interaction from participants within groups. We also identified differences in coordination and collaborative awareness behaviours between the VR and desktop platforms. We report a set of systematic measures for assessing VR collaborative experiences and a new analysis tool that we developed to capture user behaviours in collaborative settings. Finally, we provide design considerations and directions for future work.

