ISS 2022 – Author Index
Abu Hantash, Nour |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Alexander, Jason |
ISS '22: "Investigating Pointing Performance ..."
Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander (University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK) One of the most fundamental interactions –pointing– is well understood on flat surfaces. However, pointing performance on tangible surfaces with physical targets is still limited for Tangible User Interfaces (TUIs). We investigate the effect of a target’s physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r2=0.954). Analysis shows that movement direction and height should be included as parameters to this model to generalize for 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need to be investigated further to understand how performance with tangible objects is affected. @Article{ISS22p583, author = {Aluna Everitt and Anne Roudaut and Kasper Hornbæk and Mike Fraser and Jason Alexander}, title = {Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {583}, numpages = {23}, doi = {10.1145/3567736}, year = {2022}, } Publisher's Version |
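For context on the movement-time models referenced above: the standard 2D baseline in this literature is the Shannon formulation of Fitts' law, and the volumetric-display model that best fit the data is a Fitts-style variant whose exact form is given in the paper, not reproduced here. A minimal LaTeX sketch of the baseline only:
\[ MT = a + b \cdot \mathrm{ID}, \qquad \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right) \]
where MT is movement time, D the distance to the target, W the target width, and a, b constants fitted by regression; the r2 value quoted in the abstract is the coefficient of determination of such a fit.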
|
Aliar, Kazeera |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Alkayyali, Amani |
ISS '22: "UbiChromics: Enabling Ubiquitously ..."
UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint
Amani Alkayyali, Yasha Iravantchi, Jaylin Herskovitz, and Alanson P. Sample (University of Michigan, Ann Arbor, USA) Pervasive and interactive displays promise to present our digital content seamlessly throughout our environment. However, traditional display technologies do not scale to room-wide applications due to high per-unit-area costs and the need for constant wired power and data infrastructure. This research proposes the use of photochromic paint as a display medium. Applying the paint to any surface or object creates ultra-low-cost displays, which can change color when exposed to specific wavelengths of light. We develop new paint formulations that enable wide area application of photochromic material. Along with a specially modified wide-area laser projector and depth camera that can draw custom images and create on-demand, room-wide user interfaces on photochromic enabled surfaces. System parameters such as light intensity, material activation time, and user readability are examined to optimize the display. Results show that images and user interfaces can last up to 16 minutes and can be updated indefinitely. Finally, usage scenarios such as displaying static and dynamic images, ephemeral notifications, and the creation of on-demand interfaces, such as light switches and music controllers, are demonstrated and explored. Ultimately, the UbiChromics system demonstrates the possibility of extending digital content to all painted surfaces. @Article{ISS22p561, author = {Amani Alkayyali and Yasha Iravantchi and Jaylin Herskovitz and Alanson P. Sample}, title = {UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {561}, numpages = {25}, doi = {10.1145/3567714}, year = {2022}, } Publisher's Version Video Info |
|
Anderson, Fraser |
ISS '22: "VideoPoseVR: Authoring Virtual ..."
VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos
Cheng Yao Wang, Qian Zhou, George Fitzmaurice, and Fraser Anderson (Cornell University, Ithaca, USA; Autodesk Research, Toronto, Canada) We present VideoPoseVR, a video-based animation authoring workflow using online videos to author character animations in VR. It leverages the state-of-the-art deep learning approach to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import the videos, search in the dataset, modify the motion timeline, and combine multiple motions from videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach as well as gather initial feedback of the prototype. The study results suggest that VideoPoseVR was easy to learn for novice users to author animations and enable rapid exploration of prototyping for applications such as entertainment, skills training, and crowd simulations. @Article{ISS22p575, author = {Cheng Yao Wang and Qian Zhou and George Fitzmaurice and Fraser Anderson}, title = {VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {575}, numpages = {20}, doi = {10.1145/3567728}, year = {2022}, } Publisher's Version |
|
Andric, Veronica |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Appert, Caroline |
ISS '22: "Investigating the Use of AR ..."
Investigating the Use of AR Glasses for Content Annotation on Mobile Devices
Francesco Riccardo Di Gioia, Eugenie Brasier, Emmanuel Pietriga, and Caroline Appert (Université Paris-Saclay, Orsay, France; CNRS, Orsay, France; Inria, Orsay, France) Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real-estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting, and to showcase how it could support real-world use cases. @Article{ISS22p574, author = {Francesco Riccardo Di Gioia and Eugenie Brasier and Emmanuel Pietriga and Caroline Appert}, title = {Investigating the Use of AR Glasses for Content Annotation on Mobile Devices}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {574}, numpages = {18}, doi = {10.1145/3567727}, year = {2022}, } Publisher's Version Video Info |
|
Arif, Ahmed Sabbir |
ISS '22: "Push, Tap, Dwell, and Pinch: ..."
Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif (University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada) This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, HoverSelect) in a Fitts’ law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users’ spatial awareness. Particularly, Push augmented with Hover & Select feedback is comparable to Tap. Besides, participants perceive the selection methods as faster, more accurate, and more physically and cognitively comfortable with the haptic feedback methods. @Article{ISS22p565, author = {Tafadzwa Joseph Dube and Yuan Ren and Hannah Limerick and I. Scott MacKenzie and Ahmed Sabbir Arif}, title = {Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {565}, numpages = {19}, doi = {10.1145/3567718}, year = {2022}, } Publisher's Version Video ISS '22: "TiltWalker: Operating a Telepresence ..." TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone Ghazal Zand, Yuan Ren, and Ahmed Sabbir Arif (University of California, Merced, USA) Mobile clients for telepresence robots are cluttered with interactive elements that either leave a little room for the camera feeds or occlude them. Many do not provide meaningful feedback on the robot's state and most require the use of both hands. These make maneuvering telepresence robots difficult with mobile devices. TiltWalker enables controlling a telepresence robot with one hand using tilt gestures with a smartphone. In a series of studies, we first justify the use of a Web platform, determine how far and fast users can tilt without compromising the comfort and the legibility of the display content, and identify a velocity-based function well-suited for control-display mapping. We refine TiltWalker based on the findings of these studies, then compare it with a default method in the final study. Results revealed that TiltWalker is significantly faster and more accurate than the default method. Besides, participants preferred TiltWalker's interaction methods and graphical feedback significantly more than those of the default method. @Article{ISS22p572, author = {Ghazal Zand and Yuan Ren and Ahmed Sabbir Arif}, title = {TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {572}, numpages = {26}, doi = {10.1145/3567725}, year = {2022}, } Publisher's Version Video ISS '22: "Design and Evaluation of a ..." Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing Laxmi Pandey and Ahmed Sabbir Arif (University of California, Merced, USA) We investigate silent speech as a hands-free selection method in eye-gaze pointing. 
We first propose a stripped-down image-based model that can recognize a small number of silent commands almost as fast as state-of-the-art speech recognition models. We then compare it with other hands-free selection methods (dwell, speech) in a Fitts' law study. Results revealed that speech and silent speech are comparable in throughput and selection time, but the latter is significantly more accurate than the other methods. A follow-up study revealed that target selection around the center of a display is significantly faster and more accurate, while around the top corners and the bottom are slower and error prone. We then present a method for selecting menu items with eye-gaze and silent speech. A study revealed that it significantly reduces task completion time and error rate. @Article{ISS22p570, author = {Laxmi Pandey and Ahmed Sabbir Arif}, title = {Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {570}, numpages = {26}, doi = {10.1145/3567723}, year = {2022}, } Publisher's Version Video |
|
Augstein, Mirjam |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases, (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application which combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, show how far we can already get, and which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Bakogeorge, Alexander |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Billinghurst, Mark |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaborations on image organisation tasks, in an immersive Virtual Reality (VR) environment to more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interface resulted in similar speed, accuracy and social presence rating, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative setting. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Brasier, Eugenie |
ISS '22: "Investigating the Use of AR ..."
Investigating the Use of AR Glasses for Content Annotation on Mobile Devices
Francesco Riccardo Di Gioia, Eugenie Brasier, Emmanuel Pietriga, and Caroline Appert (Université Paris-Saclay, Orsay, France; CNRS, Orsay, France; Inria, Orsay, France) Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real-estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting, and to showcase how it could support real-world use cases. @Article{ISS22p574, author = {Francesco Riccardo Di Gioia and Eugenie Brasier and Emmanuel Pietriga and Caroline Appert}, title = {Investigating the Use of AR Glasses for Content Annotation on Mobile Devices}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {574}, numpages = {18}, doi = {10.1145/3567727}, year = {2022}, } Publisher's Version Video Info |
|
Campos, Cuauhtli |
ISS '22: "Dynamic Pinhole Paper: Interacting ..."
Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper
Cuauhtli Campos, Matjaž Kljun, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia) Its ubiquitous nature, versatility, and durability have enabled paper to maintain its importance in the digital age. It is not surprising that there have been numerous attempts to combine paper with digital content. One way to do so is to place paper on a horizontal interactive display (e.g. tabletop or tablet). The paper thus becomes "the screen" on which the digital content is viewed, yet it also acts as a barrier that degrades the quality of the perceived image. This research tries to address this problem by proposing and evaluating a novel paper display concept called Dynamic pinhole paper. The concept is based on perforating the paper (to decrease its opacity) and moving digital content beneath the perforated area (to increase the resolution). To evaluate this novel concept, we fabricated the pinhole paper and implemented the software in order to run multiple user studies exploring the concept’s viability, optimal movement trajectory (amount, direction and velocity), and the effect of perforation on printing, writing and reading. The results show that the movement of digital content is a highly effective strategy for improving the resolution of the digital content through perforation where the optimal velocity is independent of trajectory direction (e.g. horizontal or circular) and amount of movement. Results also show the concept is viable on off-the-shelf hardware and that it is possible to write and print on perforated paper. @Article{ISS22p567, author = {Cuauhtli Campos and Matjaž Kljun and Klen Čopič Pucihar}, title = {Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {567}, numpages = {23}, doi = {10.1145/3567720}, year = {2022}, } Publisher's Version Video Info ISS '22: "LightMeUp: Back-print Illumination ..." LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals Cuauhtli Campos, Matjaž Kljun, Jakub Sandak, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia; InnoRenew CoE, Isola, Slovenia) Despite the drive to digitise learning, paper still holds a prominent role within educational settings. While computational devices have several advantages over paper (e.g. changing and showing content based on user interaction and needs) their prolonged or incorrect usage can hinder educational achievements. In this paper, we combine the interactivity of computational devices with paper whilst reducing the usage of technology to the minimum. To this end, we developed and evaluated a novel back-print illumination paper display called LightMeUp where different information printed on the back side of the paper becomes visible when paper is placed on an interactive display and back-illuminated with a particular colour. To develop this novel display, we first built a display simulator that enables the simulation of various spectral characteristics of the elements used in the system (i.e. light sources such as tablet computers, paper types and printing inks). By using our simulator, we designed various use-case prototypes that demonstrate the capabilities and feasibility of the proposed system.
With our simulator and use-cases presented, educators and educational content designers can easily design multi-stable interactive visuals by using readily available paper, printers and touch displays. @Article{ISS22p573, author = {Cuauhtli Campos and Matjaž Kljun and Jakub Sandak and Klen Čopič Pucihar}, title = {LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {573}, numpages = {23}, doi = {10.1145/3570333}, year = {2022}, } Publisher's Version Video Info |
|
Cassinelli, Alvaro |
ISS '22: "AngleCAD: Surface-Based 3D ..."
AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli (City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA) 3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touch screens. We present a set of novel interaction techniques - AngleCAD, which allows users to view and navigate a 3D space through folded screens, and to modify the 3D object using the physical support of touchscreens and folding angles. The design of these techniques was inspired by woodworking practices to support surface-based operations that allow users to cut, snap and taper objects directly with the touch screen, and extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience. @Article{ISS22p582, author = {Can Liu and Chenyue Dai and Qingzhou Ma and Brinda Mehra and Alvaro Cassinelli}, title = {AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {582}, numpages = {25}, doi = {10.1145/3567735}, year = {2022}, } Publisher's Version Video |
|
Cheng, Yi Fei |
ISS '22: "XSpace: An Augmented Reality ..."
XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling (University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA) Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration, however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit, and also provides a set of complimentary visual authoring tools to allow developers to preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments. @Article{ISS22p568, author = {Jaylin Herskovitz and Yi Fei Cheng and Anhong Guo and Alanson P. Sample and Michael Nebeling}, title = {XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {568}, numpages = {26}, doi = {10.1145/3567721}, year = {2022}, } Publisher's Version Video |
|
Choe, Eun Kyoung |
ISS '22: "NoteWordy: Investigating Touch ..."
NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe (City University of Hong Kong, Hong Kong, China; Microsoft Research, Redmond, USA; University of Maryland, College Park, USA; NAVER AI Lab, Seongnam, South Korea) Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data. @Article{ISS22p581, author = {Yuhan Luo and Bongshin Lee and Young-Ho Kim and Eun Kyoung Choe}, title = {NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {581}, numpages = {24}, doi = {10.1145/3567734}, year = {2022}, } Publisher's Version Video |
|
Čopič Pucihar, Klen |
ISS '22: "Dynamic Pinhole Paper: Interacting ..."
Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper
Cuauhtli Campos, Matjaž Kljun, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia) Its ubiquitous nature, versatility, and durability have enabled paper to maintain its importance in the digital age. It is not surprising that there have been numerous attempts to combine paper with digital content. One way to do so is to place paper on a horizontal interactive display (e.g. tabletop or tablet). The paper thus becomes "the screen" on which the digital content is viewed, yet it also acts as a barrier that degrades the quality of the perceived image. This research tries to address this problem by proposing and evaluating a novel paper display concept called Dynamic pinhole paper. The concept is based on perforating the paper (to decrease its opacity) and moving digital content beneath the perforated area (to increase the resolution). To evaluate this novel concept, we fabricated the pinhole paper and implemented the software in order to run multiple user studies exploring the concept’s viability, optimal movement trajectory (amount, direction and velocity), and the effect of perforation on printing, writing and reading. The results show that the movement of digital content is a highly effective strategy for improving the resolution of the digital content through perforation where the optimal velocity is independent of trajectory direction (e.g. horizontal or circular) and amount of movement. Results also show the concept is viable on off-the-shelf hardware and that it is possible to write and print on perforated paper. @Article{ISS22p567, author = {Cuauhtli Campos and Matjaž Kljun and Klen Čopič Pucihar}, title = {Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {567}, numpages = {23}, doi = {10.1145/3567720}, year = {2022}, } Publisher's Version Video Info ISS '22: "A Survey of Augmented Piano ..." A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences? Jordan Aiko Deja, Sven Mayer, Klen Čopič Pucihar, and Matjaž Kljun (University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines; LMU Munich, Munich, Germany) Humans have been developing and playing musical instruments for millennia. With technological advancements, instruments were becoming ever more sophisticated. In recent decades computer-supported innovations have also been introduced in hardware design, usability, and aesthetics. One of the most commonly digitally augmented instruments is the piano. Besides electronic keyboards, several prototypes augmenting pianos with different projections providing various levels of interactivity on and around the keyboard have been implemented in order to support piano players. However, it is still unclear whether these solutions support the learning process. In this paper, we present a systematic review of augmented piano prototypes focusing on instrument learning based on the four themes derived from interviews with piano experts to understand better the problems of teaching the piano. These themes are (i) synchronised movement and body posture, (ii) sight-reading, (iii) ensuring motivation, and (iv) encouraging improvisation. We found that prototypes are saturated on the synchronisation themes, and there are opportunities for sight-reading, motivation, and improvisation themes.
We conclude by presenting recommendations on augmenting piano systems towards enriching the piano learning experience as well as on possible directions to expand knowledge in the area. @Article{ISS22p566, author = {Jordan Aiko Deja and Sven Mayer and Klen Čopič Pucihar and Matjaž Kljun}, title = {A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {566}, numpages = {28}, doi = {10.1145/3567719}, year = {2022}, } Publisher's Version Info ISS '22: "LightMeUp: Back-print Illumination ..." LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals Cuauhtli Campos, Matjaž Kljun, Jakub Sandak, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia; InnoRenew CoE, Isola, Slovenia) Despite the drive to digitise learning, paper still holds a prominent role within educational settings. While computational devices have several advantages over paper (e.g. changing and showing content based on user interaction and needs) their prolonged or incorrect usage can hinder educational achievements. In this paper, we combine the interactivity of computational devices with paper whilst reducing the usage of technology to the minimum. To this end, we developed and evaluated a novel back-print illumination paper display called LightMeUp where different information printed on the back side of the paper becomes visible when paper is placed on an interactive display and back-illuminated with a particular colour. To develop this novel display, we first built a display simulator that enables the simulation of various spectral characteristics of the elements used in the system (i.e. light sources such as tablet computers, paper types and printing inks). By using our simulator, we designed various use-case prototypes that demonstrate the capabilities and feasibility of the proposed system. With our simulator and use-cases presented, educators and educational content designers can easily design multi-stable interactive visuals by using readily available paper, printers and touch displays. @Article{ISS22p573, author = {Cuauhtli Campos and Matjaž Kljun and Jakub Sandak and Klen Čopič Pucihar}, title = {LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {573}, numpages = {23}, doi = {10.1145/3570333}, year = {2022}, } Publisher's Version Video Info |
|
Cordeil, Maxime |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaborations on image organisation tasks, in an immersive Virtual Reality (VR) environment to more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interface resulted in similar speed, accuracy and social presence rating, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative setting. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Dai, Chenyue |
ISS '22: "AngleCAD: Surface-Based 3D ..."
AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli (City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA) 3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touch screens. We present a set of novel interaction techniques - AngleCAD, which allows users to view and navigate a 3D space through folded screens, and to modify the 3D object using the physical support of touchscreens and folding angles. The design of these techniques was inspired by woodworking practices to support surface-based operations that allow users to cut, snap and taper objects directly with the touch screen, and extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience. @Article{ISS22p582, author = {Can Liu and Chenyue Dai and Qingzhou Ma and Brinda Mehra and Alvaro Cassinelli}, title = {AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {582}, numpages = {25}, doi = {10.1145/3567735}, year = {2022}, } Publisher's Version Video |
|
Deja, Jordan Aiko |
ISS '22: "A Survey of Augmented Piano ..."
A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?
Jordan Aiko Deja, Sven Mayer, Klen Čopič Pucihar, and Matjaž Kljun (University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines; LMU Munich, Munich, Germany) Humans have been developing and playing musical instruments for millennia. With technological advancements, instruments were becoming ever more sophisticated. In recent decades computer-supported innovations have also been introduced in hardware design, usability, and aesthetics. One of the most commonly digitally augmented instruments is the piano. Besides electronic keyboards, several prototypes augmenting pianos with different projections providing various levels of interactivity on and around the keyboard have been implemented in order to support piano players. However, it is still unclear whether these solutions support the learning process. In this paper, we present a systematic review of augmented piano prototypes focusing on instrument learning based on the four themes derived from interviews with piano experts to understand better the problems of teaching the piano. These themes are (i) synchronised movement and body posture, (ii) sight-reading, (iii) ensuring motivation, and (iv) encouraging improvisation. We found that prototypes are saturated on the synchronisation themes, and there are opportunities for sight-reading, motivation, and improvisation themes. We conclude by presenting recommendations on augmenting piano systems towards enriching the piano learning experience as well as on possible directions to expand knowledge in the area. @Article{ISS22p566, author = {Jordan Aiko Deja and Sven Mayer and Klen Čopič Pucihar and Matjaž Kljun}, title = {A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {566}, numpages = {28}, doi = {10.1145/3567719}, year = {2022}, } Publisher's Version Info |
|
Di Gioia, Francesco Riccardo |
ISS '22: "Investigating the Use of AR ..."
Investigating the Use of AR Glasses for Content Annotation on Mobile Devices
Francesco Riccardo Di Gioia, Eugenie Brasier, Emmanuel Pietriga, and Caroline Appert (Université Paris-Saclay, Orsay, France; CNRS, Orsay, France; Inria, Orsay, France) Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real-estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting, and to showcase how it could support real-world use cases. @Article{ISS22p574, author = {Francesco Riccardo Di Gioia and Eugenie Brasier and Emmanuel Pietriga and Caroline Appert}, title = {Investigating the Use of AR Glasses for Content Annotation on Mobile Devices}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {574}, numpages = {18}, doi = {10.1145/3567727}, year = {2022}, } Publisher's Version Video Info |
|
Dube, Tafadzwa Joseph |
ISS '22: "Push, Tap, Dwell, and Pinch: ..."
Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif (University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada) This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, HoverSelect) in a Fitts’ law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users’ spatial awareness. Particularly, Push augmented with Hover & Select feedback is comparable to Tap. Besides, participants perceive the selection methods as faster, more accurate, and more physically and cognitively comfortable with the haptic feedback methods. @Article{ISS22p565, author = {Tafadzwa Joseph Dube and Yuan Ren and Hannah Limerick and I. Scott MacKenzie and Ahmed Sabbir Arif}, title = {Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {565}, numpages = {19}, doi = {10.1145/3567718}, year = {2022}, } Publisher's Version Video |
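Background on the Fitts' law methodology referenced above: studies in this tradition commonly summarise pointing performance as effective throughput, computed from the observed spread of selection endpoints (ISO 9241-9 style analysis). A LaTeX sketch of the standard definitions, not of values reported in this paper:
\[ TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D_e}{W_e} + 1\right), \qquad W_e = 4.133\,\sigma \]
where D_e is the mean movement distance, \sigma the standard deviation of the endpoint coordinates along the movement axis, MT the mean movement time per condition, and TP is expressed in bits per second.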
|
Dubois, Emmanuel |
ISS '22: "Visual Transitions around ..."
Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops
Gary Perelman, Emmanuel Dubois, Alice Probst, and Marcos Serrano (University of Toulouse, Toulouse, France) See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such context, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper we conduct a study to explore visual acquisitions in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front and on the side of the tabletop). To further study the possible visual perception of the tabletop content out of the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e. the content is only visible inside the HMD’s FoV, or using the tabletop itself (tabletop screen). 12 participants performed visual acquisition tasks involving the horizontal and vertical displays. We measured the time to perform the task, the head movements, the portions of the displays visible in the HMD’s field of view, the physical fatigue and the user’s preference. Our results show that it is faster to acquire virtual targets in the front display than on the side. Results reveal that the use of the virtual overlay on the tabletop slows down the visual acquisition compared to the use of the tabletop screen, showing that users exploit the visual perception of the tabletop content on the peripheral visual space. We were also able to quantify when and to which extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e. by looking under the HMD. These results lead to design recommendations for more efficient, comfortable and integrated interfaces combining tabletop and surrounding vertical virtual displays. @Article{ISS22p585, author = {Gary Perelman and Emmanuel Dubois and Alice Probst and Marcos Serrano}, title = {Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {585}, numpages = {20}, doi = {10.1145/3567738}, year = {2022}, } Publisher's Version Video |
|
Dwyer, Tim |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaborations on image organisation tasks, in an immersive Virtual Reality (VR) environment to more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interface resulted in similar speed, accuracy and social presence rating, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative setting. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video ISS '22: "Effects of Display Layout ..." Effects of Display Layout on Spatial Memory for Immersive Environments Jiazhou Liu, Arnaud Prouzeau, Barrett Ens, and Tim Dwyer (Monash University, Melbourne, Australia; Inria, Bordeaux, France) In immersive environments, positioning data visualisations around the user in a wraparound layout has been advocated as advantageous over flat arrangements more typical of traditional screens. However, other than limiting the distance users must walk, there is no clear design rationale behind this common practice, and little research on the impact of wraparound layouts on visualisation tasks. The ability to remember the spatial location of elements of visualisations within the display space is crucial to support visual analytical tasks, especially those that require users to shift their focus or perform comparisons. This ability is influenced by the user's spatial memory but how spatial memory is affected by different display layouts remains unclear. In this paper, we perform two user studies to evaluate the effects of three layouts with varying degrees of curvature around the user (flat-wall, semicircular-wraparound, and circular-wraparound) on a visuo-spatial memory task in a virtual environment. The results show that participants are able to recall spatial patterns with greater accuracy and report more positive subjective ratings using flat than circular-wraparound layouts. 
While we didn't find any significant performance differences between the flat and semicircular-wraparound layouts, participants overwhelmingly preferred the semicircular-wraparound layout suggesting it is a good compromise between the two extremes of display curvature. @Article{ISS22p576, author = {Jiazhou Liu and Arnaud Prouzeau and Barrett Ens and Tim Dwyer}, title = {Effects of Display Layout on Spatial Memory for Immersive Environments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {576}, numpages = {21}, doi = {10.1145/3567729}, year = {2022}, } Publisher's Version Video |
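As a rough illustration of the three layout conditions described above (flat-wall, semicircular-wraparound, circular-wraparound), the sketch below computes panel placements around a user at the origin. It is not the study's apparatus; the radius, panel count, and the choice to give the flat wall the same extent as the semicircle's arc are assumptions.

```python
import math

def layout_positions(n_panels, radius=2.0, curvature="flat"):
    """Illustrative placement of n_panels at eye height around a user at the origin.
    curvature: 'flat' (wall at distance radius), 'semi' (180-degree arc), 'full' (360-degree ring).
    Returns (x, z, yaw_deg) tuples; all values are assumptions, not the study's parameters."""
    positions = []
    if curvature == "flat":
        width = radius * math.pi  # give the wall the same total extent as the semicircle's arc
        for i in range(n_panels):
            x = -width / 2 + width * (i + 0.5) / n_panels
            positions.append((x, radius, 0.0))  # all wall panels face the user straight on
    else:
        arc = math.pi if curvature == "semi" else 2 * math.pi
        for i in range(n_panels):
            angle = -arc / 2 + arc * (i + 0.5) / n_panels
            x = radius * math.sin(angle)
            z = radius * math.cos(angle)
            positions.append((x, z, math.degrees(angle)))  # each panel yawed to face inward
    return positions

print(layout_positions(6, curvature="semi"))
```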
|
Ens, Barrett |
ISS '22: "Effects of Display Layout ..."
Effects of Display Layout on Spatial Memory for Immersive Environments
Jiazhou Liu, Arnaud Prouzeau, Barrett Ens, and Tim Dwyer (Monash University, Melbourne, Australia; Inria, Bordeaux, France) In immersive environments, positioning data visualisations around the user in a wraparound layout has been advocated as advantageous over flat arrangements more typical of traditional screens. However, other than limiting the distance users must walk, there is no clear design rationale behind this common practice, and little research on the impact of wraparound layouts on visualisation tasks. The ability to remember the spatial location of elements of visualisations within the display space is crucial to support visual analytical tasks, especially those that require users to shift their focus or perform comparisons. This ability is influenced by the user's spatial memory but how spatial memory is affected by different display layouts remains unclear. In this paper, we perform two user studies to evaluate the effects of three layouts with varying degrees of curvature around the user (flat-wall, semicircular-wraparound, and circular-wraparound) on a visuo-spatial memory task in a virtual environment. The results show that participants are able to recall spatial patterns with greater accuracy and report more positive subjective ratings using flat than circular-wraparound layouts. While we didn't find any significant performance differences between the flat and semicircular-wraparound layouts, participants overwhelmingly preferred the semicircular-wraparound layout suggesting it is a good compromise between the two extremes of display curvature. @Article{ISS22p576, author = {Jiazhou Liu and Arnaud Prouzeau and Barrett Ens and Tim Dwyer}, title = {Effects of Display Layout on Spatial Memory for Immersive Environments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {576}, numpages = {21}, doi = {10.1145/3567729}, year = {2022}, } Publisher's Version Video |
|
Everitt, Aluna |
ISS '22: "Investigating Pointing Performance ..."
Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander (University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK) One of the most fundamental interactions –pointing– is well understood on flat surfaces. However, pointing performance on tangible surfaces with physical targets is still limited for Tangible User Interfaces (TUIs). We investigate the effect of a target’s physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r2=0.954). Analysis shows that movement direction and height should be included as parameters to this model to generalize for 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need to be investigated further to understand how performance with tangible objects is affected. @Article{ISS22p583, author = {Aluna Everitt and Anne Roudaut and Kasper Hornbæk and Mike Fraser and Jason Alexander}, title = {Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {583}, numpages = {23}, doi = {10.1145/3567736}, year = {2022}, } Publisher's Version |
|
Fan, Neil Xu |
ISS '22: "Reducing the Latency of Touch ..."
Reducing the Latency of Touch Tracking on Ad-hoc Surfaces
Neil Xu Fan and Robert Xiao (University of British Columbia, Vancouver, Canada) Touch sensing on ad-hoc surfaces has the potential to transform everyday surfaces in the environment - desks, tables and walls - into tactile, touch-interactive surfaces, creating large, comfortable interactive spaces without the cost of large touch sensors. Depth sensors are a promising way to provide touch sensing on arbitrary surfaces, but past systems have suffered from high latency and poor touch detection accuracy. We apply a novel state machine-based approach to analyzing touch events, combined with a machine-learning approach to predictively classify touch events from depth data with lower latency and higher touch accuracy than previous approaches. Our system can reduce end-to-end touch latency to under 70ms, comparable to conventional capacitive touchscreens. Additionally, we open-source our dataset of over 30,000 touch events recorded in depth, infrared and RGB for the benefit of future researchers. @Article{ISS22p577, author = {Neil Xu Fan and Robert Xiao}, title = {Reducing the Latency of Touch Tracking on Ad-hoc Surfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {577}, numpages = {11}, doi = {10.1145/3567730}, year = {2022}, } Publisher's Version Info |
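The abstract's combination of a state machine over touch events with a predictive classifier can be pictured with the minimal sketch below. The hover/contact thresholds and the classifier hook are assumptions for illustration, not the values or model reported in the paper.

```python
from enum import Enum, auto

class TouchState(Enum):
    IDLE = auto()
    HOVER = auto()
    DOWN = auto()

class TouchTracker:
    """Minimal state machine over a fingertip's per-frame height above the surface (metres).
    Thresholds and the optional predictive classifier are illustrative assumptions,
    not the values or model used in the paper."""

    def __init__(self, hover_mm=30.0, contact_mm=8.0, classifier=None):
        self.state = TouchState.IDLE
        self.hover = hover_mm / 1000.0
        self.contact = contact_mm / 1000.0
        self.classifier = classifier  # e.g. a model that predicts imminent contact from depth features

    def update(self, height_m, features=None):
        events = []
        predicted_down = bool(self.classifier(features)) if (self.classifier and features is not None) else False
        if self.state == TouchState.IDLE and height_m < self.hover:
            self.state = TouchState.HOVER
        elif self.state == TouchState.HOVER:
            if height_m < self.contact or predicted_down:
                self.state = TouchState.DOWN
                events.append("touch_down")  # fired early when the classifier anticipates contact
            elif height_m > self.hover:
                self.state = TouchState.IDLE
        elif self.state == TouchState.DOWN and height_m > self.contact:
            self.state = TouchState.HOVER
            events.append("touch_up")
        return events
```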
|
Feger, Sebastian S. |
ISS '22: "ElectronicsAR: Design and ..."
ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit
Sebastian S. Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch (LMU Munich, Munich, Germany; TU Darmstadt, Darmstadt, Germany; Humboldt University of Berlin, Berlin, Germany) Exploring and interacting with electronics is challenging as the internal processes of components are not visible. Further barriers to engagement with electronics include fear of injury and hardware damage. In response, Augmented Reality (AR) applications address those challenges to make internal processes and the functionality of circuits visible. However, current apps are either limited to abstract low-fidelity applications or entirely virtual environments. We present ElectronicsAR, a tangible high-fidelity AR electronics kit with scaled hardware components representing the shape of real electronics. Our evaluation with 24 participants showed that users were more efficient and more effective at naming components, as well as building and debugging circuits. We discuss our findings in the context of ElectronicsAR's unique characteristics that we contrast with related work. Based on this, we discuss opportunities for future research to design functional mobile AR applications that meet the needs of beginners and experts. @Article{ISS22p587, author = {Sebastian S. Feger and Lars Semmler and Albrecht Schmidt and Thomas Kosch}, title = {ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {587}, numpages = {22}, doi = {10.1145/3567740}, year = {2022}, } Publisher's Version ISS '22: "SaferHome: Interactive Physical ..." SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger (LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland) Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or are unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view and physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks that are impacted by technical affinity, device types, device ownership, and tangibility of assessments. @Article{ISS22p586, author = {Maximiliane Windl and Alexander Hiesinger and Robin Welsch and Albrecht Schmidt and Sebastian S. Feger}, title = {SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {586}, numpages = {20}, doi = {10.1145/3567739}, year = {2022}, } Publisher's Version Info |
|
Fink, Daniel Immanuel |
ISS '22: "Re-locations: Augmenting Personal ..."
Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
Daniel Immanuel Fink, Johannes Zagermann, Harald Reiterer, and Hans-Christian Jetter (University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR. @Article{ISS22p556, author = {Daniel Immanuel Fink and Johannes Zagermann and Harald Reiterer and Hans-Christian Jetter}, title = {Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {556}, numpages = {30}, doi = {10.1145/3567709}, year = {2022}, } Publisher's Version |
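The core idea of Re-locations, re-expressing a remote collaborator's pose relative to a user-defined workspace anchor, can be illustrated with a simple homogeneous-transform composition. This is a generic formulation under assumed anchor poses, not the paper's implementation.

```python
import numpy as np

def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def relocate(remote_pose, remote_anchor, local_anchor):
    """Map a remote user's pose (4x4 homogeneous matrix in remote-room coordinates) into the
    local room by re-expressing it relative to a shared, user-defined workspace anchor.
    Hypothetical formulation, not the paper's implementation."""
    return local_anchor @ np.linalg.inv(remote_anchor) @ remote_pose

# Example: the remote workspace anchor sits at (1, 0, 2) in the remote room,
# the corresponding local anchor at (-0.5, 0, 1.5) in the local room.
remote_anchor = translation(1.0, 0.0, 2.0)
local_anchor = translation(-0.5, 0.0, 1.5)
remote_user = translation(1.3, 0.0, 2.4)  # 0.3 m to the side and 0.4 m in front of the remote anchor
print(relocate(remote_user, remote_anchor, local_anchor)[:3, 3])  # -> [-0.2  0.   1.9]
```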
|
Fitzmaurice, George |
ISS '22: "VideoPoseVR: Authoring Virtual ..."
VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos
Cheng Yao Wang, Qian Zhou, George Fitzmaurice, and Fraser Anderson (Cornell University, Ithaca, USA; Autodesk Research, Toronto, Canada) We present VideoPoseVR, a video-based animation authoring workflow using online videos to author character animations in VR. It leverages the state-of-the-art deep learning approach to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import the videos, search in the dataset, modify the motion timeline, and combine multiple motions from videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach as well as gather initial feedback of the prototype. The study results suggest that VideoPoseVR was easy to learn for novice users to author animations and enable rapid exploration of prototyping for applications such as entertainment, skills training, and crowd simulations. @Article{ISS22p575, author = {Cheng Yao Wang and Qian Zhou and George Fitzmaurice and Fraser Anderson}, title = {VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {575}, numpages = {20}, doi = {10.1145/3567728}, year = {2022}, } Publisher's Version |
|
Fraser, Mike |
ISS '22: "Investigating Pointing Performance ..."
Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander (University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK) One of the most fundamental interactions –pointing– is well understood on flat surfaces. However, pointing performance on tangible surfaces with physical targets is still limited for Tangible User Interfaces (TUIs). We investigate the effect of a target’s physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r2=0.954). Analysis shows that movement direction and height should be included as parameters to this model to generalize for 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need to be investigated further to understand how performance with tangible objects is affected. @Article{ISS22p583, author = {Aluna Everitt and Anne Roudaut and Kasper Hornbæk and Mike Fraser and Jason Alexander}, title = {Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {583}, numpages = {23}, doi = {10.1145/3567736}, year = {2022}, } Publisher's Version |
|
Fujita, Kazuyuki |
ISS '22: "HandyGaze: A Gaze Tracking ..."
HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone
Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan) We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that can be carried out by simply holding a smartphone naturally without installing any sensors or markers in the environment. Our technique simultaneously employs the smartphone’s front and rear cameras: The front camera estimates the user’s gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization by reconstructing a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user’s 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique’s positional accuracy to the gaze target under four conditions, based on combinations of use with and without a depth sensor and calibration. The results show that our calibration method was able to reduce the mean absolute error of the gaze point by 27%, with an error of 0.53 m when using the depth sensor. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications such as a gaze-based guidance application for museums. @Article{ISS22p562, author = {Takahiro Nagai and Kazuyuki Fujita and Kazuki Takashima and Yoshifumi Kitamura}, title = {HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {562}, numpages = {18}, doi = {10.1145/3567715}, year = {2022}, } Publisher's Version Archive submitted (74 MB) ISS '22: "TetraForce: A Magnetic-Based ..." TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate the fundamental user performance by our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people who were involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. 
@Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
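For HandyGaze (above), the fusion of the two camera estimates can be sketched as composing the phone's world pose with the phone-relative gaze direction and intersecting the resulting ray with a surface of the reconstructed map. The sketch below uses a single wall plane as a stand-in for the 3D map and omits the user-specific head-gaze calibration offset; all names are illustrative.

```python
import numpy as np

def gaze_point_on_wall(device_pose, gaze_dir_device, wall_point, wall_normal):
    """Transform a phone-relative gaze direction (front camera) into world coordinates using the
    phone's 6-DoF pose (rear-camera self-localisation), then intersect the gaze ray with a wall
    plane standing in for the reconstructed 3D map. The user-specific calibration offset
    described in the abstract is omitted; all names are illustrative."""
    origin = device_pose[:3, 3]                        # eye position approximated by the phone pose
    direction = device_pose[:3, :3] @ np.asarray(gaze_dir_device, dtype=float)
    direction = direction / np.linalg.norm(direction)  # world-space gaze ray
    denom = float(np.dot(wall_normal, direction))
    if abs(denom) < 1e-6:
        return None                                    # gaze roughly parallel to the wall
    t = float(np.dot(wall_normal, np.asarray(wall_point) - origin)) / denom
    return None if t < 0 else origin + t * direction
```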
|
Guo, Anhong |
ISS '22: "XSpace: An Augmented Reality ..."
XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling (University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA) Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration; however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit, and also provides a set of complementary visual authoring tools to allow developers to preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments. @Article{ISS22p568, author = {Jaylin Herskovitz and Yi Fei Cheng and Anhong Guo and Alanson P. Sample and Michael Nebeling}, title = {XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {568}, numpages = {26}, doi = {10.1145/3567721}, year = {2022}, } Publisher's Version Video |
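One way to picture the "shared objects" method named in the abstract is as anchor-relative object state that each client can re-situate in its own room. The data layout and field names below are assumptions for illustration, not XSpace's actual API.

```python
from dataclasses import dataclass
import json

@dataclass
class SharedObject:
    # A networked object expressed relative to a shared anchor so that each client
    # can re-situate it in its own room. Field names are illustrative, not XSpace's API.
    object_id: str
    anchor_id: str
    position: tuple   # (x, y, z) offset from the anchor, metres
    rotation: tuple   # quaternion (x, y, z, w) relative to the anchor

def encode(obj: SharedObject) -> str:
    return json.dumps({"id": obj.object_id, "anchor": obj.anchor_id,
                       "pos": obj.position, "rot": obj.rotation})

def decode(msg: str) -> SharedObject:
    d = json.loads(msg)
    return SharedObject(d["id"], d["anchor"], tuple(d["pos"]), tuple(d["rot"]))

msg = encode(SharedObject("note-1", "desk-anchor", (0.1, 0.0, 0.2), (0.0, 0.0, 0.0, 1.0)))
print(decode(msg))
```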
|
Hartcher-O'Brien, Jess |
ISS '22: "Extended Mid-air Ultrasound ..."
Extended Mid-air Ultrasound Haptics for Virtual Reality
Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla (LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany) Mid-air haptics allow bare-hand tactile stimulation; however, they have a constrained workspace, making them unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array following the user's hand. We used a 6DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three evaluations. First, we performed a technical system evaluation, showcasing the feasibility of such a system. Next, we conducted three psychophysical experiments, showing that, with high likelihood, the motion does not affect the user's perception. Lastly, in a user study, we explored seven use cases that showcase our system's potential. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale. @Article{ISS22p578, author = {Steeven Villa and Sven Mayer and Jess Hartcher-O'Brien and Albrecht Schmidt and Tonja-Katrin Machulla}, title = {Extended Mid-air Ultrasound Haptics for Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {578}, numpages = {25}, doi = {10.1145/3567731}, year = {2022}, } Publisher's Version |
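Two pieces of such a system can be sketched generically: a control tick that keeps the array under the tracked hand, and the conversion of a world-space focal point into the moving array's own frame. The standoff, step limit, and frame conventions below are assumptions, not the system's parameters.

```python
import numpy as np

def follow_hand(array_pos, hand_pos, standoff=0.15, max_step=0.02):
    """One control tick of a hypothetical hand-following loop: keep the array a fixed standoff
    below the palm, moving at most max_step metres per tick. Gains and distances are
    assumptions, not the system's parameters."""
    target = np.asarray(hand_pos, dtype=float) - np.array([0.0, standoff, 0.0])
    delta = target - np.asarray(array_pos, dtype=float)
    dist = np.linalg.norm(delta)
    if dist > max_step:
        delta = delta / dist * max_step  # clamp per-tick motion of the robot end effector
    return np.asarray(array_pos, dtype=float) + delta

def focal_point_local(array_pose, world_focus):
    """Express a desired world-space focal point in the moving array's own frame,
    which is what the phased array ultimately renders."""
    return np.linalg.inv(array_pose) @ np.append(np.asarray(world_focus, dtype=float), 1.0)
```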
|
Herskovitz, Jaylin |
ISS '22: "UbiChromics: Enabling Ubiquitously ..."
UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint
Amani Alkayyali, Yasha Iravantchi, Jaylin Herskovitz, and Alanson P. Sample (University of Michigan, Ann Arbor, USA) Pervasive and interactive displays promise to present our digital content seamlessly throughout our environment. However, traditional display technologies do not scale to room-wide applications due to high per-unit-area costs and the need for constant wired power and data infrastructure. This research proposes the use of photochromic paint as a display medium. Applying the paint to any surface or object creates ultra-low-cost displays, which can change color when exposed to specific wavelengths of light. We develop new paint formulations that enable wide-area application of photochromic material, along with a specially modified wide-area laser projector and depth camera that can draw custom images and create on-demand, room-wide user interfaces on photochromic-enabled surfaces. System parameters such as light intensity, material activation time, and user readability are examined to optimize the display. Results show that images and user interfaces can last up to 16 minutes and can be updated indefinitely. Finally, usage scenarios such as displaying static and dynamic images, ephemeral notifications, and the creation of on-demand interfaces, such as light switches and music controllers, are demonstrated and explored. Ultimately, the UbiChromics system demonstrates the possibility of extending digital content to all painted surfaces. @Article{ISS22p561, author = {Amani Alkayyali and Yasha Iravantchi and Jaylin Herskovitz and Alanson P. Sample}, title = {UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {561}, numpages = {25}, doi = {10.1145/3567714}, year = {2022}, } Publisher's Version Video Info ISS '22: "XSpace: An Augmented Reality ..." XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling (University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA) Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration; however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit, and also provides a set of complementary visual authoring tools to allow developers to preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments. @Article{ISS22p568, author = {Jaylin Herskovitz and Yi Fei Cheng and Anhong Guo and Alanson P. Sample and Michael Nebeling}, title = {XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration}, journal = {Proc. ACM Hum. Comput.
Interact.}, volume = {6}, number = {ISS}, articleno = {568}, numpages = {26}, doi = {10.1145/3567721}, year = {2022}, } Publisher's Version Video |
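Since the UbiChromics abstract above reports that drawn content persists for up to 16 minutes, an application layer would plausibly need to re-expose regions before they fade. The toy scheduler below illustrates that idea; the refresh margin and the draw() callback are assumptions, not part of UbiChromics.

```python
import heapq
import time

class PhotochromicScheduler:
    """Toy scheduler that re-exposes each drawn region before its colour fades. The 16-minute
    persistence comes from the abstract; the refresh margin and the draw() callback (e.g. code
    that steers the laser projector) are assumptions, not part of UbiChromics."""

    PERSISTENCE_S = 16 * 60

    def __init__(self, draw, margin_s=60):
        self.draw = draw    # callable(region_id) -> None
        self.margin = margin_s
        self.queue = []     # heap of (refresh_deadline, region_id); region_id should be a comparable key

    def show(self, region_id):
        self.draw(region_id)
        heapq.heappush(self.queue, (time.time() + self.PERSISTENCE_S - self.margin, region_id))

    def tick(self):
        now = time.time()
        while self.queue and self.queue[0][0] <= now:
            _, region_id = heapq.heappop(self.queue)
            self.show(region_id)  # re-expose before the previous exposure fades
```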
|
Hiesinger, Alexander |
ISS '22: "SaferHome: Interactive Physical ..."
SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders
Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger (LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland) Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or are unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view and physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks that are impacted by technical affinity, device types, device ownership, and tangibility of assessments. @Article{ISS22p586, author = {Maximiliane Windl and Alexander Hiesinger and Robin Welsch and Albrecht Schmidt and Sebastian S. Feger}, title = {SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {586}, numpages = {20}, doi = {10.1145/3567739}, year = {2022}, } Publisher's Version Info |
|
Hornbæk, Kasper |
ISS '22: "Investigating Pointing Performance ..."
Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander (University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK) One of the most fundamental interactions –pointing– is well understood on flat surfaces. However, pointing performance on tangible surfaces with physical targets is still limited for Tangible User Interfaces (TUIs). We investigate the effect of a target’s physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r2=0.954). Analysis shows that movement direction and height should be included as parameters to this model to generalize for 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need to be investigated further to understand how performance with tangible objects is affected. @Article{ISS22p583, author = {Aluna Everitt and Anne Roudaut and Kasper Hornbæk and Mike Fraser and Jason Alexander}, title = {Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {583}, numpages = {23}, doi = {10.1145/3567736}, year = {2022}, } Publisher's Version |
|
Ikematsu, Kaori |
ISS '22: "TetraForce: A Magnetic-Based ..."
TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone
Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate the fundamental user performance by our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people who were involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. @Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
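A magnetometer-based force reading like TetraForce's can be caricatured as thresholding the change in field caused by the displaced magnet. The axis convention, thresholds, and the simple front/back rule below are assumptions; the paper additionally uses the accelerometer and gyroscope, which this sketch omits.

```python
import numpy as np

def classify_force(mag_now, mag_baseline, press_thresh=8.0, shear_thresh=8.0):
    """Threshold the change in the built-in magnetometer reading (microtesla) caused by the
    displaced back-panel magnet. The axis convention, thresholds, and the simple front/back rule
    are assumptions; the paper additionally uses the accelerometer and gyroscope, omitted here."""
    delta = np.asarray(mag_now, dtype=float) - np.asarray(mag_baseline, dtype=float)
    dz = delta[2]                            # assumed component normal to the back panel
    dxy = float(np.linalg.norm(delta[:2]))   # assumed in-plane (shear) component
    if dxy > shear_thresh and dxy > abs(dz):
        return "shear"                       # panel slid sideways under the fingers
    if dz > press_thresh:
        return "pressure_back"               # back panel pressed toward the phone body
    if dz < -press_thresh:
        return "pressure_front"              # front pressed while the panel rests on the fingers
    return "none"

print(classify_force([3.0, 1.0, 22.0], [2.0, 1.0, 10.0]))  # -> pressure_back
```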
|
Imtiaz, Syeda Aniqa |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Iravantchi, Yasha |
ISS '22: "UbiChromics: Enabling Ubiquitously ..."
UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint
Amani Alkayyali, Yasha Iravantchi, Jaylin Herskovitz, and Alanson P. Sample (University of Michigan, Ann Arbor, USA) Pervasive and interactive displays promise to present our digital content seamlessly throughout our environment. However, traditional display technologies do not scale to room-wide applications due to high per-unit-area costs and the need for constant wired power and data infrastructure. This research proposes the use of photochromic paint as a display medium. Applying the paint to any surface or object creates ultra-low-cost displays, which can change color when exposed to specific wavelengths of light. We develop new paint formulations that enable wide-area application of photochromic material, along with a specially modified wide-area laser projector and depth camera that can draw custom images and create on-demand, room-wide user interfaces on photochromic-enabled surfaces. System parameters such as light intensity, material activation time, and user readability are examined to optimize the display. Results show that images and user interfaces can last up to 16 minutes and can be updated indefinitely. Finally, usage scenarios such as displaying static and dynamic images, ephemeral notifications, and the creation of on-demand interfaces, such as light switches and music controllers, are demonstrated and explored. Ultimately, the UbiChromics system demonstrates the possibility of extending digital content to all painted surfaces. @Article{ISS22p561, author = {Amani Alkayyali and Yasha Iravantchi and Jaylin Herskovitz and Alanson P. Sample}, title = {UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {561}, numpages = {25}, doi = {10.1145/3567714}, year = {2022}, } Publisher's Version Video Info |
|
Jetter, Hans-Christian |
ISS '22: "Re-locations: Augmenting Personal ..."
Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
Daniel Immanuel Fink, Johannes Zagermann, Harald Reiterer, and Hans-Christian Jetter (University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR. @Article{ISS22p556, author = {Daniel Immanuel Fink and Johannes Zagermann and Harald Reiterer and Hans-Christian Jetter}, title = {Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {556}, numpages = {30}, doi = {10.1145/3567709}, year = {2022}, } Publisher's Version |
|
Jouffrais, Christophe |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, in the case of people with visual impairments, the regular tools relying heavily on images and videos were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when they learn remotely as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute with an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version |
|
Katsuragawa, Keiko |
ISS '22: "Conductor: Intersection-Based ..."
Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality
Futian Zhang, Keiko Katsuragawa, and Edward Lank (University of Waterloo, Waterloo, Canada; National Research Council, Waterloo, Canada; University of Lille, Lille, France) Pointing is an elementary interaction in virtual and augmented reality environments, and, to effectively support selection, techniques must deal with the challenges of occlusion and depth specification. Most of the previous techniques require two explicit steps to handle occlusion. In this paper, we propose Conductor, an intuitive, plane-ray, intersection-based, 3D pointing technique where users leverage bimanual input to control a ray and intersecting plane. Conductor allows users to use the non-dominant hand to adjust the cursor distance on the ray while pointing with the dominant hand. We evaluate Conductor against Raycursor, a state-of-the-art VR pointing technique, and show that Conductor outperforms Raycursor for selection tasks. Given our results, we argue that bimanual selection techniques merit additional exploration to support object selection and placement within virtual environments. @Article{ISS22p560, author = {Futian Zhang and Keiko Katsuragawa and Edward Lank}, title = {Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {560}, numpages = {15}, doi = {10.1145/3567713}, year = {2022}, } Publisher's Version Video |
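The plane-ray idea is ordinary ray-plane intersection: the dominant hand supplies the ray, the non-dominant hand the depth plane, and the cursor sits at their intersection. The sketch below is that generic geometry, not Conductor's implementation.

```python
import numpy as np

def conductor_cursor(ray_origin, ray_dir, plane_point, plane_normal):
    """Cursor placement for a bimanual plane-ray technique: the dominant hand defines the ray,
    the non-dominant hand a plane, and the cursor is their intersection. A generic geometric
    sketch, not Conductor's implementation."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < 1e-6:
        return None                                  # plane parallel to the ray
    t = float(np.dot(plane_normal, np.asarray(plane_point) - ray_origin)) / denom
    return None if t < 0 else ray_origin + t * ray_dir

# Example: ray from the right hand straight ahead, depth plane held 2 m away
print(conductor_cursor(np.array([0.0, 1.5, 0.0]), np.array([0.0, 0.0, -1.0]),
                       np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))  # -> [0. 1.5 -2.]
```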
|
Kim, Young-Ho |
ISS '22: "NoteWordy: Investigating Touch ..."
NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe (City University of Hong Kong, Hong Kong, China; Microsoft Research, Redmond, USA; University of Maryland, College Park, USA; NAVER AI Lab, Seongnam, South Korea) Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data. @Article{ISS22p581, author = {Yuhan Luo and Bongshin Lee and Young-Ho Kim and Eun Kyoung Choe}, title = {NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {581}, numpages = {24}, doi = {10.1145/3567734}, year = {2022}, } Publisher's Version Video |
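A speech entry in such a tracker ultimately has to be split into structured fields plus free-form notes. The toy parser below illustrates one hypothetical way to do that; the field names and phrase patterns are assumptions, not NoteWordy's grammar.

```python
import re

def parse_entry(transcript):
    """Toy parser splitting a spoken productivity entry into structured fields plus free-form
    notes. Field names and phrasing patterns are assumptions used for illustration."""
    entry = {"duration_min": None, "activity": None, "note": transcript.strip()}
    m = re.search(r"(\d+)\s*(hour|hr|minute|min)s?", transcript, re.I)
    if m:
        value = int(m.group(1))
        entry["duration_min"] = value * 60 if m.group(2).lower().startswith(("hour", "hr")) else value
    m = re.search(r"\b(writing|reading|coding|meeting|email)\b", transcript, re.I)
    if m:
        entry["activity"] = m.group(1).lower()
    return entry

print(parse_entry("Spent about 45 minutes writing the related work section, felt focused."))
```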
|
Kitamura, Yoshifumi |
ISS '22: "HandyGaze: A Gaze Tracking ..."
HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone
Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan) We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that can be carried out by simply holding a smartphone naturally without installing any sensors or markers in the environment. Our technique simultaneously employs the smartphone’s front and rear cameras: The front camera estimates the user’s gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization by reconstructing a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user’s 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique’s positional accuracy to the gaze target under four conditions, based on combinations of use with and without a depth sensor and calibration. The results show that our calibration method was able to reduce the mean absolute error of the gaze point by 27%, with an error of 0.53 m when using the depth sensor. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications such as a gaze-based guidance application for museums. @Article{ISS22p562, author = {Takahiro Nagai and Kazuyuki Fujita and Kazuki Takashima and Yoshifumi Kitamura}, title = {HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {562}, numpages = {18}, doi = {10.1145/3567715}, year = {2022}, } Publisher's Version Archive submitted (74 MB) ISS '22: "TetraForce: A Magnetic-Based ..." TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate the fundamental user performance by our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people who were involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. 
@Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
|
Kljun, Matjaž |
ISS '22: "Dynamic Pinhole Paper: Interacting ..."
Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper
Cuauhtli Campos, Matjaž Kljun, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia) Its ubiquitous nature, versatility, and durability have enabled paper to maintain its importance in the digital age. It is not surprising that there have been numerous attempts to combine paper with digital content. One way to do so is to place paper on a horizontal interactive display (e.g. tabletop or tablet). The paper thus becomes "the screen" on which the digital content is viewed, yet it also acts as a barrier that degrades the quality of the perceived image. This research tries to address this problem by proposing and evaluating a novel paper display concept called Dynamic pinhole paper. The concept is based on perforating the paper (to decrease its opacity) and moving digital content beneath the perforated area (to increase the resolution). To evaluate this novel concept, we fabricated the pinhole paper and implemented the software in order to run multiple user studies exploring the concept’s viability, optimal movement trajectory (amount, direction and velocity), and the effect of perforation on printing, writing and reading. The results show that the movement of digital content is a highly effective strategy for improving the resolution of the digital content through perforation, where the optimal velocity is independent of trajectory direction (e.g. horizontal or circular) and amount of movement. Results also show the concept is viable on off-the-shelf hardware and that it is possible to write and print on perforated paper. @Article{ISS22p567, author = {Cuauhtli Campos and Matjaž Kljun and Klen Čopič Pucihar}, title = {Dynamic Pinhole Paper: Interacting with Horizontal Displays through Perforated Paper}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {567}, numpages = {23}, doi = {10.1145/3567720}, year = {2022}, } Publisher's Version Video Info ISS '22: "A Survey of Augmented Piano ..." A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences? Jordan Aiko Deja, Sven Mayer, Klen Čopič Pucihar, and Matjaž Kljun (University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines; LMU Munich, Munich, Germany) Humans have been developing and playing musical instruments for millennia. With technological advancements, instruments have become ever more sophisticated. In recent decades, computer-supported innovations have also been introduced in hardware design, usability, and aesthetics. One of the most commonly digitally augmented instruments is the piano. Besides electronic keyboards, several prototypes augmenting pianos with different projections providing various levels of interactivity on and around the keyboard have been implemented in order to support piano players. However, it is still unclear whether these solutions support the learning process. In this paper, we present a systematic review of augmented piano prototypes focusing on instrument learning, based on four themes derived from interviews with piano experts to better understand the problems of teaching the piano. These themes are (i) synchronised movement and body posture, (ii) sight-reading, (iii) ensuring motivation, and (iv) encouraging improvisation. We found that prototypes are saturated around the synchronisation theme, and that there are opportunities in the sight-reading, motivation, and improvisation themes.
We conclude by presenting recommendations on augmenting piano systems towards enriching the piano learning experience as well as on possible directions to expand knowledge in the area. @Article{ISS22p566, author = {Jordan Aiko Deja and Sven Mayer and Klen Čopič Pucihar and Matjaž Kljun}, title = {A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {566}, numpages = {28}, doi = {10.1145/3567719}, year = {2022}, } Publisher's Version Info ISS '22: "LightMeUp: Back-print Illumination ..." LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals Cuauhtli Campos, Matjaž Kljun, Jakub Sandak, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia; InnoRenew CoE, Isola, Slovenia) Despite the drive to digitise learning, paper still holds a prominent role within educational settings. While computational devices have several advantages over paper (e.g. changing and showing content based on user interaction and needs) their prolonged or incorrect usage can hinder educational achievements. In this paper, we combine the interactivity of computational devices with paper whilst reducing the usage of technology to the minimum. To this end, we developed and evaluated a novel back-print illumination paper display called LightMeUp where different information printed on the back side of the paper becomes visible when paper is placed on an interactive display and back-illuminated with a particular colour. To develop this novel display, we first built a display simulator that enables the simulation of various spectral characteristics of the elements used in the system (i.e. light sources such as tablet computers, paper types and printing inks). By using our simulator, we designed various use-case prototypes that demonstrate the capabilities and feasibility of the proposed system. With our simulator and use-cases presented, educators and educational content designers can easily design multi-stable interactive visuals by using readily available paper, printers and touch displays. @Article{ISS22p573, author = {Cuauhtli Campos and Matjaž Kljun and Jakub Sandak and Klen Čopič Pucihar}, title = {LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {573}, numpages = {23}, doi = {10.1145/3570333}, year = {2022}, } Publisher's Version Video Info |
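The multi-stable effect LightMeUp describes, in which different printed layers appear under different backlight colours, can be approximated with a coarse per-channel model: perceived colour is the backlight filtered by the ink and the paper. The three-channel values below are invented for illustration, not measured spectral characteristics like those in the paper's simulator.

```python
import numpy as np

def perceived_rgb(backlight_rgb, ink_transmittance_rgb, paper_transmittance_rgb):
    """Coarse per-channel stand-in for the spectral simulation described in the abstract:
    the viewer sees the backlight filtered by the printed ink and the paper."""
    return (np.asarray(backlight_rgb, dtype=float)
            * np.asarray(ink_transmittance_rgb, dtype=float)
            * np.asarray(paper_transmittance_rgb, dtype=float))

paper = [0.8, 0.8, 0.8]          # invented transmittance values, not measured characteristics
cyan_ink = [0.1, 0.9, 0.9]       # passes green/blue, blocks red
magenta_ink = [0.9, 0.1, 0.9]    # passes red/blue, blocks green

# Under a red backlight the cyan layer nearly disappears while the magenta layer shows through:
print(perceived_rgb([1.0, 0.05, 0.05], cyan_ink, paper))     # ~[0.08 0.04 0.04] -> dark
print(perceived_rgb([1.0, 0.05, 0.05], magenta_ink, paper))  # ~[0.72 0.00 0.04] -> visible red
```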
|
Kosch, Thomas |
ISS '22: "ElectronicsAR: Design and ..."
ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit
Sebastian S. Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch (LMU Munich, Munich, Germany; TU Darmstadt, Darmstadt, Germany; Humboldt University of Berlin, Berlin, Germany) Exploring and interacting with electronics is challenging as the internal processes of components are not visible. Further barriers to engagement with electronics include fear of injury and hardware damage. In response, Augmented Reality (AR) applications address those challenges to make internal processes and the functionality of circuits visible. However, current apps are either limited to abstract low-fidelity applications or entirely virtual environments. We present ElectronicsAR, a tangible high-fidelity AR electronics kit with scaled hardware components representing the shape of real electronics. Our evaluation with 24 participants showed that users were more efficient and more effective at naming components, as well as building and debugging circuits. We discuss our findings in the context of ElectronicsAR's unique characteristics that we contrast with related work. Based on this, we discuss opportunities for future research to design functional mobile AR applications that meet the needs of beginners and experts. @Article{ISS22p587, author = {Sebastian S. Feger and Lars Semmler and Albrecht Schmidt and Thomas Kosch}, title = {ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {587}, numpages = {22}, doi = {10.1145/3567740}, year = {2022}, } Publisher's Version |
|
Lank, Edward |
ISS '22: "Leveraging Smartwatch and ..."
Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction
Hanae Rateau, Edward Lank, and Zhe Liu (University of Waterloo, Waterloo, Canada; Inria, Lille, France; Huawei, Markham, Canada) Due to the proliferation of smart wearables, it is now the case that designers can explore novel ways that devices can be used in combination by end-users. In this paper, we explore the gestural input enabled by the combination of smart earbuds coupled with a proximal smartwatch. We identify a consensus set of gestures and a taxonomy of the types of gestures participants create through an elicitation study. In a follow-on study conducted on Amazon's Mechanical Turk, we explore the social acceptability of gestures enabled by watch+earbud gesture capture. While elicited gestures continue to be simple, discrete, in-context actions, we find that elicited input is frequently abstract, varies in size and duration, and is split almost equally between on-body, proximal, and more distant actions. Together, our results provide guidelines for on-body, near-ear, and in-air input using earbuds and a smartwatch to support gesture capture. @Article{ISS22p557, author = {Hanae Rateau and Edward Lank and Zhe Liu}, title = {Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {557}, numpages = {20}, doi = {10.1145/3567710}, year = {2022}, } Publisher's Version ISS '22: "Conductor: Intersection-Based ..." Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality Futian Zhang, Keiko Katsuragawa, and Edward Lank (University of Waterloo, Waterloo, Canada; National Research Council, Waterloo, Canada; University of Lille, Lille, France) Pointing is an elementary interaction in virtual and augmented reality environments, and, to effectively support selection, techniques must deal with the challenges of occlusion and depth specification. Most of the previous techniques require two explicit steps to handle occlusion. In this paper, we propose Conductor, an intuitive, plane-ray, intersection-based, 3D pointing technique where users leverage bimanual input to control a ray and intersecting plane. Conductor allows users to use the non-dominant hand to adjust the cursor distance on the ray while pointing with the dominant hand. We evaluate Conductor against Raycursor, a state-of-the-art VR pointing technique, and show that Conductor outperforms Raycursor for selection tasks. Given our results, we argue that bimanual selection techniques merit additional exploration to support object selection and placement within virtual environments. @Article{ISS22p560, author = {Futian Zhang and Keiko Katsuragawa and Edward Lank}, title = {Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {560}, numpages = {15}, doi = {10.1145/3567713}, year = {2022}, } Publisher's Version Video |
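Consensus sets in elicitation studies are commonly summarised with an agreement rate; the sketch below computes the Vatavu-Wobbrock statistic AR(r) = sum_i |P_i|(|P_i|-1) / (|P|(|P|-1)) for one referent. It is shown only as a standard way such consensus might be derived; the paper does not necessarily report this exact measure, and the example proposals are made up.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) as defined by Vatavu and Wobbrock for elicitation studies:
    AR = sum_i |P_i|(|P_i|-1) / (|P|(|P|-1)), where the P_i are groups of identical proposals
    for one referent. A standard metric, not necessarily the one used in the paper."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# e.g. 20 hypothetical participants proposing gestures for an "answer call" referent
print(agreement_rate(["tap_earbud"] * 12 + ["double_tap_watch"] * 5 + ["swipe_watch"] * 3))  # ~0.416
```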
|
Lee, Benjamin |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment with more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interfaces resulted in similar speed, accuracy and social presence ratings, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative settings. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Lee, Bongshin |
ISS '22: "NoteWordy: Investigating Touch ..."
NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe (City University of Hong Kong, Hong Kong, China; Microsoft Research, Redmond, USA; University of Maryland, College Park, USA; NAVER AI Lab, Seongnam, South Korea) Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data. @Article{ISS22p581, author = {Yuhan Luo and Bongshin Lee and Young-Ho Kim and Eun Kyoung Choe}, title = {NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {581}, numpages = {24}, doi = {10.1145/3567734}, year = {2022}, } Publisher's Version Video |
|
Leeb, Helmut |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application which combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, showing how far we can already get and which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Limerick, Hannah |
ISS '22: "Push, Tap, Dwell, and Pinch: ..."
Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif (University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada) This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, HoverSelect) in a Fitts’ law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users’ spatial awareness. In particular, Push augmented with Hover & Select feedback is comparable to Tap. In addition, participants perceived the selection methods as faster, more accurate, and more physically and cognitively comfortable when augmented with haptic feedback. @Article{ISS22p565, author = {Tafadzwa Joseph Dube and Yuan Ren and Hannah Limerick and I. Scott MacKenzie and Ahmed Sabbir Arif}, title = {Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {565}, numpages = {19}, doi = {10.1145/3567718}, year = {2022}, } Publisher's Version Video |
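For reference, Fitts' law studies of this kind are commonly analysed with the Shannon formulation of the index of difficulty and its associated throughput measure. The following is the standard textbook form, reproduced as general background and not quoted from the paper itself:
\[ MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{A}{W} + 1\right), \qquad TP = \frac{ID_e}{MT}, \]
where MT is the movement time, A the movement amplitude, W the target width, a and b empirically fitted regression coefficients, and ID_e the effective index of difficulty computed from the observed endpoint spread.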
|
Liu, Can |
ISS '22: "AngleCAD: Surface-Based 3D ..."
AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli (City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA) 3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touch screens. We present AngleCAD, a set of novel interaction techniques that allow users to view and navigate a 3D space through folded screens, and to modify the 3D object using the physical support of touchscreens and folding angles. The design of these techniques was inspired by woodworking practices to support surface-based operations that allow users to cut, snap and taper objects directly with the touch screen, and extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience. @Article{ISS22p582, author = {Can Liu and Chenyue Dai and Qingzhou Ma and Brinda Mehra and Alvaro Cassinelli}, title = {AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {582}, numpages = {25}, doi = {10.1145/3567735}, year = {2022}, } Publisher's Version Video |
|
Liu, Jiazhou |
ISS '22: "Effects of Display Layout ..."
Effects of Display Layout on Spatial Memory for Immersive Environments
Jiazhou Liu, Arnaud Prouzeau, Barrett Ens, and Tim Dwyer (Monash University, Melbourne, Australia; Inria, Bordeaux, France) In immersive environments, positioning data visualisations around the user in a wraparound layout has been advocated as advantageous over flat arrangements more typical of traditional screens. However, other than limiting the distance users must walk, there is no clear design rationale behind this common practice, and little research on the impact of wraparound layouts on visualisation tasks. The ability to remember the spatial location of elements of visualisations within the display space is crucial to support visual analytical tasks, especially those that require users to shift their focus or perform comparisons. This ability is influenced by the user's spatial memory, but how spatial memory is affected by different display layouts remains unclear. In this paper, we perform two user studies to evaluate the effects of three layouts with varying degrees of curvature around the user (flat-wall, semicircular-wraparound, and circular-wraparound) on a visuo-spatial memory task in a virtual environment. The results show that participants are able to recall spatial patterns with greater accuracy and report more positive subjective ratings using flat than circular-wraparound layouts. While we did not find any significant performance differences between the flat and semicircular-wraparound layouts, participants overwhelmingly preferred the semicircular-wraparound layout, suggesting that it is a good compromise between the two extremes of display curvature. @Article{ISS22p576, author = {Jiazhou Liu and Arnaud Prouzeau and Barrett Ens and Tim Dwyer}, title = {Effects of Display Layout on Spatial Memory for Immersive Environments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {576}, numpages = {21}, doi = {10.1145/3567729}, year = {2022}, } Publisher's Version Video |
|
Liu, Yuantong |
ISS '22: "Players and Performance: Opportunities ..."
Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara (University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore) This research aimed to investigate how children with autism interacted with rich audio and visual augmented reality (AR) tabletop games. Based on in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in games were rewarding and played critical roles in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research. @Article{ISS22p563, author = {Qin Wu and Rao Xu and Yuantong Liu and Danielle Lottridge and Suranga Nanayakkara}, title = {Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {563}, numpages = {24}, doi = {10.1145/3567716}, year = {2022}, } Publisher's Version |
|
Liu, Zhe |
ISS '22: "Leveraging Smartwatch and ..."
Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction
Hanae Rateau, Edward Lank, and Zhe Liu (University of Waterloo, Waterloo, Canada; Inria, Lille, France; Huawei, Markham, Canada) Due to the proliferation of smart wearables, designers can now explore novel ways for end-users to use devices in combination. In this paper, we explore the gestural input enabled by smart earbuds coupled with a proximal smartwatch. Through an elicitation study, we identify a consensus set of gestures and a taxonomy of the types of gestures participants create. In a follow-on study conducted on Amazon's Mechanical Turk, we explore the social acceptability of gestures enabled by watch+earbud gesture capture. While elicited gestures continue to be simple, discrete, in-context actions, we find that elicited input is frequently abstract, varies in size and duration, and is split almost equally between on-body, proximal, and more distant actions. Together, our results provide guidelines for on-body, near-ear, and in-air input using earbuds and a smartwatch to support gesture capture. @Article{ISS22p557, author = {Hanae Rateau and Edward Lank and Zhe Liu}, title = {Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {557}, numpages = {20}, doi = {10.1145/3567710}, year = {2022}, } Publisher's Version |
|
Lottridge, Danielle |
ISS '22: "Players and Performance: Opportunities ..."
Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara (University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore) This research aimed to investigate how children with autism interacted with rich audio and visual augmented reality (AR) tabletop games. Based on in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in games were rewarding and played critical roles in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research. @Article{ISS22p563, author = {Qin Wu and Rao Xu and Yuantong Liu and Danielle Lottridge and Suranga Nanayakkara}, title = {Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {563}, numpages = {24}, doi = {10.1145/3567716}, year = {2022}, } Publisher's Version |
|
Luo, Yuhan |
ISS '22: "NoteWordy: Investigating Touch ..."
NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe (City University of Hong Kong, Hong Kong, China; Microsoft Research, Redmond, USA; University of Maryland, College Park, USA; NAVER AI Lab, Seongnam, South Korea) Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data. @Article{ISS22p581, author = {Yuhan Luo and Bongshin Lee and Young-Ho Kim and Eun Kyoung Choe}, title = {NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {581}, numpages = {24}, doi = {10.1145/3567734}, year = {2022}, } Publisher's Version Video |
|
Ma, Qingzhou |
ISS '22: "AngleCAD: Surface-Based 3D ..."
AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli (City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA) 3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touch screens. We present AngleCAD, a set of novel interaction techniques that allow users to view and navigate a 3D space through folded screens, and to modify the 3D object using the physical support of touchscreens and folding angles. The design of these techniques was inspired by woodworking practices to support surface-based operations that allow users to cut, snap and taper objects directly with the touch screen, and extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience. @Article{ISS22p582, author = {Can Liu and Chenyue Dai and Qingzhou Ma and Brinda Mehra and Alvaro Cassinelli}, title = {AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {582}, numpages = {25}, doi = {10.1145/3567735}, year = {2022}, } Publisher's Version Video |
|
Machulla, Tonja-Katrin |
ISS '22: "Extended Mid-air Ultrasound ..."
Extended Mid-air Ultrasound Haptics for Virtual Reality
Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla (LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany) Mid-air haptic devices allow bare-hand tactile stimulation; however, they have a constrained workspace, making them unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array following the user's hand. We used a 6DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three sets of studies. First, a technical system evaluation showcased the feasibility of such a system. Next, we conducted three psychophysical experiments, showing that, with high likelihood, the motion does not affect the user's perception. Lastly, in a user study, we explored seven use cases that showcase our system's potential. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale. @Article{ISS22p578, author = {Steeven Villa and Sven Mayer and Jess Hartcher-O'Brien and Albrecht Schmidt and Tonja-Katrin Machulla}, title = {Extended Mid-air Ultrasound Haptics for Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {578}, numpages = {25}, doi = {10.1145/3567731}, year = {2022}, } Publisher's Version |
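As a rough illustration of the dynamic, hand-following array described above, the Python sketch below shows a generic follow-the-hand control loop. All interfaces and parameters here (get_hand_pose, Robot6DOF, FOCAL_DISTANCE, MAX_SPEED) are hypothetical placeholders chosen for the example; they are not the authors' implementation or API.

import time
import numpy as np

# Hypothetical placeholders, not the authors' system or API.
def get_hand_pose():
    """Return (palm position in metres, outward palm normal) in the robot's
    base frame. A real system would query a hand tracker; here we return a
    fixed dummy value so the sketch runs."""
    return np.array([0.4, 0.0, 1.2]), np.array([0.0, 0.0, -1.0])

class Robot6DOF:
    """Stand-in for the robot arm that carries the ultrasound array."""
    def __init__(self):
        self.position = np.zeros(3)
    def move_towards(self, target, max_speed, dt):
        # Step towards the target, limited to max_speed metres per second.
        step = target - self.position
        dist = float(np.linalg.norm(step))
        if dist > max_speed * dt:
            step *= (max_speed * dt) / dist
        self.position = self.position + step

FOCAL_DISTANCE = 0.20  # assumed palm-to-array rendering distance in metres
MAX_SPEED = 0.5        # assumed end-effector speed limit in metres per second
DT = 0.01              # roughly 100 Hz control loop

robot = Robot6DOF()
for _ in range(100):  # a real system would loop for the whole session
    palm_position, palm_normal = get_hand_pose()
    # Keep the array at the focal distance along the outward palm normal,
    # so the static array effectively becomes a dynamic, hand-following one.
    target = palm_position + FOCAL_DISTANCE * palm_normal
    robot.move_towards(target, MAX_SPEED, DT)
    time.sleep(DT)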
|
MacKenzie, I. Scott |
ISS '22: "Push, Tap, Dwell, and Pinch: ..."
Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif (University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada) This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, HoverSelect) in a Fitts’ law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users’ spatial awareness. In particular, Push augmented with Hover & Select feedback is comparable to Tap. In addition, participants perceived the selection methods as faster, more accurate, and more physically and cognitively comfortable when augmented with haptic feedback. @Article{ISS22p565, author = {Tafadzwa Joseph Dube and Yuan Ren and Hannah Limerick and I. Scott MacKenzie and Ahmed Sabbir Arif}, title = {Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {565}, numpages = {19}, doi = {10.1145/3567718}, year = {2022}, } Publisher's Version Video |
|
Manshaei, Roozbeh |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Marquardt, Nicolai |
ISS '22: "Eliciting User-Defined Touch ..."
Eliciting User-Defined Touch and Mid-air Gestures for Co-located Mobile Gaming
Chloe Ng and Nicolai Marquardt (University College London, London, UK) Many interaction techniques have been developed to best support mobile gaming – but developed gestures and techniques might not always match user behaviour or preferences. To inform this design space of gesture input for co-located mobile gaming, we present insights from a gesture elicitation user study for touch and mid-air input, specifically focusing on board and card games due to the materiality of game artefacts and rich interaction between players. We obtained touch and mid-air gesture proposals for 11 game tasks with 12 dyads and gained insights into user preferences. We contribute a classification and analysis of 622 elicited gestures (showing more collaborative gestures in the mid-air modality), a resulting consensus gesture set, and agreement rates showing higher consensus for touch gestures. Furthermore, we identified interaction patterns – such as benefits of situational awareness, social etiquette, gestures fostering interaction between players, and roles of gestures providing fun, excitement, and suspense to the game – which can inform future games and gesture design. @Article{ISS22p569, author = {Chloe Ng and Nicolai Marquardt}, title = {Eliciting User-Defined Touch and Mid-air Gestures for Co-located Mobile Gaming}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {569}, numpages = {25}, doi = {10.1145/3567722}, year = {2022}, } Publisher's Version |
|
Masood, Kashaf |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Mayat, Uzair |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Mayer, Sven |
ISS '22: "Extended Mid-air Ultrasound ..."
Extended Mid-air Ultrasound Haptics for Virtual Reality
Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla (LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany) Mid-air haptic devices allow bare-hand tactile stimulation; however, they have a constrained workspace, making them unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array following the user's hand. We used a 6DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three sets of studies. First, a technical system evaluation showcased the feasibility of such a system. Next, we conducted three psychophysical experiments, showing that, with high likelihood, the motion does not affect the user's perception. Lastly, in a user study, we explored seven use cases that showcase our system's potential. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale. @Article{ISS22p578, author = {Steeven Villa and Sven Mayer and Jess Hartcher-O'Brien and Albrecht Schmidt and Tonja-Katrin Machulla}, title = {Extended Mid-air Ultrasound Haptics for Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {578}, numpages = {25}, doi = {10.1145/3567731}, year = {2022}, } Publisher's Version ISS '22: "A Survey of Augmented Piano ..." A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences? Jordan Aiko Deja, Sven Mayer, Klen Čopič Pucihar, and Matjaž Kljun (University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines; LMU Munich, Munich, Germany) Humans have been developing and playing musical instruments for millennia. With technological advancements, instruments have become ever more sophisticated. In recent decades computer-supported innovations have also been introduced in hardware design, usability, and aesthetics. One of the most commonly digitally augmented instruments is the piano. Besides electronic keyboards, several prototypes augmenting pianos with different projections providing various levels of interactivity on and around the keyboard have been implemented in order to support piano players. However, it is still unclear whether these solutions support the learning process. In this paper, we present a systematic review of augmented piano prototypes focusing on instrument learning, based on four themes derived from interviews with piano experts to better understand the problems of teaching the piano. These themes are (i) synchronised movement and body posture, (ii) sight-reading, (iii) ensuring motivation, and (iv) encouraging improvisation. We found that prototypes are saturated on the synchronisation theme, while opportunities remain for the sight-reading, motivation, and improvisation themes. We conclude by presenting recommendations on augmenting piano systems towards enriching the piano learning experience as well as on possible directions to expand knowledge in the area. 
@Article{ISS22p566, author = {Jordan Aiko Deja and Sven Mayer and Klen Čopič Pucihar and Matjaž Kljun}, title = {A Survey of Augmented Piano Prototypes: Has Augmentation Improved Learning Experiences?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {566}, numpages = {28}, doi = {10.1145/3567719}, year = {2022}, } Publisher's Version Info |
|
Mazalek, Ali |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Mbaki Luzayisu, Efrem |
ISS '22: "Theoretically-Defined vs. ..."
Theoretically-Defined vs. User-Defined Squeeze Gestures
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu (Université Catholique de Louvain, Louvain-la-Neuve, Belgium; University of Kinshasa, Kinshasa, Congo) This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants X 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) to end up with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirm or disconfirm consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects. @Article{ISS22p559, author = {Santiago Villarreal-Narvaez and Arthur Sluÿters and Jean Vanderdonckt and Efrem Mbaki Luzayisu}, title = {Theoretically-Defined vs. User-Defined Squeeze Gestures}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {559}, numpages = {30}, doi = {10.1145/3567805}, year = {2022}, } Publisher's Version Video |
|
Mehra, Brinda |
ISS '22: "AngleCAD: Surface-Based 3D ..."
AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens
Can Liu, Chenyue Dai, Qingzhou Ma, Brinda Mehra, and Alvaro Cassinelli (City University of Hong Kong, Hong Kong, China; Massachusetts Institute of Technology, Cambridge, USA; University of Michigan, Ann Arbor, USA) 3D modelling and printing are becoming increasingly popular. However, beginners often face high barriers to entry when trying to use existing 3D modelling tools, even for creating simple objects. This is further complicated on mobile devices by the lack of direct manipulation in the Z dimension. In this paper, we explore the possibility of using foldable mobile devices for modelling simple objects by constructing a 2.5D display and interaction space with folded touch screens. We present AngleCAD, a set of novel interaction techniques that allow users to view and navigate a 3D space through folded screens, and to modify the 3D object using the physical support of touchscreens and folding angles. The design of these techniques was inspired by woodworking practices to support surface-based operations that allow users to cut, snap and taper objects directly with the touch screen, and extrude and drill them according to the physical fold angle. A preliminary study identified the benefits of this approach and the key design factors that affect the user experience. @Article{ISS22p582, author = {Can Liu and Chenyue Dai and Qingzhou Ma and Brinda Mehra and Alvaro Cassinelli}, title = {AngleCAD: Surface-Based 3D Modelling Techniques on Foldable Touchscreens}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {582}, numpages = {25}, doi = {10.1145/3567735}, year = {2022}, } Publisher's Version Video |
|
Miyashita, Homei |
ISS '22: "The Effectiveness of Path-Segmentation ..."
The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths
Shota Yamanaka, Hiroki Usuba, Wolfgang Stuerzlinger, and Homei Miyashita (Yahoo, Tokyo, Japan; Yahoo, Chiyoda-ku, Japan; Simon Fraser University, Vancouver, Canada; Meiji University, Tokyo, Japan) Models of lassoing time to select multiple square icons exist, but realistic lasso tasks also typically involve encircling non-rectangular objects. Thus, it is unclear if we can apply existing models to such conditions where, e.g., the width of the path that users want to steer through changes dynamically or step-wise. In this work, we conducted two experiments where the objects were non-rectangular, with path widths that narrowed or widened, smoothly or step-wise. The results showed that the baseline models for pen-steering movements (the steering and crossing law models) fitted the timing data well, but also that segmenting width-changing areas led to significant improvements. Our work enables the modeling of novel UIs requiring continuous strokes, e.g., for grouping icons. @Article{ISS22p584, author = {Shota Yamanaka and Hiroki Usuba and Wolfgang Stuerzlinger and Homei Miyashita}, title = {The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {584}, numpages = {20}, doi = {10.1145/3567737}, year = {2022}, } Publisher's Version ISS '22: "Predicting Touch Accuracy ..." Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results Hiroki Usuba, Shota Yamanaka, Junichi Sato, and Homei Miyashita (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan) We propose a method that predicts the success rate in pointing to 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict the success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, thus saving costs for researchers and practitioners. We verified the method through two experiments: laboratory-based and crowdsourced ones. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases the prediction accuracy. In the crowdsourced experiment, this method scored better than using 2D task results. Thus, we recommend that researchers use the method properly depending on the situation. @Article{ISS22p579, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato and Homei Miyashita}, title = {Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {579}, numpages = {13}, doi = {10.1145/3567732}, year = {2022}, } Publisher's Version |
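For reference, the two baseline models named in the first abstract above have the following standard forms in the pointing literature, reproduced here as general background rather than quoted from the paper: the steering law, \( MT = a + b \int_{C} \frac{ds}{W(s)} \), models the time to steer along a path C whose local width is W(s), while the crossing law, \( MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) \), models the time to cross a goal of width W at distance D; in both, a and b are empirically fitted constants.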
|
Mulet, Julie |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, in the case of people with visual impairments (VI), the regular tools relying heavily on images and videos were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when learning remotely, as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version |
|
Nagai, Takahiro |
ISS '22: "HandyGaze: A Gaze Tracking ..."
HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone
Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan) We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that can be carried out by simply holding a smartphone naturally without installing any sensors or markers in the environment. Our technique simultaneously employs the smartphone’s front and rear cameras: The front camera estimates the user’s gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization by reconstructing a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user’s 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique’s positional accuracy to the gaze target under four conditions, based on combinations of use with and without a depth sensor and calibration. The results show that our calibration method was able to reduce the mean absolute error of the gaze point by 27%, with an error of 0.53 m when using the depth sensor. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications such as a gaze-based guidance application for museums. @Article{ISS22p562, author = {Takahiro Nagai and Kazuyuki Fujita and Kazuki Takashima and Yoshifumi Kitamura}, title = {HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {562}, numpages = {18}, doi = {10.1145/3567715}, year = {2022}, } Publisher's Version Archive submitted (74 MB) |
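One natural way to compose the two per-camera estimates described above into a world-space gaze ray, offered here as generic background rather than the paper's exact formulation: if the rear camera's self-localization yields the smartphone pose \( (R, \mathbf{t}) \) in the map frame, and the front camera yields the eye position \( \mathbf{e} \) and gaze direction \( \mathbf{g} \) in the phone frame, then the ray has origin \( \mathbf{o} = R\mathbf{e} + \mathbf{t} \) and direction \( \mathbf{d} = R\mathbf{g} / \lVert R\mathbf{g} \rVert \), and the gaze target is the first intersection of this ray with the reconstructed 3D map of the environment.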
|
Nanayakkara, Suranga |
ISS '22: "Players and Performance: Opportunities ..."
Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara (University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore) This research aimed to investigate how children with autism interacted with rich audio and visual augmented reality (AR) tabletop games. Based on in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in games were rewarding and played critical roles in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research. @Article{ISS22p563, author = {Qin Wu and Rao Xu and Yuantong Liu and Danielle Lottridge and Suranga Nanayakkara}, title = {Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {563}, numpages = {24}, doi = {10.1145/3567716}, year = {2022}, } Publisher's Version |
|
Nebeling, Michael |
ISS '22: "XSpace: An Augmented Reality ..."
XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling (University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA) Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration; however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit, and also provides a set of complementary visual authoring tools to allow developers to preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments. @Article{ISS22p568, author = {Jaylin Herskovitz and Yi Fei Cheng and Anhong Guo and Alanson P. Sample and Michael Nebeling}, title = {XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {568}, numpages = {26}, doi = {10.1145/3567721}, year = {2022}, } Publisher's Version Video |
|
Neumayr, Thomas |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application which combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, showing how far we can already get and which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Ng, Chloe |
ISS '22: "Eliciting User-Defined Touch ..."
Eliciting User-Defined Touch and Mid-air Gestures for Co-located Mobile Gaming
Chloe Ng and Nicolai Marquardt (University College London, London, UK) Many interaction techniques have been developed to best support mobile gaming – but developed gestures and techniques might not always match user behaviour or preferences. To inform this design space of gesture input for co-located mobile gaming, we present insights from a gesture elicitation user study for touch and mid-air input, specifically focusing on board and card games due to the materiality of game artefacts and rich interaction between players. We obtained touch and mid-air gesture proposals for 11 game tasks with 12 dyads and gained insights into user preferences. We contribute a classification and analysis of 622 elicited gestures (showing more collaborative gestures in the mid-air modality), a resulting consensus gesture set, and agreement rates showing higher consensus for touch gestures. Furthermore, we identified interaction patterns – such as benefits of situational awareness, social etiquette, gestures fostering interaction between players, and roles of gestures providing fun, excitement, and suspense to the game – which can inform future games and gesture design. @Article{ISS22p569, author = {Chloe Ng and Nicolai Marquardt}, title = {Eliciting User-Defined Touch and Mid-air Gestures for Co-located Mobile Gaming}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {569}, numpages = {25}, doi = {10.1145/3567722}, year = {2022}, } Publisher's Version |
|
Oriola, Bernard |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, in the case of people with visual impairments (VI), the regular tools relying heavily on images and videos were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when learning remotely, as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version |
|
Pandey, Laxmi |
ISS '22: "Design and Evaluation of a ..."
Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing
Laxmi Pandey and Ahmed Sabbir Arif (University of California, Merced, USA) We investigate silent speech as a hands-free selection method in eye-gaze pointing. We first propose a stripped-down image-based model that can recognize a small number of silent commands almost as fast as state-of-the-art speech recognition models. We then compare it with other hands-free selection methods (dwell, speech) in a Fitts' law study. Results revealed that speech and silent speech are comparable in throughput and selection time, but the latter is significantly more accurate than the other methods. A follow-up study revealed that target selection around the center of a display is significantly faster and more accurate, while selections around the top corners and the bottom are slower and more error-prone. We then present a method for selecting menu items with eye-gaze and silent speech. A study revealed that it significantly reduces task completion time and error rate. @Article{ISS22p570, author = {Laxmi Pandey and Ahmed Sabbir Arif}, title = {Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {570}, numpages = {26}, doi = {10.1145/3567723}, year = {2022}, } Publisher's Version Video |
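Background note: the Fitts' law comparison above reports throughput and selection time. The quantities typically computed in such studies follow the ISO 9241-9 style formulation sketched below; this is generic background, not taken from the paper, and the paper's exact computation may differ in detail.

```latex
% Generic Fitts' law quantities used when comparing selection methods
% (ISO 9241-9 style; background only, not the paper's exact formulation).
\[
ID_e = \log_2\!\left(\frac{D}{W_e} + 1\right), \qquad
W_e = 4.133\,\sigma, \qquad
TP = \frac{ID_e}{MT}
\]
% D   : distance to the target
% W_e : effective target width derived from the endpoint spread (sigma)
% MT  : mean selection time; TP is throughput in bits per second
```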
|
Perelman, Gary |
ISS '22: "Visual Transitions around ..."
Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops
Gary Perelman, Emmanuel Dubois, Alice Probst, and Marcos Serrano (University of Toulouse, Toulouse, France) See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such context, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper we conduct a study to explore visual acquisitions in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front and on the side of the tabletop). To further study the possible visual perception of the tabletop content out of the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e. the content is only visible inside the HMD’s FoV, or using the tabletop itself (tabletop screen). 12 participants performed visual acquisition tasks involving the horizontal and vertical displays. We measured the time to perform the task, the head movements, the portions of the displays visible in the HMD’s field of view, the physical fatigue and the user’s preference. Our results show that it is faster to acquire virtual targets in the front display than on the side. Results reveal that the use of the virtual overlay on the tabletop slows down the visual acquisition compared to the use of the tabletop screen, showing that users exploit the visual perception of the tabletop content on the peripheral visual space. We were also able to quantify when and to which extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e. by looking under the HMD. These results lead to design recommendations for more efficient, comfortable and integrated interfaces combining tabletop and surrounding vertical virtual displays. @Article{ISS22p585, author = {Gary Perelman and Emmanuel Dubois and Alice Probst and Marcos Serrano}, title = {Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {585}, numpages = {20}, doi = {10.1145/3567738}, year = {2022}, } Publisher's Version Video |
|
Pietriga, Emmanuel |
ISS '22: "Investigating the Use of AR ..."
Investigating the Use of AR Glasses for Content Annotation on Mobile Devices
Francesco Riccardo Di Gioia, Eugenie Brasier, Emmanuel Pietriga, and Caroline Appert (Université Paris-Saclay, Orsay, France; CNRS, Orsay, France; Inria, Orsay, France) Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real-estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting, and to showcase how it could support real-world use cases. @Article{ISS22p574, author = {Francesco Riccardo Di Gioia and Eugenie Brasier and Emmanuel Pietriga and Caroline Appert}, title = {Investigating the Use of AR Glasses for Content Annotation on Mobile Devices}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {574}, numpages = {18}, doi = {10.1145/3567727}, year = {2022}, } Publisher's Version Video Info |
|
Probst, Alice |
ISS '22: "Visual Transitions around ..."
Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops
Gary Perelman, Emmanuel Dubois, Alice Probst, and Marcos Serrano (University of Toulouse, Toulouse, France) See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such context, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper we conduct a study to explore visual acquisitions in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front and on the side of the tabletop). To further study the possible visual perception of the tabletop content out of the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e. the content is only visible inside the HMD’s FoV, or using the tabletop itself (tabletop screen). 12 participants performed visual acquisition tasks involving the horizontal and vertical displays. We measured the time to perform the task, the head movements, the portions of the displays visible in the HMD’s field of view, the physical fatigue and the user’s preference. Our results show that it is faster to acquire virtual targets in the front display than on the side. Results reveal that the use of the virtual overlay on the tabletop slows down the visual acquisition compared to the use of the tabletop screen, showing that users exploit the visual perception of the tabletop content on the peripheral visual space. We were also able to quantify when and to which extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e. by looking under the HMD. These results lead to design recommendations for more efficient, comfortable and integrated interfaces combining tabletop and surrounding vertical virtual displays. @Article{ISS22p585, author = {Gary Perelman and Emmanuel Dubois and Alice Probst and Marcos Serrano}, title = {Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {585}, numpages = {20}, doi = {10.1145/3567738}, year = {2022}, } Publisher's Version Video |
|
Prouzeau, Arnaud |
ISS '22: "Effects of Display Layout ..."
Effects of Display Layout on Spatial Memory for Immersive Environments
Jiazhou Liu, Arnaud Prouzeau, Barrett Ens, and Tim Dwyer (Monash University, Melbourne, Australia; Inria, Bordeaux, France) In immersive environments, positioning data visualisations around the user in a wraparound layout has been advocated as advantageous over flat arrangements more typical of traditional screens. However, other than limiting the distance users must walk, there is no clear design rationale behind this common practice, and little research on the impact of wraparound layouts on visualisation tasks. The ability to remember the spatial location of elements of visualisations within the display space is crucial to support visual analytical tasks, especially those that require users to shift their focus or perform comparisons. This ability is influenced by the user's spatial memory but how spatial memory is affected by different display layouts remains unclear. In this paper, we perform two user studies to evaluate the effects of three layouts with varying degrees of curvature around the user (flat-wall, semicircular-wraparound, and circular-wraparound) on a visuo-spatial memory task in a virtual environment. The results show that participants are able to recall spatial patterns with greater accuracy and report more positive subjective ratings using flat than circular-wraparound layouts. While we didn't find any significant performance differences between the flat and semicircular-wraparound layouts, participants overwhelmingly preferred the semicircular-wraparound layout suggesting it is a good compromise between the two extremes of display curvature. @Article{ISS22p576, author = {Jiazhou Liu and Arnaud Prouzeau and Barrett Ens and Tim Dwyer}, title = {Effects of Display Layout on Spatial Memory for Immersive Environments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {576}, numpages = {21}, doi = {10.1145/3567729}, year = {2022}, } Publisher's Version Video |
|
Rateau, Hanae |
ISS '22: "Leveraging Smartwatch and ..."
Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction
Hanae Rateau, Edward Lank, and Zhe Liu (University of Waterloo, Waterloo, Canada; Inria, Lille, France; Huawei, Markham, Canada) Due to the proliferation of smart wearables, it is now the case that designers can explore novel ways that devices can be used in combination by end-users. In this paper, we explore the gestural input enabled by the combination of smart earbuds coupled with a proximal smartwatch. We identify a consensus set of gestures and a taxonomy of the types of gestures participants create through an elicitation study. In a follow-on study conducted on Amazon's Mechanical Turk, we explore the social acceptability of gestures enabled by watch+earbud gesture capture. While elicited gestures continue to be simple, discrete, in-context actions, we find that elicited input is frequently abstract, varies in size and duration, and is split almost equally between on-body, proximal, and more distant actions. Together, our results provide guidelines for on-body, near-ear, and in-air input using earbuds and a smartwatch to support gesture capture. @Article{ISS22p557, author = {Hanae Rateau and Edward Lank and Zhe Liu}, title = {Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {557}, numpages = {20}, doi = {10.1145/3567710}, year = {2022}, } Publisher's Version |
|
Reiterer, Harald |
ISS '22: "Re-locations: Augmenting Personal ..."
Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
Daniel Immanuel Fink, Johannes Zagermann, Harald Reiterer, and Hans-Christian Jetter (University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR. @Article{ISS22p556, author = {Daniel Immanuel Fink and Johannes Zagermann and Harald Reiterer and Hans-Christian Jetter}, title = {Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {556}, numpages = {30}, doi = {10.1145/3567709}, year = {2022}, } Publisher's Version |
|
Ren, Yuan |
ISS '22: "Push, Tap, Dwell, and Pinch: ..."
Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback
Tafadzwa Joseph Dube, Yuan Ren, Hannah Limerick, I. Scott MacKenzie, and Ahmed Sabbir Arif (University of California, Merced, USA; Ultraleap, Bristol, UK; York University, Toronto, Canada) This work compares four mid-air target selection methods (Push, Tap, Dwell, Pinch) with two types of ultrasonic haptic feedback (Select, HoverSelect) in a Fitts’ law experiment. Results revealed that Tap is the fastest, the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error prone and physically and cognitively demanding. Dwell is slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users’ spatial awareness. Particularly, Push augmented with Hover & Select feedback is comparable to Tap. Besides, participants perceive the selection methods as faster, more accurate, and more physically and cognitively comfortable with the haptic feedback methods. @Article{ISS22p565, author = {Tafadzwa Joseph Dube and Yuan Ren and Hannah Limerick and I. Scott MacKenzie and Ahmed Sabbir Arif}, title = {Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {565}, numpages = {19}, doi = {10.1145/3567718}, year = {2022}, } Publisher's Version Video ISS '22: "TiltWalker: Operating a Telepresence ..." TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone Ghazal Zand, Yuan Ren, and Ahmed Sabbir Arif (University of California, Merced, USA) Mobile clients for telepresence robots are cluttered with interactive elements that either leave a little room for the camera feeds or occlude them. Many do not provide meaningful feedback on the robot's state and most require the use of both hands. These make maneuvering telepresence robots difficult with mobile devices. TiltWalker enables controlling a telepresence robot with one hand using tilt gestures with a smartphone. In a series of studies, we first justify the use of a Web platform, determine how far and fast users can tilt without compromising the comfort and the legibility of the display content, and identify a velocity-based function well-suited for control-display mapping. We refine TiltWalker based on the findings of these studies, then compare it with a default method in the final study. Results revealed that TiltWalker is significantly faster and more accurate than the default method. Besides, participants preferred TiltWalker's interaction methods and graphical feedback significantly more than those of the default method. @Article{ISS22p572, author = {Ghazal Zand and Yuan Ren and Ahmed Sabbir Arif}, title = {TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {572}, numpages = {26}, doi = {10.1145/3567725}, year = {2022}, } Publisher's Version Video |
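Background note: the TiltWalker abstract above mentions identifying a velocity-based function for control-display mapping between phone tilt and robot motion. The sketch below only illustrates what such a transfer function can look like; the dead zone, tilt range, speed cap, and exponent are hypothetical values, not TiltWalker's calibrated mapping.

```python
import math

def tilt_to_velocity(tilt_deg: float,
                     dead_zone_deg: float = 3.0,
                     max_tilt_deg: float = 25.0,
                     max_speed: float = 0.6,
                     exponent: float = 1.5) -> float:
    """Map a phone tilt angle (degrees) to a robot velocity (m/s).

    Illustrative only: all constants here are hypothetical, not the
    calibration reported in the TiltWalker paper.
    """
    magnitude = abs(tilt_deg)
    if magnitude < dead_zone_deg:
        return 0.0  # ignore small, unintentional tilts
    # Normalize the usable tilt range to [0, 1], then apply a non-linear
    # gain so that small tilts give fine-grained speed control.
    usable = min(magnitude, max_tilt_deg) - dead_zone_deg
    normalized = usable / (max_tilt_deg - dead_zone_deg)
    speed = max_speed * normalized ** exponent
    return math.copysign(speed, tilt_deg)

# A gentle 8-degree forward tilt yields a slow forward speed.
print(round(tilt_to_velocity(8.0), 3))
```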
|
Resch, Gabby |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Rintel, Sean |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application which combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, show how far we can already get, and which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Roudaut, Anne |
ISS '22: "Investigating Pointing Performance ..."
Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets
Aluna Everitt, Anne Roudaut, Kasper Hornbæk, Mike Fraser, and Jason Alexander (University of Oxford, Oxford, UK; University of Bristol, Bristol, UK; University of Copenhagen, Copenhagen, Denmark; University of Bath, Bath, UK) One of the most fundamental interactions –pointing– is well understood on flat surfaces. However, pointing performance on tangible surfaces with physical targets is still limited for Tangible User Interfaces (TUIs). We investigate the effect of a target’s physical width, height, and distance on user pointing performance. We conducted a study using a reciprocal tapping task (n=19) with physical rods arranged in a circle. We compared our data with five conventional interaction models designed for 2D/3D tasks rather than tangible targets. We show that variance in the movement times was only satisfactorily explained by a model established for volumetric displays (r2=0.954). Analysis shows that movement direction and height should be included as parameters to this model to generalize for 3D tangible targets. Qualitative feedback from participants suggests that pointing at physical targets involves additional human factors (e.g., perception of sharpness or robustness) that need to be investigated further to understand how performance with tangible objects is affected. @Article{ISS22p583, author = {Aluna Everitt and Anne Roudaut and Kasper Hornbæk and Mike Fraser and Jason Alexander}, title = {Investigating Pointing Performance for Tangible Surfaces with Physical 3D Targets}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {583}, numpages = {23}, doi = {10.1145/3567736}, year = {2022}, } Publisher's Version |
|
Sabatinos, Sarah |
ISS '22: "Tangible Chromatin: Tangible ..."
Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments
Roozbeh Manshaei, Uzair Mayat, Syeda Aniqa Imtiaz, Veronica Andric, Kazeera Aliar, Nour Abu Hantash, Kashaf Masood, Gabby Resch, Alexander Bakogeorge, Sarah Sabatinos, and Ali Mazalek (Ryerson University, Toronto, Canada) In biology, microscopy data from thousands of individual cellular events presents challenges for analysis and problem solving. These include a lack of visual analysis tools to complement algorithmic approaches for tracking important but rare cellular events, and a lack of support for collaborative exploration and interpretation. In response to these challenges, we have designed and implemented Tangible Chromatin, a tangible and multi-surface system that promotes novel analysis of complex data generated from high-content microscopy experiments. The system facilitates three specific approaches to analysis: it (1) visualizes the detailed information and results from the image processing algorithms, (2) provides interactive approaches for browsing, selecting, and comparing individual data elements, and (3) expands options for productive collaboration through both independent and joint work. We present three main contributions: (i) design requirements that derive from the analytical goals of DNA replication biology, (ii) tangible and multi-surface interaction techniques to support the exploration and analysis of datasets from high-content microscopy experiments, and (iii) the results of a user study that investigated how the system supports individual and collaborative data analysis and interpretation tasks. @Article{ISS22p558, author = {Roozbeh Manshaei and Uzair Mayat and Syeda Aniqa Imtiaz and Veronica Andric and Kazeera Aliar and Nour Abu Hantash and Kashaf Masood and Gabby Resch and Alexander Bakogeorge and Sarah Sabatinos and Ali Mazalek}, title = {Tangible Chromatin: Tangible and Multi-surface Interactions for Exploring Datasets from High-Content Microscopy Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {558}, numpages = {22}, doi = {10.1145/3567711}, year = {2022}, } Publisher's Version Video |
|
Sample, Alanson P. |
ISS '22: "UbiChromics: Enabling Ubiquitously ..."
UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint
Amani Alkayyali, Yasha Iravantchi, Jaylin Herskovitz, and Alanson P. Sample (University of Michigan, Ann Arbor, USA) Pervasive and interactive displays promise to present our digital content seamlessly throughout our environment. However, traditional display technologies do not scale to room-wide applications due to high per-unit-area costs and the need for constant wired power and data infrastructure. This research proposes the use of photochromic paint as a display medium. Applying the paint to any surface or object creates ultra-low-cost displays, which can change color when exposed to specific wavelengths of light. We develop new paint formulations that enable wide-area application of photochromic material, along with a specially modified wide-area laser projector and depth camera that can draw custom images and create on-demand, room-wide user interfaces on photochromic-enabled surfaces. System parameters such as light intensity, material activation time, and user readability are examined to optimize the display. Results show that images and user interfaces can last up to 16 minutes and can be updated indefinitely. Finally, usage scenarios such as displaying static and dynamic images, ephemeral notifications, and the creation of on-demand interfaces, such as light switches and music controllers, are demonstrated and explored. Ultimately, the UbiChromics system demonstrates the possibility of extending digital content to all painted surfaces. @Article{ISS22p561, author = {Amani Alkayyali and Yasha Iravantchi and Jaylin Herskovitz and Alanson P. Sample}, title = {UbiChromics: Enabling Ubiquitously Deployable Interactive Displays with Photochromic Paint}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {561}, numpages = {25}, doi = {10.1145/3567714}, year = {2022}, } Publisher's Version Video Info ISS '22: "XSpace: An Augmented Reality ..." XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration Jaylin Herskovitz, Yi Fei Cheng, Anhong Guo, Alanson P. Sample, and Michael Nebeling (University of Michigan, Ann Arbor, USA; Carnegie Mellon University, Pittsburgh, USA) Augmented Reality (AR) has the potential to leverage environmental information to better facilitate distributed collaboration; however, such applications are difficult to develop. We present XSpace, a toolkit for creating spatially-aware AR applications for distributed collaboration. Based on a review of existing applications and developer tools, we design XSpace to support three methods for creating shared virtual spaces, each emphasizing a different aspect: shared objects, user perspectives, and environmental meshes. XSpace implements these methods in a developer toolkit, and also provides a set of complementary visual authoring tools to allow developers to preview a variety of configurations for a shared virtual space. We present five example applications to illustrate that XSpace can support the development of a rich set of collaborative AR experiences that are difficult to produce with current solutions. Through XSpace, we discuss implications for future application design, including user space customization and privacy and safety concerns when sharing users' environments. @Article{ISS22p568, author = {Jaylin Herskovitz and Yi Fei Cheng and Anhong Guo and Alanson P. Sample and Michael Nebeling}, title = {XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration}, journal = {Proc. ACM Hum. Comput.
Interact.}, volume = {6}, number = {ISS}, articleno = {568}, numpages = {26}, doi = {10.1145/3567721}, year = {2022}, } Publisher's Version Video |
|
Sandak, Jakub |
ISS '22: "LightMeUp: Back-print Illumination ..."
LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals
Cuauhtli Campos, Matjaž Kljun, Jakub Sandak, and Klen Čopič Pucihar (University of Primorska, Koper, Slovenia; Faculty of Information Studies in Novo Mesto, Novo Mesto, Slovenia; InnoRenew CoE, Isola, Slovenia) Despite the drive to digitise learning, paper still holds a prominent role within educational settings. While computational devices have several advantages over paper (e.g. changing and showing content based on user interaction and needs) their prolonged or incorrect usage can hinder educational achievements. In this paper, we combine the interactivity of computational devices with paper whilst reducing the usage of technology to the minimum. To this end, we developed and evaluated a novel back-print illumination paper display called LightMeUp where different information printed on the back side of the paper becomes visible when paper is placed on an interactive display and back-illuminated with a particular colour. To develop this novel display, we first built a display simulator that enables the simulation of various spectral characteristics of the elements used in the system (i.e. light sources such as tablet computers, paper types and printing inks). By using our simulator, we designed various use-case prototypes that demonstrate the capabilities and feasibility of the proposed system. With our simulator and use-cases presented, educators and educational content designers can easily design multi-stable interactive visuals by using readily available paper, printers and touch displays. @Article{ISS22p573, author = {Cuauhtli Campos and Matjaž Kljun and Jakub Sandak and Klen Čopič Pucihar}, title = {LightMeUp: Back-print Illumination Paper Display with Multi-stable Visuals}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {573}, numpages = {23}, doi = {10.1145/3570333}, year = {2022}, } Publisher's Version Video Info |
|
Sarcar, Sayan |
ISS '22: "TetraForce: A Magnetic-Based ..."
TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone
Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate the fundamental user performance by our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people who were involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. @Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
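Background note: TetraForce infers force input from the displacement of a back-panel magnet as seen by the phone's magnetometer. The sketch below only illustrates the general idea of thresholding the change in the magnetic field to separate pressure from shear; the axis convention and thresholds are assumptions, the values would need per-device calibration, and the paper's full four-way classification (which also distinguishes which surface the force is applied to) is not reproduced here.

```python
import numpy as np

def classify_force(baseline: np.ndarray, reading: np.ndarray,
                   press_threshold: float = 8.0,
                   shear_threshold: float = 8.0) -> str:
    """Coarsely classify a magnetometer reading against a resting baseline.

    Assumed convention: z points out of the screen, so pressing the panel
    changes the field mostly along z, while shear slides the magnet
    laterally and changes x/y. All thresholds are hypothetical; this is
    not the TetraForce detection algorithm.
    """
    delta = reading - baseline
    shear = float(np.hypot(delta[0], delta[1]))
    if shear > shear_threshold and shear >= abs(delta[2]):
        return "shear"
    if delta[2] > press_threshold:
        return "pressure (panel pushed toward the sensor)"
    if delta[2] < -press_threshold:
        return "pressure (panel pulled away from the sensor)"
    return "none"

baseline = np.array([12.0, -4.0, 30.0])
print(classify_force(baseline, np.array([12.5, -3.8, 41.0])))
```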
|
Sato, Junichi |
ISS '22: "Predicting Touch Accuracy ..."
Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results
Hiroki Usuba, Shota Yamanaka, Junichi Sato, and Homei Miyashita (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan) We propose a method that predicts the success rate in pointing to 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict the success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, thus saving costs for researchers and practitioners. We verified the method through two experiments: laboratory-based and crowdsourced ones. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases the prediction accuracy. In the crowdsourced experiment, this method scored better than using 2D task results. Thus, we recommend that researchers use the method properly depending on the situation. @Article{ISS22p579, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato and Homei Miyashita}, title = {Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {579}, numpages = {13}, doi = {10.1145/3567732}, year = {2022}, } Publisher's Version |
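Background note: as a rough intuition for why 1D bar results can inform predictions for 2D rectangular targets, one can treat horizontal and vertical touch offsets as approximately independent, as in the back-of-the-envelope approximation below. This independence approximation is only an illustration and is not the prediction model proposed in the paper, which is built from the 1D task data in a more principled way.

```latex
% Back-of-the-envelope illustration (not the paper's model): let p_W be the
% success probability measured for a 1D vertical bar of width W, and p_H the
% success probability for a 1D horizontal bar of height H. Treating the
% horizontal and vertical touch offsets as independent gives, for a W x H
% rectangular target,
\[
P_{\text{success}}(W, H) \;\approx\; p_W \cdot p_H
\]
```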
|
Schmidt, Albrecht |
ISS '22: "ElectronicsAR: Design and ..."
ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit
Sebastian S. Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch (LMU Munich, Munich, Germany; TU Darmstadt, Darmstadt, Germany; Humboldt University of Berlin, Berlin, Germany) Exploring and interacting with electronics is challenging as the internal processes of components are not visible. Further barriers to engagement with electronics include fear of injury and hardware damage. In response, Augmented Reality (AR) applications address those challenges to make internal processes and the functionality of circuits visible. However, current apps are either limited to abstract low-fidelity applications or entirely virtual environments. We present ElectronicsAR, a tangible high-fidelity AR electronics kit with scaled hardware components representing the shape of real electronics. Our evaluation with 24 participants showed that users were more efficient and more effective at naming components, as well as building and debugging circuits. We discuss our findings in the context of ElectronicsAR's unique characteristics that we contrast with related work. Based on this, we discuss opportunities for future research to design functional mobile AR applications that meet the needs of beginners and experts. @Article{ISS22p587, author = {Sebastian S. Feger and Lars Semmler and Albrecht Schmidt and Thomas Kosch}, title = {ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {587}, numpages = {22}, doi = {10.1145/3567740}, year = {2022}, } Publisher's Version ISS '22: "SaferHome: Interactive Physical ..." SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger (LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland) Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or are unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view and physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks that are impacted by technical affinity, device types, device ownership, and tangibility of assessments. @Article{ISS22p586, author = {Maximiliane Windl and Alexander Hiesinger and Robin Welsch and Albrecht Schmidt and Sebastian S. Feger}, title = {SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders}, journal = {Proc. ACM Hum. Comput. 
Interact.}, volume = {6}, number = {ISS}, articleno = {586}, numpages = {20}, doi = {10.1145/3567739}, year = {2022}, } Publisher's Version Info ISS '22: "Extended Mid-air Ultrasound ..." Extended Mid-air Ultrasound Haptics for Virtual Reality Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla (LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany) Mid-air haptics allow bare-hand tactile stimulation; however, it has a constrained workspace, making it unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array following the user's hand. We used a 6DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus, providing tangibility for virtual environments. To evaluate our approach, we performed three evaluations. First, we performed a technical system evaluation, showcasing the feasibility of such a system. Next, we conducted three psychophysical experiments, showing that the motion does not affect the user's perception with high likelihood. Lastly, we explored seven use cases that showcase our system's potential using a user study. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale. @Article{ISS22p578, author = {Steeven Villa and Sven Mayer and Jess Hartcher-O'Brien and Albrecht Schmidt and Tonja-Katrin Machulla}, title = {Extended Mid-air Ultrasound Haptics for Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {578}, numpages = {25}, doi = {10.1145/3567731}, year = {2022}, } Publisher's Version |
|
Schönböck, Johannes |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application which combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, show how far we can already get, and which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Semmler, Lars |
ISS '22: "ElectronicsAR: Design and ..."
ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit
Sebastian S. Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch (LMU Munich, Munich, Germany; TU Darmstadt, Darmstadt, Germany; Humboldt University of Berlin, Berlin, Germany) Exploring and interacting with electronics is challenging as the internal processes of components are not visible. Further barriers to engagement with electronics include fear of injury and hardware damage. In response, Augmented Reality (AR) applications address those challenges to make internal processes and the functionality of circuits visible. However, current apps are either limited to abstract low-fidelity applications or entirely virtual environments. We present ElectronicsAR, a tangible high-fidelity AR electronics kit with scaled hardware components representing the shape of real electronics. Our evaluation with 24 participants showed that users were more efficient and more effective at naming components, as well as building and debugging circuits. We discuss our findings in the context of ElectronicsAR's unique characteristics that we contrast with related work. Based on this, we discuss opportunities for future research to design functional mobile AR applications that meet the needs of beginners and experts. @Article{ISS22p587, author = {Sebastian S. Feger and Lars Semmler and Albrecht Schmidt and Thomas Kosch}, title = {ElectronicsAR: Design and Evaluation of a Mobile and Tangible High-Fidelity Augmented Electronics Toolkit}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {587}, numpages = {22}, doi = {10.1145/3567740}, year = {2022}, } Publisher's Version |
|
Serrano, Marcos |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, in the case of people with visual impairments, the regular tools relying heavily on images and videos were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when they learn remotely as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute with an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version ISS '22: "Visual Transitions around ..." Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops Gary Perelman, Emmanuel Dubois, Alice Probst, and Marcos Serrano (University of Toulouse, Toulouse, France) See-through Head-Mounted Displays (HMDs) offer interesting opportunities to augment the interaction space around screens, especially around horizontal tabletops. In such context, HMDs can display surrounding vertical virtual windows to complement the tabletop content with data displayed in close vicinity. However, the effects of such combination on the visual acquisition of targets in the resulting combined display space have scarcely been explored. In this paper we conduct a study to explore visual acquisitions in such contexts, with a specific focus on the analysis of visual transitions between the horizontal tabletop display and the vertical virtual displays (in front and on the side of the tabletop). To further study the possible visual perception of the tabletop content out of the HMD and its impact on visual interaction, we distinguished two solutions for displaying information on the horizontal tabletop: using the see-through HMD to display virtual content over the tabletop surface (virtual overlay), i.e. the content is only visible inside the HMD’s FoV, or using the tabletop itself (tabletop screen). 12 participants performed visual acquisition tasks involving the horizontal and vertical displays. 
We measured the time to perform the task, the head movements, the portions of the displays visible in the HMD’s field of view, the physical fatigue and the user’s preference. Our results show that it is faster to acquire virtual targets in the front display than on the side. Results reveal that the use of the virtual overlay on the tabletop slows down the visual acquisition compared to the use of the tabletop screen, showing that users exploit the visual perception of the tabletop content on the peripheral visual space. We were also able to quantify when and to which extent targets on the tabletop can be acquired without being visible within the HMD's field of view when using the tabletop screen, i.e. by looking under the HMD. These results lead to design recommendations for more efficient, comfortable and integrated interfaces combining tabletop and surrounding vertical virtual displays. @Article{ISS22p585, author = {Gary Perelman and Emmanuel Dubois and Alice Probst and Marcos Serrano}, title = {Visual Transitions around Tabletops in Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and Horizontal Tabletops}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {585}, numpages = {20}, doi = {10.1145/3567738}, year = {2022}, } Publisher's Version Video |
|
Sluÿters, Arthur |
ISS '22: "Theoretically-Defined vs. ..."
Theoretically-Defined vs. User-Defined Squeeze Gestures
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu (Université Catholique de Louvain, Louvain-la-Neuve, Belgium; University of Kinshasa, Kinshasa, Congo) This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants X 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) to end up with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirm or disconfirm consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects. @Article{ISS22p559, author = {Santiago Villarreal-Narvaez and Arthur Sluÿters and Jean Vanderdonckt and Efrem Mbaki Luzayisu}, title = {Theoretically-Defined vs. User-Defined Squeeze Gestures}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {559}, numpages = {30}, doi = {10.1145/3567805}, year = {2022}, } Publisher's Version Video |
|
Sorita, Clara |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, in the case of people with visual impairments, the regular tools relying heavily on images and videos were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when they learn remotely as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute with an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version |
|
Stuerzlinger, Wolfgang |
ISS '22: "The Effectiveness of Path-Segmentation ..."
The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths
Shota Yamanaka, Hiroki Usuba, Wolfgang Stuerzlinger, and Homei Miyashita (Yahoo, Tokyo, Japan; Yahoo, Chiyoda-ku, Japan; Simon Fraser University, Vancouver, Canada; Meiji University, Tokyo, Japan) Models of lassoing time to select multiple square icons exist, but realistic lasso tasks also typically involve encircling non-rectangular objects. Thus, it is unclear if we can apply existing models to such conditions where, e.g., the width of the path that users want to steer through changes dynamically or step-wise. In this work, we conducted two experiments where the objects were non-rectangular, with path widths that narrowed or widened, smoothly or step-wise. The results showed that the baseline models for pen-steering movements (the steering and crossing law models) fitted the timing data well, but also that segmenting width-changing areas led to significant improvements. Our work enables the modeling of novel UIs requiring continuous strokes, e.g., for grouping icons. @Article{ISS22p584, author = {Shota Yamanaka and Hiroki Usuba and Wolfgang Stuerzlinger and Homei Miyashita}, title = {The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {584}, numpages = {20}, doi = {10.1145/3567737}, year = {2022}, } Publisher's Version |
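Background note: the baseline models mentioned above are the steering law and the crossing law for pen movements. For reference, their standard Accot and Zhai forms are given below; the segmented, width-varying variants evaluated in the paper extend these baselines rather than restate them.

```latex
% Steering law: time to steer along a path C with (possibly varying)
% width W(s); the second form is the constant-width case for path length A.
\[
T = a + b \int_{C} \frac{ds}{W(s)}, \qquad T = a + b\,\frac{A}{W}
\]
% Crossing law, in its Fitts-like form for crossing a goal of width W
% at distance D. In both laws, a and b are empirically fitted constants.
\[
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
\]
```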
|
Takashima, Kazuki |
ISS '22: "HandyGaze: A Gaze Tracking ..."
HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone
Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan) We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that can be carried out by simply holding a smartphone naturally without installing any sensors or markers in the environment. Our technique simultaneously employs the smartphone’s front and rear cameras: The front camera estimates the user’s gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization by reconstructing a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user’s 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique’s positional accuracy to the gaze target under four conditions, based on combinations of use with and without a depth sensor and calibration. The results show that our calibration method was able to reduce the mean absolute error of the gaze point by 27%, with an error of 0.53 m when using the depth sensor. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications such as a gaze-based guidance application for museums. @Article{ISS22p562, author = {Takahiro Nagai and Kazuyuki Fujita and Kazuki Takashima and Yoshifumi Kitamura}, title = {HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {562}, numpages = {18}, doi = {10.1145/3567715}, year = {2022}, } Publisher's Version Archive submitted (74 MB) ISS '22: "TetraForce: A Magnetic-Based ..." TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate the fundamental user performance by our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people who were involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. 
@Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
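The HandyGaze entry above combines a front-camera gaze estimate with rear-camera self-localization to obtain a room-space gaze point. The sketch below is a minimal, hypothetical illustration of that combination; the pose and calibration representations, the names, and the toy plane intersection are assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming the phone pose comes from rear-camera self-localization
# against a pre-obtained 3D map and the gaze direction from the front camera.
# All names and the simple rotational calibration model are illustrative only.
import numpy as np

def gaze_ray_in_room(T_room_phone, gaze_dir_phone, eye_pos_phone, R_calib):
    """Return (origin, direction) of the gaze ray in room coordinates.
    T_room_phone: 4x4 phone pose; gaze_dir_phone, eye_pos_phone: 3-vectors in the
    phone frame; R_calib: 3x3 per-user offset between head and gaze orientation."""
    R, t = T_room_phone[:3, :3], T_room_phone[:3, 3]
    origin = R @ np.asarray(eye_pos_phone, dtype=float) + t
    direction = R @ (R_calib @ np.asarray(gaze_dir_phone, dtype=float))
    return origin, direction / np.linalg.norm(direction)

def intersect_horizontal_plane(origin, direction, plane_y=0.0):
    """Toy stand-in for intersecting the gaze ray with the reconstructed room."""
    if abs(direction[1]) < 1e-9:
        return None                      # ray runs parallel to the plane
    s = (plane_y - origin[1]) / direction[1]
    return origin + s * direction if s > 0 else None
```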
|
Teichmeister, Thomas |
ISS '22: "Semi-automated Analysis of ..."
Semi-automated Analysis of Collaborative Interaction: Are We There Yet?
Thomas Neumayr, Mirjam Augstein, Johannes Schönböck, Sean Rintel, Helmut Leeb, and Thomas Teichmeister (University of Applied Sciences Upper Austria, Hagenberg, Austria; JKU Linz, Linz, Austria; Microsoft Research, Cambridge, UK) In recent years, research on collaborative interaction has relied on manual coding of rich audio/video recordings. The fine-grained analysis of such material is extremely time-consuming and labor-intensive. This is not only difficult to scale, but, as a result, might also limit the quality and completeness of coding due to fatigue, inherent human biases (accidental or intentional), and inter-rater inconsistencies. In this paper, we explore how recent advances in machine learning may reduce manual effort and loss of information while retaining the value of human intelligence in the coding process. We present ACACIA (AI Chain for Augmented Collaborative Interaction Analysis), an AI video data analysis application that combines a range of advances in machine perception of video material for the analysis of collaborative interaction. We evaluate ACACIA's abilities, show how far we can already get, and identify which challenges remain. Our contribution lies in establishing a combined machine and human analysis pipeline that may be generalized to different collaborative settings and guide future research. @Article{ISS22p571, author = {Thomas Neumayr and Mirjam Augstein and Johannes Schönböck and Sean Rintel and Helmut Leeb and Thomas Teichmeister}, title = {Semi-automated Analysis of Collaborative Interaction: Are We There Yet?}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {571}, numpages = {27}, doi = {10.1145/3567724}, year = {2022}, } Publisher's Version Archive submitted (52 MB) |
|
Thomas, Bruce H. |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment with more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interfaces resulted in similar speed, accuracy and social presence ratings, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative settings. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Tsuchida, Taichi |
ISS '22: "TetraForce: A Magnetic-Based ..."
TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone
Taichi Tsuchida, Kazuyuki Fujita, Kaori Ikematsu, Sayan Sarcar, Kazuki Takashima, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Yahoo, Tokyo, Japan; Birmingham City University, Birmingham, UK) We propose a novel phone-case-shaped interface named TetraForce, which enables four types of force input consisting of two force directions (i.e., pressure force and shear force) and two force-applied surfaces (i.e., touch surface and back surface) in a single device. Force detection is achieved using the smartphone's built-in magnetometer (and supplementary accelerometer and gyro sensor) by estimating the displacement of a magnet attached to a 3-DoF passively movable panel at the back. We conducted a user study (N=12) to investigate fundamental user performance with our interface and demonstrated that the input was detected as intended with a success rate of 97.4% on average for all four input types. We further conducted an ideation workshop with people involved in human-computer interaction (N=12) to explore possible applications of this interface, and we obtained 137 ideas for applications using individual input types and 51 possible scenarios using them in combination. Organizing these ideas reveals the advantages of each input type and suggests that our interface is useful for applications that require complex operations and that it can improve intuitiveness through elastic feedback. @Article{ISS22p564, author = {Taichi Tsuchida and Kazuyuki Fujita and Kaori Ikematsu and Sayan Sarcar and Kazuki Takashima and Yoshifumi Kitamura}, title = {TetraForce: A Magnetic-Based Interface Enabling Pressure Force and Shear Force Input Applied to Front and Back of a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {564}, numpages = {22}, doi = {10.1145/3567717}, year = {2022}, } Publisher's Version Archive submitted (320 MB) |
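As a rough illustration of how the four TetraForce input types could be told apart from magnetometer readings, the hedged sketch below classifies a baseline-subtracted field change into shear versus pressure and front versus back; the axes, sign conventions, and thresholds are assumptions for illustration, not the authors' detection algorithm.

```python
# Hedged sketch: classify a force event from the change in the magnetometer reading
# caused by the back-panel magnet moving. Axis conventions, the sign that separates
# front from back presses, and the noise threshold are all assumptions.
import numpy as np

def classify_force(delta_b, noise_floor=2.0):
    """delta_b: 3-vector change (e.g., in uT) relative to the resting baseline.
    Returns 'shear', 'back_pressure', 'front_pressure', or None for no input."""
    delta_b = np.asarray(delta_b, dtype=float)
    lateral = np.linalg.norm(delta_b[:2])    # magnet sliding parallel to the panel
    normal = delta_b[2]                      # magnet moving toward/away from the phone
    if max(lateral, abs(normal)) < noise_floor:
        return None
    if lateral > abs(normal):
        return 'shear'
    return 'back_pressure' if normal > 0 else 'front_pressure'
```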
|
Usuba, Hiroki |
ISS '22: "The Effectiveness of Path-Segmentation ..."
The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths
Shota Yamanaka, Hiroki Usuba, Wolfgang Stuerzlinger, and Homei Miyashita (Yahoo, Tokyo, Japan; Yahoo, Chiyoda-ku, Japan; Simon Fraser University, Vancouver, Canada; Meiji University, Tokyo, Japan) Models of lassoing time to select multiple square icons exist, but realistic lasso tasks also typically involve encircling non-rectangular objects. Thus, it is unclear if we can apply existing models to such conditions where, e.g., the width of the path that users want to steer through changes dynamically or step-wise. In this work, we conducted two experiments where the objects were non-rectangular, with path widths that narrowed or widened, smoothly or step-wise. The results showed that the baseline models for pen-steering movements (the steering and crossing law models) fitted the timing data well, but also that segmenting width-changing areas led to significant improvements. Our work enables the modeling of novel UIs requiring continuous strokes, e.g., for grouping icons. @Article{ISS22p584, author = {Shota Yamanaka and Hiroki Usuba and Wolfgang Stuerzlinger and Homei Miyashita}, title = {The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {584}, numpages = {20}, doi = {10.1145/3567737}, year = {2022}, } Publisher's Version ISS '22: "Predicting Touch Accuracy ..." Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results Hiroki Usuba, Shota Yamanaka, Junichi Sato, and Homei Miyashita (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan) We propose a method that predicts the success rate in pointing to 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict the success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, thus saving costs for researchers and practitioners. We verified the method through two experiments: laboratory-based and crowdsourced ones. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases the prediction accuracy. In the crowdsourced experiment, this method scored better than using 2D task results. Thus, we recommend that researchers use the method properly depending on the situation. @Article{ISS22p579, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato and Homei Miyashita}, title = {Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {579}, numpages = {13}, doi = {10.1145/3567732}, year = {2022}, } Publisher's Version |
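For the lasso-time paper above, the segmentation idea can be pictured with the classic steering law, whose index of difficulty integrates path length over path width; a width-varying path is approximated as a sum over segments. The sketch below is a minimal illustration with placeholder coefficients, not the paper's fitted model.

```python
# Minimal sketch of steering-law prediction with path segmentation. The regression
# coefficients a and b are placeholders to be fitted to observed movement times.
def steering_index_of_difficulty(segments):
    """segments: list of (length, width) pairs with roughly constant width each.
    Approximates ID = sum(length_i / width_i)."""
    return sum(length / width for length, width in segments)

def predict_movement_time(segments, a=0.2, b=0.1):
    """Steering law: MT = a + b * ID."""
    return a + b * steering_index_of_difficulty(segments)

# Example: a lasso path that narrows step-wise from 40 px wide to 20 px wide.
print(predict_movement_time([(300, 40), (300, 20)]))  # -> a + b * (7.5 + 15.0)
```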
|
Vanderdonckt, Jean |
ISS '22: "Theoretically-Defined vs. ..."
Theoretically-Defined vs. User-Defined Squeeze Gestures
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu (Université Catholique de Louvain, Louvain-la-Neuve, Belgium; University of Kinshasa, Kinshasa, Congo) This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimensional taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants X 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) to end up with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects. @Article{ISS22p559, author = {Santiago Villarreal-Narvaez and Arthur Sluÿters and Jean Vanderdonckt and Efrem Mbaki Luzayisu}, title = {Theoretically-Defined vs. User-Defined Squeeze Gestures}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {559}, numpages = {30}, doi = {10.1145/3567805}, year = {2022}, } Publisher's Version Video |
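To make the notion of "consensus gestures" concrete, the sketch below computes the agreement rate commonly used in gesture elicitation studies (Vatavu and Wobbrock's formula); it is shown only as an illustration of how consensus can be quantified, not necessarily the exact metric used in this paper.

```python
# Illustrative consensus measure for elicitation data; the paper may use a
# different metric. `proposals` holds the gesture class elicited from each
# participant for a single referent.
from collections import Counter

def agreement_rate(proposals):
    p = len(proposals)
    if p < 2:
        return 0.0
    groups = Counter(proposals).values()           # sizes of identical-proposal groups
    return (p / (p - 1)) * sum((g / p) ** 2 for g in groups) - 1 / (p - 1)

# Example: 32 participants, 20 proposing 'squeeze_center', 12 'squeeze_corners'.
print(agreement_rate(['squeeze_center'] * 20 + ['squeeze_corners'] * 12))  # ~0.52
```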
|
Villa, Steeven |
ISS '22: "Extended Mid-air Ultrasound ..."
Extended Mid-air Ultrasound Haptics for Virtual Reality
Steeven Villa, Sven Mayer, Jess Hartcher-O'Brien, Albrecht Schmidt, and Tonja-Katrin Machulla (LMU Munich, Munich, Germany; Delft University of Technology, Delft, Netherlands; TU Chemnitz, Chemnitz, Germany) Mid-air haptics allows bare-hand tactile stimulation; however, it has a constrained workspace, making it unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array following the user's hand. We used a 6DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three evaluations. First, we performed a technical system evaluation, showcasing the feasibility of such a system. Next, we conducted three psychophysical experiments, which show that, with high likelihood, the motion does not affect the user's perception. Lastly, in a user study, we explored seven use cases that showcase our system's potential. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale. @Article{ISS22p578, author = {Steeven Villa and Sven Mayer and Jess Hartcher-O'Brien and Albrecht Schmidt and Tonja-Katrin Machulla}, title = {Extended Mid-air Ultrasound Haptics for Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {578}, numpages = {25}, doi = {10.1145/3567731}, year = {2022}, } Publisher's Version |
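The core idea above, turning a static ultrasound array into a dynamic one that follows the hand, can be pictured as a simple tracking controller. The sketch below is an assumption-laden toy: the gain, speed limit, and control interface are invented for illustration and are not the authors' robot controller.

```python
# Toy proportional follower: move the array toward the tracked hand so the hand
# stays inside the array's small rendering volume. Gains and limits are invented.
import numpy as np

def follow_step(array_pos, hand_pos, dt, gain=2.0, max_speed=0.5):
    """Return the array's next position (m), chasing the hand at <= max_speed m/s."""
    array_pos = np.asarray(array_pos, dtype=float)
    error = np.asarray(hand_pos, dtype=float) - array_pos
    velocity = gain * error
    speed = np.linalg.norm(velocity)
    if speed > max_speed:
        velocity *= max_speed / speed
    return array_pos + velocity * dt
```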
|
Villarreal-Narvaez, Santiago |
ISS '22: "Theoretically-Defined vs. ..."
Theoretically-Defined vs. User-Defined Squeeze Gestures
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu (Université Catholique de Louvain, Louvain-la-Neuve, Belgium; University of Kinshasa, Kinshasa, Congo) This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimensional taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants X 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) to end up with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects. @Article{ISS22p559, author = {Santiago Villarreal-Narvaez and Arthur Sluÿters and Jean Vanderdonckt and Efrem Mbaki Luzayisu}, title = {Theoretically-Defined vs. User-Defined Squeeze Gestures}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {559}, numpages = {30}, doi = {10.1145/3567805}, year = {2022}, } Publisher's Version Video |
|
Wang, Cheng Yao |
ISS '22: "VideoPoseVR: Authoring Virtual ..."
VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos
Cheng Yao Wang, Qian Zhou, George Fitzmaurice, and Fraser Anderson (Cornell University, Ithaca, USA; Autodesk Research, Toronto, Canada) We present VideoPoseVR, a video-based animation authoring workflow using online videos to author character animations in VR. It leverages a state-of-the-art deep learning approach to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import the videos, search in the dataset, modify the motion timeline, and combine multiple motions from videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach as well as to gather initial feedback on the prototype. The study results suggest that VideoPoseVR was easy for novice users to learn when authoring animations and enabled rapid prototyping exploration for applications such as entertainment, skills training, and crowd simulations. @Article{ISS22p575, author = {Cheng Yao Wang and Qian Zhou and George Fitzmaurice and Fraser Anderson}, title = {VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {575}, numpages = {20}, doi = {10.1145/3567728}, year = {2022}, } Publisher's Version |
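The caption-and-search step described above can be pictured with a deliberately simple retrieval sketch; a real system would likely rely on learned text embeddings, whereas this toy version (all names hypothetical) ranks stored motions by caption word overlap.

```python
# Toy caption search over a motion dataset; purely illustrative, not the authors' pipeline.
def search_motions(query, motion_dataset):
    """motion_dataset: list of dicts such as {'caption': str, 'motion': ...}."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(m['caption'].lower().split())), m)
              for m in motion_dataset]
    return [m for score, m in sorted(scored, key=lambda s: -s[0]) if score > 0]

# Example usage with a tiny in-memory dataset.
dataset = [{'caption': 'person waves both hands', 'motion': 'clip_01'},
           {'caption': 'person jumps forward', 'motion': 'clip_02'}]
print(search_motions('wave hands', dataset))   # matches clip_01 only
```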
|
Welsch, Robin |
ISS '22: "SaferHome: Interactive Physical ..."
SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders
Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger (LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland) Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose our most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or are unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view and physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks that are impacted by technical affinity, device types, device ownership, and tangibility of assessments. @Article{ISS22p586, author = {Maximiliane Windl and Alexander Hiesinger and Robin Welsch and Albrecht Schmidt and Sebastian S. Feger}, title = {SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {586}, numpages = {20}, doi = {10.1145/3567739}, year = {2022}, } Publisher's Version Info |
|
Windl, Maximiliane |
ISS '22: "SaferHome: Interactive Physical ..."
SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders
Maximiliane Windl, Alexander Hiesinger, Robin Welsch, Albrecht Schmidt, and Sebastian S. Feger (LMU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany; Aalto University, Espoo, Finland) Private homes are increasingly becoming smart spaces. While smart homes promise comfort, they expose our most intimate spaces to security and privacy risks. Unfortunately, most users today are not equipped with the right tools to assess the vulnerabilities or privacy practices of smart devices. Further, users might lose track of the devices installed in their homes or are unaware of devices placed by a partner or host. We developed SaferHome, an interactive digital-physical privacy framework, to provide smart home users with security and privacy assessments and a sense of device location. SaferHome includes a digital list view and physical and digital dashboards that map real floor plans. We evaluated SaferHome with eight households in the wild. We find that users adopted various strategies to integrate the dashboards into their understanding and interpretation of smart home privacy. We present implications for the design of future smart home privacy frameworks that are impacted by technical affinity, device types, device ownership, and tangibility of assessments. @Article{ISS22p586, author = {Maximiliane Windl and Alexander Hiesinger and Robin Welsch and Albrecht Schmidt and Sebastian S. Feger}, title = {SaferHome: Interactive Physical and Digital Smart Home Dashboards for Communicating Privacy Assessments to Owners and Bystanders}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {586}, numpages = {20}, doi = {10.1145/3567739}, year = {2022}, } Publisher's Version Info |
|
Wu, Qin |
ISS '22: "Players and Performance: Opportunities ..."
Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara (University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore) This research aimed to investigate how children with autism interacted with rich audio and visual augmented reality (AR) tabletop games. Based on an in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in games were rewarding and played critical roles in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research. @Article{ISS22p563, author = {Qin Wu and Rao Xu and Yuantong Liu and Danielle Lottridge and Suranga Nanayakkara}, title = {Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {563}, numpages = {24}, doi = {10.1145/3567716}, year = {2022}, } Publisher's Version |
|
Wybrow, Michael |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment with more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interfaces resulted in similar speed, accuracy and social presence ratings, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative settings. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Xiao, Robert |
ISS '22: "Reducing the Latency of Touch ..."
Reducing the Latency of Touch Tracking on Ad-hoc Surfaces
Neil Xu Fan and Robert Xiao (University of British Columbia, Vancouver, Canada) Touch sensing on ad-hoc surfaces has the potential to transform everyday surfaces in the environment - desks, tables and walls - into tactile, touch-interactive surfaces, creating large, comfortable interactive spaces without the cost of large touch sensors. Depth sensors are a promising way to provide touch sensing on arbitrary surfaces, but past systems have suffered from high latency and poor touch detection accuracy. We apply a novel state machine-based approach to analyzing touch events, combined with a machine-learning approach to predictively classify touch events from depth data with lower latency and higher touch accuracy than previous approaches. Our system can reduce end-to-end touch latency to under 70ms, comparable to conventional capacitive touchscreens. Additionally, we open-source our dataset of over 30,000 touch events recorded in depth, infrared and RGB for the benefit of future researchers. @Article{ISS22p577, author = {Neil Xu Fan and Robert Xiao}, title = {Reducing the Latency of Touch Tracking on Ad-hoc Surfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {577}, numpages = {11}, doi = {10.1145/3567730}, year = {2022}, } Publisher's Version Info |
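The state-machine-plus-classifier idea above can be sketched as a small hysteresis machine driven by a per-frame touch probability; the states and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a per-frame probability (e.g., from a classifier over depth
# features) drives a two-state hysteresis machine that emits touch events quickly
# instead of waiting for many confirming frames. Thresholds are placeholders.
IDLE, DOWN = 'idle', 'down'

class TouchTracker:
    def __init__(self, down_threshold=0.8, up_threshold=0.3):
        self.state = IDLE
        self.down_threshold = down_threshold
        self.up_threshold = up_threshold

    def update(self, p_touch):
        """p_touch in [0, 1]; returns 'touch_down', 'touch_up', or None."""
        if self.state == IDLE and p_touch >= self.down_threshold:
            self.state = DOWN
            return 'touch_down'
        if self.state == DOWN and p_touch <= self.up_threshold:
            self.state = IDLE
            return 'touch_up'
        return None
```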
|
Xu, Rao |
ISS '22: "Players and Performance: Opportunities ..."
Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism
Qin Wu, Rao Xu, Yuantong Liu, Danielle Lottridge, and Suranga Nanayakkara (University of Auckland, Auckland, New Zealand; Chengdu University of Information Technology, Chengdu, China; National University of Singapore, Singapore, Singapore) This research aimed to investigate how children with autism interacted with rich audio and visual augmented reality (AR) tabletop games. Based on an in-depth needs analysis facilitated through autism centers in China, we designed and developed MagicBLOCKS, a series of tabletop AR interactive games for children with autism. We conducted a four-week field study with 15 male children. We found that the interactive dynamics in games were rewarding and played critical roles in motivation and sustained interest. In addition, based on post-hoc interviews and video analysis with expert therapists, we found that MagicBLOCKS provided opportunities for children with autism to engage with each other through player performances and audience interactions with episodes of cooperation and territoriality. We discuss the limitations and the insights offered by this research. @Article{ISS22p563, author = {Qin Wu and Rao Xu and Yuantong Liu and Danielle Lottridge and Suranga Nanayakkara}, title = {Players and Performance: Opportunities for Social Interaction with Augmented Tabletop Games at Centres for Children with Autism}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {563}, numpages = {24}, doi = {10.1145/3567716}, year = {2022}, } Publisher's Version |
|
Yamanaka, Shota |
ISS '22: "The Effectiveness of Path-Segmentation ..."
The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths
Shota Yamanaka, Hiroki Usuba, Wolfgang Stuerzlinger, and Homei Miyashita (Yahoo, Tokyo, Japan; Yahoo, Chiyoda-ku, Japan; Simon Fraser University, Vancouver, Canada; Meiji University, Tokyo, Japan) Models of lassoing time to select multiple square icons exist, but realistic lasso tasks also typically involve encircling non-rectangular objects. Thus, it is unclear if we can apply existing models to such conditions where, e.g., the width of the path that users want to steer through changes dynamically or step-wise. In this work, we conducted two experiments where the objects were non-rectangular, with path widths that narrowed or widened, smoothly or step-wise. The results showed that the baseline models for pen-steering movements (the steering and crossing law models) fitted the timing data well, but also that segmenting width-changing areas led to significant improvements. Our work enables the modeling of novel UIs requiring continuous strokes, e.g., for grouping icons. @Article{ISS22p584, author = {Shota Yamanaka and Hiroki Usuba and Wolfgang Stuerzlinger and Homei Miyashita}, title = {The Effectiveness of Path-Segmentation for Modeling Lasso Times in Width-Varying Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {584}, numpages = {20}, doi = {10.1145/3567737}, year = {2022}, } Publisher's Version ISS '22: "Predicting Touch Accuracy ..." Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results Hiroki Usuba, Shota Yamanaka, Junichi Sato, and Homei Miyashita (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan) We propose a method that predicts the success rate in pointing to 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict the success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, thus saving costs for researchers and practitioners. We verified the method through two experiments: laboratory-based and crowdsourced ones. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases the prediction accuracy. In the crowdsourced experiment, this method scored better than using 2D task results. Thus, we recommend that researchers use the method properly depending on the situation. @Article{ISS22p579, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato and Homei Miyashita}, title = {Predicting Touch Accuracy for Rectangular Targets by Using One-Dimensional Task Results}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {579}, numpages = {13}, doi = {10.1145/3567732}, year = {2022}, } Publisher's Version |
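For the touch-accuracy paper above, one simple way to picture predicting 2D success rates from 1D results is to estimate per-axis touch scatter from the bar tasks and, assuming independent, roughly Gaussian and centered offsets, multiply the per-axis hit probabilities. The sketch below is a hedged illustration of that idea, not necessarily the paper's model.

```python
# Hedged sketch: predict the success rate for a width x height target from per-axis
# touch scatter estimated in the 1D bar tasks, under an independence assumption.
from math import erf, sqrt

def axis_hit_probability(size, sigma):
    """P(|offset| <= size/2) for a zero-mean Gaussian offset with std dev sigma."""
    return erf(size / (2 * sigma * sqrt(2)))

def predict_2d_success(width, height, sigma_x, sigma_y):
    """sigma_x from the vertical-bar task, sigma_y from the horizontal-bar task."""
    return axis_hit_probability(width, sigma_x) * axis_hit_probability(height, sigma_y)

# Example: a 6 mm x 4 mm target with 1.8 mm horizontal and 2.0 mm vertical scatter.
print(predict_2d_success(6.0, 4.0, 1.8, 2.0))  # -> about 0.62
```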
|
Yang, Ying |
ISS '22: "Towards Immersive Collaborative ..."
Towards Immersive Collaborative Sensemaking
Ying Yang, Tim Dwyer, Michael Wybrow, Benjamin Lee, Maxime Cordeil, Mark Billinghurst, and Bruce H. Thomas (Monash University, Melbourne, Australia; University of Queensland, Brisbane, Australia; University of South Australia, Mawson Lakes, Australia) When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images. However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space. This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience. In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment with more conventional desktop conferencing. Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness. Overall, the VR and desktop interfaces resulted in similar speed, accuracy and social presence ratings, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR. We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms. We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in collaborative settings. Finally, we provide design considerations and directions for future work. @Article{ISS22p588, author = {Ying Yang and Tim Dwyer and Michael Wybrow and Benjamin Lee and Maxime Cordeil and Mark Billinghurst and Bruce H. Thomas}, title = {Towards Immersive Collaborative Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {588}, numpages = {25}, doi = {10.1145/3567741}, year = {2022}, } Publisher's Version Video |
|
Zagermann, Johannes |
ISS '22: "Re-locations: Augmenting Personal ..."
Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
Daniel Immanuel Fink, Johannes Zagermann, Harald Reiterer, and Hans-Christian Jetter (University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR. @Article{ISS22p556, author = {Daniel Immanuel Fink and Johannes Zagermann and Harald Reiterer and Hans-Christian Jetter}, title = {Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {556}, numpages = {30}, doi = {10.1145/3567709}, year = {2022}, } Publisher's Version |
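The relocation idea described above, expressing a remote collaborator relative to their own workspace and re-anchoring them in the local one, boils down to a change of reference frames. The sketch below shows that transform chain with 4x4 matrices; the names are illustrative, not the authors' API.

```python
# Minimal sketch: re-anchor a remote user's pose from their workspace to the local
# workspace so their avatar appears at the corresponding spot despite incongruent rooms.
import numpy as np

def relocate(T_remoteWorld_user, T_remoteWorld_workspace, T_localWorld_workspace):
    """All arguments are 4x4 homogeneous transforms; returns the remote user's pose
    expressed in the local world frame."""
    T_workspace_user = np.linalg.inv(T_remoteWorld_workspace) @ T_remoteWorld_user
    return T_localWorld_workspace @ T_workspace_user
```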
|
Zand, Ghazal |
ISS '22: "TiltWalker: Operating a Telepresence ..."
TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone
Ghazal Zand, Yuan Ren, and Ahmed Sabbir Arif (University of California, Merced, USA) Mobile clients for telepresence robots are cluttered with interactive elements that either leave little room for the camera feeds or occlude them. Many do not provide meaningful feedback on the robot's state and most require the use of both hands. These issues make maneuvering telepresence robots with mobile devices difficult. TiltWalker enables controlling a telepresence robot with one hand, using tilt gestures on a smartphone. In a series of studies, we first justify the use of a Web platform, determine how far and fast users can tilt without compromising the comfort and the legibility of the display content, and identify a velocity-based function well-suited for control-display mapping. We refine TiltWalker based on the findings of these studies, then compare it with a default method in the final study. Results revealed that TiltWalker is significantly faster and more accurate than the default method. In addition, participants preferred TiltWalker's interaction methods and graphical feedback significantly more than those of the default method. @Article{ISS22p572, author = {Ghazal Zand and Yuan Ren and Ahmed Sabbir Arif}, title = {TiltWalker: Operating a Telepresence Robot with One-Hand by Tilt Controls on a Smartphone}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {572}, numpages = {26}, doi = {10.1145/3567725}, year = {2022}, } Publisher's Version Video |
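A velocity-based control-display mapping of the kind mentioned above can be pictured as a dead-zoned, clamped function from tilt angle to robot velocity; the sketch below uses invented angles and speeds purely for illustration, not the values identified in the paper's studies.

```python
# Illustrative tilt-to-velocity mapping: a small dead zone keeps the display legible
# while the phone is roughly level, and the commanded speed grows with tilt up to a clamp.
def tilt_to_velocity(pitch_deg, roll_deg, dead_zone=5.0, max_tilt=25.0,
                     max_linear=0.6, max_angular=1.0):
    """Map phone pitch/roll (degrees) to (linear m/s, angular rad/s) commands."""
    def scale(angle, max_out):
        magnitude = max(abs(angle) - dead_zone, 0.0) / (max_tilt - dead_zone)
        return min(magnitude, 1.0) * max_out * (1.0 if angle >= 0 else -1.0)
    return scale(pitch_deg, max_linear), scale(roll_deg, max_angular)

# Example: tilt the phone 15 degrees forward and 8 degrees to the right.
print(tilt_to_velocity(15.0, 8.0))   # -> approximately (0.3, 0.15)
```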
|
Zhang, Futian |
ISS '22: "Conductor: Intersection-Based ..."
Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality
Futian Zhang, Keiko Katsuragawa, and Edward Lank (University of Waterloo, Waterloo, Canada; National Research Council, Waterloo, Canada; University of Lille, Lille, France) Pointing is an elementary interaction in virtual and augmented reality environments, and, to effectively support selection, techniques must deal with the challenges of occlusion and depth specification. Most previous techniques require two explicit steps to handle occlusion. In this paper, we propose Conductor, an intuitive plane-ray-intersection-based 3D pointing technique in which users leverage bimanual input to control a ray and an intersecting plane. Conductor allows users to use the non-dominant hand to adjust the cursor distance on the ray while pointing with the dominant hand. We evaluate Conductor against Raycursor, a state-of-the-art VR pointing technique, and show that Conductor outperforms Raycursor for selection tasks. Given our results, we argue that bimanual selection techniques merit additional exploration to support object selection and placement within virtual environments. @Article{ISS22p560, author = {Futian Zhang and Keiko Katsuragawa and Edward Lank}, title = {Conductor: Intersection-Based Bimanual Pointing in Augmented and Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {560}, numpages = {15}, doi = {10.1145/3567713}, year = {2022}, } Publisher's Version Video |
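The geometric core implied above, a ray from the dominant hand intersected with a plane influenced by the non-dominant hand, is a standard ray-plane intersection; the sketch below shows that computation only, while the surrounding interaction logic is the paper's.

```python
# Plain ray-plane intersection used as an illustration of plane-ray pointing.
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the 3D point where the ray meets the plane, or None if the ray is
    parallel to the plane or the hit lies behind the ray origin."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < 1e-9:
        return None
    t = float(np.dot(np.asarray(plane_point, dtype=float) - ray_origin, plane_normal)) / denom
    return ray_origin + t * ray_dir if t >= 0 else None
```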
|
Zhao, Kaixing |
ISS '22: "Remote Graphic-Based Teaching ..."
Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers
Kaixing Zhao, Julie Mulet, Clara Sorita, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (Northwestern Polytechnical University, Xi'an, China; University of Toulouse, Toulouse, France; CNRS, Toulouse, France) The lockdown period related to the COVID-19 pandemic has had a strong impact on the educational system in general, but more particularly on the special education system. Indeed, for people with visual impairments (VI), the regular tools, which rely heavily on images and videos, were no longer usable. This specific situation highlighted an urgent need to develop tools that are accessible and that can provide solutions for remote teaching with people with VI. However, there is little work on the difficulties that this population encounters when learning remotely, as well as on the current practices of special education teachers. Such a lack of understanding limits the development of remote teaching systems that are adapted. In this paper, we conducted an online survey regarding the practices of 59 professionals giving lessons to pupils with VI, followed by a series of focus groups with special education workers facing teaching issues during the lockdown period. We followed an iterative design process where we designed successive low-fidelity prototypes to drive successive focus groups. We contribute an analysis of the issues faced by special education teachers in this situation, and a concept to drive the future development of a tool for remote graphic-based teaching with pupils with VI. @Article{ISS22p580, author = {Kaixing Zhao and Julie Mulet and Clara Sorita and Bernard Oriola and Marcos Serrano and Christophe Jouffrais}, title = {Remote Graphic-Based Teaching for Pupils with Visual Impairments: Understanding Current Practices and Co-designing an Accessible Tool with Special Education Teachers}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {580}, numpages = {30}, doi = {10.1145/3567733}, year = {2022}, } Publisher's Version |
|
Zhou, Qian |
ISS '22: "VideoPoseVR: Authoring Virtual ..."
VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos
Cheng Yao Wang, Qian Zhou, George Fitzmaurice, and Fraser Anderson (Cornell University, Ithaca, USA; Autodesk Research, Toronto, Canada) We present VideoPoseVR, a video-based animation authoring workflow using online videos to author character animations in VR. It leverages a state-of-the-art deep learning approach to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import the videos, search in the dataset, modify the motion timeline, and combine multiple motions from videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach as well as to gather initial feedback on the prototype. The study results suggest that VideoPoseVR was easy for novice users to learn when authoring animations and enabled rapid prototyping exploration for applications such as entertainment, skills training, and crowd simulations. @Article{ISS22p575, author = {Cheng Yao Wang and Qian Zhou and George Fitzmaurice and Fraser Anderson}, title = {VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {6}, number = {ISS}, articleno = {575}, numpages = {20}, doi = {10.1145/3567728}, year = {2022}, } Publisher's Version |
122 authors