ISS 2023 – Author Index
Akbar, Ahsan Jamal |
ISS '23: "Cross-Domain Gesture Sequence ..."
Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar
Ahsan Jamal Akbar, Zhiyao Sheng, Qian Zhang, and Dong Wang (Shanghai Jiao Tong University, Shanghai, China) Wireless-based gesture recognition provides an effective input method for exergames. However, previous works in wireless-based gesture recognition systems mainly recognize one primary user's gestures. In the multi-player scenario, the mutual interference between users makes it difficult to predict multiple players' gestures individually. To address this challenge, we propose a flexible FMCW-radar-based system, RFDual, which enables real-time cross-domain gesture sequence recognition for two players. To eliminate the mutual interference between users, we extract a new feature type, biased range-velocity spectrum (BRVS), which only depends on a target user. We then propose customized preprocessing methods (cropping and stationary component removal) to produce environment-independent and position-independent inputs. To enhance RFDual's resistance to unseen users and articulating speeds, we design effective data augmentation methods: sequence concatenating and randomizing. RFDual is evaluated with a dataset containing only unseen gesture sequences and achieves a gesture error rate of 1.41%. Extensive experimental results show the impressive robustness of RFDual for data in new domains, including new users, articulating speeds, positions, and environments. These results demonstrate the great potential of RFDual in practical applications like two-player exergames and gesture/activity recognition for drivers and passengers in the cab. @Article{ISS23p441, author = {Ahsan Jamal Akbar and Zhiyao Sheng and Qian Zhang and Dong Wang}, title = {Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {441}, numpages = {30}, doi = {10.1145/3626477}, year = {2023}, } Publisher's Version |
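For readers unfamiliar with the underlying signal processing, the sketch below computes a conventional FMCW range-velocity (range-Doppler) spectrum with NumPy from one frame of raw ADC samples. It is only an illustration of the standard pipeline such systems build on; the paper's biased range-velocity spectrum (BRVS) is the authors' derived feature, and the clutter-removal line is a rough stand-in for their preprocessing, not their method.

import numpy as np

def range_velocity_spectrum(frame):
    # frame: raw ADC samples for one radar frame, shaped (num_chirps, samples_per_chirp)
    window = np.hanning(frame.shape[1])
    range_fft = np.fft.fft(frame * window, axis=1)        # FFT over fast time -> range bins
    range_fft -= range_fft.mean(axis=0, keepdims=True)    # remove static (zero-Doppler) clutter
    doppler = np.fft.fft(range_fft, axis=0)               # FFT over chirps -> velocity bins
    return np.abs(np.fft.fftshift(doppler, axes=0))       # rows: velocity bins, columns: range bins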
|
Anthes, Christoph |
ISS '23: "Aircraft Cockpit Interaction ..."
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
Stefan Auer, Christoph Anthes, Harald Reiterer, and Hans-Christian Jetter (University of Applied Sciences Upper Austria, Hagenberg, Austria; University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences in input elements, irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for PFS. @Article{ISS23p445, author = {Stefan Auer and Christoph Anthes and Harald Reiterer and Hans-Christian Jetter}, title = {Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {445}, numpages = {24}, doi = {10.1145/3626481}, year = {2023}, } Publisher's Version |
|
Auer, Stefan |
ISS '23: "Aircraft Cockpit Interaction ..."
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
Stefan Auer, Christoph Anthes, Harald Reiterer, and Hans-Christian Jetter (University of Applied Sciences Upper Austria, Hagenberg, Austria; University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences in input elements, irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for PFS. @Article{ISS23p445, author = {Stefan Auer and Christoph Anthes and Harald Reiterer and Hans-Christian Jetter}, title = {Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {445}, numpages = {24}, doi = {10.1145/3626481}, year = {2023}, } Publisher's Version |
|
Bhatnagar, Tigmanshu |
ISS '23: "Analysis of Product Architectures ..."
Analysis of Product Architectures of Pin Array Technologies for Tactile Displays
Tigmanshu Bhatnagar, Albert Higgins, Nicolai Marquardt, Mark Miodownik, and Catherine Holloway (University College London, London, UK; Global Disability Innovation Hub, London, UK; Microsoft Research, Redmond, USA) Refreshable tactile displays based on pin array technologies have a significant impact on the education of children with visual impairments, but they are prohibitively expensive. To better understand their design and the reason for the high cost, we created a database and analyzed the product architectures of 67 unique pin array technologies from literature and patents. We qualitatively coded their functional elements and analyzed the physical parts that execute the functions. Our findings highlight that pin array surfaces aim to achieve three key functions, i.e., raise and lower pins, lock pins, and create a large array. We also contribute a concise morphological chart that organises the various mechanisms for these three functions. Based on this, we discuss the reasons for the high cost and complexity of these surface haptic technologies and infer why larger displays and more affordable devices are not available. Our findings can be used to design new mechanisms for more affordable and scalable pin array display systems. @Article{ISS23p432, author = {Tigmanshu Bhatnagar and Albert Higgins and Nicolai Marquardt and Mark Miodownik and Catherine Holloway}, title = {Analysis of Product Architectures of Pin Array Technologies for Tactile Displays}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {432}, numpages = {21}, doi = {10.1145/3626468}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available |
|
Butz, Andreas Martin |
ISS '23: "SeatmateVR: Proxemic Cues ..."
SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, and Andreas Martin Butz (LMU Munich, Munich, Germany; Aalto University, Helsinki, Finland) Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces. @Article{ISS23p438, author = {Jingyi Li and Hyerim Park and Robin Welsch and Sven Mayer and Andreas Martin Butz}, title = {SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {438}, numpages = {20}, doi = {10.1145/3626474}, year = {2023}, } Publisher's Version Archive submitted (190 MB) |
|
Chen, Niu |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Cho, Hyunsung |
ISS '23: "BlendMR: A Computational Method ..."
BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, and David Lindlbauer (Carnegie Mellon University, Pittsburgh, USA; University of Tokyo, Tokyo, Japan) Mixed Reality (MR) systems display content freely in space, and present nearly arbitrary amounts of information, enabling ubiquitous access to digital information. This approach, however, introduces clutter and distraction if too much virtual content is shown. We present BlendMR, an optimization-based MR system that blends virtual content onto the physical objects in users’ environments to serve as ambient information displays. Our approach takes existing 2D applications and meshes of physical objects as input. It analyses the geometry of the physical objects and identifies regions that are suitable hosts for virtual elements. Using a novel integer programming formulation, our approach then optimally maps selected contents of the 2D applications onto the object, optimizing for factors such as importance and hierarchy of information, viewing angle, and geometric distortion. We evaluate BlendMR by comparing it to a 2D window baseline. Study results show that BlendMR decreases clutter and distraction, and is preferred by users. We demonstrate the applicability of BlendMR in a series of results and usage scenarios. @Article{ISS23p436, author = {Violet Yinuo Han and Hyunsung Cho and Kiyosu Maeda and Alexandra Ion and David Lindlbauer}, title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {436}, numpages = {25}, doi = {10.1145/3626472}, year = {2023}, } Publisher's Version Info |
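The paper's integer programming formulation is not reproduced in this index. As a rough, hypothetical illustration of the kind of content-to-region assignment it describes, the sketch below uses the PuLP library with made-up element and region identifiers and a single precomputed suitability score per pair; BlendMR's actual objective combines richer terms (importance, hierarchy, viewing angle, geometric distortion).

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def assign_content(elements, regions, score):
    # elements: ids of 2D app widgets; regions: ids of candidate object regions
    # score[(e, r)]: precomputed suitability of showing element e on region r
    prob = LpProblem("blend_assignment", LpMaximize)
    x = {(e, r): LpVariable(f"x_{e}_{r}", cat=LpBinary) for e in elements for r in regions}
    prob += lpSum(score[e, r] * x[e, r] for e in elements for r in regions)
    for e in elements:
        prob += lpSum(x[e, r] for r in regions) <= 1   # each element placed at most once
    for r in regions:
        prob += lpSum(x[e, r] for e in elements) <= 1  # each region hosts at most one element
    prob.solve()
    return [(e, r) for (e, r), var in x.items() if var.value() and var.value() > 0.5]

A call such as assign_content(["clock", "next_song"], ["mug_side", "lamp_base"], score) then returns the chosen placements; all names here are illustrative.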
|
Cordts, Maurice |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D-printers enable end-users to design and fabricate unique physical artifacts but maintain an increased entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Duan, Yongjie |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, especially, has attracted a lot of attention since it intuitively brings three additional degrees of freedom (DOF) compared with two-dimensional (2D) touching points. The mapping of finger orientation can be classified as being either absolute or relative, suitable for different interaction applications. However, only absolute orientation has been explored in prior works. The relative angles can be calculated based on two estimated absolute orientations, although higher accuracy is expected by predicting relative rotation from input images directly. Consequently, in this paper, we propose to estimate complete 3D relative finger angles based on two fingerprint images, which incorporate more information with a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches with absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolutions, finger types, and rotation ranges on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
|
Echtler, Florian |
ISS '23: "SurfaceCast: Ubiquitous, Cross-Device ..."
SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Florian Echtler, Vitus Maierhöfer, Nicolai Brodersen Hansen, and Raphael Wimmer (Aalborg University, Aalborg, Denmark; University of Regensburg, Regensburg, Germany) Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction. @Article{ISS23p439, author = {Florian Echtler and Vitus Maierhöfer and Nicolai Brodersen Hansen and Raphael Wimmer}, title = {SurfaceCast: Ubiquitous, Cross-Device Surface Sharing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {439}, numpages = {23}, doi = {10.1145/3626475}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available |
|
Ens, Barrett |
ISS '23: "Embodied Provenance for Immersive ..."
Embodied Provenance for Immersive Sensemaking
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin (Monash University, Melbourne, Australia) Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance, the use of three-dimensional space and embodied interactions in supporting recalling, reproducing, annotating and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative evaluation with 17 participants, which shows the system’s potential for assisting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytic tools for embodied provenance. @Article{ISS23p435, author = {Yidan Zhang and Barrett Ens and Kadek Ananta Satriadi and Ying Yang and Sarah Goodwin}, title = {Embodied Provenance for Immersive Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {435}, numpages = {19}, doi = {10.1145/3626471}, year = {2023}, } Publisher's Version Video
|
Evangelista Belo, João Marcelo |
ISS '23: "CADTrack: Instructions and ..."
CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects
João Marcelo Evangelista Belo, Jon Wissing, Tiare Feuchtner, and Kaj Grønbæk (Aarhus University, Aarhus, Denmark; University of Konstanz, Konstanz, Germany) Determining the correct orientation of objects can be critical to succeed in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort. @Article{ISS23p426, author = {João Marcelo Evangelista Belo and Jon Wissing and Tiare Feuchtner and Kaj Grønbæk}, title = {CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {426}, numpages = {20}, doi = {10.1145/3626462}, year = {2023}, } Publisher's Version Video |
|
Feng, Jianjiang |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, especially, has attracted a lot of attention since it intuitively brings three additional degrees of freedom (DOF) compared with two-dimensional (2D) touching points. The mapping of finger orientation can be classified as being either absolute or relative, suitable for different interaction applications. However, only absolute orientation has been explored in prior works. The relative angles can be calculated based on two estimated absolute orientations, although higher accuracy is expected by predicting relative rotation from input images directly. Consequently, in this paper, we propose to estimate complete 3D relative finger angles based on two fingerprint images, which incorporate more information with a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches with absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolutions, finger types, and rotation ranges on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
|
Feng, Li |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection on word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements the caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
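ChunkTouch's actual chunking pipeline is not specified in this index; as a hypothetical sketch of expanding a selection by successively larger syntactic units, the code below uses spaCy noun chunks and sentences as stand-in semantic levels for a tapped character position.

import spacy  # assumption: spaCy with the small English model installed

nlp = spacy.load("en_core_web_sm")

def expansion_levels(text, char_index):
    # Return successively larger character spans containing the tapped position:
    # word -> enclosing noun chunk (if any) -> enclosing sentence.
    doc = nlp(text)
    token = next((t for t in doc if t.idx <= char_index < t.idx + len(t.text)), None)
    if token is None:
        return []
    levels = [(token.idx, token.idx + len(token.text))]
    for chunk in doc.noun_chunks:
        if chunk.start <= token.i < chunk.end:
            levels.append((chunk.start_char, chunk.end_char))
            break
    levels.append((token.sent.start_char, token.sent.end_char))
    unique = []
    for span in levels:
        if span not in unique:   # keep order, drop duplicate spans
            unique.append(span)
    return unique

Sliding the gesture further would then step through these spans from smallest to largest; this is only one plausible realization of the idea.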
|
Feuchtner, Tiare |
ISS '23: "CADTrack: Instructions and ..."
CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects
João Marcelo Evangelista Belo, Jon Wissing, Tiare Feuchtner, and Kaj Grønbæk (Aarhus University, Aarhus, Denmark; University of Konstanz, Konstanz, Germany) Determining the correct orientation of objects can be critical to succeed in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort. @Article{ISS23p426, author = {João Marcelo Evangelista Belo and Jon Wissing and Tiare Feuchtner and Kaj Grønbæk}, title = {CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {426}, numpages = {20}, doi = {10.1145/3626462}, year = {2023}, } Publisher's Version Video |
|
Fujita, Kazuyuki |
ISS '23: "UbiSurface: A Robotic Touch ..."
UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Singapore Management University, Singapore, Singapore) Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups. @Article{ISS23p443, author = {Ryota Gomi and Kazuki Takashima and Yuki Onishi and Kazuyuki Fujita and Yoshifumi Kitamura}, title = {UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {443}, numpages = {22}, doi = {10.1145/3626479}, year = {2023}, } Publisher's Version |
|
Funazaki, Yukina |
ISS '23: "Evaluating the Applicability ..."
Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths
Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, and Satoshi Nakamura (Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan; Meiji University, Nakano, Japan) Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions. @Article{ISS23p430, author = {Shota Yamanaka and Takumi Takaku and Yukina Funazaki and Noboru Seto and Satoshi Nakamura}, title = {Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {430}, numpages = {21}, doi = {10.1145/3626466}, year = {2023}, } Publisher's Version Archive submitted (240 kB) |
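For reference, the steering law evaluated in this paper is commonly written in its integral form (Accot and Zhai), together with the local speed model derived from it; the curvature-dependent refinements the authors test are not reproduced here:

MT = a + b \int_{C} \frac{ds}{W(s)}, \qquad v(s) \approx \frac{W(s)}{\tau}

where C is the constrained path (here, the road), W(s) is its local width, a and b are empirically fitted constants, and \tau is the time constant relating local speed to path width.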
|
Gomi, Ryota |
ISS '23: "UbiSurface: A Robotic Touch ..."
UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Singapore Management University, Singapore, Singapore) Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups. @Article{ISS23p443, author = {Ryota Gomi and Kazuki Takashima and Yuki Onishi and Kazuyuki Fujita and Yoshifumi Kitamura}, title = {UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {443}, numpages = {22}, doi = {10.1145/3626479}, year = {2023}, } Publisher's Version |
|
Goodwin, Sarah |
ISS '23: "Embodied Provenance for Immersive ..."
Embodied Provenance for Immersive Sensemaking
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin (Monash University, Melbourne, Australia) Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance, the use of three-dimensional space and embodied interactions in supporting recalling, reproducing, annotating and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative evaluation with 17 participants, which shows the system’s potential for assisting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytic tools for embodied provenance. @Article{ISS23p435, author = {Yidan Zhang and Barrett Ens and Kadek Ananta Satriadi and Ying Yang and Sarah Goodwin}, title = {Embodied Provenance for Immersive Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {435}, numpages = {19}, doi = {10.1145/3626471}, year = {2023}, } Publisher's Version Video
|
Grant, Alana |
ISS '23: "Hum-ble Beginnings: Developing ..."
Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment
Alana Grant, Vilma Kankaanpää, and Ilyena Hirskyj-Douglas (University of Glasgow, Glasgow, UK) Though computer systems have entered widespread use for animals' enrichment in zoos, no interactive computer systems suited to giraffes have yet been developed. Hence, which input modes or audio stimuli giraffes might best utilise remains unknown. To address this issue and probe development of such systems alongside the animals themselves and zookeepers, researchers gathered requirements from the keepers and from prototyping with giraffes, then created two interfaces -- one touch-based and one proximity-based -- that play giraffe-humming audio or white noise when activated. Over two months of observation, giraffes utilised the proximity-based system more frequently than the touch-based one but in shorter episodes. Secondly, the study highlighted the significance of considering user-specific needs in computer systems' development: the lack of preference shown for any specific audio type indicates that the audio stimuli chosen were inappropriate for these giraffes. In addition, the paper articulates several lessons that can be drawn from human--computer interaction when one develops systems for animals and, in turn, what the findings presented mean for humans. @Article{ISS23p434, author = {Alana Grant and Vilma Kankaanpää and Ilyena Hirskyj-Douglas}, title = {Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {434}, numpages = {23}, doi = {10.1145/3626470}, year = {2023}, } Publisher's Version |
|
Grønbæk, Jens Emil Sloth |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Grønbæk, Kaj |
ISS '23: "CADTrack: Instructions and ..."
CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects
João Marcelo Evangelista Belo, Jon Wissing, Tiare Feuchtner, and Kaj Grønbæk (Aarhus University, Aarhus, Denmark; University of Konstanz, Konstanz, Germany) Determining the correct orientation of objects can be critical to succeed in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort. @Article{ISS23p426, author = {João Marcelo Evangelista Belo and Jon Wissing and Tiare Feuchtner and Kaj Grønbæk}, title = {CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {426}, numpages = {20}, doi = {10.1145/3626462}, year = {2023}, } Publisher's Version Video |
|
Grymer, Nicolai |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Gugenheimer, Jan |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D-printers enable end-users to design and fabricate unique physical artifacts but maintain an increased entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Han, Violet Yinuo |
ISS '23: "BlendMR: A Computational Method ..."
BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, and David Lindlbauer (Carnegie Mellon University, Pittsburgh, USA; University of Tokyo, Tokyo, Japan) Mixed Reality (MR) systems display content freely in space, and present nearly arbitrary amounts of information, enabling ubiquitous access to digital information. This approach, however, introduces clutter and distraction if too much virtual content is shown. We present BlendMR, an optimization-based MR system that blends virtual content onto the physical objects in users’ environments to serve as ambient information displays. Our approach takes existing 2D applications and meshes of physical objects as input. It analyses the geometry of the physical objects and identifies regions that are suitable hosts for virtual elements. Using a novel integer programming formulation, our approach then optimally maps selected contents of the 2D applications onto the object, optimizing for factors such as importance and hierarchy of information, viewing angle, and geometric distortion. We evaluate BlendMR by comparing it to a 2D window baseline. Study results show that BlendMR decreases clutter and distraction, and is preferred by users. We demonstrate the applicability of BlendMR in a series of results and usage scenarios. @Article{ISS23p436, author = {Violet Yinuo Han and Hyunsung Cho and Kiyosu Maeda and Alexandra Ion and David Lindlbauer}, title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {436}, numpages = {25}, doi = {10.1145/3626472}, year = {2023}, } Publisher's Version Info |
|
Hansen, Nicolai Brodersen |
ISS '23: "SurfaceCast: Ubiquitous, Cross-Device ..."
SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Florian Echtler, Vitus Maierhöfer, Nicolai Brodersen Hansen, and Raphael Wimmer (Aalborg University, Aalborg, Denmark; University of Regensburg, Regensburg, Germany) Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction. @Article{ISS23p439, author = {Florian Echtler and Vitus Maierhöfer and Nicolai Brodersen Hansen and Raphael Wimmer}, title = {SurfaceCast: Ubiquitous, Cross-Device Surface Sharing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {439}, numpages = {23}, doi = {10.1145/3626475}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available |
|
Harrison, Chris |
ISS '23: "WorldPoint: Finger Pointing ..."
WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions
Daehwa Kim, Vimal Mollyn, and Chris Harrison (Carnegie Mellon University, Pittsburgh, USA) Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo. @Article{ISS23p442, author = {Daehwa Kim and Vimal Mollyn and Chris Harrison}, title = {WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {442}, numpages = {19}, doi = {10.1145/3626478}, year = {2023}, } Publisher's Version |
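WorldPoint's own detection pipeline is not reproduced here; as a rough, hypothetical illustration of recognizing a raised pointing hand as a "wake gesture" from a camera frame, the sketch below uses MediaPipe Hands with a simple extended-index heuristic. The landmark rule and the webcam capture are assumptions for illustration only.

import cv2
import mediapipe as mp  # assumption: MediaPipe Hands as a stand-in hand tracker

mp_hands = mp.solutions.hands
HL = mp_hands.HandLandmark

def is_pointing(lm):
    # Heuristic wake-gesture test: index finger extended, other fingers curled.
    # Image y grows downward, so an extended fingertip sits above its PIP joint for an upright hand.
    def extended(tip, pip):
        return lm[tip].y < lm[pip].y
    index_up = extended(HL.INDEX_FINGER_TIP, HL.INDEX_FINGER_PIP)
    rest_curled = all(not extended(t, p) for t, p in [
        (HL.MIDDLE_FINGER_TIP, HL.MIDDLE_FINGER_PIP),
        (HL.RING_FINGER_TIP, HL.RING_FINGER_PIP),
        (HL.PINKY_TIP, HL.PINKY_PIP)])
    return index_up and rest_curled

cap = cv2.VideoCapture(0)  # any camera; the paper targets the rear wide-angle phone camera
with mp_hands.Hands(max_num_hands=1) as hands:
    ok, frame = cap.read()
    if ok:
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            print("wake gesture" if is_pointing(lm) else "no gesture")
cap.release()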
|
He, Ke |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, especially, has attracted a lot of attention since it intuitively brings three additional degrees of freedom (DOF) compared with two-dimensional (2D) touching points. The mapping of finger orientation can be classified as being either absolute or relative, suitable for different interaction applications. However, only absolute orientation has been explored in prior works. The relative angles can be calculated based on two estimated absolute orientations, although higher accuracy is expected by predicting relative rotation from input images directly. Consequently, in this paper, we propose to estimate complete 3D relative finger angles based on two fingerprint images, which incorporate more information with a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches with absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolutions, finger types, and rotation ranges on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
|
Herman, Laura Mariah |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Higgins, Albert |
ISS '23: "Analysis of Product Architectures ..."
Analysis of Product Architectures of Pin Array Technologies for Tactile Displays
Tigmanshu Bhatnagar, Albert Higgins, Nicolai Marquardt, Mark Miodownik, and Catherine Holloway (University College London, London, UK; Global Disability Innovation Hub, London, UK; Microsoft Research, Redmond, USA) Refreshable tactile displays based on pin array technologies have a significant impact on the education of children with visual impairments, but they are prohibitively expensive. To better understand their design and the reason for the high cost, we created a database and analyzed the product architectures of 67 unique pin array technologies from literature and patents. We qualitatively coded their functional elements and analyzed the physical parts that execute the functions. Our findings highlight that pin array surfaces aim to achieve three key functions, i.e., raise and lower pins, lock pins, and create a large array. We also contribute a concise morphological chart that organises the various mechanisms for these three functions. Based on this, we discuss the reasons for the high cost and complexity of these surface haptic technologies and infer why larger displays and more affordable devices are not available. Our findings can be used to design new mechanisms for more affordable and scalable pin array display systems. @Article{ISS23p432, author = {Tigmanshu Bhatnagar and Albert Higgins and Nicolai Marquardt and Mark Miodownik and Catherine Holloway}, title = {Analysis of Product Architectures of Pin Array Technologies for Tactile Displays}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {432}, numpages = {21}, doi = {10.1145/3626468}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available |
|
Higuchi, Keita |
ISS '23: "Interactive 3D Annotation ..."
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi (University of Tokyo, Tokyo, Japan; Preferred Networks, Tokyo, Japan) Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets. @Article{ISS23p440, author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi}, title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {440}, numpages = {18}, doi = {10.1145/3626476}, year = {2023}, } Publisher's Version |
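For readers who want a concrete feel for the back-projection step described in this abstract, the following is a minimal sketch, not the authors' implementation: it assumes a reconstructed point cloud, per-frame camera intrinsics/extrinsics, and user-drawn rectangles are already available from an SfM pipeline, keeps only the points whose projections fall inside the rectangle in every annotated frame, and fits a simple axis-aligned box (the paper fits tighter boxes).

```python
# Sketch: select 3D points via user-drawn 2D rectangles from multiple views,
# then fit an axis-aligned bounding box. Hypothetical data layout, not the paper's code.
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points into pixel coordinates; also return an 'in front of camera' mask."""
    cam = points @ R.T + t                          # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6
    pix = cam @ K.T
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)
    return pix, in_front

def select_by_rectangles(points, views):
    """Keep points whose projection lies inside the rectangle in every annotated view.
    `views` is a list of dicts with keys K, R, t, and rect=(x0, y0, x1, y1)."""
    keep = np.ones(len(points), dtype=bool)
    for v in views:
        pix, ok = project(points, v["K"], v["R"], v["t"])
        x0, y0, x1, y1 = v["rect"]
        inside = ok & (pix[:, 0] >= x0) & (pix[:, 0] <= x1) \
                    & (pix[:, 1] >= y0) & (pix[:, 1] <= y1)
        keep &= inside
    return points[keep]

def tight_aabb(points):
    """Axis-aligned 3D bounding box (min corner, max corner) of the selected points."""
    return points.min(axis=0), points.max(axis=0)
```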
|
Hirskyj-Douglas, Ilyena |
ISS '23: "Hum-ble Beginnings: Developing ..."
Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment
Alana Grant, Vilma Kankaanpää, and Ilyena Hirskyj-Douglas (University of Glasgow, Glasgow, UK) Though computer systems have entered widespread use for animals' enrichment in zoos, no interactive computer systems suited to giraffes have yet been developed. Hence, which input modes or audio stimuli giraffes might best utilise remains unknown. To address this issue and probe development of such systems alongside the animals themselves and zookeepers, researchers gathered requirements from the keepers and from prototyping with giraffes, then created two interfaces -- one touch-based and one proximity-based -- that play giraffe-humming audio or white noise when activated. Over two months of observation, giraffes utilised the proximity-based system more frequently than the touch-based one but in shorter episodes. Secondly, the study highlighted the significance of considering user-specific needs in computer systems' development: the lack of preference shown for any specific audio type indicates that the audio stimuli chosen were inappropriate for these giraffes. In addition, the paper articulates several lessons that can be drawn from human--computer interaction when one develops systems for animals and, in turn, what the findings presented mean for humans. @Article{ISS23p434, author = {Alana Grant and Vilma Kankaanpää and Ilyena Hirskyj-Douglas}, title = {Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {434}, numpages = {23}, doi = {10.1145/3626470}, year = {2023}, } Publisher's Version |
|
Hohn, Jessica |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D printers enable end-users to design and fabricate unique physical artifacts, but they still involve a high entry barrier and considerable friction. End-users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Holloway, Catherine |
ISS '23: "Analysis of Product Architectures ..."
Analysis of Product Architectures of Pin Array Technologies for Tactile Displays
Tigmanshu Bhatnagar, Albert Higgins, Nicolai Marquardt, Mark Miodownik, and Catherine Holloway (University College London, London, UK; Global Disability Innovation Hub, London, UK; Microsoft Research, Redmond, USA) Refreshable tactile displays based on pin array technologies have a significant impact on the education of children with visual impairments, but they are prohibitively expensive. To better understand their design and the reason for the high cost, we created a database and analyzed the product architectures of 67 unique pin array technologies from literature and patents. We qualitatively coded their functional elements and analyzed the physical parts that execute the functions. Our findings highlight that pin array surfaces aim to achieve three key functions, i.e., raise and lower pins, lock pins, and create a large array. We also contribute a concise morphological chart that organises the various mechanisms for these three functions. Based on this, we discuss the reasons for the high cost and complexity of these surface haptic technologies and infer why larger displays and more affordable devices are not available. Our findings can be used to design new mechanisms for more affordable and scalable pin array display systems. @Article{ISS23p432, author = {Tigmanshu Bhatnagar and Albert Higgins and Nicolai Marquardt and Mark Miodownik and Catherine Holloway}, title = {Analysis of Product Architectures of Pin Array Technologies for Tactile Displays}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {432}, numpages = {21}, doi = {10.1145/3626468}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available |
|
Houben, Steven |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Igarashi, Takeo |
ISS '23: "Interactive 3D Annotation ..."
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi (University of Tokyo, Tokyo, Japan; Preferred Networks, Tokyo, Japan) Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets. @Article{ISS23p440, author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi}, title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {440}, numpages = {18}, doi = {10.1145/3626476}, year = {2023}, } Publisher's Version |
|
Ion, Alexandra |
ISS '23: "BlendMR: A Computational Method ..."
BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, and David Lindlbauer (Carnegie Mellon University, Pittsburgh, USA; University of Tokyo, Tokyo, Japan) Mixed Reality (MR) systems display content freely in space, and present nearly arbitrary amounts of information, enabling ubiquitous access to digital information. This approach, however, introduces clutter and distraction if too much virtual content is shown. We present BlendMR, an optimization-based MR system that blends virtual content onto the physical objects in users’ environments to serve as ambient information displays. Our approach takes existing 2D applications and meshes of physical objects as input. It analyses the geometry of the physical objects and identifies regions that are suitable hosts for virtual elements. Using a novel integer programming formulation, our approach then optimally maps selected contents of the 2D applications onto the object, optimizing for factors such as importance and hierarchy of information, viewing angle, and geometric distortion. We evaluate BlendMR by comparing it to a 2D window baseline. Study results show that BlendMR decreases clutter and distraction, and is preferred by users. We demonstrate the applicability of BlendMR in a series of results and usage scenarios. @Article{ISS23p436, author = {Violet Yinuo Han and Hyunsung Cho and Kiyosu Maeda and Alexandra Ion and David Lindlbauer}, title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {436}, numpages = {25}, doi = {10.1145/3626472}, year = {2023}, } Publisher's Version Info |
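As a rough illustration of the kind of assignment-style integer program the abstract mentions, the sketch below uses PuLP with hypothetical element and region names and made-up utility scores; it is not BlendMR's actual formulation, which additionally models information hierarchy, viewing angle, and geometric distortion.

```python
# Minimal assignment ILP sketch: map 2D app contents onto object surface regions.
# Element/region names and utilities are invented for illustration only.
import pulp

elements = ["title", "next_event", "unread_count"]    # contents of 2D applications
regions = ["mug_band", "lamp_base", "monitor_edge"]   # candidate patches on physical objects

# Hypothetical per-pair utility (e.g., importance x geometric suitability), precomputed elsewhere.
utility = {
    ("title", "mug_band"): 0.9, ("title", "lamp_base"): 0.4, ("title", "monitor_edge"): 0.7,
    ("next_event", "mug_band"): 0.5, ("next_event", "lamp_base"): 0.8, ("next_event", "monitor_edge"): 0.6,
    ("unread_count", "mug_band"): 0.3, ("unread_count", "lamp_base"): 0.6, ("unread_count", "monitor_edge"): 0.9,
}

prob = pulp.LpProblem("ambient_layout", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", list(utility.keys()), cat="Binary")

# Objective: total utility of the chosen placements.
prob += pulp.lpSum(utility[k] * x[k] for k in utility)

# Each content element goes to at most one region, and each region hosts at most one element.
for e in elements:
    prob += pulp.lpSum(x[(e, r)] for r in regions) <= 1
for r in regions:
    prob += pulp.lpSum(x[(e, r)] for e in elements) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = [k for k in utility if x[k].value() > 0.5]
print(placement)
```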
|
Jetter, Hans-Christian |
ISS '23: "Aircraft Cockpit Interaction ..."
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
Stefan Auer, Christoph Anthes, Harald Reiterer, and Hans-Christian Jetter (University of Applied Sciences Upper Austria, Hagenberg, Austria; University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences in input elements, irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for PFS. @Article{ISS23p445, author = {Stefan Auer and Christoph Anthes and Harald Reiterer and Hans-Christian Jetter}, title = {Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {445}, numpages = {24}, doi = {10.1145/3626481}, year = {2023}, } Publisher's Version |
|
Jiang, Peiling |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection at the word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements caret-based sub-word selection by facilitating the selection of semantic units at the word level and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
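To make the chunk-based expansion idea more tangible, here is a small sketch in the spirit of the ChunkTouch variant. It uses spaCy (an assumption; the paper does not specify this library or these granularities) to derive word, noun-chunk, and sentence spans containing the touched character, which a sliding gesture could then step through.

```python
# Sketch: candidate selection spans of increasing size around a touched character.
# spaCy and the en_core_web_sm model are assumptions, not the authors' pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def expansion_levels(text, char_index):
    """Return (start, end, label) spans containing char_index, smallest to largest."""
    doc = nlp(text)
    levels = []
    for label, spans in (
        ("word", [doc[i:i + 1] for i in range(len(doc))]),
        ("chunk", list(doc.noun_chunks)),
        ("sentence", list(doc.sents)),
    ):
        for span in spans:
            if span.start_char <= char_index < span.end_char:
                levels.append((span.start_char, span.end_char, label))
                break
    return levels

text = "The quick brown fox jumps over the lazy dog near the river."
for start, end, label in expansion_levels(text, text.index("fox")):
    print(f"{label:8s} -> {text[start:end]!r}")
```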
|
Kankaanpää, Vilma |
ISS '23: "Hum-ble Beginnings: Developing ..."
Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment
Alana Grant, Vilma Kankaanpää, and Ilyena Hirskyj-Douglas (University of Glasgow, Glasgow, UK) Though computer systems have entered widespread use for animals' enrichment in zoos, no interactive computer systems suited to giraffes have yet been developed. Hence, which input modes or audio stimuli giraffes might best utilise remains unknown. To address this issue and probe development of such systems alongside the animals themselves and zookeepers, researchers gathered requirements from the keepers and from prototyping with giraffes, then created two interfaces -- one touch-based and one proximity-based -- that play giraffe-humming audio or white noise when activated. Over two months of observation, giraffes utilised the proximity-based system more frequently than the touch-based one but in shorter episodes. Secondly, the study highlighted the significance of considering user-specific needs in computer systems' development: the lack of preference shown for any specific audio type indicates that the audio stimuli chosen were inappropriate for these giraffes. In addition, the paper articulates several lessons that can be drawn from human--computer interaction when one develops systems for animals and, in turn, what the findings presented mean for humans. @Article{ISS23p434, author = {Alana Grant and Vilma Kankaanpää and Ilyena Hirskyj-Douglas}, title = {Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {434}, numpages = {23}, doi = {10.1145/3626470}, year = {2023}, } Publisher's Version |
|
Kawabe, Wataru |
ISS '23: "Interactive 3D Annotation ..."
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi (University of Tokyo, Tokyo, Japan; Preferred Networks, Tokyo, Japan) Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets. @Article{ISS23p440, author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi}, title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {440}, numpages = {18}, doi = {10.1145/3626476}, year = {2023}, } Publisher's Version |
|
Kim, Daehwa |
ISS '23: "WorldPoint: Finger Pointing ..."
WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions
Daehwa Kim, Vimal Mollyn, and Chris Harrison (Carnegie Mellon University, Pittsburgh, USA) Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo. @Article{ISS23p442, author = {Daehwa Kim and Vimal Mollyn and Chris Harrison}, title = {WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {442}, numpages = {19}, doi = {10.1145/3626478}, year = {2023}, } Publisher's Version |
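A hedged illustration of the "wake gesture" idea described above: the heuristic below uses MediaPipe Hands on webcam frames to flag an extended index finger. It is only an approximation of the concept, not the paper's hardware-accelerated pipeline or its wide-angle rear-camera setup.

```python
# Sketch: flag an extended index finger as a pointing "wake gesture" (rough heuristic).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
HL = mp_hands.HandLandmark

def is_pointing(landmarks):
    """Index finger extended while middle/ring/pinky are curled (very rough heuristic)."""
    def dist(a, b):
        return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    wrist = landmarks[HL.WRIST]
    index_extended = dist(landmarks[HL.INDEX_FINGER_TIP], wrist) > dist(landmarks[HL.INDEX_FINGER_PIP], wrist)
    others_curled = all(
        dist(landmarks[tip], wrist) < dist(landmarks[pip], wrist)
        for tip, pip in [(HL.MIDDLE_FINGER_TIP, HL.MIDDLE_FINGER_PIP),
                         (HL.RING_FINGER_TIP, HL.RING_FINGER_PIP),
                         (HL.PINKY_TIP, HL.PINKY_PIP)]
    )
    return index_extended and others_curled

cap = cv2.VideoCapture(0)  # stand-in for a phone's wide-angle rear camera
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lms = results.multi_hand_landmarks[0].landmark
            if is_pointing(lms):
                print("wake gesture detected")  # hand off to voice-command handling
cap.release()
```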
|
Kitamura, Yoshifumi |
ISS '23: "UbiSurface: A Robotic Touch ..."
UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Singapore Management University, Singapore, Singapore) Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups. @Article{ISS23p443, author = {Ryota Gomi and Kazuki Takashima and Yuki Onishi and Kazuyuki Fujita and Yoshifumi Kitamura}, title = {UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {443}, numpages = {22}, doi = {10.1145/3626479}, year = {2023}, } Publisher's Version |
|
Lank, Edward |
ISS '23: "Interactions across Displays ..."
Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch
Liwei Wu, Qing Liu, Jian Zhao, and Edward Lank (University of Waterloo, Waterloo, Canada) The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore the current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities for VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes that were not intentionally designed for streaming. As a response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space. @Article{ISS23p437, author = {Liwei Wu and Qing Liu and Jian Zhao and Edward Lank}, title = {Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {437}, numpages = {24}, doi = {10.1145/3626473}, year = {2023}, } Publisher's Version |
|
Li, Jingyi |
ISS '23: "SeatmateVR: Proxemic Cues ..."
SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, and Andreas Martin Butz (LMU Munich, Munich, Germany; Aalto University, Helsinki, Finland) Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces. @Article{ISS23p438, author = {Jingyi Li and Hyerim Park and Robin Welsch and Sven Mayer and Andreas Martin Butz}, title = {SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {438}, numpages = {20}, doi = {10.1145/3626474}, year = {2023}, } Publisher's Version Archive submitted (190 MB) |
|
Lindlbauer, David |
ISS '23: "BlendMR: A Computational Method ..."
BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, and David Lindlbauer (Carnegie Mellon University, Pittsburgh, USA; University of Tokyo, Tokyo, Japan) Mixed Reality (MR) systems display content freely in space, and present nearly arbitrary amounts of information, enabling ubiquitous access to digital information. This approach, however, introduces clutter and distraction if too much virtual content is shown. We present BlendMR, an optimization-based MR system that blends virtual content onto the physical objects in users’ environments to serve as ambient information displays. Our approach takes existing 2D applications and meshes of physical objects as input. It analyses the geometry of the physical objects and identifies regions that are suitable hosts for virtual elements. Using a novel integer programming formulation, our approach then optimally maps selected contents of the 2D applications onto the object, optimizing for factors such as importance and hierarchy of information, viewing angle, and geometric distortion. We evaluate BlendMR by comparing it to a 2D window baseline. Study results show that BlendMR decreases clutter and distraction, and is preferred by users. We demonstrate the applicability of BlendMR in a series of results and usage scenarios. @Article{ISS23p436, author = {Violet Yinuo Han and Hyunsung Cho and Kiyosu Maeda and Alexandra Ion and David Lindlbauer}, title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {436}, numpages = {25}, doi = {10.1145/3626472}, year = {2023}, } Publisher's Version Info |
|
Liu, Can |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection at the word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements caret-based sub-word selection by facilitating the selection of semantic units at the word level and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
|
Liu, Qing |
ISS '23: "Interactions across Displays ..."
Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch
Liwei Wu, Qing Liu, Jian Zhao, and Edward Lank (University of Waterloo, Waterloo, Canada) The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore the current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities for VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes that were not intentionally designed for streaming. As a response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space. @Article{ISS23p437, author = {Liwei Wu and Qing Liu and Jian Zhao and Edward Lank}, title = {Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {437}, numpages = {24}, doi = {10.1145/3626473}, year = {2023}, } Publisher's Version |
|
Lu, Jiwen |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, especially, has attracted a lot of attention since it intuitively brings three additional degrees of freedom (DOF) compared with two-dimensional (2D) touch points. The mapping of finger orientation can be classified as being either absolute or relative, suitable for different interaction applications. However, only absolute orientation has been explored in prior works. The relative angles can be calculated from two estimated absolute orientations, although higher accuracy can be expected by predicting the relative rotation directly from the input images. Consequently, in this paper, we propose to estimate complete 3D relative finger angles based on two fingerprint images, which incorporate more information with a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches with absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolutions, finger types, and rotation ranges on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
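As a sketch of the general learning setup, the snippet below shows a small PyTorch network that takes a pair of fingerprint images and regresses three relative rotation angles. The architecture, input size, and loss are hypothetical placeholders, not the authors' model.

```python
# Sketch: regress 3 relative rotation angles from a pair of fingerprint images.
# Hypothetical architecture for illustration only.
import torch
import torch.nn as nn

class RelativeRotationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # shared CNN encoder applied to each image
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                  # fuse both embeddings, predict angles
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),                       # e.g., (pitch, yaw, roll) in radians
        )

    def forward(self, img_a, img_b):
        feat = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return self.head(feat)

model = RelativeRotationNet()
a = torch.randn(2, 1, 160, 160)                     # dummy grayscale fingerprint crops
b = torch.randn(2, 1, 160, 160)
angles = model(a, b)                                # shape: (batch, 3)
loss = nn.functional.mse_loss(angles, torch.zeros_like(angles))  # L2 against ground-truth angles
```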
|
Lunding, Mille Skovhus |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Maeda, Kiyosu |
ISS '23: "BlendMR: A Computational Method ..."
BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces
Violet Yinuo Han, Hyunsung Cho, Kiyosu Maeda, Alexandra Ion, and David Lindlbauer (Carnegie Mellon University, Pittsburgh, USA; University of Tokyo, Tokyo, Japan) Mixed Reality (MR) systems display content freely in space, and present nearly arbitrary amounts of information, enabling ubiquitous access to digital information. This approach, however, introduces clutter and distraction if too much virtual content is shown. We present BlendMR, an optimization-based MR system that blends virtual content onto the physical objects in users’ environments to serve as ambient information displays. Our approach takes existing 2D applications and meshes of physical objects as input. It analyses the geometry of the physical objects and identifies regions that are suitable hosts for virtual elements. Using a novel integer programming formulation, our approach then optimally maps selected contents of the 2D applications onto the object, optimizing for factors such as importance and hierarchy of information, viewing angle, and geometric distortion. We evaluate BlendMR by comparing it to a 2D window baseline. Study results show that BlendMR decreases clutter and distraction, and is preferred by users. We demonstrate the applicability of BlendMR in a series of results and usage scenarios. @Article{ISS23p436, author = {Violet Yinuo Han and Hyunsung Cho and Kiyosu Maeda and Alexandra Ion and David Lindlbauer}, title = {BlendMR: A Computational Method to Create Ambient Mixed Reality Interfaces}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {436}, numpages = {25}, doi = {10.1145/3626472}, year = {2023}, } Publisher's Version Info |
|
Maierhöfer, Vitus |
ISS '23: "SurfaceCast: Ubiquitous, Cross-Device ..."
SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Florian Echtler, Vitus Maierhöfer, Nicolai Brodersen Hansen, and Raphael Wimmer (Aalborg University, Aalborg, Denmark; University of Regensburg, Regensburg, Germany) Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction. @Article{ISS23p439, author = {Florian Echtler and Vitus Maierhöfer and Nicolai Brodersen Hansen and Raphael Wimmer}, title = {SurfaceCast: Ubiquitous, Cross-Device Surface Sharing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {439}, numpages = {23}, doi = {10.1145/3626475}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available |
|
Marquardt, Nicolai |
ISS '23: "Analysis of Product Architectures ..."
Analysis of Product Architectures of Pin Array Technologies for Tactile Displays
Tigmanshu Bhatnagar, Albert Higgins, Nicolai Marquardt, Mark Miodownik, and Catherine Holloway (University College London, London, UK; Global Disability Innovation Hub, London, UK; Microsoft Research, Redmond, USA) Refreshable tactile displays based on pin array technologies have a significant impact on the education of children with visual impairments, but they are prohibitively expensive. To better understand their design and the reason for the high cost, we created a database and analyzed the product architectures of 67 unique pin array technologies from literature and patents. We qualitatively coded their functional elements and analyzed the physical parts that execute the functions. Our findings highlight that pin array surfaces aim to achieve three key functions, i.e., raise and lower pins, lock pins, and create a large array. We also contribute a concise morphological chart that organises the various mechanisms for these three functions. Based on this, we discuss the reasons for the high cost and complexity of these surface haptic technologies and infer why larger displays and more affordable devices are not available. Our findings can be used to design new mechanisms for more affordable and scalable pin array display systems. @Article{ISS23p432, author = {Tigmanshu Bhatnagar and Albert Higgins and Nicolai Marquardt and Mark Miodownik and Catherine Holloway}, title = {Analysis of Product Architectures of Pin Array Technologies for Tactile Displays}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {432}, numpages = {21}, doi = {10.1145/3626468}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available |
|
Matulic, Fabrice |
ISS '23: "Interactive 3D Annotation ..."
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi (University of Tokyo, Tokyo, Japan; Preferred Networks, Tokyo, Japan) Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets. @Article{ISS23p440, author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi}, title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {440}, numpages = {18}, doi = {10.1145/3626476}, year = {2023}, } Publisher's Version |
|
Mayer, Sven |
ISS '23: "SeatmateVR: Proxemic Cues ..."
SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, and Andreas Martin Butz (LMU Munich, Munich, Germany; Aalto University, Helsinki, Finland) Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces. @Article{ISS23p438, author = {Jingyi Li and Hyerim Park and Robin Welsch and Sven Mayer and Andreas Martin Butz}, title = {SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {438}, numpages = {20}, doi = {10.1145/3626474}, year = {2023}, } Publisher's Version Archive submitted (190 MB) |
|
Min, Tian |
ISS '23: "Seeing the Wind: An Interactive ..."
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Tian Min, Chengshuo Xia, Takumi Yamamoto, and Yuta Sugiura (Keio University, Yokohama, Japan; Xidian University, Guangzhou, China) Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing. @Article{ISS23p444, author = {Tian Min and Chengshuo Xia and Takumi Yamamoto and Yuta Sugiura}, title = {Seeing the Wind: An Interactive Mist Interface for Airflow Input}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {444}, numpages = {22}, doi = {10.1145/3626480}, year = {2023}, } Publisher's Version |
|
Miodownik, Mark |
ISS '23: "Analysis of Product Architectures ..."
Analysis of Product Architectures of Pin Array Technologies for Tactile Displays
Tigmanshu Bhatnagar, Albert Higgins, Nicolai Marquardt, Mark Miodownik, and Catherine Holloway (University College London, London, UK; Global Disability Innovation Hub, London, UK; Microsoft Research, Redmond, USA) Refreshable tactile displays based on pin array technologies have a significant impact on the education of children with visual impairments, but they are prohibitively expensive. To better understand their design and the reason for the high cost, we created a database and analyzed the product architectures of 67 unique pin array technologies from literature and patents. We qualitatively coded their functional elements and analyzed the physical parts that execute the functions. Our findings highlight that pin array surfaces aim to achieve three key functions, i.e., raise and lower pins, lock pins, and create a large array. We also contribute a concise morphological chart that organises the various mechanisms for these three functions. Based on this, we discuss the reasons for the high cost and complexity of these surface haptic technologies and infer why larger displays and more affordable devices are not available. Our findings can be used to design new mechanisms for more affordable and scalable pin array display systems. @Article{ISS23p432, author = {Tigmanshu Bhatnagar and Albert Higgins and Nicolai Marquardt and Mark Miodownik and Catherine Holloway}, title = {Analysis of Product Architectures of Pin Array Technologies for Tactile Displays}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {432}, numpages = {21}, doi = {10.1145/3626468}, year = {2023}, } Publisher's Version Published Artifact Artifacts Available |
|
Mollyn, Vimal |
ISS '23: "WorldPoint: Finger Pointing ..."
WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions
Daehwa Kim, Vimal Mollyn, and Chris Harrison (Carnegie Mellon University, Pittsburgh, USA) Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo. @Article{ISS23p442, author = {Daehwa Kim and Vimal Mollyn and Chris Harrison}, title = {WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {442}, numpages = {19}, doi = {10.1145/3626478}, year = {2023}, } Publisher's Version |
|
Nakamura, Satoshi |
ISS '23: "Evaluating the Applicability ..."
Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths
Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, and Satoshi Nakamura (Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan; Meiji University, Nakano, Japan) Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions. @Article{ISS23p430, author = {Shota Yamanaka and Takumi Takaku and Yukina Funazaki and Noboru Seto and Satoshi Nakamura}, title = {Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {430}, numpages = {21}, doi = {10.1145/3626466}, year = {2023}, } Publisher's Version Archive submitted (240 kB) |
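For reference, the classic steering-law relations that this line of work builds on are shown below (Accot and Zhai's formulation); the paper's adjusted models for curved roads in the driving simulator are not reproduced here.

```latex
% Classic steering law: movement time T through a constrained path C of local width W(s),
% its constant-width special case (path length A, width W), and the local speed bound.
\begin{align}
  T    &= a + b \int_{C} \frac{ds}{W(s)} && \text{(general path $C$ with local width $W(s)$)} \\
  T    &= a + b \, \frac{A}{W}           && \text{(constant-width path of length $A$)} \\
  v(s) &\approx \frac{W(s)}{\tau}        && \text{(local speed limited by local path width)}
\end{align}
```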
|
Nguyen, Cuong |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Onishi, Yuki |
ISS '23: "UbiSurface: A Robotic Touch ..."
UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Singapore Management University, Singapore, Singapore) Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups. @Article{ISS23p443, author = {Ryota Gomi and Kazuki Takashima and Yuki Onishi and Kazuyuki Fujita and Yoshifumi Kitamura}, title = {UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {443}, numpages = {22}, doi = {10.1145/3626479}, year = {2023}, } Publisher's Version |
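As a rough geometric illustration of how a movable touch surface can be matched to a virtual plane, the sketch below derives a target position, height, and pitch/yaw angles from the plane's centre and unit normal; the function name, the pose convention, and the returned command format are hypothetical and do not reflect UbiSurface's actual control interface or kinematics.

import numpy as np

def surface_pose_from_virtual_plane(center, normal):
    """Derive an (x, y, height, pitch, yaw) target for a movable touch surface
    so that it coincides with a virtual plane. Illustrative only; the real
    system's kinematics and command interface are not shown here."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Yaw: heading of the surface normal around the vertical (z) axis.
    yaw = np.degrees(np.arctan2(n[1], n[0]))
    # Pitch: tilt of the normal away from the horizontal plane.
    pitch = np.degrees(np.arcsin(np.clip(n[2], -1.0, 1.0)))
    x, y, height = center
    return {"x": x, "y": y, "height": height, "pitch": pitch, "yaw": yaw}

# Example: a virtual whiteboard 1.2 m above the floor, tilted slightly upward.
print(surface_pose_from_virtual_plane((0.5, 1.0, 1.2), (0.0, -1.0, 0.2)))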
|
Oomori, Kotaro |
ISS '23: "Interactive 3D Annotation ..."
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi (University of Tokyo, Tokyo, Japan; Preferred Networks, Tokyo, Japan) Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets. @Article{ISS23p440, author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi}, title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {440}, numpages = {18}, doi = {10.1145/3626476}, year = {2023}, } Publisher's Version |
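A minimal sketch of the back-projection idea described above, assuming known camera intrinsics and poses for the annotated frames: reconstructed 3D points are kept only if they project inside every annotated 2D rectangle, and an axis-aligned box is fitted to the survivors. The function names and the axis-aligned simplification are assumptions of this illustration, not the paper's exact algorithm.

import numpy as np

def points_in_rect(points_3d, K, R, t, rect):
    """Mask of 3D points whose projection falls inside a 2D rectangle.
    K: 3x3 intrinsics, (R, t): world-to-camera rotation/translation,
    rect: (xmin, ymin, xmax, ymax) in pixels."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T          # to camera frame
    in_front = cam[:, 2] > 0
    z = np.where(np.abs(cam[:, 2:3]) < 1e-9, 1e-9, cam[:, 2:3])
    uv = (K @ cam.T).T[:, :2] / z                        # pixel coordinates
    xmin, ymin, xmax, ymax = rect
    inside = (uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) & \
             (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax)
    return in_front & inside

def bbox_from_rect_annotations(points_3d, views):
    """Intersect the back-projections of 2D rectangles from several views and
    return the axis-aligned 3D bounding box of the surviving points."""
    mask = np.ones(len(points_3d), dtype=bool)
    for K, R, t, rect in views:
        mask &= points_in_rect(points_3d, K, R, t, rect)
    selected = points_3d[mask]
    return selected.min(axis=0), selected.max(axis=0)

# Tiny usage example with one synthetic view looking down +z.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.2, 0.1, 2.0], [5.0, 5.0, 2.0]])
print(bbox_from_rect_annotations(pts, [(K, R, t, (0, 0, 640, 480))]))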
|
Park, Hyerim |
ISS '23: "SeatmateVR: Proxemic Cues ..."
SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, and Andreas Martin Butz (LMU Munich, Munich, Germany; Aalto University, Helsinki, Finland) Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces. @Article{ISS23p438, author = {Jingyi Li and Hyerim Park and Robin Welsch and Sven Mayer and Andreas Martin Butz}, title = {SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {438}, numpages = {20}, doi = {10.1145/3626474}, year = {2023}, } Publisher's Version Archive submitted (190 MB) |
|
Petersen, Marianne Graves |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Reiterer, Harald |
ISS '23: "Aircraft Cockpit Interaction ..."
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
Stefan Auer, Christoph Anthes, Harald Reiterer, and Hans-Christian Jetter (University of Applied Sciences Upper Austria, Hagenberg, Austria; University of Konstanz, Konstanz, Germany; University of Lübeck, Lübeck, Germany) Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences in input elements, irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for PFS. @Article{ISS23p445, author = {Stefan Auer and Christoph Anthes and Harald Reiterer and Hans-Christian Jetter}, title = {Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {445}, numpages = {24}, doi = {10.1145/3626481}, year = {2023}, } Publisher's Version |
|
Rukzio, Enrico |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D-printers enable end-users to design and fabricate unique physical artifacts but maintain an increased entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Sarkar, Parakrant |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection on word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements the caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
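The sketch below illustrates the two expansion behaviours described above on word indices: a WordTouch-style expansion that grows the selection one word per slide step, and a ChunkTouch-style expansion that jumps to progressively larger spans containing the anchor word. The chunk spans are assumed to come pre-computed from an external NLP chunker and to be ordered from smallest to largest; the function names and the symmetric word growth are assumptions of this illustration, not the published implementation.

from typing import List, Tuple

def expand_by_words(words: List[str], anchor: int, steps: int) -> Tuple[int, int]:
    """WordTouch-style expansion: grow the selection by one word per step
    around the anchor word. Returns an inclusive (start, end) word span."""
    start = max(0, anchor - steps)
    end = min(len(words) - 1, anchor + steps)
    return start, end

def expand_by_chunks(chunks: List[Tuple[int, int]], anchor: int, steps: int) -> Tuple[int, int]:
    """ChunkTouch-style expansion: jump to ever larger syntactic chunks that
    contain the anchor word. `chunks` is assumed to be a list of (start, end)
    word spans ordered from smallest to largest, e.g. from an NLP chunker."""
    containing = [c for c in chunks if c[0] <= anchor <= c[1]]
    if not containing:
        return anchor, anchor
    return containing[min(steps, len(containing) - 1)]

words = "the quick brown fox jumps over the lazy dog".split()
chunks = [(2, 3), (1, 3), (0, 3), (0, 8)]    # sub-phrase .. whole sentence
print(expand_by_words(words, anchor=3, steps=1))    # (2, 4)
print(expand_by_chunks(chunks, anchor=3, steps=2))  # (0, 3)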
|
Sato, Junichi |
ISS '23: "Clarifying the Effect of Edge ..."
Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments
Hiroki Usuba, Shota Yamanaka, and Junichi Sato (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan) A prior work has recommended adding a 4-mm gap between a target and the edge of a screen, as tapping a target located at the screen edge takes longer than tapping non-edge targets. However, it is possible that this recommendation was created based on statistical errors, and unexplored situations existed in the prior work. In this study, we re-examine the recommendation by utilizing crowdsourced experiments to resolve the issues. If we observe the same results as the prior work through experiments including diversities, we can verify that the recommendation is suitable. We found that increasing the gap between the target and the screen edge decreased the movement time, which was consistent with the prior work. In addition, we newly found that increasing the gap decreased the error rate as well. On the basis of these results, we discuss how the gap and the target should be designed. @Article{ISS23p433, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato}, title = {Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {433}, numpages = {19}, doi = {10.1145/3626469}, year = {2023}, } Publisher's Version |
|
Satriadi, Kadek Ananta |
ISS '23: "Embodied Provenance for Immersive ..."
Embodied Provenance for Immersive Sensemaking
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin (Monash University, Melbourne, Australia) Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance, the use of three-dimensional space and embodied interactions in supporting recalling, reproducing, annotating and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative evaluation with 17 participants, which shows the system’s potential for assisting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytic tools for embodied provenance. @Article{ISS23p435, author = {Yidan Zhang and Barrett Ens and Kadek Ananta Satriadi and Ying Yang and Sarah Goodwin}, title = {Embodied Provenance for Immersive Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {435}, numpages = {19}, doi = {10.1145/3626471}, year = {2023}, } Publisher's Version Video |
|
Schikorr, Anja |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D-printers enable end-users to design and fabricate unique physical artifacts but maintain an increased entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Seto, Noboru |
ISS '23: "Evaluating the Applicability ..."
Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths
Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, and Satoshi Nakamura (Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan; Meiji University, Nakano, Japan) Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions. @Article{ISS23p430, author = {Shota Yamanaka and Takumi Takaku and Yukina Funazaki and Noboru Seto and Satoshi Nakamura}, title = {Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {430}, numpages = {21}, doi = {10.1145/3626466}, year = {2023}, } Publisher's Version Archive submitted (240 kB) |
|
Sheng, Zhiyao |
ISS '23: "Cross-Domain Gesture Sequence ..."
Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar
Ahsan Jamal Akbar, Zhiyao Sheng, Qian Zhang, and Dong Wang (Shanghai Jiao Tong University, Shanghai, China) Wireless-based gesture recognition provides an effective input method for exergames. However, previous works in wireless-based gesture recognition systems mainly recognize one primary user's gestures. In the multi-player scenario, the mutual interference between users makes it difficult to predict multiple players' gestures individually. To address this challenge, we propose a flexible FMCW-radar-based system, RFDual, which enables real-time cross-domain gesture sequence recognition for two players. To eliminate the mutual interference between users, we extract a new feature type, biased range-velocity spectrum (BRVS), which only depends on a target user. We then propose customized preprocessing methods (cropping and stationary component removal) to produce environment-independent and position-independent inputs. To enhance RFDual's resistance to unseen users and articulating speeds, we design effective data augmentation methods, sequence concatenating, and randomizing. RFDual is evaluated with a dataset containing only unseen gesture sequences and achieves a gesture error rate of 1.41%. Extensive experimental results show the impressive robustness of RFDual for data in new domains, including new users, articulating speeds, positions, and environments. These results demonstrate the great potential of RFDual in practical applications like two-player exergames and gesture/activity recognition for drivers and passengers in the cab. @Article{ISS23p441, author = {Ahsan Jamal Akbar and Zhiyao Sheng and Qian Zhang and Dong Wang}, title = {Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {441}, numpages = {30}, doi = {10.1145/3626477}, year = {2023}, } Publisher's Version |
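For readers unfamiliar with FMCW processing, the sketch below computes a standard range-velocity (range-Doppler) magnitude spectrum from complex beat-signal frames with two FFTs. This is generic textbook processing meant only to illustrate the kind of spectrum involved; it is not the paper's biased range-velocity spectrum (BRVS), its debiasing, or its preprocessing steps.

import numpy as np

def range_doppler_map(iq_frames: np.ndarray) -> np.ndarray:
    """iq_frames: complex beat samples of shape (num_chirps, samples_per_chirp).
    A range FFT along fast time followed by a Doppler FFT along slow time
    yields a range-velocity magnitude spectrum."""
    range_fft = np.fft.fft(iq_frames, axis=1)                             # fast time -> range
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # slow time -> velocity
    return np.abs(doppler_fft)

# Synthetic example: 64 chirps x 256 samples of complex noise.
rng = np.random.default_rng(0)
frames = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))
print(range_doppler_map(frames).shape)    # (64, 256)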
|
Sin, Frances Jihae |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Song, Ivan |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Stemasov, Evgeny |
ISS '23: "BrickStARt: Enabling In-situ ..."
BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality
Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico Rukzio, and Jan Gugenheimer (Ulm University, Ulm, Germany; TU-Darmstadt, Darmstadt, Germany; Institut Polytechnique de Paris, Paris, France) 3D-printers enable end-users to design and fabricate unique physical artifacts but maintain an increased entry barrier and friction. End users must design tangible artifacts through intangible media away from the main problem space (ex-situ) and transfer spatial requirements to an abstract software environment. To allow users to evaluate dimensions, balance, or fit early and in-situ, we developed BrickStARt, a design tool using tangible construction blocks paired with a mixed-reality headset. Users assemble a physical block model at the envisioned location of the fabricated artifact. Designs can be tested tangibly, refined, and digitally post-processed, remaining continuously in-situ. We implemented BrickStARt using a Magic Leap headset and present walkthroughs, highlighting novel interactions for 3D-design. In a user study (n=16), first-time 3D-modelers succeeded more often using BrickStARt than Tinkercad. Our results suggest that BrickStARt provides an accessible and explorative process while facilitating quick, tangible design iterations that allow users to detect physics-related issues (e.g., clearance) early on. @Article{ISS23p429, author = {Evgeny Stemasov and Jessica Hohn and Maurice Cordts and Anja Schikorr and Enrico Rukzio and Jan Gugenheimer}, title = {BrickStARt: Enabling In-situ Design and Tangible Exploration for Personal Fabrication using Mixed Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {429}, numpages = {29}, doi = {10.1145/3626465}, year = {2023}, } Publisher's Version Video |
|
Sugiura, Yuta |
ISS '23: "Seeing the Wind: An Interactive ..."
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Tian Min, Chengshuo Xia, Takumi Yamamoto, and Yuta Sugiura (Keio University, Yokohama, Japan; Xidian University, Guangzhou, China) Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing. @Article{ISS23p444, author = {Tian Min and Chengshuo Xia and Takumi Yamamoto and Yuta Sugiura}, title = {Seeing the Wind: An Interactive Mist Interface for Airflow Input}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {444}, numpages = {22}, doi = {10.1145/3626480}, year = {2023}, } Publisher's Version |
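As a toy illustration of turning ToF depth frames of a mist cloud into an airflow input signal, the sketch below flags frames whose mean absolute depth change from the previous frame exceeds a threshold. The threshold, the per-frame statistic, and the array layout are assumptions for illustration, not the prototype's recognition pipeline.

import numpy as np

def detect_airflow_events(depth_frames: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """depth_frames: (T, H, W) depth readings in millimetres. Returns a boolean
    array of length T-1 marking frame transitions with strong depth change,
    e.g. mist displaced by a hand wave or a gust."""
    diffs = np.abs(np.diff(depth_frames.astype(float), axis=0))
    activity = diffs.reshape(diffs.shape[0], -1).mean(axis=1)
    return activity > threshold

# Synthetic check: a calm scene with a brief disturbance around frames 10-12.
frames = np.full((30, 24, 32), 800.0)
frames[10:13] += np.random.default_rng(1).normal(0.0, 20.0, size=(3, 24, 32))
print(np.nonzero(detect_airflow_events(frames))[0])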
|
Sun, Fuling |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection on word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements the caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
|
Takaku, Takumi |
ISS '23: "Evaluating the Applicability ..."
Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths
Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, and Satoshi Nakamura (Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan; Meiji University, Nakano, Japan) Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions. @Article{ISS23p430, author = {Shota Yamanaka and Takumi Takaku and Yukina Funazaki and Noboru Seto and Satoshi Nakamura}, title = {Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {430}, numpages = {21}, doi = {10.1145/3626466}, year = {2023}, } Publisher's Version Archive submitted (240 kB) |
|
Takashima, Kazuki |
ISS '23: "UbiSurface: A Robotic Touch ..."
UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura (Tohoku University, Sendai, Japan; Singapore Management University, Singapore, Singapore) Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups. @Article{ISS23p443, author = {Ryota Gomi and Kazuki Takashima and Yuki Onishi and Kazuyuki Fujita and Yoshifumi Kitamura}, title = {UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {443}, numpages = {22}, doi = {10.1145/3626479}, year = {2023}, } Publisher's Version |
|
Usuba, Hiroki |
ISS '23: "Clarifying the Effect of Edge ..."
Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments
Hiroki Usuba, Shota Yamanaka, and Junichi Sato (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan) A prior work has recommended adding a 4-mm gap between a target and the edge of a screen, as tapping a target located at the screen edge takes longer than tapping non-edge targets. However, it is possible that this recommendation was created based on statistical errors, and unexplored situations existed in the prior work. In this study, we re-examine the recommendation by utilizing crowdsourced experiments to resolve the issues. If we observe the same results as the prior work through experiments including diversities, we can verify that the recommendation is suitable. We found that increasing the gap between the target and the screen edge decreased the movement time, which was consistent with the prior work. In addition, we newly found that increasing the gap decreased the error rate as well. On the basis of these results, we discuss how the gap and the target should be designed. @Article{ISS23p433, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato}, title = {Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {433}, numpages = {19}, doi = {10.1145/3626469}, year = {2023}, } Publisher's Version |
|
Wang, Dong |
ISS '23: "Cross-Domain Gesture Sequence ..."
Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar
Ahsan Jamal Akbar, Zhiyao Sheng, Qian Zhang, and Dong Wang (Shanghai Jiao Tong University, Shanghai, China) Wireless-based gesture recognition provides an effective input method for exergames. However, previous works in wireless-based gesture recognition systems mainly recognize one primary user's gestures. In the multi-player scenario, the mutual interference between users makes it difficult to predict multiple players' gestures individually. To address this challenge, we propose a flexible FMCW-radar-based system, RFDual, which enables real-time cross-domain gesture sequence recognition for two players. To eliminate the mutual interference between users, we extract a new feature type, biased range-velocity spectrum (BRVS), which only depends on a target user. We then propose customized preprocessing methods (cropping and stationary component removal) to produce environment-independent and position-independent inputs. To enhance RFDual's resistance to unseen users and articulating speeds, we design effective data augmentation methods, sequence concatenating, and randomizing. RFDual is evaluated with a dataset containing only unseen gesture sequences and achieves a gesture error rate of 1.41%. Extensive experimental results show the impressive robustness of RFDual for data in new domains, including new users, articulating speeds, positions, and environments. These results demonstrate the great potential of RFDual in practical applications like two-player exergames and gesture/activity recognition for drivers and passengers in the cab. @Article{ISS23p441, author = {Ahsan Jamal Akbar and Zhiyao Sheng and Qian Zhang and Dong Wang}, title = {Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {441}, numpages = {30}, doi = {10.1145/3626477}, year = {2023}, } Publisher's Version |
|
Wells, Thomas |
ISS '23: "Reality and Beyond: Proxemics ..."
Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, and Marianne Graves Petersen (Aarhus University, Aarhus, Denmark; Lancaster University, Lancaster, UK; Eindhoven University of Technology, Eindhoven, Netherlands) Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques to expand on the notion of collaborative coupling styles by either deliberately designing for aligning with physical reality or going beyond. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems. @Article{ISS23p427, author = {Mille Skovhus Lunding and Jens Emil Sloth Grønbæk and Nicolai Grymer and Thomas Wells and Steven Houben and Marianne Graves Petersen}, title = {Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {427}, numpages = {20}, doi = {10.1145/3626463}, year = {2023}, } Publisher's Version Video |
|
Welsch, Robin |
ISS '23: "SeatmateVR: Proxemic Cues ..."
SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, and Andreas Martin Butz (LMU Munich, Munich, Germany; Aalto University, Helsinki, Finland) Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces. @Article{ISS23p438, author = {Jingyi Li and Hyerim Park and Robin Welsch and Sven Mayer and Andreas Martin Butz}, title = {SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {438}, numpages = {20}, doi = {10.1145/3626474}, year = {2023}, } Publisher's Version Archive submitted (190 MB) |
|
Wimmer, Raphael |
ISS '23: "SurfaceCast: Ubiquitous, Cross-Device ..."
SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Florian Echtler, Vitus Maierhöfer, Nicolai Brodersen Hansen, and Raphael Wimmer (Aalborg University, Aalborg, Denmark; University of Regensburg, Regensburg, Germany) Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction. @Article{ISS23p439, author = {Florian Echtler and Vitus Maierhöfer and Nicolai Brodersen Hansen and Raphael Wimmer}, title = {SurfaceCast: Ubiquitous, Cross-Device Surface Sharing}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {439}, numpages = {23}, doi = {10.1145/3626475}, year = {2023}, } Publisher's Version Published Artifact Info Artifacts Available |
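Conceptually, merging several clients' surfaces can be pictured as back-to-front alpha compositing of per-client layers onto one shared canvas, as in the sketch below. This is only an illustration of the shared-surface idea under that simplification; it does not reflect SurfaceCast's actual architecture, wire format, or API.

import numpy as np

def composite_layers(layers):
    """Merge RGBA layers (each (H, W, 4), values in 0..1) from several clients
    into one shared surface using back-to-front "over" compositing."""
    canvas = np.zeros_like(layers[0])
    for layer in layers:
        a = layer[..., 3:4]
        canvas[..., :3] = layer[..., :3] * a + canvas[..., :3] * (1.0 - a)
        canvas[..., 3:4] = a + canvas[..., 3:4] * (1.0 - a)
    return canvas

# Two 4x4 client layers: an opaque background and a half-transparent annotation.
bg = np.zeros((4, 4, 4)); bg[..., 2] = 1.0; bg[..., 3] = 1.0   # blue, opaque
fg = np.zeros((4, 4, 4)); fg[..., 0] = 1.0; fg[..., 3] = 0.5   # red, 50% alpha
print(composite_layers([bg, fg])[0, 0])                        # blended RGBA pixel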
|
Wissing, Jon |
ISS '23: "CADTrack: Instructions and ..."
CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects
João Marcelo Evangelista Belo, Jon Wissing, Tiare Feuchtner, and Kaj Grønbæk (Aarhus University, Aarhus, Denmark; University of Konstanz, Konstanz, Germany) Determining the correct orientation of objects can be critical to succeed in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort. @Article{ISS23p426, author = {João Marcelo Evangelista Belo and Jon Wissing and Tiare Feuchtner and Kaj Grønbæk}, title = {CADTrack: Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {426}, numpages = {20}, doi = {10.1145/3626462}, year = {2023}, } Publisher's Version Video |
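To indicate what a confidence prediction branch can look like in this setting, the sketch below adds a small head over pooled region-of-interest features that outputs orientation-bin logits together with a scalar confidence that could be used to suppress unreliable guidance. The layer sizes, the bin count, the feature dimension, and the placement of the head are assumptions for illustration, not the paper's exact extension of Mask R-CNN.

import torch
import torch.nn as nn

class OrientationHeadWithConfidence(nn.Module):
    """Orientation-bin classifier plus confidence score on top of pooled ROI
    features. Purely illustrative; sizes and losses are placeholder choices."""
    def __init__(self, in_features: int = 1024, num_bins: int = 36):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU())
        self.orientation = nn.Linear(256, num_bins)   # which discretised pose
        self.confidence = nn.Linear(256, 1)           # how much to trust it

    def forward(self, roi_features: torch.Tensor):
        h = self.trunk(roi_features)
        return self.orientation(h), torch.sigmoid(self.confidence(h))

head = OrientationHeadWithConfidence()
logits, conf = head(torch.randn(8, 1024))    # 8 region proposals
print(logits.shape, conf.shape)              # torch.Size([8, 36]) torch.Size([8, 1])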
|
Wu, Liwei |
ISS '23: "Interactions across Displays ..."
Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch
Liwei Wu, Qing Liu, Jian Zhao, and Edward Lank (University of Waterloo, Waterloo, Canada) The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore the current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities for VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, lack of fluid transitions between physical and virtual environments, and not intentionally designed game scenes. As a response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space. @Article{ISS23p437, author = {Liwei Wu and Qing Liu and Jian Zhao and Edward Lank}, title = {Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {437}, numpages = {24}, doi = {10.1145/3626473}, year = {2023}, } Publisher's Version |
|
Xia, Chengshuo |
ISS '23: "Seeing the Wind: An Interactive ..."
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Tian Min, Chengshuo Xia, Takumi Yamamoto, and Yuta Sugiura (Keio University, Yokohama, Japan; Xidian University, Guangzhou, China) Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing. @Article{ISS23p444, author = {Tian Min and Chengshuo Xia and Takumi Yamamoto and Yuta Sugiura}, title = {Seeing the Wind: An Interactive Mist Interface for Airflow Input}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {444}, numpages = {22}, doi = {10.1145/3626480}, year = {2023}, } Publisher's Version |
|
Xia, Haijun |
ISS '23: "1D-Touch: NLP-Assisted Coarse ..."
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, and Can Liu (City University of Hong Kong, Hong Kong, China; University of California San Diego, La Jolla, USA; Hong Kong University of Science and Technology, Guangzhou, China) Existing text selection techniques on touchscreens focus on improving the control for moving the carets. Coarse-grained text selection on word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements the caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android. @Article{ISS23p447, author = {Peiling Jiang and Li Feng and Fuling Sun and Parakrant Sarkar and Haijun Xia and Can Liu}, title = {1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {447}, numpages = {20}, doi = {10.1145/3626483}, year = {2023}, } Publisher's Version |
|
Yamamoto, Takumi |
ISS '23: "Seeing the Wind: An Interactive ..."
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Tian Min, Chengshuo Xia, Takumi Yamamoto, and Yuta Sugiura (Keio University, Yokohama, Japan; Xidian University, Guangzhou, China) Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing. @Article{ISS23p444, author = {Tian Min and Chengshuo Xia and Takumi Yamamoto and Yuta Sugiura}, title = {Seeing the Wind: An Interactive Mist Interface for Airflow Input}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {444}, numpages = {22}, doi = {10.1145/3626480}, year = {2023}, } Publisher's Version |
|
Yamanaka, Shota |
ISS '23: "Clarifying the Effect of Edge ..."
Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments
Hiroki Usuba, Shota Yamanaka, and Junichi Sato (Yahoo, Chiyoda-ku, Japan; Yahoo, Tokyo, Japan) A prior work has recommended adding a 4-mm gap between a target and the edge of a screen, as tapping a target located at the screen edge takes longer than tapping non-edge targets. However, it is possible that this recommendation was created based on statistical errors, and unexplored situations existed in the prior work. In this study, we re-examine the recommendation by utilizing crowdsourced experiments to resolve the issues. If we observe the same results as the prior work through experiments including diversities, we can verify that the recommendation is suitable. We found that increasing the gap between the target and the screen edge decreased the movement time, which was consistent with the prior work. In addition, we newly found that increasing the gap decreased the error rate as well. On the basis of these results, we discuss how the gap and the target should be designed. @Article{ISS23p433, author = {Hiroki Usuba and Shota Yamanaka and Junichi Sato}, title = {Clarifying the Effect of Edge Targets in Touch Pointing through Crowdsourced Experiments}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {433}, numpages = {19}, doi = {10.1145/3626469}, year = {2023}, } Publisher's Version
ISS '23: "Evaluating the Applicability ..."
Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths
Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, and Satoshi Nakamura (Yahoo, Tokyo, Japan; Meiji University, Tokyo, Japan; Meiji University, Nakano, Japan) Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions. @Article{ISS23p430, author = {Shota Yamanaka and Takumi Takaku and Yukina Funazaki and Noboru Seto and Satoshi Nakamura}, title = {Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {430}, numpages = {21}, doi = {10.1145/3626466}, year = {2023}, } Publisher's Version Archive submitted (240 kB) |
|
Yang, Ying |
ISS '23: "Embodied Provenance for Immersive ..."
Embodied Provenance for Immersive Sensemaking
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin (Monash University, Melbourne, Australia) Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance: the use of three-dimensional space and embodied interactions to support recalling, reproducing, annotating, and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative study with 17 participants, which shows the system’s potential for supporting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytics tools for embodied provenance. @Article{ISS23p435, author = {Yidan Zhang and Barrett Ens and Kadek Ananta Satriadi and Ying Yang and Sarah Goodwin}, title = {Embodied Provenance for Immersive Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {435}, numpages = {19}, doi = {10.1145/3626471}, year = {2023}, } Publisher's Version Video |
|
Yoon, Dongwook |
ISS '23: "Using Online Videos as the ..."
Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions
Niu Chen, Frances Jihae Sin, Laura Mariah Herman, Cuong Nguyen, Ivan Song, and Dongwook Yoon (University of British Columbia, Vancouver, Canada; Cornell University, Ithaca, USA; Adobe, San Francisco, USA; Adobe Research, San Francisco, USA) Design guidelines serve as an important conceptual tool to guide designers of interactive applications with well-established principles and heuristics. Consulting domain experts is a common way to develop guidelines. However, experts are often not easily accessible, and their time can be expensive. This problem poses challenges in developing comprehensive and practical guidelines. We propose a new guideline development method that uses online public videos as the basis for capturing diverse patterns of design goals and interaction primitives. In a case study focusing on AR-based assembly instructions, we apply our novel Identify-Rationalize pipeline, which distills design patterns from videos featuring AR-based assembly instructions (N=146) into a set of guidelines that cover a wide range of design considerations. The evaluation conducted with 16 AR designers indicated that the pipeline is useful for generating comprehensive guidelines. We conclude by discussing the transferability and practicality of our method. @Article{ISS23p428, author = {Niu Chen and Frances Jihae Sin and Laura Mariah Herman and Cuong Nguyen and Ivan Song and Dongwook Yoon}, title = {Using Online Videos as the Basis for Developing Design Guidelines: A Case Study of AR-Based Assembly Instructions}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {428}, numpages = {23}, doi = {10.1145/3626464}, year = {2023}, } Publisher's Version Archive submitted (53 MB) |
|
Yu, Jinyang |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, in particular, has attracted a lot of attention since it intuitively adds three degrees of freedom (DOF) beyond two-dimensional (2D) touch points. The mapping of finger orientation can be either absolute or relative, each suited to different interaction applications. However, only absolute orientation has been explored in prior work. Relative angles can be computed from two estimated absolute orientations, although higher accuracy can be expected by predicting the relative rotation directly from the input images. Consequently, in this paper, we propose to estimate complete 3D relative finger angles from two fingerprint images, which carry more information at a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground-truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches based on absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolution, finger type, and rotation range on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
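The entry above notes that a relative rotation can, in principle, be composed from two absolute orientation estimates, which is the baseline the paper improves on by predicting the relative rotation directly. As a point of reference only, the composition step can be sketched with SciPy's rotation utilities as below; the Euler-angle convention and example angles are illustrative assumptions, not taken from the paper.

from scipy.spatial.transform import Rotation as R

# Two hypothetical absolute finger orientations (e.g. from successive touches),
# given as Euler angles in degrees. The 'zyx' convention is an illustrative
# assumption, not the convention used in the paper.
r1 = R.from_euler("zyx", [30.0, 10.0, 5.0], degrees=True)
r2 = R.from_euler("zyx", [45.0, 12.0, 2.0], degrees=True)

# Relative rotation taking orientation 1 to orientation 2: R_rel = R2 * R1^-1.
# Errors in both absolute estimates accumulate here, which is why predicting
# the relative rotation directly from an image pair can be more accurate.
r_rel = r2 * r1.inv()
print(r_rel.as_euler("zyx", degrees=True))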
|
Zhang, Qian |
ISS '23: "Cross-Domain Gesture Sequence ..."
Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar
Ahsan Jamal Akbar, Zhiyao Sheng, Qian Zhang, and Dong Wang (Shanghai Jiao Tong University, Shanghai, China) Wireless-based gesture recognition provides an effective input method for exergames. However, previous works in wireless-based gesture recognition systems mainly recognize one primary user's gestures. In the multi-player scenario, the mutual interference between users makes it difficult to predict multiple players' gestures individually. To address this challenge, we propose a flexible FMCW-radar-based system, RFDual, which enables real-time cross-domain gesture sequence recognition for two players. To eliminate the mutual interference between users, we extract a new feature type, biased range-velocity spectrum (BRVS), which only depends on a target user. We then propose customized preprocessing methods (cropping and stationary component removal) to produce environment-independent and position-independent inputs. To enhance RFDual's resistance to unseen users and articulating speeds, we design effective data augmentation methods, sequence concatenating, and randomizing. RFDual is evaluated with a dataset containing only unseen gesture sequences and achieves a gesture error rate of 1.41%. Extensive experimental results show the impressive robustness of RFDual for data in new domains, including new users, articulating speeds, positions, and environments. These results demonstrate the great potential of RFDual in practical applications like two-player exergames and gesture/activity recognition for drivers and passengers in the cab. @Article{ISS23p441, author = {Ahsan Jamal Akbar and Zhiyao Sheng and Qian Zhang and Dong Wang}, title = {Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {441}, numpages = {30}, doi = {10.1145/3626477}, year = {2023}, } Publisher's Version |
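The RFDual feature described above is built on a range-velocity representation of FMCW radar data; the biased variant (BRVS) is the paper's contribution and is not reproduced here. For orientation only, a standard range-velocity (range-Doppler) map for one radar frame can be computed with two FFTs over the fast-time and slow-time axes, as in the NumPy sketch below; the frame shape and windowing choice are assumptions.

import numpy as np

def range_velocity_spectrum(iq_frame: np.ndarray) -> np.ndarray:
    """Standard FMCW range-Doppler processing (not the paper's BRVS).
    iq_frame: complex array of shape (num_chirps, samples_per_chirp),
    i.e. slow time x fast time for one radar frame."""
    # Range FFT over fast time (samples within each chirp), with a Hann window.
    windowed = iq_frame * np.hanning(iq_frame.shape[1])[None, :]
    range_fft = np.fft.fft(windowed, axis=1)
    # Doppler FFT over slow time (across chirps) gives velocity bins;
    # fftshift centres zero velocity.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)          # magnitude map: velocity bins x range bins

# Example with synthetic data: 128 chirps of 256 samples each.
frame = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
rv_map = range_velocity_spectrum(frame)
print(rv_map.shape)                      # (128, 256)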
|
Zhang, Xinyong |
ISS '23: "Understanding the Effects ..."
Understanding the Effects of Movement Direction on 2D Touch Pointing Tasks
Xinyong Zhang (Renmin University of China, Beijing, China) HCI researchers have long recognized the significant effects of movement direction on human performance, and this factor has been carefully addressed to benefit user interface design. According to our previous study (2012), the weights of the two target dimensions, width W and height H, in the extended index of difficulty (ID) for 2D pointing tasks are asymmetric and appear to vary periodically based on movement direction (θ), following a cosine function. However, this periodic effect of movement direction is uncertain for direct 2D touch pointing tasks, and a thorough understanding of the effects of movement direction on direct pointing tasks, such as on touch input surfaces, is still lacking. In this paper, we conducted two experiments on a 24-inch touch screen, with tilted and horizontal orientations respectively, to confirm the periodic effect in the context of direct pointing and illustrate its variations across different pointing tasks. At the same time, we propose a quantification formula to measure the real differences in task difficulty caused by the direction factor. To the best of our knowledge, this is the first study to do so. Using this formula, the ID values in different directions can be unified to the same scale and compared, providing a new perspective for understanding and evaluating human performance in different interaction environments. @Article{ISS23p446, author = {Xinyong Zhang}, title = {Understanding the Effects of Movement Direction on 2D Touch Pointing Tasks}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {446}, numpages = {19}, doi = {10.1145/3626482}, year = {2023}, } Publisher's Version |
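For context on the extended index of difficulty mentioned above: a widely used baseline form weights the target's width and height inside a Fitts-style logarithm with an empirically fitted weight η; the entry's contribution is to make that weighting depend on movement direction θ. Only the baseline form is reproduced here.

% Baseline bivariate index of difficulty for 2D pointing (Accot and Zhai's form).
% A: movement amplitude; W, H: target width and height; eta: fitted weight.
% The indexed paper studies how this weighting varies with movement direction theta.
\[
  ID \;=\; \log_2\!\left(\sqrt{\left(\frac{A}{W}\right)^{2}
        + \eta\left(\frac{A}{H}\right)^{2}} \;+\; 1\right)
\]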
|
Zhang, Yidan |
ISS '23: "Embodied Provenance for Immersive ..."
Embodied Provenance for Immersive Sensemaking
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin (Monash University, Melbourne, Australia) Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance: the use of three-dimensional space and embodied interactions to support recalling, reproducing, annotating, and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluate the system through a qualitative study with 17 participants, which shows the system’s potential for supporting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytics tools for embodied provenance. @Article{ISS23p435, author = {Yidan Zhang and Barrett Ens and Kadek Ananta Satriadi and Ying Yang and Sarah Goodwin}, title = {Embodied Provenance for Immersive Sensemaking}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {435}, numpages = {19}, doi = {10.1145/3626471}, year = {2023}, } Publisher's Version Video |
|
Zhao, Jian |
ISS '23: "Interactions across Displays ..."
Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch
Liwei Wu, Qing Liu, Jian Zhao, and Edward Lank (University of Waterloo, Waterloo, Canada) The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience to explore current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities of VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes that were not designed with streaming in mind. In response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space. @Article{ISS23p437, author = {Liwei Wu and Qing Liu and Jian Zhao and Edward Lank}, title = {Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {437}, numpages = {24}, doi = {10.1145/3626473}, year = {2023}, } Publisher's Version |
|
Zhou, Jie |
ISS '23: "3D Finger Rotation Estimation ..."
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, and Jie Zhou (Tsinghua University, Beijing, China) Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, in particular, has attracted a lot of attention since it intuitively adds three degrees of freedom (DOF) beyond two-dimensional (2D) touch points. The mapping of finger orientation can be either absolute or relative, each suited to different interaction applications. However, only absolute orientation has been explored in prior work. Relative angles can be computed from two estimated absolute orientations, although higher accuracy can be expected by predicting the relative rotation directly from the input images. Consequently, in this paper, we propose to estimate complete 3D relative finger angles from two fingerprint images, which carry more information at a higher image resolution than capacitive images. For algorithm training and evaluation, we constructed a dataset consisting of fingerprint images and their corresponding ground-truth 3D relative finger rotation angles. Experimental results on this dataset revealed that our method outperforms previous approaches based on absolute finger angle models. Further, extensive experiments were conducted to explore the impact of image resolution, finger type, and rotation range on performance. A user study was also conducted to examine the efficiency and precision of using 3D relative finger orientation in a 3D object rotation task. @Article{ISS23p431, author = {Yongjie Duan and Jinyang Yu and Jianjiang Feng and Ke He and Jiwen Lu and Jie Zhou}, title = {3D Finger Rotation Estimation from Fingerprint Images}, journal = {Proc. ACM Hum. Comput. Interact.}, volume = {7}, number = {ISS}, articleno = {431}, numpages = {21}, doi = {10.1145/3626467}, year = {2023}, } Publisher's Version |
98 authors