ISS 2024 – Author Index
Albrecht, Matthias
ISS '24: "Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality"
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Ray pointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded or overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. A first experiment revealed the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically advanced techniques for complex 3D tasks.
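At its core, the Gaze + Plane concept reduces to a ray-plane intersection: the specified 3D point is wherever the gaze ray pierces the hand-controlled plane. A minimal sketch of that standard geometry (illustrative only, not the authors' implementation):

```python
import numpy as np

def gaze_plane_intersection(gaze_origin, gaze_dir, plane_point, plane_normal):
    """Return the 3D point where a gaze ray pierces a hand-controlled plane,
    or None if the gaze is (near-)parallel to or facing away from the plane."""
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-6:  # gaze runs (almost) parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - gaze_origin) / denom
    return gaze_origin + t * gaze_dir if t > 0 else None

# Eye at the origin looking along +z; plane held 2 m ahead, facing the user.
point = gaze_plane_intersection(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                np.array([0.0, 0.0, 2.0]),
                                np.array([0.0, 0.0, -1.0]))  # -> [0, 0, 2]
```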
Aurisano, Jillian
ISS '24: "The Elephant in the Room: Expert Experiences Designing, Developing and Evaluating Data Visualizations on Large Displays"
Mahsa Sinaei Hamed, Pak Kwan, Matthew Klich, Jillian Aurisano, and Fateme Rajabiyazdi (Carleton University, Canada; University of Cincinnati, USA) Large displays can provide the necessary space and resolution for comprehensive explorations of data visualizations. However, designing and developing visualizations for such displays pose distinct challenges. Identifying these challenges is essential for data visualization designers and developers creating data visualizations on large displays. In this study, we aim to identify the challenges designers and developers encounter when creating data visualizations for large displays. We conducted semi-structured interviews with 13 experts experienced in creating data visualizations for large displays and, through affinity diagramming, categorized the challenges. We identified several challenges in designing, developing, and evaluating data visualizations on large displays, as well as in building infrastructure for large displays. Design challenges included scaling visual encodings, working with limited design tools, and adopting design guidelines for large displays. In the development phase, developers faced difficulties working away from large displays and dealing with insufficient tools and resources. During the evaluation phase, researchers encountered issues with individuals' unfamiliarity with large display technology, interaction interruptions caused by technical limitations such as cursor visibility issues, and limitations in gathering feedback. Infrastructure challenges involved environmental constraints, technical issues, and difficulties in relocating large display setups. We share the lessons learned from our study and provide future directions, along with example research projects, to address these challenges.
Bain, Chris
ISS '24: "3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements"
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care; as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat the patient. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that applying a 3D telemedicine platform with these requirements to patient care during a pandemic could enhance clinicians' efficiency and the effectiveness of remote patient care.
Balachandran, Aarav
ISS '24: "A Virtual Reality Approach to Overcome Glossophobia among University Students"
Aarav Balachandran, Prajna Vohra, and Anmol Srivastava (IIIT Delhi, India) In the contemporary academic landscape, university students frequently deliver presentations in front of their peers and faculty, often leading to heightened levels of Public Speaking Anxiety (PSA). This study explores the potential of Virtual Reality Exposure Therapy (VRET) to alleviate PSA among students. We introduce "Manch," a realistic VR environment that simulates classroom public speaking scenarios with lifelike audience interactions and a slide-deck presentation feature. The study was conducted with N=28 participants and showed a significant reduction in PSA levels after VR exposure, supporting VR's efficacy in mitigating PSA. We also incorporated a qualitative analysis through participant interviews, offering deeper insights into individual experiences with VRET. Manch shows great promise as a tool for future studies and interventions aimed at reducing PSA, particularly among university students.
Batmaz, Anil Ufuk
ISS '24: "Lights, Headset, Tablet, Action: Hybrid User Interfaces for Situated Analytics"
Xiaoyan Zhou, Benjamin Lee, Francisco Raul Ortega, Anil Ufuk Batmaz, and Yalong Yang (Colorado State University, USA; University of Stuttgart, Germany; JPMorganChase, New York, USA; Concordia University, Canada; Georgia Institute of Technology, USA) While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used varied, with some making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and physical environment when designing and evaluating effective situated analytics applications.
Benhamida, Leyla
ISS '24: "Hapstick-Figure: Investigating the Design of a Haptic Representation of Human Gestures from Theater Performances for Blind and Visually-Impaired People"
Leyla Benhamida, Slimane Larabi, and Oussama Metatla (USTHB University, Algeria; University of Bristol, United Kingdom) Theaters serve as platforms that transport audiences into diverse worlds, offering a collective enjoyment of live performances and a shared cultural experience. However, theater performances have strong visual components, such as physical props and actors' movements and gestures, which are inaccessible to blind and visually impaired (BVI) audience members and can thus exclude them from such shared social experiences. We conducted formative interviews with eight BVI people about their experiences with barriers to theater performance gestures. We then present Hapstick-Figure, a prototype designed to represent and communicate human gestures via a 3D-printed tactile surface. Next, we used Hapstick-Figure as a technology probe in a qualitative evaluation with six of our BVI participants to explore non-visual interpretation of and engagement with this prototype. We outline insights into the haptic representation of theater performance gestures and reflections on designing for accessibility in this context.
Bihani, Dhruv
ISS '24: "Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display"
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience than flat displays. Current interaction techniques for large curved displays often assume a user positioned at the display's centre, failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques that provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3 m tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%) but accuracy decreases (by at least 6%). Additionally, we observed that participants were slower when pointing from laterally offset positions. Second, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across the six techniques tested, we found that combining acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show that techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays.
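For context, a 2D Fitts' Law task models movement time as a linear function of an index of difficulty derived from target distance and width. The paper's exact formulation is not reproduced here; the sketch below uses the conventional Shannon form:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

# Movement time is then modeled as MT = a + b * ID, with intercept a and
# slope b fit separately per condition (e.g., per user position or technique).
print(index_of_difficulty(distance=2.0, width=0.1))  # ~4.39 bits
```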
Blagojevic, Rachel
ISS '24: "Passive Stylus Tracking: A Systematic Literature Review"
Tavish M. Burnah, Md. Athar Imtiaz, Hans Werner Guesgen, George L. Rudolph, and Rachel Blagojevic (Massey University, New Zealand; Utah Valley University, USA) Passive stylus systems offer a simple and cost-effective solution for digital input, compatible with a wide range of surfaces and devices. This study reviews the domain of passive stylus tracking on passive surfaces, a topic previously underexplored in the literature. We answer four key research questions: what types of systems exist in this domain, what methods do they use for tracking styli, how accurate are they, and what are their limitations? A systematic literature review resulted in 24 papers describing passive stylus systems. Their methods fall primarily into four categories: monocular cameras with image processing, multiple-camera systems with image processing, machine learning systems using high-speed cameras or motion capture hardware, and radio frequency signal-based systems with signal processing. We found that the system with the highest accuracy used a single monocular camera. In many systems, markers such as retroreflective spheres, tape, or fiducial markers were used to enhance feature matching. We also found stagnation, and in some cases regression, in the precision and reliability of these systems over time. Limitations of these systems include the lack of support for varied stylus form factors, the restriction to specific camera positions and angles, and the requirement of expensive hardware. Given these findings, we discuss the important characteristics and features of passive stylus systems and propose ways forward for the field.
Brehmer, Matthew
ISS '24: "VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation"
Temiloluwa Paul Femi-Gege, Matthew Brehmer, and Jian Zhao (University of Waterloo, Canada; Tableau Research, USA) Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animations, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N=11) and audience members (N=11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools.
Burnah, Tavish M.
ISS '24: "Passive Stylus Tracking: A Systematic Literature Review"
Tavish M. Burnah, Md. Athar Imtiaz, Hans Werner Guesgen, George L. Rudolph, and Rachel Blagojevic (Massey University, New Zealand; Utah Valley University, USA) Passive stylus systems offer a simple and cost-effective solution for digital input, compatible with a wide range of surfaces and devices. This study reviews the domain of passive stylus tracking on passive surfaces, a topic previously underexplored in the literature. We answer four key research questions: what types of systems exist in this domain, what methods do they use for tracking styli, how accurate are they, and what are their limitations? A systematic literature review resulted in 24 papers describing passive stylus systems. Their methods fall primarily into four categories: monocular cameras with image processing, multiple-camera systems with image processing, machine learning systems using high-speed cameras or motion capture hardware, and radio frequency signal-based systems with signal processing. We found that the system with the highest accuracy used a single monocular camera. In many systems, markers such as retroreflective spheres, tape, or fiducial markers were used to enhance feature matching. We also found stagnation, and in some cases regression, in the precision and reliability of these systems over time. Limitations of these systems include the lack of support for varied stylus form factors, the restriction to specific camera positions and angles, and the requirement of expensive hardware. Given these findings, we discuss the important characteristics and features of passive stylus systems and propose ways forward for the field.
Chan, Peter
ISS '24: "3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements"
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care; as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat the patient. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that applying a 3D telemedicine platform with these requirements to patient care during a pandemic could enhance clinicians' efficiency and the effectiveness of remote patient care.
Chen, Dazhi
ISS '24: "Towards Adapting CLIP for Gaze Object Prediction"
Dazhi Chen and Gang Gou (Guizhou University, China)
Chen, Yiqiang
ISS '24: "GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents"
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration effort that diminishes their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense, with no need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user's intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion covering the rationale for model selection, generalizability, and future research directions for a practical system.
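The abstract outlines the triple-agent architecture without code; the skeleton below is a speculative reconstruction of that flow, with the LLMAgent class, its canned responses, and all prompts invented here for illustration (not the authors' implementation):

```python
class LLMAgent:
    """Stand-in for one Large Language Model agent (e.g., an API-backed chat)."""
    def __init__(self, role: str):
        self.role = role
    def ask(self, prompt: str) -> str:
        # Canned response; replace with a real LLM call in practice.
        return f"[{self.role} response to: {prompt[:40]}...]"

def understand_gesture(hand_landmarks, interface_functions, context_log):
    describer = LLMAgent("Gesture Description Agent")
    inferrer = LLMAgent("Gesture Inference Agent")
    context = LLMAgent("Context Management Agent")

    # 1. Landmark coordinates -> natural-language description of the gesture.
    description = describer.ask(f"Describe this hand gesture: {hand_landmarks}")

    # 2. The inference agent self-reasons and iteratively queries the context
    #    agent (interaction history, gaze data) before committing to an intent.
    dialogue = [description]
    for _ in range(3):  # a few iterative exchanges
        question = inferrer.ask(f"What context is needed? So far: {dialogue}")
        dialogue.append(context.ask(f"From {context_log}, answer: {question}"))

    # 3. Ground the inferred intent to one concrete interface function.
    return inferrer.ask(f"Given {dialogue}, choose one of {interface_functions}")
```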
Chiossi, Francesco
ISS '24: "Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features"
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany)
Cunningham, Andrew
ISS '24: "Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality"
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution is inherently difficult on desktop displays, due to screen space constraints and the familiarity required with visual-language programming interfaces. Inspired by Natural User Interfaces (NUIs), we therefore sought to explore Virtual Reality (VR) to develop new interfaces, departing from traditional programming interfaces, that could complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; and 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favourably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes.
Delamare, William
ISS '24: "Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display"
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience than flat displays. Current interaction techniques for large curved displays often assume a user positioned at the display's centre, failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques that provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3 m tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%) but accuracy decreases (by at least 6%). Additionally, we observed that participants were slower when pointing from laterally offset positions. Second, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across the six techniques tested, we found that combining acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show that techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays.
Drogemuller, Adam
ISS '24: "Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality"
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution is inherently difficult on desktop displays, due to screen space constraints and the familiarity required with visual-language programming interfaces. Inspired by Natural User Interfaces (NUIs), we therefore sought to explore Virtual Reality (VR) to develop new interfaces, departing from traditional programming interfaces, that could complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; and 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favourably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes.
Dufresne-Camaro, Charles-Olivier
ISS '24: "Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display"
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience than flat displays. Current interaction techniques for large curved displays often assume a user positioned at the display's centre, failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques that provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3 m tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%) but accuracy decreases (by at least 6%). Additionally, we observed that participants were slower when pointing from laterally offset positions. Second, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across the six techniques tested, we found that combining acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show that techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays.
El Khaoudi, Yassmine
ISS '24: "Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features"
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany)
Enriquez, Daniel
ISS '24: "Evaluating Layout Dimensionalities in PC+VR Asymmetric Collaborative Decision Making"
Daniel Enriquez, Wai Tong, Chris North, Huamin Qu, and Yalong Yang (Cornell Tech, USA; Texas A&M University, USA; Virginia Tech, USA; Hong Kong University of Science and Technology, China; Georgia Institute of Technology, USA) With the commercialization of virtual/augmented reality (VR/AR) devices, there is increasing interest in combining immersive and non-immersive devices (e.g., desktop computers) for asymmetric collaboration. While such asymmetric settings have been examined in social platforms, significant questions around layout dimensionality in data-driven decision-making remain underexplored. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been common practice in social applications, does the same guideline apply to laying out data? Or should data placement be optimized locally according to each device's display capacity? This study aims to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. We tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Our investigation facilitates an in-depth discussion of the trade-offs associated with different layout dimensionalities in asymmetric collaboration.
Ens, Barrett
ISS '24: "3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements"
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care; as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat the patient. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that applying a 3D telemedicine platform with these requirements to patient care during a pandemic could enhance clinicians' efficiency and the effectiveness of remote patient care.
Faleel, Shariff AM
ISS '24: "Comparison of Unencumbered Interaction Technique for Head-Mounted Displays"
Shariff AM Faleel, Rajveer Sodhi, and Pourang Irani (University of British Columbia, Canada; University of British Columbia, Okanagan, Canada) Head-Mounted Displays (HMDs) are gaining public attention, and with the advancement of tracking technologies they are incorporating unencumbered interaction techniques to address the need for user-friendly and efficient interaction in day-to-day activities. While the individual interaction techniques are reasonably well understood, very little research has compared them directly; such comparisons are vital for understanding their strengths and weaknesses in different contexts and for building better synergies among them. This paper uses a target selection task to compare the performance of, and user preferences for, four interaction techniques: gaze-pinch, ray pointer, hand-proximate user interface, and direct mid-air interaction. Results indicate that the gaze-pinch technique required significantly more time to complete the task than the others, whose completion times were similar. In terms of preferences and errors, however, the techniques performed mostly similarly.
Femi-Gege, Temiloluwa Paul
ISS '24: "VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation"
Temiloluwa Paul Femi-Gege, Matthew Brehmer, and Jian Zhao (University of Waterloo, Canada; Tableau Research, USA) Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animations, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N=11) and audience members (N=11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools.
Feuchtner, Tiare
ISS '24: "There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality"
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator's avatar in the user's physical environment. While visual user representations are continuously researched and advanced, the audio configuration, especially in combination with different visualizations, is rarely considered. In a user study (n = 48, 24 dyads), we evaluated the combination of two visual (Simple vs. Rich Avatar) and two auditory (Mono vs. Spatial Audio) user representations to investigate their impact on users' overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports the completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how combinations of visual and auditory user representations impact remote collaboration in augmented reality.
ISS '24: "Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features"
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany)
Fink, Daniel Immanuel
ISS '24: "There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality"
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator's avatar in the user's physical environment. While visual user representations are continuously researched and advanced, the audio configuration, especially in combination with different visualizations, is rarely considered. In a user study (n = 48, 24 dyads), we evaluated the combination of two visual (Simple vs. Rich Avatar) and two auditory (Mono vs. Spatial Audio) user representations to investigate their impact on users' overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports the completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how combinations of visual and auditory user representations impact remote collaboration in augmented reality.
Gao, Jingyi
ISS '24: "Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface"
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences such as games and 360-degree videos in that it shares information within richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of VR presentations' capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate both by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on these findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. We then conducted a user study with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often hold a consistent mental model from traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and more efficient communication. We finally share the design considerations we learned for the future development of VR presentation tools, emphasizing the importance of balancing the promotion of immersive features with ensuring accessibility.
Gellersen, Hans
ISS '24: "Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality"
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Ray pointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded or overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. A first experiment revealed the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically advanced techniques for complex 3D tasks.
Gou, Gang
ISS '24: "Towards Adapting CLIP for Gaze Object Prediction"
Dazhi Chen and Gang Gou (Guizhou University, China)
Gruenefeld, Uwe
ISS '24: "Magic Mirror: Designing a Weight Change Visualization for Domestic Use"
Jonas Keppel, Marvin Strauss, Uwe Gruenefeld, and Stefan Schneegass (University of Duisburg-Essen, Germany) Virtual mirrors displaying weight changes can support users in forming healthier habits by visualizing potential future body shapes. However, they often come with privacy, feasibility, and cost limitations. This paper introduces the Magic Mirror, a novel distortion-based mirror that leverages curvature effects to alter the appearance of body size while preserving privacy. We constructed the Magic Mirror and compared it to a video-based alternative. In an online study (N=115), we determined the optimal parameters for each system, comparing weight change visualizations and manipulation levels. Afterward, we conducted a laboratory study (N=24) to compare the two systems in terms of user perception, motivational potential, and willingness to use daily. Our findings indicate that the Magic Mirror surpasses the video-based mirror in suitability for residential application, as it addresses the feasibility concerns commonly associated with virtual mirrors. Our work demonstrates that mirrors displaying weight changes can be implemented in users' homes without any cameras, ensuring privacy.
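The distortion principle rests on textbook mirror optics rather than anything specific to the paper: for a mirror of focal length $f$, an object at distance $d_o$ appears with lateral magnification $m$, so a mirror curved only about its vertical axis rescales apparent width while leaving height unchanged:

```latex
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
\quad\Longrightarrow\quad
m = -\frac{d_i}{d_o} = \frac{f}{f - d_o}
```

A convex curvature ($f < 0$) gives $|m| < 1$ horizontally, yielding a slimmer reflection; a gently concave one widens it for viewers standing inside the focal length.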
Gu, Ning
ISS '24: "Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality"
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution is inherently difficult on desktop displays, due to screen space constraints and the familiarity required with visual-language programming interfaces. Inspired by Natural User Interfaces (NUIs), we therefore sought to explore Virtual Reality (VR) to develop new interfaces, departing from traditional programming interfaces, that could complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; and 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favourably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes.
Guesgen, Hans Werner
ISS '24: "Passive Stylus Tracking: A Systematic Literature Review"
Tavish M. Burnah, Md. Athar Imtiaz, Hans Werner Guesgen, George L. Rudolph, and Rachel Blagojevic (Massey University, New Zealand; Utah Valley University, USA) Passive stylus systems offer a simple and cost-effective solution for digital input, compatible with a wide range of surfaces and devices. This study reviews the domain of passive stylus tracking on passive surfaces, a topic previously underexplored in the literature. We answer four key research questions: what types of systems exist in this domain, what methods do they use for tracking styli, how accurate are they, and what are their limitations? A systematic literature review resulted in 24 papers describing passive stylus systems. Their methods fall primarily into four categories: monocular cameras with image processing, multiple-camera systems with image processing, machine learning systems using high-speed cameras or motion capture hardware, and radio frequency signal-based systems with signal processing. We found that the system with the highest accuracy used a single monocular camera. In many systems, markers such as retroreflective spheres, tape, or fiducial markers were used to enhance feature matching. We also found stagnation, and in some cases regression, in the precision and reliability of these systems over time. Limitations of these systems include the lack of support for varied stylus form factors, the restriction to specific camera positions and angles, and the requirement of expensive hardware. Given these findings, we discuss the important characteristics and features of passive stylus systems and propose ways forward for the field.
Gugenheimer, Jan
ISS '24: "Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality"
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi'an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user's body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall than the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining the distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on the forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments.
Hasan, Khalad
ISS '24: "Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display"
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience than flat displays. Current interaction techniques for large curved displays often assume a user positioned at the display's centre, failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques that provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3 m tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%) but accuracy decreases (by at least 6%). Additionally, we observed that participants were slower when pointing from laterally offset positions. Second, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across the six techniques tested, we found that combining acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show that techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays.
He, Fengming
ISS '24: "AdapTUI: Adaptation of Geometric-Feature-Based Tangible User Interfaces in Augmented Reality"
Fengming He, Xiyun Hu, Xun Qian, Zhengzhe Zhu, and Karthik Ramani (Purdue University, USA; Google Research, USA) With advances in geometry perception and Augmented Reality (AR), end-users can customize Tangible User Interfaces (TUIs) that control digital assets using intuitive and comfortable interactions with physical geometries (e.g., edges and surfaces). However, it remains challenging to adapt such TUIs to varied physical environments while maintaining the same spatial and ergonomic affordances. We propose AdapTUI, an end-to-end system that enables an end-user to author geometry-based TUIs and automatically adapts those TUIs when the user moves to a new environment. Leveraging a geometry detection module and the spatial awareness of AR, AdapTUI first lets users create custom mappings between geometric features and digital functions. Then, AdapTUI uses an optimization-based adaptation framework, which considers both geometric variations and human-factor nuances, to dynamically adjust the attachment of the user-authored TUIs. We demonstrate three application scenarios where end-users can utilize TUIs at different locations: portable car play, an efficient AR workstation, and entertainment. We evaluated the effectiveness of the adaptation method as well as the overall usability through a comparison user study (N=12). The satisfactory adaptation of the user-authored TUIs and the positive qualitative feedback demonstrate the effectiveness of our system.
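The abstract does not spell out the optimization objective. One plausible reading, sketched below with invented feature attributes, cost terms, and weights, is to reattach each TUI to the candidate feature in the new environment that minimizes a weighted sum of geometric mismatch and ergonomic cost:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str      # "edge" or "surface"
    length: float  # extent in metres
    reach: float   # distance from the user's resting hand position

def geometric_distance(a: Feature, b: Feature) -> float:
    # Penalize changing feature type and differing extents (illustrative).
    return (a.kind != b.kind) * 1.0 + abs(a.length - b.length)

def ergonomic_cost(f: Feature) -> float:
    # Farther features are harder to reach comfortably (illustrative).
    return f.reach

def adapt_tui(source: Feature, candidates: list[Feature],
              w_geom: float = 0.7, w_ergo: float = 0.3) -> Feature:
    """Pick the candidate minimizing a weighted geometric + ergonomic cost."""
    return min(candidates, key=lambda f: w_geom * geometric_distance(source, f)
                                       + w_ergo * ergonomic_cost(f))

# Example: remap an edge-based TUI into a new room; picks the nearby edge.
best = adapt_tui(Feature("edge", 0.3, 0.4),
                 [Feature("edge", 0.25, 0.6), Feature("surface", 0.3, 0.3)])
```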
He, Wei
ISS '24: "Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality"
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi'an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user's body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall than the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining the distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on the forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments.
Heo, Seongkook
ISS '24: "MoiréTag: A Low-Cost Tag for High-Precision Tangible Interactions without Active Components"
Peiyu Zhang, Wen Ying, Sara Riggs, and Seongkook Heo (University of Virginia, USA) In this paper, we present MoiréTag, a novel tag-like device that magnifies displacement without active components for indirect sensing of subtle tangible interactions. The device consists of two overlapping layers of stripe patterns with distinct pattern frequencies. Together, these layers create Moiré fringes that move faster than the layers themselves. Using a customized image processing pipeline, we show that MoiréTag can reliably detect sub-mm movement in real time (mean error = 0.043 mm) under varying lighting conditions, camera angles, and camera distances. We also demonstrate five applications of MoiréTag to showcase its potential as a low-cost solution for capturing and monitoring small changes in movement and other physical properties, such as force and volume, by converting them into displacement.
Hu, Xiyun |
ISS '24: "AdapTUI: Adaptation of Geometric-Feature-Based ..."
AdapTUI: Adaptation of Geometric-Feature-Based Tangible User Interfaces in Augmented Reality
Fengming He, Xiyun Hu, Xun Qian, Zhengzhe Zhu, and Karthik Ramani (Purdue University, USA; Google Research, USA) With advances in geometry perception and Augmented Reality (AR), end-users can customize Tangible User Interfaces (TUIs) that control digital assets using intuitive and comfortable interactions with physical geometries (e.g., edges and surfaces). However, it remains challenging to adapt such TUIs to varied physical environments while maintaining the same spatial and ergonomic affordance. We propose AdapTUI, an end-to-end system that enables an end-user to author geometric-based TUIs and automatically adapts the TUIs when the user moves to a new environment. Leveraging a geometry detection module and the spatial awareness of AR, AdapTUI first lets users create custom mappings between geometric features and digital functions. Then, AdapTUI uses an optimization-based adaptation framework, which considers both the geometric variations and human-factor nuances, to dynamically adjust the attachment of the user-authored TUIs. We demonstrate three application scenarios where end-users can utilize TUIs at different locations, including portable car play, efficient AR workstation, and entertainment. We evaluated the effectiveness of the adaptation method as well as the overall usability through a comparison user study (N=12). The satisfactory adaptation of the user-authored TUIs and the positive qualitative feedback demonstrate the effectiveness of our system. Article Search Video |
|
Hu, Xuning |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in the techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction could be particularly challenging when using freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, which comprise three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. The findings yield three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
|
Hui, Pan |
ISS '24: "Exploring Creation Perspectives ..."
Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi’an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user’s body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments. Article Search |
|
Igarashi, Yuki |
ISS '24: "Designing Privacy-Protecting ..."
Designing Privacy-Protecting System with Visual Masking Based on Investigation of Privacy Concerns in Virtual Screen Sharing Environments
Mizuki Ishida, Kaori Ikematsu, and Yuki Igarashi (Ochanomizu University, Japan; LY Corporation, Japan) Virtual meeting tools, such as Zoom, can sometimes lead to inadvertently sharing private information through screen sharing features. We conducted fundamental investigations into the concerns users have and the strategies they adopt to protect their privacy. The results indicate that while most users take actions to protect their privacy before or during meetings, a significant number of users also reported experiences with inadvertent information sharing. We also found that users avoid sharing not only personal data, such as usernames, but also information that could reveal their interests and activities, such as browsing histories or personalized recommendations. Thus, we propose a system that automatically occludes areas that users do not want to share to facilitate the management of privacy during screen sharing. We conducted a data collection experiment to construct a deep learning model that detects the areas users do not want to share with others. We implemented our system using this model to protect information in real time during screen sharing in virtual meeting tools. Article Search |
|
Ikematsu, Kaori |
ISS '24: "Designing Privacy-Protecting ..."
Designing Privacy-Protecting System with Visual Masking Based on Investigation of Privacy Concerns in Virtual Screen Sharing Environments
Mizuki Ishida, Kaori Ikematsu, and Yuki Igarashi (Ochanomizu University, Japan; LY Corporation, Japan) Virtual meeting tools, such as Zoom, can sometimes lead to inadvertently sharing private information through screen sharing features. We conducted fundamental investigations into the concerns users have and the strategies they adopt to protect their privacy. The results indicate that while most users take actions to protect their privacy before or during meetings, a significant number of users also reported experiences with inadvertent information sharing. We also found that users avoid sharing not only personal data, such as usernames, but also information that could reveal their interests and activities, such as browsing histories or personalized recommendations. Thus, we propose a system that automatically occludes areas that users do not want to share to facilitate the management of privacy during screen sharing. We conducted a data collection experiment to construct a deep learning model that detects the areas users do not want to share with others. We implemented our system using this model to protect information in real time during screen sharing in virtual meeting tools. Article Search |
|
Imtiaz, Md. Athar |
ISS '24: "Passive Stylus Tracking: A ..."
Passive Stylus Tracking: A Systematic Literature Review
Tavish M. Burnah, Md. Athar Imtiaz, Hans Werner Guesgen, George L. Rudolph, and Rachel Blagojevic (Massey University, New Zealand; Utah Valley University, USA) Passive stylus systems offer a simple and cost-effective solution for digital input, compatible with a wide range of surfaces and devices. This study reviews the domain of passive stylus tracking on passive surfaces, a topic previously underexplored in the existing literature. We answer four key research questions: what types of systems exist in this domain, what methods do they use for tracking styli, how accurate are they, and what are their limitations? A systematic literature review resulted in 24 papers describing passive stylus systems. Their methods primarily fall into four categories: monocular cameras with image processing, multiple-camera systems with image processing, machine learning systems using high-speed cameras or motion capture hardware, and radio frequency signal-based systems with signal processing. We found that the system with the highest accuracy used a single monocular camera. In many systems, markers such as retroreflective spheres, tape, or fiducial markers were used to enhance feature matching. We also found stagnation and, in some cases, regression in the precision and reliability of these systems over time. The limitations of these systems include the lack of varied stylus form factor support, the restriction to specific camera positions and angles, and the requirement of expensive hardware. Given these findings, we discuss the important characteristics and features of passive stylus systems and propose ways forward in this field. Article Search |
|
Irani, Pourang |
ISS '24: "Comparison of Unencumbered ..."
Comparison of Unencumbered Interaction Technique for Head-Mounted Displays
Shariff AM Faleel, Rajveer Sodhi, and Pourang Irani (University of British Columbia, Canada; University of British Columbia, Okanagan, Canada) Head-Mounted Displays (HMDs) are gaining more public attention. With the advancement of tracking technologies, they are incorporating unencumbered interaction techniques to address the need for user-friendly and efficient interaction for day-to-day activities. While there is a good understanding of the different interaction techniques individually, very little research has compared them directly. Such comparisons are vital to understanding their strengths and weaknesses in different contexts and to building better synergies among them. This paper uses a target selection task to compare the performance and user preferences for four interaction techniques: gaze-pinch, ray pointer, hand-proximate user interface, and direct mid-air interactions. Results indicate that the gaze-pinch interaction technique required significantly more time to complete the task than the others, whose completion times were similar. However, in terms of preferences and errors, the interaction techniques mostly performed similarly. Article Search
ISS '24: "Exploring Pointer Enhancement ..."
Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience compared to flat displays. Current interaction techniques for large curved displays often assume a user is positioned at the display's centre, crucially failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques to provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3m-tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%), but accuracy decreases (by at least 6%). Additionally, we observed participants were slower when pointing from laterally offset positions. Second, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across a total of six techniques tested, we found that a combination of acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays. Article Search |
|
Ishida, Mizuki |
ISS '24: "Designing Privacy-Protecting ..."
Designing Privacy-Protecting System with Visual Masking Based on Investigation of Privacy Concerns in Virtual Screen Sharing Environments
Mizuki Ishida, Kaori Ikematsu, and Yuki Igarashi (Ochanomizu University, Japan; LY Corporation, Japan) Virtual meeting tools, such as Zoom, can sometimes lead to inadvertently sharing private information through screen sharing features. We conducted fundamental investigations into the concerns users have and the strategies they adopt to protect their privacy. The results indicate that while most users take actions to protect their privacy before or during meetings, a significant number of users also reported experiences with inadvertent information sharing. We also found that users avoid sharing not only personal data, such as usernames, but also information that could reveal their interests and activities, such as browsing histories or personalized recommendations. Thus, we propose a system that automatically occludes areas that users do not want to share to facilitate the management of privacy during screen sharing. We conducted a data collection experiment to construct a deep learning model that detects the areas users do not want to share with others. We implemented our system using this model to protect information in real time during screen sharing in virtual meeting tools. Article Search |
|
Israel, Johann Habakuk |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Iwamoto, Takuya |
ISS '24: "Popping-Up Poster: A Pin-Based ..."
Popping-Up Poster: A Pin-Based Promotional Poster Device for Engaging Customers through Physical Shape Transformation
Kojiro Tanaka, Yuki Okafuji, and Takuya Iwamoto (University of Tsukuba, Japan; CyberAgent, Japan; Osaka University, Japan) Promotional media, such as paper posters and digital signage, are installed in shopping malls to recommend products and services. However, it has been reported that many customers tend not to be interested in these promotional media and do not receive the information. When product information is not communicated effectively, advertisers are unable to convey the information they wish to share with customers, and customers miss the opportunity to receive valuable information. To address such issues, many methods have been proposed to make people aware of the presence of media; however, few take into account the delivery of product information to customers. In this study, we propose Popping-Up Poster, a pin-based poster device designed to capture customer attention and convey information through dynamic shape changes. To verify the effectiveness of the proposed system, field experiments were conducted in a café, where its promotional effects were compared with those of traditional promotional media, including paper posters and digital signage. The results show that Popping-Up Poster has the potential to be more effective in recommending products and influencing customer product choices compared to conventional promotional media. Article Search |
|
Jacobsen, Andreas Asferg |
ISS '24: "Gaze, Wall, and Racket: Combining ..."
Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Raypointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded and overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in the disambiguation of targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research has relevance for spatial interaction, specifically on advanced techniques for complex 3D tasks. Article Search |
|
Jin, Shan |
ISS '24: "Exploring Creation Perspectives ..."
Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi’an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user’s body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments. Article Search |
|
Jouffrais, Christophe |
ISS '24: "Audio-Vibratory You-Are-Here ..."
Audio-Vibratory You-Are-Here Mobile Maps for People with Visual Impairments
Elen Sargsyan, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (University of Toulouse 3 - IRIT, France; CNRS - IRIT, France; CNRS - IPAL, Singapore) Self-localization and wayfinding are challenging tasks for people with visual impairments (PVIs), severely impacting independent mobility. Visual “You-are-here” (YAH) maps are useful for assisting local wayfinding of sighted users. They are used to self-localize and display points of interest, landmarks and routes in the surroundings. However, these maps are not always available and rarely accessible to PVIs. Relying on an iterative participatory design process with eight end-users with visual impairments, we created a proof of concept of a mobile audio-vibratory YAH map. Our design is based on either a tablet or a smartphone to ensure a small and portable solution. A user study with ten PVIs showed that the audio-vibratory YAH map that we designed provides the user with a good understanding of the surroundings and wayfinding cues. Surprisingly, the results show that the audio-vibratory YAH map prototype was as usable as the control condition (audio-tactile YAH map with a tactile overlay), with similar user satisfaction and cognitive load. A follow-up field study with two participants showed the effectiveness of the prototype for assisting in crossroad understanding. To conclude, our innovative design of a mobile audio-vibratory YAH map can overcome the portability and printing issues associated with tactile overlays and can be an appropriate solution for assisting the pedestrian navigation of PVIs. Article Search |
|
Keppel, Jonas |
ISS '24: "Magic Mirror: Designing a ..."
Magic Mirror: Designing a Weight Change Visualization for Domestic Use
Jonas Keppel, Marvin Strauss, Uwe Gruenefeld, and Stefan Schneegass (University of Duisburg-Essen, Germany) Virtual mirrors displaying weight changes can support users in forming healthier habits by visualizing potential future body shapes. However, these often come with privacy, feasibility, and cost limitations. This paper introduces the Magic Mirror, a novel distortion-based mirror that leverages curvature effects to alter the appearance of body size while preserving privacy. We constructed the Magic Mirror and compared it to a video-based alternative. In an online study (N=115), we determined the optimal parameters for each system, comparing weight change visualizations and manipulation levels. Afterward, we conducted a laboratory study (N=24) to compare the two systems in terms of user perception, motivational potential, and willingness to use daily. Our findings indicate that the Magic Mirror surpasses the video-based mirror in terms of suitability for residential application, as it addresses feasibility concerns commonly associated with virtual mirrors. Our work demonstrates that mirrors that display weight changes can be implemented in users’ homes without any cameras, ensuring privacy. Article Search |
|
Khan, Talha |
ISS '24: "Don’t Block My Stuff: Fostering ..."
Don’t Block My Stuff: Fostering Personal Object Awareness in Multi-user Mixed Reality Environments
Talha Khan and David Lindlbauer (University of Pittsburgh, USA; Carnegie Mellon University, USA) In Mixed Reality (MR), users can collaborate efficiently by creating personalized layouts that incorporate both personal and shared virtual objects. Unlike in the real world, personal objects in MR are only visible to their owner. This makes them susceptible to occlusions from shared objects of other users, who remain unaware of their existence. Thus, achieving unobstructed layouts in collaborative MR settings requires knowledge of where others have placed their personal objects. In this paper, we assessed the effects of three visualizations, and a baseline without any visualization, on occlusions and user perceptions. Our study involved 16 dyads (N=32) who engaged in a series of collaborative sorting tasks. Results indicate that the choice of visualization significantly impacts both occlusion and perception, emphasizing the need for effective visualizations to enhance collaborative MR experiences. We conclude with design recommendations for multi-user MR systems to better accommodate both personal and shared interfaces simultaneously. Article Search |
|
Klich, Matthew |
ISS '24: "The Elephant in the Room: ..."
The Elephant in the Room: Expert Experiences Designing, Developing and Evaluating Data Visualizations on Large Displays
Mahsa Sinaei Hamed, Pak Kwan, Matthew Klich, Jillian Aurisano, and Fateme Rajabiyazdi (Carleton University, Canada; University of Cincinnati, USA) Large displays can provide the necessary space and resolution for comprehensive explorations of data visualizations. However, designing and developing visualizations for such displays pose distinct challenges. Identifying these challenges is essential for data visualization designers and developers creating data visualizations on large displays. In this study, we aim to identify the challenges designers and developers encounter when creating data visualizations for large displays. We conducted semi-structured interviews with 13 experts experienced in creating data visualizations for large displays and, through affinity diagramming, categorized the challenges. We identified several challenges in designing, developing, and evaluating data visualizations on large displays, as well as in building infrastructure for large displays. Design challenges included scaling visual encodings, limited design tools, and adopting design guidelines for large displays. In the development phase, developers faced difficulties working away from large displays and dealing with insufficient tools and resources. During the evaluation phase, researchers encountered issues with individuals' unfamiliarity with large display technology, interaction interruptions caused by technical limitations such as cursor visibility issues, and limitations in feedback gathering. Infrastructure challenges involved environmental constraints, technical issues, and difficulties in relocating large display setups. We share the lessons learned from our study and provide future directions along with research project examples to address these challenges. Article Search |
|
Kosch, Thomas |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Kristensson, Per Ola |
ISS '24: "Exploring Creation Perspectives ..."
Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi’an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user’s body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments. Article Search |
|
Kwan, Pak |
ISS '24: "The Elephant in the Room: ..."
The Elephant in the Room: Expert Experiences Designing, Developing and Evaluating Data Visualizations on Large Displays
Mahsa Sinaei Hamed, Pak Kwan, Matthew Klich, Jillian Aurisano, and Fateme Rajabiyazdi (Carleton University, Canada; University of Cincinnati, USA) Large displays can provide the necessary space and resolution for comprehensive explorations of data visualizations. However, designing and developing visualizations for such displays pose distinct challenges. Identifying these challenges is essential for data visualization designers and developers creating data visualizations on large displays. In this study, we aim to identify the challenges designers and developers encounter when creating data visualizations for large displays. We conducted semi-structured interviews with 13 experts experienced in creating data visualizations for large displays and, through affinity diagramming, categorized the challenges. We identified several challenges in designing, developing, and evaluating data visualizations on large displays, as well as in building infrastructure for large displays. Design challenges included scaling visual encodings, limited design tools, and adopting design guidelines for large displays. In the development phase, developers faced difficulties working away from large displays and dealing with insufficient tools and resources. During the evaluation phase, researchers encountered issues with individuals' unfamiliarity with large display technology, interaction interruptions caused by technical limitations such as cursor visibility issues, and limitations in feedback gathering. Infrastructure challenges involved environmental constraints, technical issues, and difficulties in relocating large display setups. We share the lessons learned from our study and provide future directions along with research project examples to address these challenges. Article Search |
|
Larabi, Slimane |
ISS '24: "Hapstick-Figure: Investigating ..."
Hapstick-Figure: Investigating the Design of a Haptic Representation of Human Gestures from Theater Performances for Blind and Visually-Impaired People
Leyla Benhamida, Slimane Larabi, and Oussama Metatla (USTHB University, Algeria; University of Bristol, United Kingdom) Theaters serve as platforms that transport audiences into diverse worlds, offering a collective enjoyment of live performances and a shared cultural experience. However, theater performances have strong visual components, such as physical props and actors’ movements and gestures, which are inaccessible to blind and visually impaired (BVI) audience members and can thus exclude them from such shared social experiences. We conducted formative interviews with eight BVI people about their experiences with barriers to theater performance gestures. We then present Hapstick-Figure, a prototype design to represent and communicate human gestures via a 3D-printed tactile surface. Next, we used Hapstick-Figure as a technology probe in a qualitative evaluation with six of our BVI participants to explore non-visual interpretation of and engagement with this prototype. We outline insights into the haptic representation of theater performance gestures and reflections on designing for accessibility in this context. Article Search |
|
Lee, Benjamin |
ISS '24: "Lights, Headset, Tablet, Action: ..."
Lights, Headset, Tablet, Action: Hybrid User Interfaces for Situated Analytics
Xiaoyan Zhou, Benjamin Lee, Francisco Raul Ortega, Anil Ufuk Batmaz, and Yalong Yang (Colorado State University, USA; University of Stuttgart, Germany; JPMorganChase, New York, USA; Concordia University, Canada; Georgia Institute of Technology, USA) While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and physical environment when designing and evaluating effective situated analytics applications. Article Search |
|
Leung, Justin |
ISS '24: "Planar or Spatial: Exploring ..."
Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of VR presentations' capabilities. This research aims to explore the potential of VR presentations and analyze users' opinions by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often hold a mental model consistent with traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. Finally, we share the design considerations we learned for the future development of VR presentation tools, emphasizing the importance of balancing the promotion of immersive features with ensuring accessibility. Article Search |
|
Li, April |
ISS '24: "Planar or Spatial: Exploring ..."
Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of VR presentations' capabilities. This research aims to explore the potential of VR presentations and analyze users' opinions by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often hold a mental model consistent with traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. Finally, we share the design considerations we learned for the future development of VR presentation tools, emphasizing the importance of balancing the promotion of immersive features with ensuring accessibility. Article Search |
|
Li, Xiang |
ISS '24: "Exploring Creation Perspectives ..."
Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi’an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user’s body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments. Article Search |
|
Liang, Hai-Ning |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in the techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction could be particularly challenging when using freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, which comprise three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. The findings yield three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search
ISS '24: "Exploring Creation Perspectives ..."
Exploring Creation Perspectives and Icon Placement for On-Body Menus in Virtual Reality
Xiang Li, Wei He, Shan Jin, Jan Gugenheimer, Pan Hui, Hai-Ning Liang, and Per Ola Kristensson (University of Cambridge, United Kingdom; Hong Kong University of Science and Technology, Guangzhou, China; TU Darmstadt, Germany; Hong Kong University of Science and Technology, China; Xi’an Jiaotong-Liverpool University, China) On-body menus present a novel interaction paradigm within Virtual Reality (VR) environments by embedding virtual interfaces directly onto the user’s body. Unlike traditional screen-based interfaces, on-body menus enable users to interact with virtual options or icons visually attached to their physical form. In this paper, we investigated the impact of the creation process on the effectiveness of on-body menus, comparing first-person, third-person, and mirror perspectives. Our first study (N = 12) revealed that the mirror perspective led to faster creation times and more accurate recall compared to the other two perspectives. To further explore user preferences, we conducted a second study (N = 18) utilizing a VR system with integrated body tracking. By combining distributions of icons from both studies (N = 30), we confirmed significant preferences in on-body menu placement based on icon category (e.g., Social Media icons were consistently placed on forearms). We also discovered associations between categories, such as Leisure and Social Media icons frequently co-occurring. Our findings highlight the importance of the creation process, uncover user preferences for on-body menu organization, and provide insights to guide the development of intuitive and effective on-body interactions within virtual environments. Article Search |
|
Lindlbauer, David |
ISS '24: "Don’t Block My Stuff: Fostering ..."
Don’t Block My Stuff: Fostering Personal Object Awareness in Multi-user Mixed Reality Environments
Talha Khan and David Lindlbauer (University of Pittsburgh, USA; Carnegie Mellon University, USA) In Mixed Reality (MR), users can collaborate efficiently by creating personalized layouts that incorporate both personal and shared virtual objects. Unlike in the real world, personal objects in MR are only visible to their owner. This makes them susceptible to occlusions from shared objects of other users, who remain unaware of their existence. Thus, achieving unobstructed layouts in collaborative MR settings requires knowledge of where others have placed their personal objects. In this paper, we assessed the effects of three visualizations, and a baseline without any visualization, on occlusions and user perceptions. Our study involved 16 dyads (N=32) who engaged in a series of collaborative sorting tasks. Results indicate that the choice of visualization significantly impacts both occlusion and perception, emphasizing the need for effective visualizations to enhance collaborative MR experiences. We conclude with design recommendations for multi-user MR systems to better accommodate both personal and shared interfaces simultaneously. Article Search |
|
Liu, Yu |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in the techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction could be particularly challenging when using freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, which comprise three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. The findings yield three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
|
Matthews, Brandon J. |
ISS '24: "Hey Building! Novel Interfaces ..."
Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution is natively difficult for designers on desktop displays due to screen space constraints and the required familiarity with visual-language programming interfaces. Thus, we sought to explore Virtual Reality (VR), inspired by Natural User Interfaces (NUIs), to develop and examine new interfaces that depart from traditional programming interfaces and complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; and 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favorably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes. Article Search |
|
Mayer, Sven |
ISS '24: "Evaluating Typing Performance ..."
Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany) Article Search |
|
Metatla, Oussama |
ISS '24: "Hapstick-Figure: Investigating ..."
Hapstick-Figure: Investigating the Design of a Haptic Representation of Human Gestures from Theater Performances for Blind and Visually-Impaired People
Leyla Benhamida, Slimane Larabi, and Oussama Metatla (USTHB University, Algeria; University of Bristol, United Kingdom) Theaters serve as platforms that transport audiences into diverse worlds, offering a collective enjoyment of live performances and a shared cultural experience. However, theater performances have strong visual components, such as physical props and actors’ movements and gestures, which are inaccessible to blind and visually impaired (BVI) audience members and can thus exclude them from such shared social experiences. We conducted formative interviews with eight BVI people about their experiences with barriers to theater performance gestures. We then present Hapstick-Figure, a prototype design to represent and communicate human gestures via a 3D-printed tactile surface. Next, we used Hapstick-Figure as a technology probe in a qualitative evaluation with six of our BVI participants to explore non-visual interpretation of and engagement with this prototype. We outline insights into the haptic representation of theater performance gestures and reflections on designing for accessibility in this context. Article Search |
|
North, Chris |
ISS '24: "Evaluating Layout Dimensionalities ..."
Evaluating Layout Dimensionalities in PC+VR Asymmetric Collaborative Decision Making
Daniel Enriquez, Wai Tong, Chris North, Huamin Qu, and Yalong Yang (Cornell Tech, USA; Texas A&M University, USA; Virginia Tech, USA; Hong Kong University of Science and Technology, China; Georgia Institute of Technology, USA) With the commercialization of virtual/augmented reality (VR/AR) devices, there is increasing interest in combining immersive and non-immersive devices (e.g., desktop computers) for asymmetric collaborations. While such asymmetric settings have been examined in social platforms, significant questions around layout dimensionality in data-driven decision-making remain open. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been a common practice in social applications, does the same guideline apply to laying out data? Or should data placement be optimized locally according to each device's display capacity? This study aims to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. We tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, and PC2D+VR2D led to the quickest task completion. Our investigation facilitates an in-depth discussion of the trade-offs associated with different layout dimensionalities in asymmetric collaborations. Article Search Info |
|
Nyakatura, John |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Okafuji, Yuki |
ISS '24: "Popping-Up Poster: A Pin-Based ..."
Popping-Up Poster: A Pin-Based Promotional Poster Device for Engaging Customers through Physical Shape Transformation
Kojiro Tanaka, Yuki Okafuji, and Takuya Iwamoto (University of Tsukuba, Japan; CyberAgent, Japan; Osaka University, Japan) Promotional media, such as paper posters and digital signage, are installed in shopping malls to recommend products and services. However, it has been reported that many customers tend not to be interested in these promotional media and do not receive the information. When product information is not communicated effectively, advertisers are unable to convey the information they wish to share with customers, and customers miss the opportunity to receive valuable information. To address such issues, many methods have been proposed to make people aware of the presence of media; however, few take into account the delivery of product information to customers. In this study, we propose Popping-Up Poster, a pin-based poster device designed to capture customer attention and convey information through dynamic shape changes. To verify the effectiveness of the proposed system, field experiments were conducted in a café, where its promotional effects were compared with those of traditional promotional media, including paper posters and digital signage. The results show that Popping-Up Poster has the potential to be more effective in recommending products and influencing customer product choices compared to conventional promotional media. Article Search |
|
Oriola, Bernard |
ISS '24: "Audio-Vibratory You-Are-Here ..."
Audio-Vibratory You-Are-Here Mobile Maps for People with Visual Impairments
Elen Sargsyan, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (University of Toulouse 3 - IRIT, France; CNRS - IRIT, France; CNRS - IPAL, Singapore) Self-localization and wayfinding are challenging tasks for people with visual impairments (PVIs), severely impacting independent mobility. Visual “You-are-here” (YAH) maps are useful for assisting local wayfinding of sighted users. They are used to self-localize and display points of interest, landmarks and routes in the surroundings. However, these maps are not always available and rarely accessible to PVIs. Relying on an iterative participatory design process with eight end-users with visual impairments, we created a proof of concept of a mobile audio-vibratory YAH map. Our design is based on either a tablet or a smartphone to ensure a small and portable solution. A user study with ten PVIs showed that the audio-vibratory YAH map that we designed provides the user with a good understanding of the surroundings and wayfinding cues. Surprisingly, the results show that the audio-vibratory YAH map prototype was as usable as the control condition (audio-tactile YAH map with a tactile overlay), with similar user satisfaction and cognitive load. A follow-up field study with two participants showed the effectiveness of the prototype for assisting in crossroad understanding. To conclude, our innovative design of a mobile audio-vibratory YAH map can overcome the portability and printing issues associated with tactile overlays and can be an appropriate solution for assisting the pedestrian navigation of PVIs. Article Search |
|
Ortega, Francisco Raul |
ISS '24: "Lights, Headset, Tablet, Action: ..."
Lights, Headset, Tablet, Action: Hybrid User Interfaces for Situated Analytics
Xiaoyan Zhou, Benjamin Lee, Francisco Raul Ortega, Anil Ufuk Batmaz, and Yalong Yang (Colorado State University, USA; University of Stuttgart, Germany; JPMorganChase, New York, USA; Concordia University, Canada; Georgia Institute of Technology, USA) While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some participants making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and the physical environment for designing and evaluating effective situated analytics applications. Article Search |
|
Ou, Changkun |
ISS '24: "Evaluating Typing Performance ..."
Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany) Article Search |
|
Pfeuffer, Ken |
ISS '24: "Gaze, Wall, and Racket: Combining ..."
Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Ray pointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded and overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research has relevance for spatial interaction, specifically on advanced techniques for complex 3D tasks. Article Search |
|
Purchase, Helen C. |
ISS '24: "3D Remote Monitoring and Diagnosis ..."
3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care and, as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat patients. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that the application of a 3D telemedicine platform with these requirements for patient care during a pandemic could potentially enhance clinicians' efficiency and the effectiveness of remote patient care. Article Search |
|
Qian, Xun |
ISS '24: "AdapTUI: Adaptation of Geometric-Feature-Based ..."
AdapTUI: Adaptation of Geometric-Feature-Based Tangible User Interfaces in Augmented Reality
Fengming He, Xiyun Hu, Xun Qian, Zhengzhe Zhu, and Karthik Ramani (Purdue University, USA; Google Research, USA) With advances in geometry perception and Augmented Reality (AR), end-users can customize Tangible User Interfaces (TUIs) that control digital assets using intuitive and comfortable interactions with physical geometries (e.g., edges and surfaces). However, it remains challenging to adapt such TUIs in varied physical environments while maintaining the same spatial and ergonomic affordances. We propose AdapTUI, an end-to-end system that enables an end-user to author geometric-based TUIs and automatically adapts the TUIs when the user moves to a new environment. Leveraging a geometry detection module and the spatial awareness of AR, AdapTUI first lets users create custom mappings between geometric features and digital functions. Then, AdapTUI uses an optimization-based adaptation framework, which considers both the geometric variations and human-factor nuances, to dynamically adjust the attachment of the user-authored TUIs. We demonstrate three application scenarios where end-users can utilize TUIs at different locations, including portable car play, an efficient AR workstation, and entertainment. We evaluated the effectiveness of the adaptation method as well as the overall usability through a comparison user study (N=12). The satisfactory adaptation of the user-authored TUIs and the positive qualitative feedback demonstrate the effectiveness of our system. Article Search Video |
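For readers who want a concrete picture of what an optimization-based adaptation step could look like: the sketch below scores candidate geometric features in a new environment against the feature a TUI was originally authored on, trading off geometric mismatch against an ergonomic reach cost. All names, fields, and weights here are hypothetical illustrations, not AdapTUI's actual implementation.

from dataclasses import dataclass

@dataclass
class Feature:
    kind: str          # "edge" or "surface"
    length_cm: float   # characteristic size of the feature
    reach_cost: float  # ergonomic cost of reaching it, normalised to 0..1

def adaptation_cost(original: Feature, candidate: Feature,
                    w_geom: float = 0.7, w_ergo: float = 0.3) -> float:
    # Lower is better: penalise geometric mismatch and awkward reach.
    if original.kind != candidate.kind:
        return float("inf")  # e.g. an edge-mapped slider cannot move to a surface
    geometric_mismatch = abs(original.length_cm - candidate.length_cm) / original.length_cm
    return w_geom * geometric_mismatch + w_ergo * candidate.reach_cost

def best_match(original: Feature, candidates: list) -> Feature:
    # Re-attach the TUI to the candidate feature with the lowest cost.
    return min(candidates, key=lambda c: adaptation_cost(original, c))

The weighted-sum form is only one plausible way to combine geometric and human-factor terms; the paper's framework may differ.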
|
Qu, Huamin |
ISS '24: "Evaluating Layout Dimensionalities ..."
Evaluating Layout Dimensionalities in PC+VR Asymmetric Collaborative Decision Making
Daniel Enriquez, Wai Tong, Chris North, Huamin Qu, and Yalong Yang (Cornell Tech, USA; Texas A&M University, USA; Virginia Tech, USA; Hong Kong University of Science and Technology, China; Georgia Institute of Technology, USA) With the commercialization of virtual/augmented reality (VR/AR) devices, there is an increasing interest in combining immersive and non-immersive devices (e.g., desktop computers) for asymmetric collaborations. While such asymmetric settings have been examined in social platforms, significant questions around layout dimensionality in data-driven decision-making remain underexplored. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been a common practice in social applications, does the same guideline apply to laying out data? Or should data placement be optimized locally according to each device's display capacity? This study aims to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. We tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Our investigation facilitates an in-depth discussion of the trade-offs associated with different layout dimensionalities in asymmetric collaborations. Article Search Info |
|
Rajabiyazdi, Fateme |
ISS '24: "The Elephant in the Room: ..."
The Elephant in the Room: Expert Experiences Designing, Developing and Evaluating Data Visualizations on Large Displays
Mahsa Sinaei Hamed, Pak Kwan, Matthew Klich, Jillian Aurisano, and Fateme Rajabiyazdi (Carleton University, Canada; University of Cincinnati, USA) Large displays can provide the necessary space and resolution for comprehensive explorations of data visualizations. However, designing and developing visualizations for such displays pose distinct challenges. Identifying these challenges is essential for data visualization designers and developers creating data visualizations on large displays. In this study, we aim to identify the challenges designers and developers encounter when creating data visualizations for large displays. We conducted semi-structured interviews with 13 experts experienced in creating data visualizations for large displays and, through affinity diagramming, categorized the challenges. We identified several challenges in designing, developing, and evaluating data visualizations on large displays, as well as building infrastructure for large displays. Design challenges included scaling visual encodings, limited design tools, and adopting design guidelines for large displays. In the development phase, developers faced difficulties working away from large displays and dealing with insufficient tools and resources. During the evaluation phase, researchers encountered issues with individuals' unfamiliarity with large display technology, interaction interruptions caused by technical limitations such as cursor visibility issues, and limitations in feedback gathering. Infrastructure challenges involved environmental constraints, technical issues, and difficulties in relocating large display setups. We share the lessons learned from our study and provide future directions along with research project examples to address these challenges. Article Search |
|
Ramani, Karthik |
ISS '24: "AdapTUI: Adaptation of Geometric-Feature-Based ..."
AdapTUI: Adaptation of Geometric-Feature-Based Tangible User Interfaces in Augmented Reality
Fengming He, Xiyun Hu, Xun Qian, Zhengzhe Zhu, and Karthik Ramani (Purdue University, USA; Google Research, USA) With advances in geometry perception and Augmented Reality (AR), end-users can customize Tangible User Interfaces (TUIs) that control digital assets using intuitive and comfortable interactions with physical geometries (e.g., edges and surfaces). However, it remains challenging to adapt such TUIs in varied physical environments while maintaining the same spatial and ergonomic affordances. We propose AdapTUI, an end-to-end system that enables an end-user to author geometric-based TUIs and automatically adapts the TUIs when the user moves to a new environment. Leveraging a geometry detection module and the spatial awareness of AR, AdapTUI first lets users create custom mappings between geometric features and digital functions. Then, AdapTUI uses an optimization-based adaptation framework, which considers both the geometric variations and human-factor nuances, to dynamically adjust the attachment of the user-authored TUIs. We demonstrate three application scenarios where end-users can utilize TUIs at different locations, including portable car play, an efficient AR workstation, and entertainment. We evaluated the effectiveness of the adaptation method as well as the overall usability through a comparison user study (N=12). The satisfactory adaptation of the user-authored TUIs and the positive qualitative feedback demonstrate the effectiveness of our system. Article Search Video |
|
Reinschluessel, Anke Verena |
ISS '24: "There Is More to Avatars Than ..."
There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator’s avatar in the user’s physical environment. While visual user representations are continuously researched and advanced, the audio configuration – especially in combination with different visualizations – is rarely considered. In a user study (n = 48, 24 dyads), we evaluate the combination of two visual (Simple vs. Rich Avatar) with two auditory (Mono vs. Spatial Audio) user representations, to investigate their impact on users’ overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how visual and auditory user representation combinations impact remote collaboration in augmented reality. Article Search |
|
Reiterer, Harald |
ISS '24: "There Is More to Avatars Than ..."
There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator’s avatar in the user’s physical environment. While visual user representations are continuously researched and advanced, the audio configuration – especially in combination with different visualizations – is rarely considered. In a user study (n = 48, 24 dyads), we evaluate the combination of two visual (Simple vs. Rich Avatar) with two auditory (Mono vs. Spatial Audio) user representations, to investigate their impact on users’ overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how visual and auditory user representation combinations impact remote collaboration in augmented reality. Article Search |
|
Reuter, Patrick |
ISS '24: "3D Remote Monitoring and Diagnosis ..."
3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care and, as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat patients. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that the application of a 3D telemedicine platform with these requirements for patient care during a pandemic could potentially enhance clinicians' efficiency and the effectiveness of remote patient care. Article Search |
|
Riggs, Sara |
ISS '24: "MoiréTag: A Low-Cost Tag ..."
MoiréTag: A Low-Cost Tag for High-Precision Tangible Interactions without Active Components
Peiyu Zhang, Wen Ying, Sara Riggs, and Seongkook Heo (University of Virginia, USA) In this paper, we present MoiréTag—a novel tag-like device that magnifies displacement without active components for indirect sensing of subtle tangible interactions. The device consists of two overlapping layers of stripe patterns with distinct pattern frequencies. These layers create Moiré fringes that can move faster than the actual movement of a layer. Using a customized image processing pipeline, we show that MoiréTag can reliably detect sub-mm movement in real-time (mean error = 0.043 mm) under varying lighting conditions, camera angles, and camera distances. We also demonstrate five applications of MoiréTag to showcase its potential as a low-cost solution to capture and monitor small changes in movement and other physical properties, such as force and volume, by converting them into displacement. Article Search |
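The displacement magnification the abstract describes follows from standard Moiré geometry: two stripe gratings with periods p1 and p2 produce beat fringes of period p1·p2/|p1 − p2|, so shifting one grating by d moves the fringe by d·p2/|p1 − p2|. A minimal sketch of this textbook relation (the period values are illustrative, not taken from the paper):

def moire_fringe_period(p1_mm: float, p2_mm: float) -> float:
    # Period of the beat (Moire) fringe formed by two stripe gratings.
    return (p1_mm * p2_mm) / abs(p1_mm - p2_mm)

def fringe_shift(layer_shift_mm: float, p1_mm: float, p2_mm: float) -> float:
    # Moving the p1 grating by d shifts the fringe by d * p2 / |p1 - p2|,
    # i.e. the fringe moves faster than the grating itself.
    return layer_shift_mm * p2_mm / abs(p1_mm - p2_mm)

# Illustrative periods: 1.0 mm and 1.1 mm gratings turn a 0.05 mm layer
# shift into a 0.55 mm fringe shift (11x magnification).
print(moire_fringe_period(1.0, 1.1))  # ~11.0 mm
print(fringe_shift(0.05, 1.0, 1.1))   # ~0.55 mm

This optical lever is what lets an ordinary camera resolve sub-mm motion without any electronics on the tag.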
|
Rodrigues, Lucas Siqueira |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Rudolph, George L. |
ISS '24: "Passive Stylus Tracking: A ..."
Passive Stylus Tracking: A Systematic Literature Review
Tavish M. Burnah, Md. Athar Imtiaz, Hans Werner Guesgen, George L. Rudolph, and Rachel Blagojevic (Massey University, New Zealand; Utah Valley University, USA) Passive stylus systems offer a simple and cost-effective solution for digital input, compatible with a wide range of surfaces and devices. This study reviews the domain of passive stylus tracking on passive surfaces, a topic previously underexplored in existing literature. We answer four key research questions: what type of systems exist in this domain, what methods do they use for tracking styli, how accurate are they, and what are their limitations? A systematic literature review resulted in 24 papers describing passive stylus systems. Their methods primarily fall into four categories: monocular cameras with image processing, multiple camera systems with image processing, machine learning systems using high-speed cameras or motion capture hardware, and radio frequency signal-based systems with signal processing. We found that the system with the highest accuracy used a single monocular camera. In many systems, markers such as retroreflective spheres, tape, or fiducial markers were used to enhance feature matching. We also found stagnation and, in some cases, regression in the precision and reliability of these systems over time. The limitations in these systems include the lack of varied stylus form factor support, the restriction to specific camera positions and angles, and the requirement of expensive hardware. Given these findings, we discuss the important characteristics and features of passive stylus systems and propose ways forward in this field. Article Search |
|
Rufai, Kabir Ahmed |
ISS '24: "3D Remote Monitoring and Diagnosis ..."
3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care and, as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat patients. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that the application of a 3D telemedicine platform with these requirements for patient care during a pandemic could potentially enhance clinicians' efficiency and the effectiveness of remote patient care. Article Search |
|
Sargsyan, Elen |
ISS '24: "Audio-Vibratory You-Are-Here ..."
Audio-Vibratory You-Are-Here Mobile Maps for People with Visual Impairments
Elen Sargsyan, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (University of Toulouse 3 - IRIT, France; CNRS - IRIT, France; CNRS - IPAL, Singapore) Self-localization and wayfinding are challenging tasks for people with visual impairments (PVIs), severely impacting independent mobility. Visual “You-are-here” (YAH) maps are useful for assisting local wayfinding of sighted users. They are used to self-localize and display points of interest, landmarks and routes in the surroundings. However, these maps are not always available and rarely accessible to PVIs. Relying on an iterative participatory design process with eight end-users with visual impairments, we created a proof of concept of a mobile audio-vibratory YAH map. Our design is based on either a tablet or a smartphone to ensure a small and portable solution. A user study with ten PVIs showed that the audio-vibratory YAH map that we designed provides the user with a good understanding of the surroundings and wayfinding cues. Surprisingly, the results show that the audio-vibratory YAH map prototype was as usable as the control condition (audio-tactile YAH map with a tactile overlay), with similar user satisfaction and cognitive load. A follow-up field study with two participants showed the effectiveness of the prototype for assisting in crossroad understanding. To conclude, our innovative design of a mobile audio-vibratory YAH map can overcome the portability and printing issues associated with tactile overlays and can be an appropriate solution for assisting the pedestrian navigation of PVIs. Article Search |
|
Schmidt, Timo Torsten |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Schneegass, Stefan |
ISS '24: "Magic Mirror: Designing a ..."
Magic Mirror: Designing a Weight Change Visualization for Domestic Use
Jonas Keppel, Marvin Strauss, Uwe Gruenefeld, and Stefan Schneegass (University of Duisburg-Essen, Germany) Virtual mirrors displaying weight changes can support users in forming healthier habits by visualizing potential future body shapes. However, these often come with privacy, feasibility, and cost limitations. This paper introduces the Magic Mirror, a novel distortion-based mirror that leverages curvature effects to alter the appearance of body size while preserving privacy. We constructed the Magic Mirror and compared it to a video-based alternative. In an online study (N=115), we determined the optimal parameters for each system, comparing weight change visualizations and manipulation levels. Afterward, we conducted a laboratory study (N=24) to compare the two systems in terms of user perception, motivational potential, and willingness to use daily. Our findings indicate that the Magic Mirror surpasses the video-based mirror in terms of suitability for residential application, as it addresses feasibility concerns commonly associated with virtual mirrors. Our work demonstrates that mirrors that display weight changes can be implemented in users’ homes without any cameras, ensuring privacy. Article Search |
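As background on the curvature effect the abstract mentions: textbook mirror optics gives the lateral magnification of a mirror with focal length f for an object at distance d_o as m = f/(f − d_o), so a gently convex mirror (f < 0) renders the reflection slimmer along its curved axis. The sketch below applies that standard formula; treating the Magic Mirror as a simple one-axis curved mirror is our simplifying assumption, not the authors' calibration.

def lateral_magnification(focal_length_m: float, object_dist_m: float) -> float:
    # Textbook mirror formula: m = f / (f - d_o). Convex mirrors (f < 0)
    # give 0 < m < 1, i.e. a minified (slimmer-looking) upright reflection.
    return focal_length_m / (focal_length_m - object_dist_m)

# A viewer 1.0 m from a gently convex mirror with f = -4.0 m appears at
# 80% of their true width along the curved axis.
print(lateral_magnification(-4.0, 1.0))  # 0.8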
|
Serrano, Marcos |
ISS '24: "Audio-Vibratory You-Are-Here ..."
Audio-Vibratory You-Are-Here Mobile Maps for People with Visual Impairments
Elen Sargsyan, Bernard Oriola, Marcos Serrano, and Christophe Jouffrais (University of Toulouse 3 - IRIT, France; CNRS - IRIT, France; CNRS - IPAL, Singapore) Self-localization and wayfinding are challenging tasks for people with visual impairments (PVIs), severely impacting independent mobility. Visual “You-are-here” (YAH) maps are useful for assisting local wayfinding of sighted users. They are used to self-localize and display points of interest, landmarks and routes in the surroundings. However, these maps are not always available and rarely accessible to PVIs. Relying on an iterative participatory design process with eight end-users with visual impairments, we created a proof of concept of a mobile audio-vibratory YAH map. Our design is based on either a tablet or a smartphone to ensure a small and portable solution. A user study with ten PVIs showed that the audio-vibratory YAH map that we designed provides the user with a good understanding of the surroundings and wayfinding cues. Surprisingly, the results show that the audio-vibratory YAH map prototype was as usable as the control condition (audio-tactile YAH map with a tactile overlay), with similar user satisfaction and cognitive load. A follow-up field study with two participants showed the effectiveness of the prototype for assisting in crossroad understanding. To conclude, our innovative design of a mobile audio-vibratory YAH map can overcome the portability and printing issues associated with tactile overlays and can be an appropriate solution for assisting the pedestrian navigation of PVIs. Article Search |
|
Shi, Rongkai |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in the techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction could be particularly challenging when using freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, comprising three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. The findings yield three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
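As an illustration of the geometry behind cone-casting group selection: an object is selectable when the angle between the cone's axis and the vector from the cone's origin to the object is within the cone's half-angle. A generic sketch of that test (not the authors' implementation):

import math

def in_selection_cone(origin, axis, target, half_angle_deg: float) -> bool:
    # Generic cone-cast test: `target` is selectable when the angle between
    # the unit `axis` and the origin-to-target vector is within the half-angle.
    v = [t - o for t, o in zip(target, origin)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # target coincides with the cone origin
    cos_angle = sum(a * c for a, c in zip(axis, v)) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# A target 2 m ahead and 0.3 m to the side (~8.5 degrees off-axis) falls
# inside a 15-degree selection cone.
print(in_selection_cone((0, 0, 0), (0, 0, 1), (0.3, 0, 2.0), 15.0))  # True

Group selection then amounts to running this test over every candidate object in the scene.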
|
Sidenmark, Ludwig |
ISS '24: "Evaluating Typing Performance ..."
Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany) Article Search |
|
Sinaei Hamed, Mahsa |
ISS '24: "The Elephant in the Room: ..."
The Elephant in the Room: Expert Experiences Designing, Developing and Evaluating Data Visualizations on Large Displays
Mahsa Sinaei Hamed, Pak Kwan, Matthew Klich, Jillian Aurisano, and Fateme Rajabiyazdi (Carleton University, Canada; University of Cincinnati, USA) Large displays can provide the necessary space and resolution for comprehensive explorations of data visualizations. However, designing and developing visualizations for such displays pose distinct challenges. Identifying these challenges is essential for data visualization designers and developers creating data visualizations on large displays. In this study, we aim to identify the challenges designers and developers encounter when creating data visualizations for large displays. We conducted semi-structured interviews with 13 experts experienced in creating data visualizations for large displays and, through affinity diagramming, categorized the challenges. We identified several challenges in designing, developing, and evaluating data visualizations on large displays, as well as building infrastructure for large displays. Design challenges included scaling visual encodings, limited design tools, and adopting design guidelines for large displays. In the development phase, developers faced difficulties working away from large displays and dealing with insufficient tools and resources. During the evaluation phase, researchers encountered issues with individuals' unfamiliarity with large display technology, interaction interruptions caused by technical limitations such as cursor visibility issues, and limitations in feedback gathering. Infrastructure challenges involved environmental constraints, technical issues, and difficulties in relocating large display setups. We share the lessons learned from our study and provide future directions along with research project examples to address these challenges. Article Search |
|
Skowronski, Moritz |
ISS '24: "There Is More to Avatars Than ..."
There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator’s avatar in the user’s physical environment. While visual user representations are continuously researched and advanced, the audio configuration – especially in combination with different visualizations – is rarely considered. In a user study (n = 48, 24 dyads), we evaluate the combination of two visual (Simple vs. Rich Avatar) with two auditory (Mono vs. Spatial Audio) user representations, to investigate their impact on users’ overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how visual and auditory user representation combinations impact remote collaboration in augmented reality. Article Search |
|
Smiley, Jim |
ISS '24: "3D Remote Monitoring and Diagnosis ..."
3D Remote Monitoring and Diagnosis during a Pandemic: Holoportation and Digital Twin Requirements
Kabir Ahmed Rufai, Helen C. Purchase, Barrett Ens, Patrick Reuter, Chris Bain, Peter Chan, and Jim Smiley (Monash University, Australia; University of British Columbia, Okanagan, Canada; LaBRI, France; Eastern Health, Australia) COVID-19 regulations presented clinicians with a new set of challenges that affected their ability to efficiently provide patient care and, as a result, telemedicine was rapidly adopted to deliver care remotely. However, these telemedicine platforms undermine patient care because clinicians cannot acquire all the relevant patient information required to diagnose and treat patients. To explore this gap, we conducted a requirements analysis for the development of a 3D remote patient monitoring and diagnosis platform, using a user-centric design methodology. In this requirements analysis, we elicited information about the clinical domain, gathered clinicians' insights, and identified and documented clinicians' requirements. The outcome was a refined set of clinicians' requirements to guide the implementation of the Digital Twin concept paired with holoportation for remote 3D monitoring and diagnosis of patients. We anticipate that the application of a 3D telemedicine platform with these requirements for patient care during a pandemic could potentially enhance clinicians' efficiency and the effectiveness of remote patient care. Article Search |
|
Sodhi, Rajveer |
ISS '24: "Comparison of Unencumbered ..."
Comparison of Unencumbered Interaction Technique for Head-Mounted Displays
Shariff AM Faleel, Rajveer Sodhi, and Pourang Irani (University of British Columbia, Canada; University of British Columbia, Okanagan, Canada) Head-Mounted Displays (HMDs) are gaining more public attention. With the advancement of tracking technologies, they are incorporating unencumbered interaction techniques to address the need for user-friendly and efficient interaction techniques for day-to-day activities. While there is a good understanding of the different interaction techniques individually, very little research has been done to compare them directly. This would be vital to understanding their strengths and weaknesses in different contexts and building better synergies among them. This paper uses a target selection task to compare the performance and user preferences for four interaction techniques: gaze-pinch, ray pointer, hand-proximate user interface, and direct mid-air interactions. Results indicate that the gaze-pinch interaction technique required significantly more time to complete the task than the others, whose completion times were similar. However, in terms of preferences and errors, the interaction techniques performed mostly similarly. Article Search |
|
Srivastava, Anmol |
ISS '24: "A Virtual Reality Approach ..."
A Virtual Reality Approach to Overcome Glossophobia among University Students
Aarav Balachandran, Prajna Vohra, and Anmol Srivastava (IIIT Delhi, India) In the contemporary academic landscape, university students frequently deliver presentations in front of their peers and faculty, often leading to heightened levels of Public Speaking Anxiety (PSA). This study explores the potential of Virtual Reality Exposure Therapy (VRET) to alleviate PSA among students. Our study introduces "Manch," a realistic VR environment that simulates classroom public speaking scenarios with lifelike audience interactions and a slide-deck presentation feature. The study was conducted with N=28 participants, showing a significant reduction in PSA levels post-VR exposure, thereby establishing VR's efficacy in mitigating PSA. Additionally, we incorporated a qualitative analysis through participant interviews, offering deeper insights into individual experiences with VRET. Manch shows great promise as a tool for future studies and interventions aimed at reducing PSA, particularly among university students. Article Search |
|
Strauss, Marvin |
ISS '24: "Magic Mirror: Designing a ..."
Magic Mirror: Designing a Weight Change Visualization for Domestic Use
Jonas Keppel, Marvin Strauss, Uwe Gruenefeld, and Stefan Schneegass (University of Duisburg-Essen, Germany) Virtual mirrors displaying weight changes can support users in forming healthier habits by visualizing potential future body shapes. However, these often come with privacy, feasibility, and cost limitations. This paper introduces the Magic Mirror, a novel distortion-based mirror that leverages curvature effects to alter the appearance of body size while preserving privacy. We constructed the Magic Mirror and compared it to a video-based alternative. In an online study (N=115), we determined the optimal parameters for each system, comparing weight change visualizations and manipulation levels. Afterward, we conducted a laboratory study (N=24) to compare the two systems in terms of user perception, motivational potential, and willingness to use daily. Our findings indicate that the Magic Mirror surpasses the video-based mirror in terms of suitability for residential application, as it addresses feasibility concerns commonly associated with virtual mirrors. Our work demonstrates that mirrors that display weight changes can be implemented in users’ homes without any cameras, ensuring privacy. Article Search |
|
Strömel, Konstantin R. |
ISS '24: "Zooming In: A Review of Designing ..."
Zooming In: A Review of Designing for Photo Taking in Human-Computer Interaction and Future Prospects
Aleksandra Wysokińska, Paweł W. Woźniak, and Konstantin R. Strömel (Lodz University of Technology, Poland; TU Wien, Austria; Osnabrück University, Germany) Photography has been pivotal in culture for decades and its importance has increased with the rise of digital technology. However, the exploration of picture taking within the Human-Computer Interaction (HCI) community does not seem to match its cultural and technological significance. Recognizing this discrepancy, we sought to understand areas of interest in photography as a conscious creative process. To address this issue, we performed a systematic literature review using the PRISMA methodology. From our research, we identified 62 pertinent papers spanning from 2005 to 2022. Our examination revealed six primary dimensions, further classified into study type, design goal, photo-taking style, device type, interaction style, and number of users. In-depth analysis showed the dominant role of exploratory and functional research, the balance between qualitative and quantitative methods, and a strong focus on smartphone cameras. Our review has highlighted significant gaps in the existing literature, offering valuable insights for future research on photo taking. Article Search |
|
Tanaka, Kojiro |
ISS '24: "Popping-Up Poster: A Pin-Based ..."
Popping-Up Poster: A Pin-Based Promotional Poster Device for Engaging Customers through Physical Shape Transformation
Kojiro Tanaka, Yuki Okafuji, and Takuya Iwamoto (University of Tsukuba, Japan; CyberAgent, Japan; Osaka University, Japan) Promotional media, such as paper posters and digital signage, are installed in shopping malls to recommend products and services. However, it has been reported that many customers tend not to be interested in these promotional media and do not receive the information. When product information is not communicated effectively, advertisers are unable to convey the information they wish to share with customers, and customers miss the opportunity to receive valuable information. To address such issues, many methods have been proposed to make people aware of the presence of media; however, few take into account the delivery of product information to customers. In this study, we propose Popping-Up Poster, a pin-based poster device designed to capture customer attention and convey information through dynamic shape changes. To verify the effectiveness of the proposed system, field experiments were conducted in a café, where its promotional effects were compared with those of traditional promotional media, including paper posters and digital signage. The results show that Popping-Up Poster has the potential to be more effective in recommending products and influencing customer product choices compared to conventional promotional media. Article Search |
|
Thomas, Bruce H. |
ISS '24: "Hey Building! Novel Interfaces ..."
Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution is difficult on desktop displays due to screen-space constraints and the familiarity required with visual-language programming interfaces. Thus, we sought to explore Virtual Reality (VR), inspired by Natural User Interfaces (NUI), to develop and explore new interfaces, departing from traditional programming interfaces, that could complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favorably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes. Article Search |
|
Tong, Wai |
ISS '24: "Evaluating Layout Dimensionalities ..."
Evaluating Layout Dimensionalities in PC+VR Asymmetric Collaborative Decision Making
Daniel Enriquez, Wai Tong, Chris North, Huamin Qu, and Yalong Yang (Cornell Tech, USA; Texas A&M University, USA; Virginia Tech, USA; Hong Kong University of Science and Technology, China; Georgia Institute of Technology, USA) With the commercialization of virtual/augmented reality (VR/AR) devices, there is an increasing interest in combining immersive and non-immersive devices (e.g., desktop computers) for asymmetric collaborations. While such asymmetric settings have been examined in social platforms, significant questions around layout dimensionality in data-driven decision-making remain underexplored. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been a common practice in social applications, does the same guideline apply to laying out data? Or should data placement be optimized locally according to each device's display capacity? This study aims to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. We tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Our investigation facilitates an in-depth discussion of the trade-offs associated with different layout dimensionalities in asymmetric collaborations. Article Search Info |
|
Ullah, A. K. M. Amanat |
ISS '24: "Exploring Pointer Enhancement ..."
Exploring Pointer Enhancement Techniques for Target Selection on Immersive 3D Large Curved Display
Dhruv Bihani, A. K. M. Amanat Ullah, Charles-Olivier Dufresne-Camaro, William Delamare, Pourang Irani, and Khalad Hasan (University of British Columbia, Okanagan, Canada; University of British Columbia, Canada; ESTIA, France) Large curved displays are becoming increasingly popular due to their ability to provide users with a wider field of view and a more immersive experience compared to flat displays. Current interaction techniques for large curved displays often assume a user is positioned at the display's centre, crucially failing to accommodate general use conditions where the user may move during use. In this work, we investigated how user position impacts pointing interaction on large curved displays and evaluated cursor enhancement techniques to provide faster and more accurate performance across positions. To this end, we conducted two user studies. First, we evaluated the effects of user position on pointing performance on a large semi-circular display (3m-tall, 3270R curvature) through a 2D Fitts' Law selection task. Our results indicate that as users move away from the display, their pointing speed significantly increases (by at least 9%), but accuracy decreases (by at least 6%). Additionally, we observed participants were slower when pointing from laterally offset positions. Secondly, we explored which pointing techniques providing motor- and visual-space enhancements best afford effective pointing performance across user positions. Across a total of six techniques tested, we found that a combination of acceleration and distance-based adjustments with cursor enlargement significantly improves target selection speed and accuracy across different user positions. Results further show techniques with visual-space enhancements (e.g., cursor enlargement) are significantly faster and more accurate than their non-visually-enhanced counterparts. Based on our results, we provide design recommendations for implementing cursor enhancement techniques for large curved displays. Article Search |
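For context on how such a 2D Fitts' Law task is scored: each selection is characterized by the Shannon index of difficulty ID = log2(D/W + 1) and by throughput ID/MT. A minimal sketch of these standard formulas (the example values are illustrative, not the study's data):

import math

def index_of_difficulty(distance: float, width: float) -> float:
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    # Throughput in bits per second for a single selection.
    return index_of_difficulty(distance, width) / movement_time_s

# Illustrative example: a 60 cm reach to a 4 cm target selected in 0.9 s.
print(index_of_difficulty(60, 4))   # 4.0 bits
print(throughput(60, 4, 0.9))       # ~4.44 bits/s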
|
Usuba, Hiroki |
ISS '24: "0.2-mm-Step Verification of ..."
0.2-mm-Step Verification of the Dual Gaussian Distribution Model with Large Sample Size for Predicting Tap Success Rates
Shota Yamanaka and Hiroki Usuba (LY Corporation, Japan) The Dual Gaussian Distribution Model can be utilized for predicting the success rates of tapping targets. However, previous studies have shown that the prediction error increases to as much as 10 points, where "points" denotes the percentage-point difference between the observed and predicted tap success rates, particularly for a small target width W such as 2 mm. We hypothesize that this could be due to experimental designs with sparse W levels and few participants, rather than the model itself. Our experiment involving horizontal and vertical bar targets with W = 2-8 mm (step: 0.2 mm) performed by more than 180 participants showed that the maximum prediction errors were relatively small: 2.769 and 3.185 points, respectively. Furthermore, the correlation between W and the prediction error was statistically small (Pearson's |r| < 0.2), and W was not a significant contributor to changing prediction errors (p > 0.05). As these results do not support the concerns that the Dual Gaussian Distribution Model has an issue when used with small targets, the development of applications and refined models is encouraged to continue. Article Search |
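For readers unfamiliar with the model: it treats tap points as normally distributed around the target centre, with a variance combining a width-dependent relative term and an absolute finger-precision term, and predicts the success rate as the Gaussian mass falling inside the target. The sketch below assumes the parameterisation sigma^2 = alpha * W^2 + sigma_a^2; both the parameterisation and the parameter values are illustrative assumptions, not the paper's fitted estimates.

import math

def predicted_success_rate(width_mm: float, alpha: float, sigma_a_mm: float) -> float:
    # Tap points ~ N(target centre, sigma^2), sigma^2 = alpha*W^2 + sigma_a^2
    # (assumed parameterisation). Success = probability the tap lands within
    # +/- W/2 of the centre, i.e. erf(W / (2*sqrt(2)*sigma)).
    sigma = math.sqrt(alpha * width_mm ** 2 + sigma_a_mm ** 2)
    return math.erf(width_mm / (2.0 * math.sqrt(2.0) * sigma))

# Illustrative (not fitted) parameters: alpha = 0.01, sigma_a = 1.3 mm.
for w in (2.0, 4.0, 8.0):
    print(w, round(predicted_success_rate(w, 0.01, 1.3), 3))

The "points" in the abstract are then simply 100 * |observed rate - predicted rate|.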
|
Vohra, Prajna |
ISS '24: "A Virtual Reality Approach ..."
A Virtual Reality Approach to Overcome Glossophobia among University Students
Aarav Balachandran, Prajna Vohra, and Anmol Srivastava (IIIT Delhi, India) In the contemporary academic landscape, university students frequently deliver presentations in front of their peers and faculty, often leading to heightened levels of Public Speaking Anxiety (PSA). This study explores the potential of Virtual Reality Exposure Therapy (VRET) to alleviate PSA among students. Our study introduces "Manch," a realistic VR environment that simulates classroom public speaking scenarios with lifelike audience interactions and a slide-deck presentation feature. The study was conducted with N=28 participants, showing a significant reduction in PSA levels post-VR exposure, thereby establishing VR's efficacy in mitigating PSA. Additionally, we incorporated a qualitative analysis through participant interviews, offering deeper insights into individual experiences with VRET. Manch shows great promise as a tool for future studies and interventions aimed at reducing PSA, particularly among university students. Article Search |
|
Wagner, Uta |
ISS '24: "Gaze, Wall, and Racket: Combining ..."
Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Ray pointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded and overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research has relevance for spatial interaction, specifically on advanced techniques for complex 3D tasks. Article Search |
|
Wang, Haopeng |
ISS '24: "Gaze, Wall, and Racket: Combining ..."
Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
Uta Wagner, Andreas Asferg Jacobsen, Matthias Albrecht, Haopeng Wang, Hans Gellersen, and Ken Pfeuffer (Aarhus University, Denmark; University of Konstanz, Germany; Lancaster University, United Kingdom) Ray pointing, the status-quo pointing technique for virtual reality, is challenging when many objects are occluded and overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research has relevance for spatial interaction, specifically on advanced techniques for complex 3D tasks. Article Search |
|
Wang, Xiaoyu |
ISS '24: "GestureGPT: Toward Zero-Shot ..."
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user’s intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion that includes the rationale for model selection, generalizability, and future research directions toward a practical system. Article Search |
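To make the triple-agent flow concrete, here is a deliberately simplified skeleton: one agent turns hand landmarks into a natural-language description, one manages context, and one grounds the gesture to an interface function. The llm() stub and all prompt strings are hypothetical stand-ins, not the authors' prompts or code.

def llm(prompt: str) -> str:
    # Stand-in for any chat-completion client; plug in a real LLM call here.
    raise NotImplementedError("plug in an LLM client here")

def describe_gesture(hand_landmarks: list) -> str:
    # Gesture Description Agent: landmark coordinates -> natural-language description.
    return llm(f"Describe the hand pose and movement given landmarks: {hand_landmarks}")

def manage_context(history: list, gaze: dict) -> str:
    # Context Management Agent: answers the inference agent's context queries.
    return llm(f"Summarise the interaction context. History: {history}; gaze: {gaze}")

def infer_intent(description: str, context: str, functions: list) -> str:
    # Gesture Inference Agent: self-reasons and grounds the gesture to one function.
    return llm(
        f"Gesture: {description}\nContext: {context}\n"
        f"Pick the intended interface function from: {functions}"
    )

In the paper the inference agent iterates, querying the context agent repeatedly before committing to a grounding; a single pass is shown here only for brevity.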
|
Wei, Yushi |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction can be particularly challenging with freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, composed of three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. From these findings, we derive three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
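For concreteness, one plausible reading of the Cone-casting Selection half is an angular test around the hand ray: every object whose center lies within the cone's half-angle joins the group. The sketch below is a hedged illustration; the half-angle value and the center-point test are assumptions, not the paper's exact technique.

```python
import numpy as np

def cone_cast_select(ray_origin, ray_dir, object_centers, half_angle_deg=15.0):
    """Return indices of objects inside a selection cone around a hand ray.

    An object is selected when the angle between the ray and the vector from
    the ray origin to the object's center is below the cone's half-angle.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    cos_thresh = np.cos(np.radians(half_angle_deg))
    selected = []
    for i, center in enumerate(object_centers):
        to_obj = center - ray_origin
        dist = np.linalg.norm(to_obj)
        if dist == 0:
            continue
        if np.dot(to_obj / dist, ray_dir) >= cos_thresh:
            selected.append(i)
    return selected

# Example: three targets; the first two fall within a 15-degree cone along +z.
centers = [np.array([0.0, 0.0, 2.0]),
           np.array([0.3, 0.0, 2.0]),
           np.array([2.0, 0.0, 1.0])]
print(cone_cast_select(np.zeros(3), np.array([0.0, 0.0, 1.0]), centers))  # -> [0, 1]
```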
|
Woźniak, Paweł W. |
ISS '24: "Zooming In: A Review of Designing ..."
Zooming In: A Review of Designing for Photo Taking in Human-Computer Interaction and Future Prospects
Aleksandra Wysokińska, Paweł W. Woźniak, and Konstantin R. Strömel (Lodz University of Technology, Poland; TU Wien, Austria; Osnabrück University, Germany) Photography has been pivotal in culture for decades, and its importance has increased with the rise of digital technology. However, the exploration of picture taking within the Human-Computer Interaction (HCI) community does not seem to match its cultural and technological significance. Recognizing this discrepancy, we sought to understand areas of interest in photography as a conscious creative process. To address this issue, we performed a systematic literature review using the PRISMA methodology. From our research, we identified 62 pertinent papers spanning 2005 to 2022. Our examination revealed six primary dimensions: study type, design goal, photo-taking style, device type, interaction style, and number of users. In-depth analysis showed the dominant role of exploratory and functional research, the balance between qualitative and quantitative methods, and a strong focus on smartphone cameras. Our review highlights significant gaps in the existing literature, offering valuable insights for future research on photo taking. Article Search |
|
Wu, Liwei |
ISS '24: "Planar or Spatial: Exploring ..."
Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of the medium's capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate these by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often retain a consistent mental model from traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. We finally share our learned design considerations for the future development of VR presentation tools, emphasizing the importance of balancing immersive features with accessibility. Article Search |
|
Wysokińska, Aleksandra |
ISS '24: "Zooming In: A Review of Designing ..."
Zooming In: A Review of Designing for Photo Taking in Human-Computer Interaction and Future Prospects
Aleksandra Wysokińska, Paweł W. Woźniak, and Konstantin R. Strömel (Lodz University of Technology, Poland; TU Wien, Austria; Osnabrück University, Germany) Photography has been pivotal in culture for decades, and its importance has increased with the rise of digital technology. However, the exploration of picture taking within the Human-Computer Interaction (HCI) community does not seem to match its cultural and technological significance. Recognizing this discrepancy, we sought to understand areas of interest in photography as a conscious creative process. To address this issue, we performed a systematic literature review using the PRISMA methodology. From our research, we identified 62 pertinent papers spanning 2005 to 2022. Our examination revealed six primary dimensions: study type, design goal, photo-taking style, device type, interaction style, and number of users. In-depth analysis showed the dominant role of exploratory and functional research, the balance between qualitative and quantitative methods, and a strong focus on smartphone cameras. Our review highlights significant gaps in the existing literature, offering valuable insights for future research on photo taking. Article Search |
|
Yamanaka, Shota |
ISS '24: "0.2-mm-Step Verification of ..."
0.2-mm-Step Verification of the Dual Gaussian Distribution Model with Large Sample Size for Predicting Tap Success Rates
Shota Yamanaka and Hiroki Usuba (LY Corporation, Japan) The Dual Gaussian Distribution Model can be utilized for predicting the success rates of tapping targets. However, previous studies have shown that the prediction error increases to as much as 10 points, where "points" denotes the percentage-point difference between the observed and predicted tap success rates, particularly for a small target width W such as 2 mm. We hypothesize that this could be due to experimental designs with sparse W levels performed by only a few participants, rather than the model itself. Our experiment, involving horizontal and vertical bar targets with W = 2-8 mm (step: 0.2 mm) performed by more than 180 participants, showed that the maximum prediction errors were relatively small: 2.769 and 3.185 points, respectively. Furthermore, the correlation between W and the prediction error was statistically small (Pearson's |r| < 0.2), and W was not a significant contributor to changing prediction errors (p > 0.05). As these results do not support the concern that the Dual Gaussian Distribution Model has an issue when used with small targets, the continued development of applications and refined models is encouraged. Article Search |
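The model's core assumption is that tap endpoints along the constrained axis follow a normal distribution whose variance combines a relative (speed-accuracy) component and an absolute (finger precision) component, so the predicted success rate is the Gaussian mass that falls inside the target. A minimal sketch follows, with illustrative rather than fitted parameter values:

```python
from math import erf, sqrt

def predicted_success_rate(w_mm: float, mu_mm: float, sigma_mm: float) -> float:
    """Probability that a tap lands inside a bar target of width w_mm when
    tap endpoints follow N(mu, sigma^2) along the constrained axis."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    return phi((w_mm / 2 - mu_mm) / sigma_mm) - phi((-w_mm / 2 - mu_mm) / sigma_mm)

# Dual-Gaussian decomposition of the endpoint variance: a relative
# (speed-accuracy) component plus an absolute (finger precision) component.
# These sigma values are illustrative assumptions, not the paper's fits.
sigma = sqrt(1.1**2 + 0.9**2)  # sigma_r = 1.1 mm, sigma_a = 0.9 mm (assumed)
for w in (2.0, 4.0, 8.0):      # the paper sweeps W = 2-8 mm in 0.2 mm steps
    print(f"W = {w:.1f} mm -> predicted success rate "
          f"{predicted_success_rate(w, 0.0, sigma):.1%}")
```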
|
Yang, Yalong |
ISS '24: "Evaluating Layout Dimensionalities ..."
Evaluating Layout Dimensionalities in PC+VR Asymmetric Collaborative Decision Making
Daniel Enriquez, Wai Tong, Chris North, Huamin Qu, and Yalong Yang (Cornell Tech, USA; Texas A&M University, USA; Virginia Tech, USA; Hong Kong University of Science and Technology, China; Georgia Institute of Technology, USA) With the commercialization of virtual/augmented reality (VR/AR) devices, there is increasing interest in combining immersive and non-immersive devices (e.g., desktop computers) for asymmetric collaboration. While such asymmetric settings have been examined in social platforms, significant questions around layout dimensionality in data-driven decision-making remain underexplored. A crucial inquiry arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms has been common practice in social applications, does the same guideline apply to laying out data? Or should data placement be optimized locally according to each device's display capacity? This study aims to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. We tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Our investigation facilitates an in-depth discussion of the trade-offs associated with different layout dimensionalities in asymmetric collaborations. Article Search Info
ISS '24: "Lights, Headset, Tablet, Action: ..."
Lights, Headset, Tablet, Action: Hybrid User Interfaces for Situated Analytics
Xiaoyan Zhou, Benjamin Lee, Francisco Raul Ortega, Anil Ufuk Batmaz, and Yalong Yang (Colorado State University, USA; University of Stuttgart, Germany; JPMorganChase, New York, USA; Concordia University, Canada; Georgia Institute of Technology, USA) While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can have, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and the physical environment when designing and evaluating effective situated analytics applications. Article Search |
|
Ying, Wen |
ISS '24: "MoiréTag: A Low-Cost Tag ..."
MoiréTag: A Low-Cost Tag for High-Precision Tangible Interactions without Active Components
Peiyu Zhang, Wen Ying, Sara Riggs, and Seongkook Heo (University of Virginia, USA) In this paper, we present MoiréTag, a novel tag-like device that magnifies displacement without active components for indirect sensing of subtle tangible interactions. The device consists of two overlapping layers of stripe patterns with distinct pattern frequencies. These layers create Moiré fringes that move faster than the actual movement of a layer. Using a customized image processing pipeline, we show that MoiréTag can reliably detect sub-millimeter movement in real time (mean error = 0.043 mm) under varying lighting conditions, camera angles, and camera distances. We also demonstrate five applications of MoiréTag to showcase its potential as a low-cost solution for capturing and monitoring small changes in movement and other physical properties, such as force and volume, by converting them into displacement. Article Search |
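The magnification principle follows from the beat geometry of two overlaid line gratings: fringes of period p1·p2/|p2−p1| emerge, and a shift of one layer moves the fringes faster by a factor of p_static/|p_static − p_moving|. A small sketch with illustrative pitches (not the paper's actual values):

```python
def moire_fringe_period(p1_mm: float, p2_mm: float) -> float:
    """Beat (fringe) period of two overlaid line gratings with pitches p1, p2."""
    return p1_mm * p2_mm / abs(p2_mm - p1_mm)

def moire_magnification(p_moving_mm: float, p_static_mm: float) -> float:
    """Fringe displacement per unit displacement of the moving grating."""
    return p_static_mm / abs(p_static_mm - p_moving_mm)

# Illustrative pitches: a 1.00 mm grating sliding over a 1.05 mm grating
# yields 21x magnification, so a 0.05 mm layer shift sweeps the fringes by
# roughly 1.05 mm -- visible to a commodity camera even though the raw
# motion is sub-millimeter.
p_moving, p_static = 1.00, 1.05
print(moire_fringe_period(p_moving, p_static))         # 21.0 mm fringe period
print(moire_magnification(p_moving, p_static))         # 21.0x magnification
print(0.05 * moire_magnification(p_moving, p_static))  # ~1.05 mm fringe travel
```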
|
Yu, Chun |
ISS '24: "GestureGPT: Toward Zero-Shot ..."
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user's intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion covering the rationale for model selection, generalizability, and future research directions toward a practical system. Article Search |
|
Yu, Lingyun |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction can be particularly challenging with freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, composed of three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. From these findings, we derive three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
|
Yu, Rongrong |
ISS '24: "Hey Building! Novel Interfaces ..."
Hey Building! Novel Interfaces for Parametric Design Manipulations in Virtual Reality
Adam Drogemuller, Brandon J. Matthews, Andrew Cunningham, Rongrong Yu, Ning Gu, and Bruce H. Thomas (University of South Australia, Australia) Parametric Design enables designers to formulate and explore new ideas through parameters, typically by manipulating numerical values. However, visualising and exploring the design space of an established parametric design solution on desktop displays is inherently difficult for designers, owing to screen-space constraints and the required familiarity with visual-programming interfaces. Thus, inspired by Natural User Interfaces (NUIs), we explored Virtual Reality (VR) to develop new interfaces that depart from traditional programming interfaces and complement the spatial and embodied affordances of contemporary VR devices. Informed by two industry-led focus groups with architects, we developed and examined the usability of three different interfaces: 1) Paramaxes, an axes-based interface that allows designers to distribute and manipulate parameter visualisations around them in physical space; 2) ParamUtter, a Voice-based User Interface (VUI) that allows designers to manipulate parameter visualisations through natural language; 3) Control Panel, which presents the parameters as sliders in a scrollable pane and acts as a baseline comparison. We ran an exploratory study with experts and found that the Control Panel was ultimately the preferred interface for a design manipulation task. However, participants commented favorably on qualities of the unconventional interfaces, with ParamUtter scoring highest on the System Usability Scale (SUS) and participants valuing the potential of using physical space to explore design spaces with Paramaxes. Article Search |
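As a purely illustrative sketch of how a VUI like ParamUtter might map a recognized utterance onto a parameter update, the following uses a toy regex grammar; the phrasing, parameter names, and matching rules are assumptions, not the authors' implementation.

```python
import re

# Illustrative grammar for mapping recognized speech to a parameter update.
PATTERN = re.compile(
    r"(?:set|change)\s+(?P<param>[\w\s]+?)\s+to\s+(?P<value>-?\d+(?:\.\d+)?)",
    re.IGNORECASE,
)

def parse_utterance(utterance: str, parameters: dict) -> dict:
    """Apply a spoken command like 'set roof height to 12.5' to a
    parameter dictionary, returning the updated parameters."""
    m = PATTERN.search(utterance)
    if not m:
        return parameters  # unrecognized command: leave parameters unchanged
    name = m.group("param").strip().lower()
    if name in parameters:
        parameters[name] = float(m.group("value"))
    return parameters

params = {"roof height": 10.0, "column count": 8.0}
print(parse_utterance("Set roof height to 12.5", params))
# -> {'roof height': 12.5, 'column count': 8.0}
```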
|
Yue, Yong |
ISS '24: "Experimental Analysis of Freehand ..."
Experimental Analysis of Freehand Multi-object Selection Techniques in Virtual Reality Head-Mounted Displays
Rongkai Shi, Yushi Wei, Xuning Hu, Yu Liu, Yong Yue, Lingyun Yu, and Hai-Ning Liang (Hong Kong University of Science and Technology, Guangzhou, China; Xi’an Jiaotong-Liverpool University, China) Object selection is essential in virtual reality (VR) head-mounted displays (HMDs). Prior work mainly focuses on enhancing and evaluating techniques for selecting a single object in VR, leaving a gap in techniques for multi-object selection, a more complex but common selection scenario. To enable multi-object selection, the interaction technique should support group selection in addition to the default pointing selection mode for acquiring a single target. This composite interaction can be particularly challenging with freehand gestural input. In this work, we present an empirical comparison of six freehand techniques, composed of three mode-switching gestures (Finger Segment, Multi-Finger, and Wrist Orientation) and two group selection techniques (Cone-casting Selection and Crossing Selection) derived from prior work. Our results demonstrate the performance, user experience, and preference of each technique. From these findings, we derive three design implications that can guide the design of freehand techniques for multi-object selection in VR HMDs. Article Search |
|
Zachow, Stefan |
ISS '24: "Assessing the Effects of Sensory ..."
Assessing the Effects of Sensory Modality Conditions on Object Retention across Virtual Reality and Projected Surface Display Environments
Lucas Siqueira Rodrigues, Timo Torsten Schmidt, Johann Habakuk Israel, Stefan Zachow, John Nyakatura, and Thomas Kosch (Humboldt University of Berlin, Germany; Freie Universität Berlin, Germany; HTW Berlin, Germany; Zuse Institute Berlin, Germany) Haptic feedback reportedly enhances human interaction with 3D data, particularly improving the retention of mental representations of digital objects in immersive settings. However, the effectiveness of visuohaptic integration in promoting object retention across different display environments remains underexplored. Our study extends previous research on the retention effects of haptics from virtual reality to a projected surface display to assess whether earlier findings generalize to 2D environments. Participants performed a delayed match-to-sample task incorporating visual, haptic, and visuohaptic sensory feedback within a projected surface display environment. We compared error rates and response times across these sensory modalities and display environments. Our results reveal that visuohaptic integration significantly enhances object retention on projected surfaces, benefiting task performance across display environments. Our findings suggest that haptics can improve object retention without requiring fully immersive setups, offering insights for the design of interactive systems that assist professionals who rely on precise mental representations of digital objects. Article Search |
|
Zagermann, Johannes |
ISS '24: "There Is More to Avatars Than ..."
There Is More to Avatars Than Visuals: Investigating Combinations of Visual and Auditory User Representations for Remote Collaboration in Augmented Reality
Daniel Immanuel Fink, Moritz Skowronski, Johannes Zagermann, Anke Verena Reinschluessel, Harald Reiterer, and Tiare Feuchtner (University of Konstanz, Germany; Aarhus University, Denmark) Supporting remote collaboration through augmented reality facilitates the experience of co-presence by presenting the collaborator’s avatar in the user’s physical environment. While visual user representations are continuously researched and advanced, the audio configuration – especially in combination with different visualizations – is rarely considered. In a user study (n = 48, 24 dyads), we evaluate the combination of two visual (Simple vs. Rich Avatar) with two auditory (Mono vs. Spatial Audio) user representations to investigate their impact on users' overall experience, performance, and perceived social presence during collaboration. Our results show a preference for rich auditory and visual user representations, as Spatial Audio supports the completion of parallel tasks and the Rich Avatar positively influences user experience. However, the Simple Avatar draws less attention, which potentially benefits task efficiency, advocating for simpler visualizations in performance-oriented settings. Our findings contribute to a deeper understanding of how combinations of visual and auditory user representations impact remote collaboration in augmented reality. Article Search |
|
Zaky, Abdelrahman |
ISS '24: "Evaluating Typing Performance ..."
Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features
Francesco Chiossi, Yassmine El Khaoudi, Changkun Ou, Ludwig Sidenmark, Abdelrahman Zaky, Tiare Feuchtner, and Sven Mayer (LMU Munich, Germany; University of Toronto, Canada; University of Konstanz, Germany) Article Search |
|
Zeng, Xin |
ISS '24: "GestureGPT: Toward Zero-Shot ..."
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user's intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion covering the rationale for model selection, generalizability, and future research directions toward a practical system. Article Search |
|
Zhang, Peiyu |
ISS '24: "MoiréTag: A Low-Cost Tag ..."
MoiréTag: A Low-Cost Tag for High-Precision Tangible Interactions without Active Components
Peiyu Zhang, Wen Ying, Sara Riggs, and Seongkook Heo (University of Virginia, USA) In this paper, we present MoiréTag, a novel tag-like device that magnifies displacement without active components for indirect sensing of subtle tangible interactions. The device consists of two overlapping layers of stripe patterns with distinct pattern frequencies. These layers create Moiré fringes that move faster than the actual movement of a layer. Using a customized image processing pipeline, we show that MoiréTag can reliably detect sub-millimeter movement in real time (mean error = 0.043 mm) under varying lighting conditions, camera angles, and camera distances. We also demonstrate five applications of MoiréTag to showcase its potential as a low-cost solution for capturing and monitoring small changes in movement and other physical properties, such as force and volume, by converting them into displacement. Article Search |
|
Zhang, Tengxiang |
ISS '24: "GestureGPT: Toward Zero-Shot ..."
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user's intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion covering the rationale for model selection, generalizability, and future research directions toward a practical system. Article Search |
|
Zhang, Yilin |
ISS '24: "Planar or Spatial: Exploring ..."
Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of the medium's capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate these by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often retain a consistent mental model from traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. We finally share our learned design considerations for the future development of VR presentation tools, emphasizing the importance of balancing immersive features with accessibility. Article Search |
|
Zhao, Jian |
ISS '24: "VisConductor: Affect-Varying ..."
VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation
Temiloluwa Paul Femi-Gege, Matthew Brehmer, and Jian Zhao (University of Waterloo, Canada; Tableau Research, USA) Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animation, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N = 11) and audience members (N = 11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools. Article Search Video
ISS '24: "Planar or Spatial: Exploring ..."
Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface
Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, and Jian Zhao (University of Waterloo, Canada) The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of the medium's capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate these by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often retain a consistent mental model from traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. We finally share our learned design considerations for the future development of VR presentation tools, emphasizing the importance of balancing immersive features with accessibility. Article Search |
|
Zhao, Shengdong |
ISS '24: "GestureGPT: Toward Zero-Shot ..."
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen (Chinese Academy of Sciences, China; Institute of Computing Technology at Chinese Academy of Sciences, China; Tsinghua University, China; City University of Hong Kong, China) Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user's intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion covering the rationale for model selection, generalizability, and future research directions toward a practical system. Article Search |
|
Zhou, Xiaoyan |
ISS '24: "Lights, Headset, Tablet, Action: ..."
Lights, Headset, Tablet, Action: Hybrid User Interfaces for Situated Analytics
Xiaoyan Zhou, Benjamin Lee, Francisco Raul Ortega, Anil Ufuk Batmaz, and Yalong Yang (Colorado State University, USA; University of Stuttgart, Germany; JPMorganChase, New York, USA; Concordia University, Canada; Georgia Institute of Technology, USA) While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can have, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and the physical environment when designing and evaluating effective situated analytics applications. Article Search |
|
Zhu, Zhengzhe |
ISS '24: "AdapTUI: Adaptation of Geometric-Feature-Based ..."
AdapTUI: Adaptation of Geometric-Feature-Based Tangible User Interfaces in Augmented Reality
Fengming He, Xiyun Hu, Xun Qian, Zhengzhe Zhu, and Karthik Ramani (Purdue University, USA) With advances in geometry perception and Augmented Reality (AR), end-users can customize Tangible User Interfaces (TUIs) that control digital assets using intuitive and comfortable interactions with physical geometries (e.g., edges and surfaces). However, it remains challenging to adapt such TUIs to varied physical environments while maintaining the same spatial and ergonomic affordance. We propose AdapTUI, an end-to-end system that enables an end-user to author geometric-feature-based TUIs and automatically adapts those TUIs when the user moves to a new environment. Leveraging a geometry detection module and the spatial awareness of AR, AdapTUI first lets users create custom mappings between geometric features and digital functions. Then, AdapTUI uses an optimization-based adaptation framework, which considers both geometric variations and human-factor nuances, to dynamically adjust the attachment of the user-authored TUIs. We demonstrate three application scenarios where end-users can utilize TUIs at different locations, including portable car play, an efficient AR workstation, and entertainment. We evaluated the effectiveness of the adaptation method as well as the overall usability through a comparison user study (N=12). The satisfactory adaptation of the user-authored TUIs and the positive qualitative feedback demonstrate the effectiveness of our system. Article Search Video |
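A hedged sketch of what an optimization-based adaptation step could look like: score each candidate geometric feature in the new environment by geometric mismatch and reachability, then attach the authored TUI to the lowest-cost candidate. The cost terms, weights, and feature encoding are assumptions, not AdapTUI's actual objective.

```python
import numpy as np

def adaptation_cost(candidate, original, user_pos, w_geom=1.0, w_reach=0.5):
    """Lower is better: penalize geometric mismatch and poor reachability.
    Features are encoded as dicts with a length (m), a unit normal, and a
    3D position -- an illustrative encoding, not the paper's."""
    length_diff = abs(candidate["length"] - original["length"])
    angle_diff = np.degrees(np.arccos(np.clip(
        np.dot(candidate["normal"], original["normal"]), -1.0, 1.0)))
    reach = np.linalg.norm(candidate["position"] - user_pos)  # meters
    return w_geom * (length_diff + angle_diff / 90.0) + w_reach * max(0.0, reach - 0.6)

def adapt_tui(original, candidates, user_pos):
    """Attach the authored TUI to the candidate feature with the lowest cost."""
    return min(candidates, key=lambda c: adaptation_cost(c, original, user_pos))

original = {"length": 0.30, "normal": np.array([0.0, 1.0, 0.0]),
            "position": np.array([0.4, 0.9, 0.3])}
candidates = [
    {"length": 0.28, "normal": np.array([0.0, 1.0, 0.0]),
     "position": np.array([0.5, 0.9, 0.4])},   # similar, nearby edge
    {"length": 0.80, "normal": np.array([1.0, 0.0, 0.0]),
     "position": np.array([2.0, 1.2, 1.0])},   # dissimilar, far away
]
print(adapt_tui(original, candidates, user_pos=np.zeros(3))["length"])  # -> 0.28
```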
129 authors