Conference on Interactive Surfaces and Spaces (ISS 2023),
November 5–8, 2023,
Pittsburgh, PA, USA
Frontmatter
Welcome Message from the General Chairs
Welcome to the program of the ACM International Conference on Interactive Surfaces and Spaces.
Launched in 2006 as an annual conference series, ACM ISS (formerly known as ACM ITS, the International Conference on Interactive Tabletops and Surfaces) is the premier venue for research addressing the design, development, and use of new and emerging tabletop, digital surface, interactive space, and multi-surface technologies. Interactive surfaces and spaces increasingly pervade our everyday life, appearing in various sizes, shapes, and application contexts, and offering a rich variety of ways to interact. ISS has been a venue for research and applications across these important areas.
Posters
Students with Attention-Deficit/Hyperactivity Disorder and Utilizing Virtual Reality to Improve Driving Skills
Filip Trzcinka,
Oyewole Oyekoya, and
Daniel Chan
(City University of New York, New York City, USA; City University of New York, Bronxville, USA)
Attention-deficit/hyperactivity disorder (ADHD) is a developmental disability that affects both adolescents and adults. With driving a staple of the American lifestyle, such disabilities can inhibit one's progress in learning to drive a motor vehicle. Relevant research suggests a correlation between ADHD and an increased number of driving citations, along with evidence of moderate driving issues within sample groups of people with ADHD. With virtual reality (VR) becoming a principal technology of the modern world, its uses may extend to benefiting those with developmental disabilities such as ADHD. Through a driving simulation, users can apply VR technology to practice the skills needed to drive without the risk of physical injury to themselves or others. To aid the learning experience of those with ADHD, the simulation can be designed with specific features that help users maintain focus on the details needed to practice safe driving.
@InProceedings{ISS23p1,
author = {Filip Trzcinka and Oyewole Oyekoya and Daniel Chan},
title = {Students with Attention-Deficit/Hyperactivity Disorder and Utilizing Virtual Reality to Improve Driving Skills},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {1--4},
doi = {10.1145/3626485.3626529},
year = {2023},
}
Exploring Virtual Reality Game Development as an Interactive Art Medium: A Case Study with the Community Game Development Toolkit
Habin Park,
Daniel Lichtman, and
Oyewole Oyekoya
(CUNY Baccalaureate for Unique and Interdisciplinary Studies, New York City, USA; Stockton University, Galloway, USA; City University of New York, New York City, USA)
This research paper examines the development and utilization of "The Community Game Development Toolkit," a virtual reality (VR) game development tool, for the creation of interactive art experiences. The primary objective of the toolkit is to enable artists, students, researchers, and a diverse range of people to design story- and game-based art presentations without requiring prior game development expertise. By incorporating VR technology into the toolkit, artists are empowered to construct immersive and interactive art encounters. This study employs a case study approach to explore the potential of VR game development as an artistic medium, focusing on how artists use the toolkit to construct art presentations. The research findings presented in this paper aim to contribute to the progressive field of VR art by demonstrating the diverse possibilities for accessible artistic creation in VR. Ultimately, the study aims to inspire artists and researchers to delve into the artistic potential of VR game development and foster continued advancements within the field.
@InProceedings{ISS23p5,
author = {Habin Park and Daniel Lichtman and Oyewole Oyekoya},
title = {Exploring Virtual Reality Game Development as an Interactive Art Medium: A Case Study with the Community Game Development Toolkit},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {5--9},
doi = {10.1145/3626485.3626530},
year = {2023},
}
Visualization of Point Mutations in Fibronectin Type-III Domain-Containing Protein 3 in Prostate Cancer
Samantha Vos,
Oyewole Oyekoya, and
Olorunseun Ogunwobi
(Virginia Wesleyan University, Virginia Beach, USA; City University of New York, New York City, USA)
MicroRNAs, or miRNAs, are small, single-stranded, noncoding RNAs involved in RNA silencing and post-transcriptional regulation of gene expression. MiRNA can silence mRNA that codes for cancerous proteins, or it can silence mRNA that codes for antitumor proteins. MiRNA-1207-3p can silence the mRNA that codes for FNDC1, a protein associated with aggressive prostate cancer. Fibronectin type III domain-containing (FNDC) proteins are an adipokine/myokine family whose members are involved in many different cellular functions. Several proteins from this family have been directly correlated with either the aggressiveness or the concentration of certain cancer cells. FNDC3 was chosen from this family for visualization: increased FNDC3 concentration has been linked to more aggressive prostate cancer, and overexpression of the related protein FNDC1 has been linked to chemotherapy resistance. The purpose of this project is to use visualization to see how mutations in the amino acid sequence of FNDC3 can change its structure and, in the process, its functionality.
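As a toy illustration of the sequence-level step in such a pipeline, the sketch below applies a point mutation to an amino acid sequence before the mutant is handed to a structure prediction or visualization tool. The sequence fragment and the mutation are hypothetical examples, not actual FNDC3 data.

def apply_point_mutation(sequence: str, position: int, new_residue: str) -> str:
    """Return the sequence with the residue at 1-indexed `position` replaced."""
    if not 1 <= position <= len(sequence):
        raise ValueError(f"position {position} outside sequence of length {len(sequence)}")
    old = sequence[position - 1]
    print(f"Applying mutation {old}{position}{new_residue}")
    return sequence[:position - 1] + new_residue + sequence[position:]

# Hypothetical fragment standing in for the FNDC3 amino acid sequence.
wild_type = "MKTLLVAGGSAEQV"
mutant = apply_point_mutation(wild_type, 5, "P")  # e.g., an L5P substitution
print(mutant)  # the mutant sequence then goes to the visualization step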
@InProceedings{ISS23p10,
author = {Samantha Vos and Oyewole Oyekoya and Olorunseun Ogunwobi},
title = {Visualization of Point Mutations in Fibronectin Type-III Domain-Containing Protein 3 in Prostate Cancer},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {10--13},
doi = {10.1145/3626485.3626531},
year = {2023},
}
iOS Augmented Reality Application for Immersive Structural Biology Education
Sabrina Chow,
Kendra Krueger, and
Oyewole Oyekoya
(Cornell University, Ithaca, USA; City University of New York, New York City, USA)
There is a growing need for educational tools that take advantage of new frontiers of technology. Students have grown up in a digital world and can quickly adapt to new methods of learning. Extended reality (XR) uses technology to augment the physical world. Augmented reality (AR), in particular, strikes a balance between enhancing the physical world with digital models and remaining convenient and accessible, since it can be experienced through a smartphone. This study examined whether an AR iOS application could be an effective and engaging learning tool. The application was designed as an augmented reality game and exergame: a scavenger hunt that accompanied a tour. Preliminary results reveal that participants enjoyed the game and found that it made the tour more educational and slightly more engaging.
@InProceedings{ISS23p14,
author = {Sabrina Chow and Kendra Krueger and Oyewole Oyekoya},
title = {iOS Augmented Reality Application for Immersive Structural Biology Education},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {14--18},
doi = {10.1145/3626485.3626532},
year = {2023},
}
Exploring Perceptions of Structural Racism in Housing Valuation through 3D Visualizations
Lisa Haye,
Courtney D. Cogburn, and
Oyewole Oyekoya
(City University of New York, New York City, USA; Columbia University, New York, USA)
This exploratory study investigated perceptions of the consequences of redlining and structural racism in housing valuation via three-dimensional (3D) visualization models. Unity3D and the Mapbox SDK for Unity were used to visualize two neighborhoods in Bronx County, New York, using single or multiple dimensions of visualization to represent both racial differences and the presence of condominiums in the respective neighborhoods. Thirty-three respondents participated in a user study to capture perceptions of seventeen visualizations, and responses generally favored the use of multiple dimensions of congruent visualizations. This work aims to encourage future development of 3D visualization techniques that stimulate interactive understanding of structural racism.
@InProceedings{ISS23p19,
author = {Lisa Haye and Courtney D. Cogburn and Oyewole Oyekoya},
title = {Exploring Perceptions of Structural Racism in Housing Valuation through 3D Visualizations},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {19--23},
doi = {10.1145/3626485.3626533},
year = {2023},
}
Effects of Varying Avatar Sizes on Food Choices in Virtual Environments
Richard Chinedu Erem,
Oyewole Oyekoya, and
Margrethe Horlyck-Romanovsky
(University of Connecticut, Storrs, USA; City University of New York, New York City, USA; Brooklyn College, Brooklyn, USA)
The escalating global obesity crisis necessitates innovative interventions to promote healthier eating habits. This study investigates the potential of Virtual Reality (VR) as a novel approach to this challenge. We developed a VR simulation of a supermarket shopping experience, where the player’s virtual physique changes immediately based on their dietary choices. The simulation was tested with seven participants, who reported high levels of immersion (mean score: 7.67 out of 10) and presence (mean score: 6.3 out of 10). Initial findings revealed a discrepancy in weight loss between genders, which was addressed by introducing a customization feature for gender-specific dietary adjustments. Notably, participants generally consumed fewer calories within the VR environment compared to their self-reported real-life habits. These preliminary findings suggest VR’s potential as a compelling tool for promoting healthier eating habits and combating obesity. However, these results should be interpreted with caution due to the small sample size, and further research is warranted to substantiate these promising initial findings.
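One way to picture the immediate physique feedback is the sketch below. The baseline intake, sensitivity, and clamping values are illustrative assumptions, not the study's parameters.

BASELINE_KCAL = 2000.0   # assumed daily reference intake (illustrative)
SENSITIVITY = 0.0002     # assumed girth-scale change per excess kcal

def avatar_scale(total_kcal_selected: float) -> float:
    """Return a girth scale factor; 1.0 is the starting physique."""
    excess = total_kcal_selected - BASELINE_KCAL
    scale = 1.0 + SENSITIVITY * excess
    return max(0.7, min(1.5, scale))  # clamp to plausible proportions

cart = [250.0, 600.0, 180.0, 950.0]  # kcal of items placed in the virtual cart
print(avatar_scale(sum(cart)))       # slightly below 1.0: the avatar slims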
@InProceedings{ISS23p24,
author = {Richard Chinedu Erem and Oyewole Oyekoya and Margrethe Horlyck-Romanovsky},
title = {Effects of Varying Avatar Sizes on Food Choices in Virtual Environments},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {24--26},
doi = {10.1145/3626485.3626534},
year = {2023},
}
The Effect of Attention Saturating Task on Eyes-Free Gesture Production on Mobile Devices
Milad Jamalzadeh,
Yosra Rekik, and
Laurent Grisoni
(Lille University, Lille, France; Université Polytechnique Hauts-de-France, Valenciennes, France; University of Lille, Villeneuve d'Ascq, France)
Touch screens are a popular input method for mobile devices, but they compete for visual attention with users' real-world tasks, hindering performance. In this study, we investigated the effect of an attention-saturating task on eyes-free gestures. A user study was conducted with 13 participants who performed eyes-free gestures on a smartphone using their dominant or non-dominant hand, alone or while performing a primary task that saturates attention. The results indicated that performing another task while drawing a gesture shortened the length and size of the gesture and reduced the duration of gesture entry, while the finger maintained the same speed across the touchscreen. Additionally, drawing a gesture with the non-dominant hand increased the length of the gesture but generated fewer directional movements around the z-axis.
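As an illustration of the dependent measures named here (gesture length, size, entry duration, and finger speed), the following sketch computes them from timestamped touch samples; the trace data is invented for the example.

import math

def gesture_metrics(samples):
    """samples: ordered list of (x_px, y_px, t_seconds) touch points."""
    length = sum(
        math.dist(samples[i][:2], samples[i + 1][:2])
        for i in range(len(samples) - 1)
    )
    xs, ys = [p[0] for p in samples], [p[1] for p in samples]
    size = (max(xs) - min(xs), max(ys) - min(ys))  # bounding box (w, h)
    duration = samples[-1][2] - samples[0][2]
    speed = length / duration if duration > 0 else 0.0
    return {"length": length, "size": size, "duration": duration, "speed": speed}

trace = [(100, 200, 0.00), (140, 230, 0.08), (190, 230, 0.16), (240, 260, 0.25)]
print(gesture_metrics(trace))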
@InProceedings{ISS23p27,
author = {Milad Jamalzadeh and Yosra Rekik and Laurent Grisoni},
title = {The Effect of Attention Saturating Task on Eyes-Free Gesture Production on Mobile Devices},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {27--31},
doi = {10.1145/3626485.3626535},
year = {2023},
}
A Concept of User State Analysis for Evaluations of Interaction Design in Armored Vehicles
Thomas E. F. Witte and
Jessica Schwarz
(Fraunhofer FKIE, Wachtberg, Germany)
Crewmembers of armored vehicles perform high-risk tasks under operational conditions. In modern armored vehicles, multiple displays provide the information necessary for maneuvering and tactical decision making. As hostile threats can cause stressful situations with high time pressure and potentially lethal outcomes, it is important that the human-machine interface of armored vehicles is tailored to the specific impact factors of this work domain. Evaluations of the human-machine interface concept are therefore necessary to ensure that the design meets the specific requirements of the identified use cases. We created a multimodal approach to facilitate evaluations of such complex, safety-critical human-machine interfaces and to support design decisions. The approach gathers manifold information on the crewmembers' mental states and their interaction with the user interfaces. We propose a concept for user state analysis purpose-fitted to armored vehicles and highlight its benefits as an example of evaluating military human-machine systems.
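As a generic illustration (not Fraunhofer FKIE's concept or weighting), fusing several normalized user state indicators into a single diagnostic score that an evaluator logs against interaction events might look like this; the indicator names, weights, and threshold are assumptions.

WEIGHTS = {"hrv_stress": 0.4, "gaze_tunneling": 0.3, "task_load": 0.3}

def user_state_score(indicators: dict) -> float:
    """indicators: values normalized to 0 (calm) through 1 (overloaded)."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

sample = {"hrv_stress": 0.8, "gaze_tunneling": 0.6, "task_load": 0.9}
score = user_state_score(sample)
print(f"{score:.2f}", "flag interface for redesign" if score > 0.7 else "ok")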
@InProceedings{ISS23p32,
author = {Thomas E. F. Witte and Jessica Schwarz},
title = {A Concept of User State Analysis for Evaluations of Interaction Design in Armored Vehicles},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {32--36},
doi = {10.1145/3626485.3626536},
year = {2023},
}
Synchronized Expressions: An Auditory Interface for Naturally Harmonizing Facial Expressions between People with Visual Impairment and Sighted People
Takayuki Komoda,
Hisham Elser Bilal Salih,
Tadashi Ebihara,
Naoto Wakatsuki, and
Keiichi Zempo
(University of Tsukuba, Tsukuba, Japan)
We propose a sound design concept that supports empathic communication between people with visual impairment (PVI) and their interlocutors. Pairs of short repeating note patterns (riffs) representing the facial expressions of the interlocutor and the PVI elicit synchronization and interlocking of their facial expressions. This synchronization fosters empathic communication between the PVI and the interlocutor. To demonstrate this concept, we conducted a participant experiment in which we evaluated the auditory comfort of each riff pair generated from the facial expressions of the interlocutor and the PVI, comparing ratings of riff pairs when the facial expressions were synchronized and when they were not. The results confirmed that riff pairs with synchronized facial expressions were rated significantly more comfortable than those without synchronization, supporting our concept.
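A minimal sketch of the pairing logic might look as follows; the expression-to-riff mapping is invented for illustration and is not the authors' sound design.

RIFFS = {  # hypothetical MIDI note patterns, one riff per expression
    "happy":   [72, 76, 79, 76],
    "neutral": [60, 60, 64, 60],
    "sad":     [57, 55, 53, 55],
}

def riff_pair(expr_pvi: str, expr_partner: str):
    """Return the riff pair and whether the expressions are synchronized."""
    pair = (RIFFS[expr_pvi], RIFFS[expr_partner])
    return pair, expr_pvi == expr_partner

pair, synced = riff_pair("happy", "happy")
print(pair, "synchronized" if synced else "unsynchronized")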
@InProceedings{ISS23p37,
author = {Takayuki Komoda and Hisham Elser Bilal Salih and Tadashi Ebihara and Naoto Wakatsuki and Keiichi Zempo},
title = {Synchronized Expressions: An Auditory Interface for Naturally Harmonizing Facial Expressions between People with Visual Impairment and Sighted People},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {37--41},
doi = {10.1145/3626485.3626537},
year = {2023},
}
Safar: Heuristics for Augmented Reality Integration in Cultural Heritage
Cyrus Monteiro,
Ipsita Rajasekar,
Prakhar Bhargava, and
Anmol Srivastava
(IIIT Delhi, Delhi, India)
This research explores the integration of Augmented Reality (AR) technology to enhance historical site exploration, with a particular focus on the Indian context. The primary objective is to establish comprehensive design guidelines that seamlessly blend AR and cultural heritage, ultimately enriching the heritage tourism experience. Through a case study approach centred around a prominent historical site, the research utilises co-creation sessions, alongside thorough primary and secondary research practices, to analyse user interactions with AR and Virtual Reality (VR) tools. While relying primarily on qualitative insights, this study uncovers the potential of AR in heightening the heritage encounter and bridging the gap between traditional narratives and contemporary technology. By distilling findings from these methodologies, the paper contributes practical and informed design guidelines for effectively integrating AR within cultural sites. The outcomes of this research provide valuable insights for scholars, practitioners, and enthusiasts seeking to navigate the evolving landscape of technology-driven heritage tourism.
@InProceedings{ISS23p42,
author = {Cyrus Monteiro and Ipsita Rajasekar and Prakhar Bhargava and Anmol Srivastava},
title = {Safar: Heuristics for Augmented Reality Integration in Cultural Heritage},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {42--46},
doi = {10.1145/3626485.3626538},
year = {2023},
}
In-Place Virtual Exploration Using a Virtual Cane: An Initial Study
Richard Yeung,
Oyewole Oyekoya, and
Hao Tang
(City College, New York City, USA; City University of New York, New York City, USA)
In this initial study, we addressed the challenge of assisting individuals who are blind or have low vision (BLV) in familiarizing themselves with new environments. Navigating unfamiliar areas presents numerous challenges for BLV individuals. We sought to explore the potential of Virtual Reality (VR) technology to replicate real-world settings, thereby allowing users to virtually experience these spaces at their convenience, often from the comfort of their homes. As part of our preliminary investigation, we designed an interface tailored to facilitate movement for BLV users without needing physical mobility. Our study involved six blind participants. Early findings revealed that participants encountered difficulties adapting to the interface. Post-experiment interviews illuminated the reasons for these challenges, including issues with interface usage, the complexity of managing multiple interface elements, and the disparity between physical movement and interface use. Given the early stage of this research, these findings provide valuable insights for refining the approach and improving the interface in future iterations.
@InProceedings{ISS23p47,
author = {Richard Yeung and Oyewole Oyekoya and Hao Tang},
title = {In-Place Virtual Exploration Using a Virtual Cane: An Initial Study},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {47--51},
doi = {10.1145/3626485.3626539},
year = {2023},
}
Tutorials
SAGE3 for Interactive Collaborative Visualization, Analysis, and Storytelling
Jesse Harden,
Nurit Kirshenbaum,
Roderick S. Tabalba Jr.,
Jason Leigh,
Luc Renambot, and
Chris North
(Virginia Tech, Blacksburg, USA; University of Hawaii at Manoa, Honolulu, USA; University of Illinois at Chicago, Chicago, USA)
SAGE3, the newest and most advanced generation of the Smart Amplified Group Environment, is open-source software designed to facilitate collaboration among scientists, researchers, students, and professionals across various fields. This tutorial introduces attendees to the capabilities of SAGE3, demonstrating its ability to enhance collaboration and productivity in diverse settings (from co-located office collaboration to remote collaboration to both at once) and on diverse displays, from personal laptops to large-scale display walls. Participants will learn how to use SAGE3 effectively for brainstorming, data analysis, and presentation, as well as how to install private collaboration servers and develop custom applications.
@InProceedings{ISS23p52,
author = {Jesse Harden and Nurit Kirshenbaum and Roderick S. Tabalba Jr. and Jason Leigh and Luc Renambot and Chris North},
title = {SAGE3 for Interactive Collaborative Visualization, Analysis, and Storytelling},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {52--54},
doi = {10.1145/3626485.3626541},
year = {2023},
}
Demonstrations
Assisting the Multi-directional Limb Motion Exercise with Spatial Audio and Interactive Feedback
Tian Min,
Chengshuo Xia, and
Yuta Sugiura
(Keio University, Yokohama, Japan; Xidian University, Guangzhou, China)
Guiding users through limb motion exercises can assist their physical recovery and muscle training. However, traditional visual tracking methods often require multiple camera angles to help users understand their movements and require the user to stay within range of a screen, which is not effective in more ubiquitous situations. We therefore propose a method that uses spatial audio to guide the user through a multi-directional limb motion exercise. Based on the user's perception of the spatial audio's direction, we designed feedback that helps the user adjust the movement height and complete the move to the designated position. We attached a smartphone to the limb to capture the user's motion data and generate feedback. The experiment showed that with our methods, users could perform the multi-directional limb exercise to the designated position. The spatial-audio-based limb motion exercise system could enable natural, pervasive, and non-visual exercise training in daily life.
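The guidance logic can be sketched as follows, under assumed geometry: the target position is rendered as a spatial audio cue by its azimuth relative to the limb, and feedback tells the user to raise or lower the limb until the height error falls inside a tolerance. The tolerance and cue vocabulary are illustrative, not the system's parameters.

import math

HEIGHT_TOLERANCE_M = 0.05  # assumed acceptance window for movement height

def guidance(limb_xyz, target_xyz):
    dx, dy, dz = (t - l for l, t in zip(limb_xyz, target_xyz))
    azimuth_deg = math.degrees(math.atan2(dx, dz))  # pan of the spatial cue
    if dy > HEIGHT_TOLERANCE_M:
        height_cue = "raise"
    elif dy < -HEIGHT_TOLERANCE_M:
        height_cue = "lower"
    else:
        height_cue = "hold"
    return azimuth_deg, height_cue

print(guidance((0.2, 0.9, 0.1), (0.5, 1.2, 0.4)))  # cue 45 degrees right, raise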
@InProceedings{ISS23p55,
author = {Tian Min and Chengshuo Xia and Yuta Sugiura},
title = {Assisting the Multi-directional Limb Motion Exercise with Spatial Audio and Interactive Feedback},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {55--58},
doi = {10.1145/3626485.3626542},
year = {2023},
}
MarbLED: Embedded and Transmissive LED Touch Display System and Its Application Platform for Surface Computing with Engineered Marble
Yoshito Nakaue,
Chihiro Ura,
Hiroshi Kano, and
Shigeyuki Hirai
(Kyoto Sangyo University, Kyoto, Japan)
We propose and develop MarbLED, a system that turns existing kitchen worktops or counters made of engineered marble into a surface computing platform. Whereas the hardware was previously assembled with through-hole packages, we have designed and assembled surface-mount sensing boards for the MarbLED system. In addition, we have developed a software platform for MarbLED as a software development kit that facilitates the implementation of various applications. As a result, the system can be installed and used in the kitchen more practically and realistically. This paper describes the hardware and the functions of the application software platform, and introduces example applications for smart kitchens implemented with the software development kit.
@InProceedings{ISS23p59,
author = {Yoshito Nakaue and Chihiro Ura and Hiroshi Kano and Shigeyuki Hirai},
title = {MarbLED: Embedded and Transmissive LED Touch Display System and Its Application Platform for Surface Computing with Engineered Marble},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {59--62},
doi = {10.1145/3626485.3626543},
year = {2023},
}
Augmenting Welding Training: An XR Platform to Foster Muscle Memory and Mindfulness for Skills Development
Tate Johnson,
Ann Li,
Andrew Knowles,
Zhenfang Chen,
Semina Yi,
Yumeng Zhuang,
Dina El-Zanfaly, and
Daragh Byrne
(Carnegie Mellon University, Pittsburgh, USA)
Metal welding is a craft manufacturing skill that can be unusually difficult to externalize and represent to novices. Building competency requires an apprentice to iteratively practice embodied skills and sensitize themselves to a sensorially complex practice. To explore these challenges, we identified opportunities for mixed reality and meditation processes to augment welding training and practice. Our demo showcases an extended reality (XR) welding helmet and torch that enhances the embodied learning of welding. We do this in two key ways: biometric sensing that enhances mindfulness and stress management in sensorially challenging environments, and combined motion-sensing and visual XR feedback that helps improve proprioceptive and embodied learning.
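As one example of how such visual XR feedback could work (the target angle, tolerance, and cue wording are assumptions, not the demo's values), a torch work-angle cue might be computed like this:

TARGET_WORK_ANGLE = 45.0  # degrees, assumed for a fillet weld
TOLERANCE = 7.5           # degrees of deviation before the cue turns corrective

def torch_cue(work_angle_deg: float) -> str:
    """Map the tracked torch work angle to a color-coded guidance cue."""
    error = work_angle_deg - TARGET_WORK_ANGLE
    if abs(error) <= TOLERANCE:
        return "green: hold this angle"
    return f"amber: tilt {'down' if error > 0 else 'up'} {abs(error):.0f} deg"

print(torch_cue(58.0))  # -> "amber: tilt down 13 deg"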
@InProceedings{ISS23p63,
author = {Tate Johnson and Ann Li and Andrew Knowles and Zhenfang Chen and Semina Yi and Yumeng Zhuang and Dina El-Zanfaly and Daragh Byrne},
title = {Augmenting Welding Training: An XR Platform to Foster Muscle Memory and Mindfulness for Skills Development},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {63--66},
doi = {10.1145/3626485.3626544},
year = {2023},
}
Demonstrating SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Florian Echtler,
Vitus Maierhöfer,
Nicolai Brodersen Hansen, and
Raphael Wimmer
(Aalborg University, Aalborg, Denmark; University of Regensburg, Regensburg, Germany)
Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications.
We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems.
This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API.
In this paper, we present various example application scenarios which we enhance through the multi-user and multi-device features of the framework.
Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction.
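The frame-merging idea can be pictured with a toy sketch. This is not SurfaceCast's actual API or architecture, just back-to-front alpha compositing of client frames into one shared canvas with numpy.

import numpy as np

def composite(layers, width=640, height=480):
    """layers: RGBA uint8 arrays (H, W, 4), one per client, bottom first."""
    surface = np.zeros((height, width, 4), dtype=np.float32)
    for frame in layers:
        f = frame.astype(np.float32) / 255.0
        alpha = f[..., 3:4]
        surface[..., :3] = f[..., :3] * alpha + surface[..., :3] * (1 - alpha)
        surface[..., 3:] = alpha + surface[..., 3:] * (1 - alpha)
    return (surface * 255).astype(np.uint8)

tabletop = np.zeros((480, 640, 4), dtype=np.uint8); tabletop[..., [2, 3]] = 255
tablet = np.zeros((480, 640, 4), dtype=np.uint8); tablet[100:200, 100:200] = 255
shared = composite([tabletop, tablet])  # tablet sketch overlaid on the tabletop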
@InProceedings{ISS23p67,
author = {Florian Echtler and Vitus Maierhöfer and Nicolai Brodersen Hansen and Raphael Wimmer},
title = {Demonstrating SurfaceCast: Ubiquitous, Cross-Device Surface Sharing},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {67--70},
doi = {10.1145/3626485.3626545},
year = {2023},
}
Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames
Kotaro Oomori,
Wataru Kawabe,
Fabrice Matulic,
Takeo Igarashi, and
Keita Higuchi
(Preferred Networks, Tokyo, Japan; University of Tokyo, Tokyo, Japan)
This demonstration is invited from the ISS 2023 papers track (https://doi.org/10.1145/3626476). Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets.
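The selection step described above can be reconstructed as a sketch, assuming calibrated pinhole cameras: a cloud point survives only if it projects inside the user's rectangle in every annotated frame, and the box of the survivors is the annotation. This is an illustration of the idea, not the authors' code, and it computes an axis-aligned box for simplicity.

import numpy as np

def project(points, K, R, t):
    """World points (N, 3) -> pixel coordinates (N, 2), pinhole model."""
    cam = points @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]     # divide by depth (assumed positive)

def bbox_from_rects(points, cameras, rects):
    """cameras: list of (K, R, t); rects: matching (u0, v0, u1, v1) per frame."""
    keep = np.ones(len(points), dtype=bool)
    for (K, R, t), (u0, v0, u1, v1) in zip(cameras, rects):
        uv = project(points, K, R, t)
        keep &= (uv[:, 0] >= u0) & (uv[:, 0] <= u1) \
              & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
    selected = points[keep]
    if selected.size == 0:
        raise ValueError("no cloud points fall inside all rectangles")
    return selected.min(axis=0), selected.max(axis=0)  # tight 3D box corners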
@InProceedings{ISS23p71,
author = {Kotaro Oomori and Wataru Kawabe and Fabrice Matulic and Takeo Igarashi and Keita Higuchi},
title = {Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {71--71},
doi = {10.1145/3626485.3626546},
year = {2023},
}
Holographic Sports Training
Manuel Rebol,
Becky Lake,
Michael Reinisch,
Krzysztof Pietroszek, and
Christian Gütl
(American University, Washington, USA; Graz University of Technology, Graz, Austria)
Practicing sports through video can be challenging because of missing spatial information. Hence, we present a holographic library of short exercises used to practice sports. The sports holograms were captured in a volumetric recording studio. Users can watch the holograms on augmented reality (AR) devices such as mobile phones and headsets, and can take advantage of the spatial information to watch the holograms from multiple angles. Moreover, the user can imitate the hologram's motion, an innovative method of teaching sports.
@InProceedings{ISS23p72,
author = {Manuel Rebol and Becky Lake and Michael Reinisch and Krzysztof Pietroszek and Christian Gütl},
title = {Holographic Sports Training},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {72--75},
doi = {10.1145/3626485.3626547},
year = {2023},
}
MindfulBloom: Spatial Finger Painting for Mindfulness Intervention in Augmented Reality
Sunniva Liu,
Eric Zhao,
Anthony Anton Julian Alexander Renouf, and
Dina El-Zanfaly
(Carnegie Mellon University, Pittsburgh, USA)
Mindfulness interventions emphasize the importance of being present in the moment. While many digital mindfulness tools are limited to flat screens, extended reality (XR) offers more immersive solutions. However, existing XR mindfulness practices often lack connection to the user's real-world present moment, which is essential for mindfulness. We introduce MindfulBloom, a novel approach merging augmented reality (AR) and finger painting for digitally supported mindfulness. Building on studies that highlight finger painting's therapeutic potential, our system guides users through an intuitive finger painting interaction in their workspace, translating stress clutter into spatial, artistic expressions. We explore its potential to direct bodily attention to the present moment, offering a creative solution for in-the-moment mindfulness interventions.
@InProceedings{ISS23p76,
author = {Sunniva Liu and Eric Zhao and Anthony Anton Julian Alexander Renouf and Dina El-Zanfaly},
title = {MindfulBloom: Spatial Finger Painting for Mindfulness Intervention in Augmented Reality},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {76--80},
doi = {10.1145/3626485.3626548},
year = {2023},
}
Workshops
Framing Seamlessness-Enhancing Future Multimodal Interaction from Physical to Virtual Spaces
Nan Qie,
Nick Bryan-Kinns,
Fangli Song,
Francisco Rebelo,
Roger Ball,
Yijing Yang,
Le Du, and
Wei Wang
(OPPO, Shanghai, China; University of the Arts London, London, UK; Hunan University, Changsha, China; University of Lisbon, Lisbon, Portugal; Georgia Institute of Technology, Atlanta, USA; OPPO, Nanjing, China)
Seamless human-computer interaction (HCI) was first proposed at the end of the last century. Since then, it has evolved, faced challenges, and recently experienced a revival across emerging HCI domains, driven by advancements in AI and 5G technologies. However, its concept has undergone significant changes. Within the context of user experience (UX) design, the term typically emphasizes the harmonious integration of various input and output modalities and platforms to create a cohesive, consistent experience in interactive surfaces and spaces. In this workshop, we bring together academic and industry voices from around the globe, spanning the mobile interaction, automotive human-machine interface (HMI), musical entertainment, mixed reality (MR), and wearables domains. We delve into the technology-driven transformation of the seamless multimodal experience (SME), addressing theoretical and methodological issues and exploring dimensions, guidelines, opportunities, and challenges for seamlessness innovation in HCI. Concurrently, we investigate approaches to harness these technologies to unlock the potential of multimodal user interfaces (MUIs) in seamless interactions. More broadly, we explore innovative SME concepts using a participatory design tool. Our collective goal is to achieve a shared understanding among participants and establish a framework for SME that will guide future human-computer interaction from physical to virtual spaces.
@InProceedings{ISS23p81,
author = {Nan Qie and Nick Bryan-Kinns and Fangli Song and Francisco Rebelo and Roger Ball and Yijing Yang and Le Du and Wei Wang},
title = {Framing Seamlessness-Enhancing Future Multimodal Interaction from Physical to Virtual Spaces},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {81--84},
doi = {10.1145/3626485.3626549},
year = {2023},
}
Augmented Reality for Exploring Social Spaces and Building Social Connections
Brandon Huynh,
Tatiana Lau, and
Kate A. Sieck
(Toyota Research Institute, Los Altos, USA)
Augmented reality technology has made strides in recent years, affording us novel ways to interact with the physical spaces we inhabit. Yet these physical spaces are also home to the communities of people that surround us. Can we design augmented reality technology so that it allows us to explore and understand the physical spaces around us while also building social bridges with those in our communities? This workshop brings together researchers interested in how augmented reality can be used to improve not only people's interactions with the spaces around them but also with the communities in which they live.
@InProceedings{ISS23p85,
author = {Brandon Huynh and Tatiana Lau and Kate A. Sieck},
title = {Augmented Reality for Exploring Social Spaces and Building Social Connections},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {85--86},
doi = {10.1145/3626485.3626550},
year = {2023},
}
Doctoral Symposium
Empowering Online Learning: AI-Embedded Design Patterns for Enhanced Student and Educator Experiences in Virtual Worlds
Arghavan (Nova) Ebrahimi
(University of North Carolina, USA)
Desktop virtual world learning environments (DVWLEs) hold significant promise for the forefront of online learning platforms, offering noteworthy potential for widespread adoption. These environments are distinguished by two characteristics, a sense of place and a sense of presence, which set them apart from conventional online learning platforms. These unique characteristics align the design of DVWLEs more closely with physical architectural design elements and principles (AEPs). In addition to AEPs, interaction design elements and principles, expressed through affordances and signifiers, play a pivotal role in shaping DVWLEs. However, there is a lack of comprehensive research on the integration of AEPs, interaction design elements and principles, and the design of affordances and signifiers, including their associated design patterns, in DVWLE design.
Our research aims to bridge this knowledge gap by providing conceptual frameworks for categorizing AEPs and interaction design elements and principles in DVWLE design based on their affordances. We seek to identify and develop their common design patterns across various DVWLEs, with the goal of integrating them into an AI agent. This AI co-designer will empower educators to create customized educational environments tailored to their instructional needs. By simplifying and automating this process, our research aims to make educational environment design more straightforward, streamlined, and efficient for educators, thereby improving students' educational experiences.
@InProceedings{ISS23p87,
author = {Arghavan (Nova) Ebrahimi},
title = {Empowering Online Learning: AI-Embedded Design Patterns for Enhanced Student and Educator Experiences in Virtual Worlds},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {87--91},
doi = {10.1145/3626485.3626551},
year = {2023},
}
Designing for Children's Social Play in Responsive Multi-sensory Environments
Yanjun Lyu
(Arizona State University, USA)
A responsive multi-sensory environment is designed to create an engaging experience by integrating various sensory stimuli such as sight, sound, and touch. Most prior research in human-computer interaction (HCI) has investigated its use for children with disabilities (e.g., autism) or for personal play or dyadic activities. Our work focuses on the use of responsive media for children without disabilities, in primary education contexts and within groups of children. Our goal is to evaluate the capability of responsive multi-sensory environments, integrating wearable garments, to foster children's social interaction and collaboration. The enhanced garments we are developing will feature multi-channel haptic actuators, allowing players to feel tactile sensations corresponding to the proximity and activity of their peers. Additionally, virtual creatures will be projected visually onto an acoustic floor, and players will be able to interact with them. Our design involves a series of aquatic creatures that display different "greeting" behaviors as participants stand in different proximity zones relative to each other.
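The proximity-zone logic can be illustrated with a short sketch; the radii and behavior names are invented for the example, not the project's values.

import math

ZONES = [  # (maximum distance in meters, creature greeting behavior)
    (0.5, "playful_dance"),
    (1.5, "wave_fins"),
    (3.0, "peek_out"),
]

def greeting_behavior(pos_a, pos_b):
    """Pick the creature's behavior from the distance between two children."""
    distance = math.dist(pos_a, pos_b)
    for max_dist, behavior in ZONES:
        if distance <= max_dist:
            return behavior
    return "idle_swim"  # too far apart to trigger a greeting

print(greeting_behavior((0.0, 0.0), (1.0, 0.8)))  # -> "wave_fins"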
@InProceedings{ISS23p92,
author = {Yanjun Lyu},
title = {Designing for Children's Social Play in Responsive Multi-sensory Environments},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {92--97},
doi = {10.1145/3626485.3626552},
year = {2023},
}
Exploring Human Values in Mixed Reality
Mengxing Li
(Monash University, Australia)
Immersive technologies such as mixed reality (MR) are gradually emerging from laboratory testing into real-world applications. However, the real-world applications of MR technologies raise more complex human-value issues than laboratory scenarios do. The aim of this Ph.D. research is to understand human values in the MR context across different population groups and how to address future challenges. The main contributions of this research are (1) a review of human values in the mixed reality literature, (2) a workshop designed for exploring values in MR, and (3) insights for incorporating human values in MR applications.
@InProceedings{ISS23p98,
author = {Mengxing Li},
title = {Exploring Human Values in Mixed Reality},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {98--101},
doi = {10.1145/3626485.3626553},
year = {2023},
}
Exploring and Evaluating the Potential of 2D Computational Notebooks
Jesse Harden
(Virginia Tech, USA)
Computational notebooks are popular tools for data science and for presenting computational narratives. However, their 1D structure introduces and exacerbates user issues, such as messiness, tedious navigation, inefficient use of large screen space, and difficulty performing non-linear analyses and presenting non-linear narratives. In this Ph.D., we address these issues through the design, exploration, and evaluation of computational notebooks that use 2D space to organize cells, or 2D computational notebooks. Specifically, we explore whether users would use 2D space, design and evaluate a 2D computational notebook prototype for individual work, explore how users collaborate in 2D space for data science and education, create and validate a theoretical understanding of how non-linear data science processes cause problems when forced into a linear, 1D computational notebook, and build upon this foundation to refine 2D computational notebooks. Our work contributes insights on whether and how expanded space usage can improve computational notebooks.
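The core data-model change can be pictured with a toy sketch (not the author's prototype): cells carry 2D canvas coordinates instead of a single list index, and any linear export is just one possible traversal of the canvas.

from dataclasses import dataclass, field

@dataclass
class Cell:
    source: str
    x: float  # canvas position, replacing the 1D list index
    y: float

@dataclass
class Notebook2D:
    cells: list[Cell] = field(default_factory=list)

    def reading_order(self):
        """One possible linearization for export: top-to-bottom, left-to-right."""
        return sorted(self.cells, key=lambda c: (c.y, c.x))

nb = Notebook2D([Cell("df.plot()", 400, 0), Cell("import pandas as pd", 0, 0)])
print([c.source for c in nb.reading_order()])  # imports first, plot second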
@InProceedings{ISS23p102,
author = {Jesse Harden},
title = {Exploring and Evaluating the Potential of 2D Computational Notebooks},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {102--104},
doi = {10.1145/3626485.3626554},
year = {2023},
}
Designing and Evaluating Interactions for Handheld AR
Jonathan Wieland
(University of Konstanz, Germany)
Mobile devices such as phones and tablets have become the most prevalent AR devices and are applied in various domains, including architecture, entertainment, and education. However, due to device-specific characteristics, designing and evaluating handheld AR interactions can be especially challenging as handheld AR displays provide 1) limited input options, 2) a narrow camera field of view, and 3) restricted context awareness. To address these challenges with design recommendations and research implications, the dissertation follows a mixed-methods approach with three research questions (RQs): On the one hand, specific aspects of fundamental 3D object manipulation tasks (RQ1) and awareness and discoverability (RQ2) are explored and evaluated in controlled lab studies. For increased ecological validity, the developed interaction concepts are also investigated in public interactive exhibitions. These studies then inform the creation of a framework for designing and evaluating handheld AR experiences using VR simulations of the interaction context (RQ3).
@InProceedings{ISS23p105,
author = {Jonathan Wieland},
title = {Designing and Evaluating Interactions for Handheld AR},
booktitle = {Proc.\ ISS},
publisher = {ACM},
pages = {105--108},
doi = {10.1145/3626485.3626555},
year = {2023},
}