
2022 Conference on Interactive Surfaces and Spaces (ISS 2022 Companion), November 20–23, 2022, Wellington, New Zealand

ISS 2022 Companion – Companion Proceedings



Title Page

Welcome Message from the General Chairs
Welcome to the adjunct program of the ACM International Conference on Interactive Surfaces and Spaces. Held annually since 2006, ACM ISS (formerly known as ACM ITS, the International Conference on Interactive Tabletops and Surfaces) is the premier venue for research addressing the design, development, and use of new and emerging tabletop, digital surface, interactive space, and multi-surface technologies. Interactive surfaces and spaces increasingly pervade our everyday lives, appearing in various sizes, shapes, and application contexts and offering a rich variety of ways to interact. ISS has long been a venue for research and applications in these important areas of interactive surfaces and spaces.

Organizing Committee



Virtual Tabletops (VTT) for Role-Playing Games (RPG) and Do-It-Yourself (DIY) Interactive Surfaces as Examples of Vernacular Design
Jan K. Argasiński
(Jagiellonian University, Krakow, Poland)
Pop-culture-driven nostalgia for the second half of the 20th century, with its music, aesthetics, and entertainment, combined with the increased need for casual interpersonal contact observed during the COVID-19 pandemic, has led to a kind of renaissance of classic tabletop board games and Role-Playing Games. At the same time, the pandemic necessity of isolation has led to the emergence of new solutions in the field of social but remote entertainment, especially software called Virtual Tabletops (VTT). In classic RPGs such as "Dungeons and Dragons", players often use boards, mockups, models, dioramas, and miniatures for tactical orientation in situations where it is essential from the mechanics point of view (e.g., during combat sequences). Virtual game tables allow these types of props to be digitized, including the automation of some mechanical activities related to the game (object statistics, markers for area effects, fields of view, fog of war, etc.). This has turned out to be so attractive that some players have begun to mix classic pen-and-paper RPG gameplay (requiring the physical presence of players at the table and face-to-face communication) with virtual tools that project onto the table's surface, walls, or onto TVs, tablets, and smartphones. This niche but extremely interesting example of vernacular practice in computer-based media seems worth further investigation.

Publisher's Version
GesPlayer: Using Augmented Gestures to Empower Video Players
Xiang Li, Yuzheng Chen, and Xiaohang Tang
(University of Cambridge, Cambridge, UK; Xi’an Jiaotong-Liverpool University, Suzhou, China; University of Liverpool, Liverpool, UK)
In this paper, we introduce GesPlayer, a gesture-empowered video player that explores how users can experience their hands as an interface through gestures. We provide three semantic gestures, detected via the camera of a computer or other smart device, to adjust video playback progress, volume, and screen brightness, respectively. Our goal is to enable users to control video playback simply with in-air gestures, without needing a mouse or keyboard, especially when using one is inconvenient. Ultimately, we hope to expand our understanding of gesture-based interaction by examining the inclusiveness of designing the hand as an interactive interface, and to further broaden the state of semantic gestures in interactive environments through computational interaction methods.

Publisher's Version
Evaluation of Grasp Posture Detection Method using Corneal Reflection Images through a Crowdsourced Experiment
Xiang Zhang, Kaori Ikematsu, Kunihiro Kato, and Yuta Sugiura
(Keio University, Yokohama, Japan; Yahoo Japan Corporation, Tokyo, Japan; Tokyo University of Technology, Tokyo, Japan)
To achieve adaptive user interfaces (UI) for smartphones, researchers have been developing sensing methods to detect how a user is holding a smartphone. A variety of promising adaptive UIs have been demonstrated, such as those that automatically switch the displayed content and the position of interactive components in accordance with how the phone is being held. In this paper, we present a follow-up study on ReflecTouch, a state-of-the-art grasping posture detection method proposed by Zhang et al. that uses corneal reflection images captured by the front camera of a smartphone. We extend the previous work by investigating the performance of this method towards actual use and its potential challenges through a crowdsourced experiment with a large number of participants.

Publisher's Version
Retzzles: Engaging Users towards Retention through Touchscreen Puzzles
Nikola Kovačević, Maheshya Weerasinghe, and Jordan Aiko Deja
(University of Primorska, Koper, Slovenia; University of St Andrews, St Andrews, UK; De La Salle University, Manila, Philippines)
Textual sources provide limited information to their readers, which can be underwhelming and may reduce engagement. Augmentation approaches have been introduced to present information more engagingly and have shown potential in supporting information retention. In this research, we inquire further into this opportunity through the use of interactive touchscreen visual elements such as puzzle pieces. We present Retzzles, where users solve puzzles in a tourist use case. To evaluate this, we will conduct a within-subjects study with n=30 participants to determine whether such elements promote engagement and thereby support information retention. Our preliminary findings shed light on some perspectives on the use of touchscreen displays for engagement but are subject to further investigation. We contribute to further discussion on the use of interactive screens in similar learning scenarios.

Publisher's Version Video
Swipe-and-Tap Functional Programming
Michael Homer and Craig Anslow
(Victoria University of Wellington, Wellington, New Zealand)
Programming on touch-screen devices is notoriously difficult, with conventional programming affordances typically unavailable or unhelpful. Here we present a novel touch-screen programming environment for a style of functional programming that more closely matches typical touch-screen needs, where all editing operations are driven by concrete data values and selected by swipe and tap gestures. The environment provides live editing and supports exploratory programming, with direct display of all calculation values and earlier phases of development always available to edit in-place.

Publisher's Version
Impact on the Quality of Interpersonal Relationships by Proximity using the Ventriloquism Effect in a Virtual Environment
Azusa Yamazaki, Naoto Wakatsuki, Koichi Mizutani, Yukihiko Okada, and Keiichi Zempo
(University of Tsukuba, Tsukuba, Japan)
This paper discusses more effective interaction between a user immersed in a virtual environment (VE) and a salesperson avatar. Utilizing the superiority of visual information over auditory information, we implemented an interpersonal situation in which only a sound image invaded the user's personal space (PS) using a virtual salesperson avatar. We investigated the impressions that 16 participants had of the avatar. The experimental results showed that when the sound image invaded the PS, the correlation coefficient between the distance from the user to the sound image, normalized by the interpersonal distance of each participant, and the rapport for service quality was -0.24 (p<0.05). This indicates that, in the range of the ventriloquism effect, the avatar's sound image intruding into the PS and approaching the user leads to an improvement in rapport. The application of the technique proposed in this paper, in which the positions of the visual image and the sound image diverge, is expected to improve the value co-creation of the interpersonal service experience in VEs such as virtual stores, which have been increasing in recent years.

Publisher's Version Video
Video Analysis of Hand Gestures for Distinguishing Patients with Carpal Tunnel Syndrome
Ryota Matsui, Takuya Ibara, Kazuya Tsukamoto, Takafumi Koyama, Koji Fujita, and Yuta Sugiura
(Keio University, Yokohama, Japan; Tokyo Medical and Dental University, Tokyo, Japan)
Carpal tunnel syndrome (CTS) is a common condition characterized by hand dysfunction due to median nerve compression. Orthopedic surgeons often detect signs of the symptoms to screen for CTS; however, it is difficult to distinguish other diseases with symptoms similar to those of CTS. We previously introduced a method of evaluating fine hand movements to screen for cervical myelopathy (CM). The present work applies this method to screening for CTS, using videos of specific hand gestures to measure their quickness. Machine learning models evaluate the gestures to estimate the probability that a patient has CTS. We cross-validated the models to evaluate our method's effectiveness in screening for CTS. The results showed that the sensitivity and specificity were 90.0% and 85.3%, respectively. Furthermore, we found that our method can also be used to distinguish CTS from CM and may enable earlier detection and treatment of similar neurological diseases.

Publisher's Version
Spatialphonic360: Accuracy of the Arbitrary Sound Image Presentation using Surrounding Parametric Speakers
Noko Kuratomo, Hiroki Uchida, Tadashi Ebihara, Naoto Wakatsuki, Koichi Mizutani, and Keiichi Zempo
(University of Tsukuba, Tsukuba, Japan)
Digital content coexisting with the real world is increasing, and richer acoustic content is also being sought. In this study, we propose "Spatialphonic360," which enables users to perceive stereophonic sound in all surroundings, at arbitrary positions and in arbitrary postures. Through participant experiments, we compared the system's accuracy with that of conventional systems. The results showed that Spatialphonic360 could localize any sound image regardless of the user's posture. This research contributes to improving the flexibility of stereophonic sound technology.

Publisher's Version Video

Doctoral Symposium

Perceived Affordances in Programmable Matter
Khyati Priya
(IIT Bombay, Mumbai, India)
The advantages of additive manufacturing technology, combined with recent research in materials science, have led to the development of programmable matter: objects that can change their form, their function, or both in a pre-decided manner based on user input or environmental stimuli. Programmable matter is allowing us to develop a new paradigm of interactive and smart products. While these products are functional, they may or may not incorporate cues in their form that convey information about how to interact with the product, a property termed perceived affordance. Research on the perceived affordances of programmable matter is lacking. We wish to bridge that gap by understanding what perceived affordance means in the context of programmable matter and how to design products that create better perceived affordances.

Publisher's Version
Piano Learning and Improvisation through Adaptive Visualisation and Digital Augmentation
Jordan Aiko Deja
(University of Primorska, Koper, Slovenia; De La Salle University, Manila, Philippines)
The task of learning the piano has been a centuries-old challenge for novices, experts, and technologists. Several innovations have been introduced to support proper posture, movement, and motivation, while sight-reading and improvisation remain the least-explored areas. In this PhD, we address this gap by redesigning piano augmentation as an interactive and adaptive space. Specifically, we will explore how to support learners with adaptive visualisations through a two-pronged approach: (1) by designing adaptive visualisations based on the proficiency of the learner to support regular piano playing and (2) by assisting them with expert annotations projected on the piano to encourage improvisation. To this end, we will build a model to understand the complexities of learners' spatiotemporal data and use these to support learning. We will then evaluate our approach through user studies enabling practice and improvisation. Our work contributes to how adaptive visualisations can advance musical instrument learning and support multi-target selection tasks in immersive spaces.

Publisher's Version Info
XR for Improving Cardiac Electrophysiology Training
Nisal Manisha Udawatta Kankanamge Don
(Victoria University of Wellington, Wellington, New Zealand; University of Kelaniya, Kelaniya, Sri Lanka)
Arrhythmia refers to abnormalities of heart rhythm, and the catheter ablation procedure provides the best therapeutic outcomes for curing this life-threatening pathology. Electrophysiologists perform the catheter ablation procedure, a minimally invasive procedure involving catheter navigation into the heart's chambers through peripheral blood vessels. The electrophysiology study identifies the regions of the heart that cause the arrhythmia, which are then ablated, restoring a healthy heart rhythm. Electrophysiologists must possess a comprehensive understanding of cardiac electrophysiology and precise instrument handling due to the sensitivity of the procedure. In the conventional approach, electroanatomical mapping systems and fluoroscopic visualizations are utilized to assist the procedure; however, their limitations reduce the procedure's effectiveness. In order to improve the procedural outcome, two main scenarios have been identified: intraoperative guidance and procedural training. This study aims to examine how extended reality technologies (e.g., AR/VR) can be used to improve the cardiac catheter ablation procedure.

Publisher's Version
Social VR for Socially Isolated Adolescents with Significant Illnesses
Udapola Balage Hansi Shashiprabha Udapola
(Victoria University of Wellington, Wellington, New Zealand; University of Kelaniya, Kelaniya, Sri Lanka)
Adolescents with significant illnesses face various psychosocial and mental wellbeing challenges during hospitalisation. Social isolation from family and peers is identified as a significant concern for this group. Several digital interventions have been proposed to connect these young people with others, such as video conferencing, social media, social robots, and online games. Research so far has found these to be beneficial for adolescents' wellbeing. Social VR is a relatively novel social interaction mechanism that allows users to interact socially within an immersive 3D virtual environment with an embodiment experience. Game-play is generally identified as a motivational factor in user engagement. Therefore, applying gamification to the social VR space could encourage and motivate socially isolated adolescents to engage socially. The main goal of this research project is to enhance the social engagement of socially isolated adolescents by fostering positive interactions with their peers. We expect to discover whether engaging with the intervention decreases problems associated with social isolation.

Publisher's Version


Space Ocean Library: Interactive Narrative Experience in VR
Becky Lake and Krzysztof Pietroszek
(American University, Washington, USA)
“Space Ocean Library” is an interactive narrative VR experience that transports you to a study turned mystical purgatory. In this room, you explore the connectivity of humans through the objects we keep around to remember our lives. “Space Ocean Library” features actors recorded through volumetric capture and objects rendered using photogrammetry.

Publisher's Version


Design and Prototype Conversational Agents for Research Data Collection
Jing Wei, Young-Ho Kim, Samantha W. T. Chan, and Tilman Dingler
(University of Melbourne, Melbourne, Australia; NAVER AI Lab, Seongnam, South Korea; Massachusetts Institute of Technology, Cambridge, USA)
Conversational agents have gained increasing interest from researchers as a tool for collecting data and administering interventions. They provide a natural user interface through conversation and hence have the potential to reach a wide population in their homes and on the go. Several developer tools and commercial as well as open-source frameworks allow for the deployment of both text-based chatbots and voice assistants. In this 90-minute tutorial, participants will learn how to choose an appropriate platform, how to design and deploy their conversational agents, and how to transform traditional surveys through conversational agents.

Publisher's Version
Making Sustainable, Tangible Objects with Myco-materials
Phillip Gough, Anastasia Globa, Ali Hadigheh, and Anusha Withana
(University of Sydney, Sydney, Australia)
There is growing interest in using living materials as sustainable alternatives to conventional materials in product design. Particularly, the ability of some species of mushroom to grow lightweight, rigid materials that can be moulded into complex 3D forms is of interest to a range of interactive applications including data physicalisation, aesthetic experiences of a space, and wearable computing. This tutorial aims to provide a hands-on experience with living, mushroom-based "myco-materials" that can be grown into a range of complex shapes, and introduce a cheap, low-technology workflow for the design and fabrication of 3D designs using living, sustainable materials.

Publisher's Version


Indigenous CHI Workshop
Kevin Shedlock, Marta Vos, Petera Hudson, Jamey Hepi, Betty Kim, Zane Rawson, and Marino Doyle
(Victoria University of Wellington, Wellington, New Zealand; Whitireia New Zealand, Porirua, New Zealand; Massey University, Palmerston North, New Zealand)
This CHI workshop looked to connect indigenous peoples to surfaces and spaces using a media channel of experience to visit historical narrative, heritage, and/or interactive and immersive media applications. The workshop was especially interested in contributions that displayed cultural resilience inside a digital environment or aligned with a common interest in challenging the status quo. We welcomed submissions from individuals or groups including (but not limited to) interactive design, prototyping, methodology, human-computer interaction, conceptual works that progress indigenous ideas, and creative works within a technical lens.

Publisher's Version
Rethinking Smart Objects: The International Workshop on Interacting with Smart Objects in Interactive Spaces
Martin Schmitz, Sebastian Günther, Karola Marky, Florian Müller, Andrii Matviienko, Alexandra Voit, Roberts Marky, Max Mühlhäuser, and Thomas Kosch
(Saarland University, Saarbrücken, Germany; TU Darmstadt, Darmstadt, Germany; Leibniz University Hannover, Hannover, Germany; University of Glasgow, Glasgow, UK; LMU Munich, Munich, Germany; adesso, Dortmund, Germany; BlackArrow Financial Solutions, Glasgow, UK; Utrecht University, Utrecht, Netherlands)
The increasing proliferation of smart objects in everyday life has changed how we interact with computers. Instead of computational capabilities and interaction being concentrated in one device, interactive features are now naturally integrated into everyday objects. Although this has led to many practical applications, the possibilities for explicit or implicit interaction with such objects are still limited in interactive spaces. We still often rely on smartphones as interactive hubs for controlling smart objects, hence not fulfilling the vision of truly smart objects. The workshop "Rethinking Smart Objects" invites practitioners and researchers from both academia and industry to discuss novel interaction paradigms and the integration and societal implications of using smart objects in interactive spaces. This workshop will produce an action plan with leading questions, aiming to move the research field forward.

Publisher's Version
Immersive Analytics Spaces and Surfaces
Marcos Serrano, Kadek Ananta Satriadi, Yalong Yang, Barrett Ens, Arnaud Prouzeau, and Stefanie Zollmann
(University of Toulouse, Toulouse, France; University of South Australia, Adelaide, Australia; Virginia Tech, Blacksburg, USA; Monash University, Melbourne, Australia; Inria, Bordeaux, France; University of Otago, Dunedin, New Zealand)
Immersive Analytics has now fully emerged as a research topic in the Visualisation and Human-Computer Interaction research communities. While evidence of its benefits has been accumulating, we have still attained only a basic understanding of the extent to which immersive environments can support human sensemaking. This workshop aims to define a roadmap for new directions in leveraging the benefits of spatial interaction to support sensemaking. In particular, the main goal of this workshop is to understand the benefits and applications of immersive interactive spaces and surfaces (e.g., body, walls, smartphones, or other interactive surfaces such as tabletops) for enhancing human sensemaking.

Publisher's Version
Game Design Prototyping Workshop: Brainstorming and Designing Collaborative and Creative Game Prototypes with Immersive Surfaces
Erik Champion and Simon McCallum
(University of South Australia, Adelaide, Australia; Victoria University of Wellington, Wellington, New Zealand)
This short paper provides a brief theoretical exposition of game prototyping, a discussion of available game prototyping software and typical challenges in designing games for educational purposes, and practical examples of game design workshops. In these workshops, the participants developed their own open-ended game ideas and related immersive environments in groups of three or four over half a day or longer.

Publisher's Version
