HRI 2026 – Author Index |
Contents -
Abstracts -
Authors
|
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
| Åsberg, Robin |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Abad-Moya, Cristina |
Cristina Abad-Moya, Irene González Fernández, Alexis Gutiérrez-Fernández, Francisco J. Rodríguez Lera, and Camino Fernández-Llamas (University of León, León, Spain; Rey Juan Carlos University, Madrid, Spain) In everyday environments, robots must be able to detect when people intend to initiate interaction and to communicate their engagement state in an interpretable manner. Although engagement has been widely studied in human–robot interaction, many existing approaches rely on controlled settings or limited perceptual modalities, leaving open questions about how non-expert users naturally attempt to initiate interaction and how engagement states should be signalled during early interaction. An online pre-study questionnaire with 64 participants was conducted to capture user expectations regarding interaction initiation and engagement feedback. The results indicated a preference for speech- and gaze-based strategies, as well as expectations of clear signals such as robot orientation, verbal acknowledgement, and visual feedback. These insights informed the design of a multimodal engagement system integrating auditory and visual cues and providing incremental feedback to distinguish between attention and confirmed readiness. The system was evaluated in a semi-naturalistic study with 15 participants in a domestic environment. The results show that users were generally able to attract the robot’s attention without prior instruction, while providing minimal information about the robot’s perceptual capabilities led to more consistent interpretation of its engagement responses. The findings provide empirical insight into interaction initiation strategies and highlight the importance of transparent engagement signalling in human–robot interaction. |
|
| Abbink, David |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Abbo, Giulio Antonio |
Giulio Antonio Abbo, Senne Lenaerts, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) In this work, we explore how multimodal large language models can support real-time context- and value-aware decision-making. To do so, we combine the GPT-4o language model with a TurtleBot 4 platform simulating a smart vacuum cleaning robot in a home. The model evaluates the environment through vision input and determines whether it is appropriate to initiate cleaning. The system highlights the ability of these models to reason about domestic activities, social norms, and user preferences and take nuanced decisions aligned with the values of the people involved, such as cleanliness, comfort, and safety. We demonstrate the system in a realistic home environment, showing its ability to infer context and values from limited visual input. Our results highlight the promise of multimodal large language models in enhancing robotic autonomy and situational awareness, while also underscoring challenges related to consistency, bias, and real-time performance. Giulio Antonio Abbo, Ruben Janssens, Seppe Van de Vreken, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Enabling natural robot communication through dynamic, context-aware facial expressions remains a key challenge in human-robot interaction. The field lacks a system that can generate facial expressions in real time and can be easily adapted to different contexts. Early work in this area considered inherently limited rule-based systems or deep learning-based models, requiring large datasets. Recent systems using large language models (LLMs) could not yet generate context-appropriate facial expressions in real time. This paper introduces Expressive Furhat, an open-source algorithm and Python library that leverages LLMs to generate real-time, adaptive facial gestures for the Furhat robot. Our modular approach separates gesture rendering, new gesture generation, and gaze aversion, ensuring flexibility and seamless integration with the Furhat API. User studies demonstrate significant improvements in user perception over a baseline system, with participants praising the system's emotional responsiveness and naturalness. |
|
| Abdul Rahman, Khaled |
Khaled Abdul Rahman, Benjamin Jungblut, and Youjin Chung (Georgia Institute of Technology, Atlanta, USA) After COVID-19, many assumed in-store shopping would decline, yet research shows that most consumers still make final purchasing decisions inside retail spaces. Retail advertising remains influential because it engages customers emotionally. However, most in-store advertisements, digital or physical, are static and lack multi-sensory stimulation. This paper addresses that gap by focusing on aroma products, which aim to convey emotional experience and memory to customers. We propose our design, "Aroma-bota," an interactive robotic installation that uses movement and multisensory cues to enhance the aroma retail experience. We evaluated Aroma-Bota through user testing in a simulated retail environment to understand how people interpreted its motion-based emotional cues. Results show that emotionally legible gestures---especially offering and "happy" motions---significantly enhanced user engagement and clarity of intent. This project contributes a novel design exemplar of sensory-driven, emotion-expressive retail robotics for the HRI community. |
|
| Abrams, Anna M. H. |
Anna M. H. Abrams, Inga Luisa Nießen, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) As robots increasingly appear in social settings, it is unclear whether groups that include robots are perceived as coherent social entities. This study examined whether groups including robots are judged as less entitative (“groupy”) than all-human groups. In a vignette-based online experiment (N = 160), participants rated eleven group scenarios (e.g., co-workers or musicians) on eight entitativity dimensions (e.g., similarity or interaction), with group composition manipulated between subjects (all-human, human–robot, text-only). Results showed strong effects of group scenario but minimal effects of group composition: human–robot groups were generally perceived as equally entitative as all-human groups. Only similarity differed, with human–robot groups rated less similar in select scenarios, indicating the importance of similarity in outer appearance in the perception of a group's coherence. Overall, the presence of robots did not reduce perceived group entitativity, suggesting that group type matters more than group composition. |
|
| Abrams, Anna Maria Helene |
Sarah Gosten, Anna Maria Helene Abrams, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) Sexism is a constant presence in women’s lives, requiring ongoing decisions about if and how to respond. Previous research underscores the importance of allies in confrontations of sexism. We explore how women perceive a social robot that intervenes in sexist encounters. Female participants (n = 60) engaged in a game scenario where a sexist comment was made by a male confederate prompting the robot to intervene in one of three ways: 1. avoidant, 2. argumentative, or 3. morally judgmental. Results showed that exposure to sexist remarks lead to significantly increased negative emotions. Participants rated the perpetrator significantly worse on trust and perceived closeness than the human bystander and the robot who were both on par. The type of intervention had no mitigating effect in the ratings. |
|
| Agrawal, Subham |
Subham Agrawal and Maren Bennewitz (University of Bonn, Bonn, Germany) As robots increasingly operate in public spaces, their ability to navigate in ways that feel natural and comfortable to humans is essential for social acceptance and effective interaction. Therefore, our research investigates how robots can adapt their navigation behavior to human social norms while maintaining efficient task execution. In particular, we propose integrating objective metrics, such as trajectory deviations caused by robot motion, together with subjective measures of the robot’s behavior into the navigation process. Towards this goal, we are conducting a user study to assess how the viewpoint (egocentric vs. allocentric) affects the perceived social acceptability of robot behavior. The insights from this study will inform the next steps in integrating subjective comfort metrics into robot trajectory optimization. Consequently, this work contributes to the development of robots capable of navigating shared spaces efficiently and in a socially acceptable manner. |
|
| Ahmad, Muneeb Imtiaz |
Muneeb Imtiaz Ahmad and Yosuke Fukuchi (Swansea University, Swansea, UK; Tokyo Metropolitan University, Hino-shi, Japan) Current research on measuring human perceptions of fairness in Human-Robot Teams (HRTs) has primarily focused on subjective metrics, such as rating statements either during or at the conclusion of interactions. This suggests a gap in examining the dynamic and evolving nature of fairness perceptions objectively during human-robot collaboration. In this paper, we introduce a novel cognitive model that enables individuals to perceive fairness dynamically throughout an HRT experiment. This model is inspired by the Bayesian Theory of Mind, allowing us to infer perceptions of fairness in real-time. The core idea of the model is that fairness perception stems from a person's ongoing inference about the bias in a robot's value function. We establish an equation that translates this inference into a perceived fairness value, which is based not only on the inferred bias but also on the confidence of that inference. A qualitative comparison of the model's performance with a previous human-robot collaboration study suggests that it can effectively capture key trends in human fairness perception dynamically. These findings highlight the model's potential applicability, and it may be utilized in resource distribution algorithms in HRTs to promote fairer collaboration. Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Ahmar, Khadeja |
Raj Korpan, Khadeja Ahmar, Raitah A. Jinnat, and Jackie Yee (City University of New York, New York, USA) Cities release large volumes of open civic data, but many people lack the time or skills to interpret them. We report an exploratory pilot study examining whether a social robot can narrate stories derived from open civic data to support public understanding, trust, and data literacy. Our pipeline combines civic data analysis, large language model–based narrative generation, and scripted behaviors on the Misty II robot to produce expressive and neutral versions of two stories on noise complaints and COVID-19 trends. We deployed the system at a public event and collected post-interaction surveys from six adult participants. While the small sample size limits generalization, the pilot suggests that participants found the stories relevant and generally understood their main points, though engagement and enjoyment were mixed. Participant feedback highlighted the need for improved vocal prosody, reduced information density, and more interactivity. These findings provide initial feasibility evidence and design insights to inform future iterations of robot civic data storytelling systems. |
|
| Ahuja, Shreyas |
Hari Krishnan Subramaniyan, Jakub Rammel, Jiayi Gu, and Shreyas Ahuja (Delft University of Technology, Delft, Netherlands) This paper explores gesture-enabled Human–Robot Co-Creation (HRC) as a framework, investigating collaborative design between humans and machines through additive manufacturing. The project demonstrates a proof-of-concept workflow in which robots act as precise creators and humans as intuitive collaborators, dynamically adjusting geometry and materials in real time. Gesture control enabled direct engagement with the fabrication process, highlighting the potential for expressive design. |
|
| Ahumada-Newhart, Veronica |
Francisco Hernandez, Veronica Ahumada-Newhart, and Angelika C. Bullinger (Chemnitz University of Technology, Chemnitz, Germany; University of California at Davis, Sacramento, USA) Telepresence robots (TPR) have gained traction in office, healthcare, and educational settings, yet their applicability to industrial environments remains largely unexplored. As part of the PraeRI project, we conducted a multi-criteria assessment of six commercially available TPRs to identify the usability and functionality characteristics most relevant for deployment in industrial environments (i.e., manufacturing, production, assembly). The assessment was carried out using a structured seven-step utility analysis framework developed through an iterative, expert-driven process. The framework combines predefined industrial requirements, practical testing, and expert judgment, then aggregates weighted criteria into a normalized utility score to enable a transparent comparison across systems. Preliminary results from this assessment include insights on user interface design, drivability, reaction time, accessibility, battery performance, weight, wheels, and storage. Findings highlight substantial variation across platforms, with usability and functionality emerging as critical differentiators for industrial suitability. TPRs such as the Double 3 from Double Robotics and Ohmni Pro from Ohmni Labs achieved the highest point scores, mainly due to intuitive driving interfaces and strong performance in mobility and battery-related tasks. These early results form the basis for ongoing research into industrial-grade requirements and user acceptance in industrial environments. |
|
| Akbari, Deep |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Akbarzadeh, Alireza |
Pooria Fazli, Amirhossein Nazari, Navid Jooyandehdel, Iman Kardan, and Alireza Akbarzadeh (Ferdowsi University of Mashhad, Mashhad, Iran) Lower-limb exoskeletons play an essential role in rehabilitation and mobility assistance, where accurate real-time gait phase recognition is critical for achieving safe, synchronized, and intuitive human–robot interaction. Many existing approaches rely on multiple sensors such as IMUs, EMG, and FSRs, which increase system complexity, computational load, cost, and susceptibility to mechanical wear. In this study, we propose a lightweight and robust gait phase detection framework that uses only hip and knee joint encoder data—sensors that are already integrated into most exoskeletons and are less prone to noise and misplacement. The method employs a finite state machine (FSM) to identify gait phases and detect key gait events, including heel strike, in real time. The approach was first evaluated in simulation using the SCONE (Opensim) platform and then experimentally implemented on the NEXA knee-joint exoskeleton with multiple healthy participants. Results show that the proposed method reliably predicts gait phases and heel-strike timing with minimal temporal error, while achieving significantly higher processing frequency compared to sensor-rich configurations. These findings demonstrate that accurate and efficient gait phase recognition can be achieved using only encoder data, offering a practical and low-cost solution for real-world exoskeleton control applications. |
|
| Akinshin, Roman |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Alaql, Abdulrahman Aql |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Al Bukhari, Sabrina |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Aldridge, Audrey L. |
Jennifer S. Kay, Tyler Errico, Audrey L. Aldridge, John James, and Michael Novitzky (Rowan University, Glassboro, USA; USA Military Academy at West Point, West Point, USA) Effective human-robot teaming in highly dynamic environments, such as emergency response and military missions, requires tools that support planning, coordination, and adaptive decision-making without imposing excessive cognitive load. This paper introduces PETAAR, the Planning, Execution, to After-Action Review framework that seamlessly integrates autonomous unmanned vehicles (UxVs) into Android Team Awareness Kit (ATAK), a widely adopted situational awareness platform. PETAAR leverages ATAK's geospatial visualization and human team collaboration while adding features for autonomous behavior management, operator feedback, and real-time interaction with UxVs. Its most novel contribution is enabling digital mission plans, created using standard mission graphics, to be interpreted and executed by unmanned systems, bridging the gap between human planning, robotic action, and shared understanding among all teammates (human and autonomous). Results from this work inform best practices for integrating autonomy into human-robot teams across diverse operational contexts. |
|
| Alemu, Lewi |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Alenyà, Guillem |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. Tamlin Love, Antonio Andriella, and Guillem Alenyà (Institut de Robòtica i Informàtica Industrial, Barcelona, Spain) Explainability is an important tool for human-robot interaction (HRI). By explaining its decisions and beliefs, a robot can promote understandability and thereby foster desiderata such as trust, acceptance and usability. However, HRI domains pose challenges to automatic explanation generation. In such domains, a robot must consider the causal reasons for behaviour embedded in temporal sequences of decisions, all while factoring in noise and uncertainty inherent to these kinds of domains. Additionally, as explainability itself constitutes a human-robot interaction, it is important for robots to be able to properly interpret user questions and effectively communicate explanations in order to improve understanding. In our work, we address these challenges from a causal perspective, developing methods that use causal models in order to automatically generate causal, counterfactual explanations in HRI domains. We also produce some insights into embedding such a system in a human-robot interaction in order to maximise understandability. |
|
| Alfieri, Ilaria |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Al-Hamadi, Ayoub |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Alissandrakis, Aris |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Alitai, Madina |
Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Aljami, Hadeel |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Alkan, Alper Semih |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Allagani, Renad |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Almahmoud, Jumana |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Almutairi, Abdullah |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Alsabban, Yahya |
Renad Allagani, Hadeel Aljami, Abdullah Almutairi, Yahya Alsabban, Abdulrahman Aql Alaql, and Jumana Almahmoud (King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia) This work investigates how people recognize when a robot is “ready” to interact during first encounters in real public cultural events in Saudi Arabia. Prior HRI research often overlooks these first seconds of co-presence, where visitors interpret readiness cues and decide whether to engage. Through multiple in-the-wild deployments using a social robot at national events, this work examines how visitors interpreted the robot’s availability through verbal and non-verbal interaction signals. Analysis reveals four themes shaping the behavioral chain during first encounters: the contextual environment, pre-engagement ambiguity, readiness cues, and culturally grounded expectations of invitation and hospitality. Visitors typically recognized readiness within 10–15 seconds, with gaze cues, orientation shifts, and welcoming phrases prompting engagement, while noise, crowd density, and latency reduced interaction depth. The findings demonstrate that readiness is shaped by technical cues and cultural expectations, highlighting the importance of clear signaling, consistent responsiveness, and culturally aligned invitations. |
|
| Alsos, Ole Andreas |
Vedran Simic, Eleftherios Papachristos, Ole Andreas Alsos, and Taufik Akbar Sitompul (NTNU, Trondheim, Norway; NTNU, Gjøvik, Norway) Inspection and maintenance robotics are rapidly entering industrial operations, yet the transfer of Human-Robot Interaction (HRI) research into commercial practices remains limited. To characterize this gap, we present situated qualitative fieldwork with 41 exhibitors at a major industry-only conference, analyzing HRI discourse and interaction design priorities. Our findings reveal an industry driven by a reliability-first mindset that focuses on familiar, well-established interaction approaches. We identify three challenges for HRI: (1) trust practices that prioritize familiarity over usability, (2) design aspirations for broad accessibility that still require expert operational skill, and (3) multi-operator workflows incompatible with single-user HRI assumptions. We argue that, as hardware platforms mature, closing the academic-industry gap requires HRI to shift from single-user autonomy research toward frameworks supporting collaborative, safety-critical operations. This paper provides an empirical snapshot of industry perceptions of HRI and highlights where academic research could better align with industry practice. |
|
| Altamirano Cabrera, Miguel |
Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. Muhammad Haris Khan, Artyom Myshlyaev, Artem Lykov, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) We propose a new concept, Evolution 6.0, which represents the evolution of robotics driven by Generative AI. When a robot lacks the necessary tools to accomplish a task requested by a human, it autonomously designs the required instruments and learns how to use them to achieve the goal. Evolution 6.0 is an autonomous robotic system powered by Vision-Language Models (VLMs), Vision-Language Action (VLA) models, and Text-to-3D generative models for tool design and task execution. The system comprises two key modules: the Tool Generation Module, which fabricates task-specific tools from visual and textual data, and the Action Generation Module, which converts natural language instructions into robotic actions. It integrates QwenVLM for environmental understanding, OpenVLA for task execution, and Llama-Mesh for 3D tool generation. Evaluation results demonstrate a 90% success rate for tool generation with a 10-second inference time and action generation achieving 83.5% in physical and visual generalization, 70% in motion generalization, and 37% in semantic generalization. Future improvements will focus on bimanual manipulation, expanded task capabilities, and enhanced environmental interpretation to improve real-world adaptability. Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. Yara Mahmoud, Yasheerah Yaqoot, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Humanoid robots must adapt their contact behavior to diverse objects and tasks, yet most controllers rely on fixed, hand-tuned impedance gains and gripper settings. This paper introduces HumanoidVLM, a vision–language-driven retrieval framework that enables the Unitree G1 humanoid to select task-appropriate cartesian impedance parameters and gripper configurations directly from an egocentric RGB image. The system couples a vision–language model for semantic task inference with a FAISS-based Retrieval-Augmented Generation (RAG) module that retrieves experimentally validated stiffness–damping pairs and object-specific grasp angles from two custom databases and executes them through a task-space impedance controller for compliant manipulation. We evaluate HumanoidVLM on 14 visual scenarios and achieve a retrieval accuracy of 93 %. Real-world experiments show stable interaction dynamics, with z-axis tracking errors typically within 1 cm to 3.5 cm and virtual forces consistent with task-dependent impedance settings. These results demonstrate the feasibility of linking semantic perception with retrieval-based control as an interpretable path toward adaptive humanoid manipulation. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios. |
|
| Alves-Oliveira, Patricia |
Sofia Thunberg, Mafalda Gamboa, Meagan B. Loerakker, Patricia Alves-Oliveira, and Hannah R.M. Pelikan (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; TU Wien, Vienna, Austria; University of Michigan at Ann Arbor, Ann Arbor, USA; Linköping University, Linköping, Sweden) In the Human-Robot Interaction community, Wizard of Oz (WoZ) is a commonly employed method where researchers aim to study user perceptions of robot technologies regardless of technical limitations. Despite the continued usage of WoZ, questions concerning ethical tensions and effects on the wizard remain. For instance, how do wizards experience interacting through technology, given the different roles and characters to enact, and the different environments to situate themselves in. In addition, the wizard's experiences and affects on results, continues to be under-explored. The goal of this workshop is to surface ethical, practical, methodological, personal, and philosophical tensions in the WoZ method. Though a collaborative session, we seek to develop a deeper understanding of what it means to be a wizard through eliciting first-person experiences of researchers. As a result, we hope to formulate guidelines for future wizards. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Amadio, Fabio |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| An, Annika |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Ananto, Rahatul Amin |
Rahatul Amin Ananto, Seol Han, Rachel Ruddy, and AJung Moon (McGill University, Montreal, Canada) A new generation of robots are being developed to enter our homes in a matter of months. But has the industry appropriately accounted for the complexities of the social environment that we call home? We conducted an exploratory design workshop to examine what secondary users—those who are not expected to be owners but nonetheless daily users—deem to be socially appropriate behavior of a domestic robot. A total of 90 students from Mexico participated in the study. By analyzing they define and reason about appropriateness of robot behaviors in the home, we show why deployment of domestic robots require much more thoughtful considerations than implementation of simplified social rules; judgments of what is appropriate depend on context, roles, relationships, and individual boundaries, and can differ between primary and secondary users. We call on Human-Robot Interaction (HRI) practitioners to treat social appropriateness as a fluid, gradient factor at design time rather than a binary concept (appropriate/inappropriate). |
|
| André, Elisabeth |
Stina Klein, Birgit Prodinger, Elisabeth André, Lars Mikelsons, and Nils Mandischer (University of Augsburg, Augsburg, Germany) Robots are becoming more prominent in assisting persons with disabilities (PwD). Whilst there is broad consensus that robots can assist in mitigating physical impairments, the extent to which they can facilitate social inclusion remains equivocal. In fact, the exposed status of assisted workers could likewise lead to reduced or increased perceived stigma by other workers. We present a vignette study on the perceived cognitive and behavioral stigma toward PwD in the workplace. We designed four experimental conditions depicting a coworker with an impairment in work scenarios: overburdened work, suitable work, and robot-assisted work only for the coworker, and an offer of robot-assisted work for everyone. Our results show that cognitive stigma is significantly reduced when the work task is adapted to the person's abilities or augmented by an assistive robot. In addition, offering robot-assisted work for everyone, in the sense of universal design, further reduces perceived cognitive stigma. Thus, we conclude that assistive robots reduce perceived cognitive stigma, thereby supporting the use of collaborative robots in work scenarios involving PwDs. |
|
| Andriella, Antonio |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. Tamlin Love, Antonio Andriella, and Guillem Alenyà (Institut de Robòtica i Informàtica Industrial, Barcelona, Spain) Explainability is an important tool for human-robot interaction (HRI). By explaining its decisions and beliefs, a robot can promote understandability and thereby foster desiderata such as trust, acceptance and usability. However, HRI domains pose challenges to automatic explanation generation. In such domains, a robot must consider the causal reasons for behaviour embedded in temporal sequences of decisions, all while factoring in noise and uncertainty inherent to these kinds of domains. Additionally, as explainability itself constitutes a human-robot interaction, it is important for robots to be able to properly interpret user questions and effectively communicate explanations in order to improve understanding. In our work, we address these challenges from a causal perspective, developing methods that use causal models in order to automatically generate causal, counterfactual explanations in HRI domains. We also produce some insights into embedding such a system in a human-robot interaction in order to maximise understandability. |
|
| Antony, Victor Nikhil |
Victor Nikhil Antony, Kai-Chieh Liang, and Chien-Ming Huang (Johns Hopkins University, Baltimore, USA) We demonstrate Lantern, a minimalist, haptic robotic object platform designed to be low-cost, holdable, and easily customized for diverse human–robot interaction scenarios. In this demo, we showcase three instantiations of Lantern: (1) the base Lantern platform, highlighting its core motion and haptic behavioral profiles; (2) an ADHD body-doubling study buddy variant, which shows how Lantern can be adapted to scaffold focused work; and (3) Dofu, an upgraded Lantern variant to anchor daily mindfulness practice, with additional sensing, improved compute, and a battery-powered, dockable form factor for untethered, in-the-wild use. Visitors will be able to physically interact with each Lantern variant, observe contrasting embodiments, and behaviors; Moreover, visualizations (panels and video) will showcase the build process and additional extension possibilities. Victor Nikhil Antony and Chien-Ming Huang (Johns Hopkins University, Baltimore, USA) Plants offer a paradoxical model for interaction: they are ambient, low-demand presences that nonetheless shape atmosphere, routines, and relationships through temporal rhythms and subtle expressions. In contrast, most human-robot interaction (HRI) has been grounded in anthropomorphic and zoomorphic paradigms, producing overt, high-demand forms of engagement. Using a Research through Design (RtD) methodology, we explore plants as metaphoric inspiration for HRI; we conducted iterative cycles of ideation, prototyping, and reflection to investigate what design primitives emerge from plant metaphors and morphologies, and how these primitives can be combined into expressive robotic forms. We present a suite of speculative, open-source prototypes that help probe plant-inspired presence, temporality, form, and gestures. We deepened our learnings from design and prototyping through prototype-centered workshops that explored people’s perceptions and imaginaries of plant-inspired robots. This work contributes: (1) Set of plant-inspired robotic artifacts; (2) Designerly insights on how people perceive plant-inspired robots; and (3) Design consideration to inform how to use plant metaphors to reshape HRI. |
|
| Anzai, Emi |
Yuki Kimura, Emi Anzai, Naoki Saiwaki, and Masahiro Shiomi (ATR, Kyoto, Japan; Nara Women’s University, Nara, Japan) Digital technologies make it easy for people to be misled by messages and social robots, raising the question of how to help users become less easily deceived. We examined whether people become more cautious and feel that they are contributing more to others if, after being deceived by a robot, they use the same robot to protect another person from deception. In our experiment, adults were first deceived by a communication robot in a consent-form scenario, then briefly operated it to guide a dummy participant away from deception, and finally completed a similar online consent-form check without the robot. The results showed that most were deceived again in the online task, and their perceived contribution to others did not significantly increase. These findings suggest that a single brief chance to protect others is insufficient to reliably increase caution, but the paradigm offers a basis for studying how robots might support resistance to deception. |
|
| Aoki, Jun |
Jun Aoki and Shunki Itadera (University of Tsukuba, Tsukuba, Japan; AIST, Kotoku, Japan) The application of teleoperation to control robotic arms has been widely explored, and user-friendly teleoperation systems have been studied for facilitating higher performance and lower operational burden. To investigate the dominant factors in a practical teleoperation system, this study focused on the characteristics of an interface used to operate a robotic arm. The usability of an interface depends on the characteristics of the manipulation tasks to be completed; however, systematic comparisons of different interfaces across different tasks remain limited. In this study, we compared two widely used teleoperation interfaces, a 3D mouse and a VR controller, for two simple yet broadly applicable tasks with a six-degree-of-freedom (6DoF) robotic arm: repetitively pushing buttons and rotating knobs. Participants (N = 23) controlled a robotic arm with 6DoF to push buttons and rotate knobs as many times as possible in 3-minute trials. Each trial was followed by a NASA-TLX workload rating. The results showed a clear connection between the interface and task performance: the VR controller yielded higher performance for pushing buttons, whereas the 3D mouse performed better and was less demanding for knob rotation. These findings highlight the importance of considering dominant motion primitives of the task when designing practical teleoperation interfaces. |
|
| Arad, Liat |
Liat Arad (Technion - Israel Institute of Technology, Haifa, Israel) Human-Robot coordinated walk can be planned but also emerge spontaneously. This study compared the two coordination types under low and high cognitive load. Participants (n=67) completed a simple walk and a walk with an additional search task, with a quadruped robot. One group of participants planned and maintained a coordinated walk with the robot, whereas another group simply walked with the robot with an emergent coordination. The planned coordination group had less variability in the H-R distance associated with greater dissimilarity in walking speed. The added search task was associated with increased speed dissimilarity for both coordination types, but with a respective decrease in distance variability. The planned coordination group reported a higher perceived cognitive load even with the simple walk, and the spontaneous coordination group reported an increase in perceived cognitive load only with the additional search task. The findings imply that coordinated H-R walk is associated with kinematic tradeoffs and cognitive costs. |
|
| Araujo Sa Teles, Davy |
Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. |
|
| Armstrong, Triniti |
Triniti Armstrong, Courtney J. Chavez, Rhian C. Preston, and Naomi T. Fitter (Oregon State University, Corvallis, USA) Prolonged computer use has become the norm for a wide variety of fields. The sedentary practices that often accompany this computer use can lead to a number of health challenges, from cardiovascular and musculoskeletal issues to ocular health problems. Past work by our research group took preliminary steps to address these issues by evaluating a socially assistive robot (SAR)-based break-taking system with no online learning abilities. Based on their initial findings, which showed the robot to effectively encourage break-taking behaviors during computer use and to be more engaging and enjoyable to use compared to a non-robotic alternative, we present methods for data collection in this current paper. Specifically, we aimed 1) to enhance the past SAR system by adding online Q-learning capabilities and 2) to evaluate the updated system's policy generation and how well the final policies aligned with our expectations from prior work. Our results show evidence that the system is successfully generating unique policies for each participant, although the limited match between the expected and resulting policies surprised us. Our work can help SAR researchers understand how to implement Q-learning when using sparse data. |
|
| Arockiasamy, John Pravin |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Aronson, Reuben M. |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Arquilla, Katya |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Asenbaum, Hans |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. |
|
| Ashok, Ashita |
Tanu Majumder, Nihal Shaikh, Ashita Ashok, and Karsten Berns (University of Kaiserslautern-Landau, Kaiserslautern, Germany) Due to the limited integration of social robots into everyday life and increased media exposure, many people first encounter robot embodiment online rather than in person. Such virtual encounters can shape expectations influenced by fiction and imagination, which may be challenged during later physical human-robot interaction. This pilot study examines how robot embodiment order, meeting a robot virtually first versus physically first, affects expectation change, social presence, and emotional response. N=22 participants experienced the same scripted monologue from the humanoid robot Ameca twice, once as a physically present robot and once as its video-based virtual simulation. Participants who encountered the robot virtually first showed significant expectation drops and increased anxiety after the physical interaction, whereas physical-first participants showed stable expectations and less emotional disruption. Social presence was highest when the physical robot was the initial encounter and decreased when experienced after the virtual form. These preliminary findings suggest that imagination-driven expectations formed online can amplify discomfort when confronted with physical reality, underscoring embodiment order as a key factor for future HRI design and deployment. |
|
| Ashraf, Raiyan |
Raiyan Ashraf, Yanni Liu, Sruthi Ganji, and Jong Hoon Kim (Kent State University, Kent, USA) Social robots frequently struggle to sustain meaningful engagement, often limited to surface-level interactions that lack conversational depth. To address this, we present a multimodal conversational architecture that integrates Motivational Interviewing (MI) strategies with situated perception. Key to this approach is a novel dual-stream perception engine: situated cue detection anchors dialogue in the user's immediate physical environment to establish common ground, while tri-modal affect inference (facial, vocal, linguistic) dynamically adjusts the conversation strategy based on real-time user emotion for facilitating empathy. Our system employs a hybrid Large Language Model (LLM) architecture, combining a lightweight model for low-latency fluency and a reasoning model for high-level planning, to guide users through progressive stages of dialogue from rapport-building to deep reflection. A pilot study with the Pepper robot demonstrates that this physically grounded, MI-guided approach successfully facilitates emotional reminiscence and enhances perceived empathy and engagement. These findings suggest that the proposed framework is a promising foundation for next-generation empathic agents, with significant potential applications in cognitive stimulation for aging populations and therapeutic social companionship. |
|
| Assadian, Zubin |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Aşut, Serdar |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Attanasio, Margherita |
Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| Axelsson, Minja |
Minja Axelsson and Henry Shevlin (University of Cambridge, Cambridge, UK) In this preliminary work, we offer an initial disambiguation of the theoretical concepts anthropomorphism and anthropomimesis in Human-Robot Interaction (HRI) and social robotics. We define anthropomorphism as users perceiving human-like qualities in robots, and anthropomimesis as robot developers designing human-like features into robots. This contribution aims to provide a clarification and exploration of these concepts for future HRI scholarship, particularly regarding the party responsible for human-like qualities—robot perceiver for anthropomorphism, and robot designer for anthropomimesis. We provide this contribution so that researchers can build on these disambiguated theoretical concepts for future robot design and evaluation. |
|
| Aylett, Matthew Peter |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Ayub, Ali |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Babel, Franziska |
Hannah Pelikan, Karin Stendahl, Franziska Babel, Ola Johansson, and Erik Frisk (Linköping University, Linköping, Sweden) Mobile robots must behave intelligibly to be acceptable in public spaces. Designing social navigation algorithms for delivery robots requires different areas of expertise. The paper reports on an interdisciplinary collaboration between two ethnomethodological conversation analysts, a human factors psychologist, and two motion planning engineers. Based on video recordings of a robot moving among people, the team developed and implemented different sound and movement designs, which were iteratively tested in real-world deployments. This work contributes insights on how interdisciplinary collaboration can be facilitated in the area of social robot navigation and an iterative process for designing robot sound and movement grounded in real-world observations. |
|
| Badr, Dushma |
Heather Pon-Barry, Jasna Budhathoki, and Dushma Badr (Mount Holyoke College, South Hadley, USA; Columbia University, New York, USA) For social robots used in educational applications, such as learning companion robots, maintaining student engagement is critical. There is a need for such robots to estimate engagement in real-time. This study examines dialogue data between a Nao robot and middle school students interacting conversationally while solving math problems. We collect annotations of perceived engagement, seeking to characterize human perception of engagement with the robot. Because robots that perform real-time engagement tracking do not have consistent access to clear video and audio data, we analyze perception of engagement across varying modalities. Specifically, we compare three settings: full access to audiovisual data, access to only the video data, and access to only the audio data. Our results indicate that without access to audio data, perceptions of level of engagement are lower for low-engagement segments, and without access to video data perceptions are higher for high-engagement segments. |
|
| Bagchi, Shelly |
Megan Zimmerman, Jeremy Marvel, Shelly Bagchi, and Snehesh Shrestha (National Institute of Standards and Technology, Gaithersburg, USA; University of Maryland College Park, College Park, USA) A purpose-built testbed for human-robot interaction (HRI) metrology is introduced and discussed. This testbed integrates multiple sensor systems and precision manufacturing to produce high-quality HRI datasets of human volunteers working with robots to complete collaborative tasks in a shared environment. Sensors include audio, video, motion capture, robot information, and user entries, and may also incorporate task-specific object tracking. Data collected will be replicable in identical testbeds, and will enable more robust findings in future HRI studies. Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Baillie, Lynne |
Shenando Stals, Favour Jacob, and Lynne Baillie (Heriot-Watt University, Edinburgh, UK) Demos for social robots often lack accessibility for individuals with sight loss (SL). To address this need, this preliminary study investigates the key factors for individuals with SL that affect the accessibility of the standard introductory demos provided by the robot's manufacturer for three social robots commonly used in robotic assisted living environments, Temi, TIAGo, and Pepper. Results show how individuals with SL perceive the various social attributes of these social robots, and reveal potential differences in workload between various standard demo formats. Initial findings highlight commonalities and potentially differing needs regarding key factors affecting accessibility of the demos, such as tactile exploration, communication of information, and multimodal interaction, between children and young people with SL and adults with SL. |
|
| Bairy, Akhila |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Banitalebi Dehkordi, Maryam |
Abolfazl Zaraki, Hamed Rahimi Nohooji, Maryam Banitalebi Dehkordi, and Holger Voos (University of Hertfordshire, Hatfield, UK; University of Luxembourg, Luxembourg, Luxembourg) This paper reframes shared autonomy as an interpretable interaction space centered on the human and bounded by safety. Building on this perspective, we introduce a Human-Centred Tri-Region Shared Autonomy Framework that organises interaction into three regions: Human-Led, Robot-Supported, and Safety-Intervention. The framework formalises how autonomy shifts as interaction conditions evolve, while an Interaction State Interpreter maps multimodal user and task observations to region-dependent behaviours. This structure enables autonomy transitions that remain explicit and behaviourally grounded across diverse human-robot interaction contexts, including physical collaboration, social engagement, and cognitive assistance. A physical interaction scenario illustrates how the proposed formulation can be realised through adaptive impedance and constraint-aware feedback, enabling smooth transitions between collaborative support and protective intervention. By structuring autonomy around human authority, supportive assistance, and safety enforcement, the framework provides a clear basis for adaptive human-robot interaction. |
|
| Baraka, Kim |
Yijun Zhou, Muhan Hou, and Kim Baraka (Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Imitation learning relies on high-quality demonstrations, and teleoperation is a primary way to collect them, making teleoperation interface choice crucial for the data. Prior work mainly focused on static tasks, i.e., discrete, segmented motions, yet demonstrations also include dynamic tasks requiring reactive control. As dynamic tasks impose fundamentally different interface demands, insights from static-task evaluations cannot generalize. To address this gap, we conduct a within-subjects study comparing a VR controller and a SpaceMouse across two static and two dynamic tasks (N=25). We assess success rate, task duration, cumulative success, alongside NASA-TLX, SUS, and open-ended feedback. Results show statistically significant advantages for VR: higher success rates, particularly on dynamic tasks, shorter successful execution times across tasks, and earlier successes across attempts, with significantly lower workload and higher usability. As existing VR teleoperation systems are rarely open-source or suited for dynamic tasks, we release our VR interface to fill this gap. Nataliia Kaminskaia, Rob Saunders, Kim Baraka, and Somaya Ben Allouch (Leiden University, Leiden, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Amsterdam, Amsterdam, Netherlands; Amsterdam University of Applied Sciences, Amsterdam, Netherlands) A single tap from a robot can set off a cascade of interpretation. This study examines how people perceive affect, intent, and agency when a non-humanoid robot conveys meaning through contact-based nudging. Using a cube-shaped robot programmed with twenty animator-designed affect–intent variants, participants completed two tasks: a situated interaction in which the robot attempted to pass their arm, and an isolated gesture-recognition task. In the situated encounter, participants rapidly attributed motives such as attention-seeking, social contact, or boundary testing. Recognition of the robot’s obstacle-passing goal was partial but participants consistently described the robot’s movement qualities as shifting from cautious to more assertive, interpreting these changes as emotional and intentional. In the isolated task the expressive movement was far less legible: only neutral gestures were reliably recognised, with frequent confusions between comfort and attention. These findings support the position that nudging gains meaning in context: while a minimal robot can elicit rich social inference when its nudges unfold dynamically in interaction, affect and intent become opaque when the same motions are removed from their relational frame. Zoja Gobec, Joyce den Hertog, and Kim Baraka (Sioux Technologies, Amsterdam, Netherlands; AKOB, den Haag, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) With robots increasingly being used for artistic expression in interactive performances, this research investigates the production of expressive swarm behaviour that could form the basis for an interactive performance between a dancer and a swarm of drones. We contribute a mapping of Laban Effort parameters - a common movement analysis framework - onto a particle swarm and integrate it into an interactive prototype. The system accepts human motion as input and generates responsive swarm behaviour with the Boids algorithm as the foundational behaviour model. In a user study evaluating the mapping (N=17), we show that the Space and Time parameters were recognised significantly better than Weight and Flow, suggesting that parameters connected to embodied cues such as intention and emotion are more challenging to computationally implement, and need further refinement. The novel mapping, along with the interactive system and user study insights, offers an initial step towards practical applications in choreography development, interactive performance, or art installations, as well as designing expressive frameworks with human-guided swarm control. Roel van de Laar and Kim Baraka (Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Teleoperating high degree-of-freedom (DoF) robots such as humanoids in interactive settings remains challenging due to high operator workload, and limited situational awareness. The majority of existing interfaces often relies on graphical dashboards, limiting natural, embodied control. This study explores an alternative paradigm: "Robot-as-Interface," where one humanoid robot (the puppet) is physically manipulated to control another (the performer) through direct joint-to-joint mapping. Following a co-design session with expert users, we developed an improved interface featuring joint locking, head orientation control, blockage detection, and a pausing toggle. A between-subjects user study (N=26) compared this expert-informed system against a baseline. Results show significantly improved system usability (SUS) and a reduction in perceived workload. Observations further revealed the importance of operator pacing, spatial positioning, and clear system feedback. Overall, results indicate that expert-informed enhancements can improve usability and operator experience in puppet–performer teleoperation, provided that hardware limits and user training are carefully addressed. |
|
| Barakova, Emilia I. |
Jing Li, Felix Schijve, Jun Hu, and Emilia I. Barakova (Eindhoven University of Technology, Eindhoven, Netherlands) Parental involvement is crucial for the development of children's emotion regulation (ER) skills, yet navigating these complex emotional interactions remains challenging for many families. While Large Language Models (LLMs) offer unprecedented conversational flexibility, integrating them into embodied social robots to provide context-aware, multimodal support remains an open challenge. In this paper, we present the design and preliminary evaluation of an LLM-powered robotic system aimed at facilitating ER within parent-child dyads. Utilizing a supervised autonomy approach, our system bridges the gap between language-based reasoning and embodied robotic behavior, allowing the MiRo-E robot to engage in natural dialogue while performing empathetic physical actions. We detail the system's technical architecture and interaction design, which guides dyads through evidence-based ER strategies. Preliminary user tests with six parent-child dyads suggest positive user engagement and initial trust, with participants reporting that the robot showed potential as a supportive mediator. These findings offer early design insights into developing autonomous, LLM-driven social robots for family-centered mental health interventions. Febe Anna Kooij-Meijer, Emilia I. Barakova, Rosa Elfering, Wang Long Li, and Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands; Tinybots, Rotterdam, Netherlands) The growing population of individuals with mild cognitive impairment and dementia places increasing demands on home-care systems, while staff shortages and high caregiver workloads underscore the need for assistive technologies. However, research on implementing these technologies in home care practice remains limited. This study examines professional caregivers’ digital onboarding of Tessa, a social robot that supports through verbal reminders. A conceptual digital onboarding probe was evaluated with novice, experienced, and expert users. Findings indicate that the onboarding process improves usability and efficiency by providing intuitive guidance and structured workflows. Additionally, LLMs can translate caregiver-provided goals into actionable robot scripts, though oversight remains essential for quality assurance. The probe and LLM support more effective onboarding and enhance caregiver’s user experience. |
|
| Barrera Valls, Pol |
Pol Barrera Valls, Patrick Vogelius, Tobias Florian von Arenstorff, Matouš Jelínek, and Oskar Palinko (University of Southern Denmark, Odense, Denmark; University of Southern Denmark, Sønderborg, Denmark) The development of humanoid robots has experienced a sudden acceleration during the last years, due to the large advancements made in actuation technology, generative AI and computer vision. The design of humanoid robots makes them useful in scenarios where many different tasks must be achieved, and humans are present. Furthermore, their resemblance to humans opens new ways of communication when compared to traditional robots. However, humanoid robots may find themselves in a situation where human assistance is required, e.g. due to limitations in their sensing and movement capabilities. As such, different help-seeking strategies and their effectiveness need to be explored. This article compares the effect of inducing empathy and guilt in humans as means to request help after a mistake made by a robot. An in the wild experiment conducted between subjects was performed in the University of Southern Denmark (SDU) with a total of 123 participants across 3 scenarios of help-seeking strategies, described as: distressed, sarcastic, and neutral. The results showed a statistical difference between the strategies, proving that using the concepts of empathy and guilt elicitation with robots has the potential to improve human-humanoid collaboration. |
|
| Barrett, Samuel |
Raquel Thiessen, Minoo Dabiri Golchin, Samuel Barrett, Jacquie Ripat, and James Everett Young (University of Manitoba, Winnipeg, Canada) Social robots are increasingly marketed as play companions for children, but research has not established how these robots support play in real-world scenarios or whether their interactivity supports quality play. We are conducting an eight-week home study with children with and without disabilities to learn about the play experiences with an interactive robot versus a doll ver-sion of the same robot (a VStone Sota). We implemented interactive robot behaviors based on LUDI's categorization of play, incorporating social and cognitive dimensions of play to support children’s play in various developmental play stages. We measure play quality using standardized instruments, and along with qualitative assessments of children's engagement and interest through child-family interviews. This study investigates whether interacting with robotic toys supports children in developing play skills compared to non-robotic dolls. Our findings will establish baseline knowledge about child-robot play and can guide evidence-based design of interactive play companions for children. |
|
| Barrué, Cristian |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. |
|
| Baselizadeh, Adel |
Burhan Mohammad Sarfraz, Diana Saplacan Lindblom, Adel Baselizadeh, and Jim Torresen (University of Oslo, Oslo, Norway; Kristianstad University, Kristianstad, Sweden) As populations age and life expectancy rises, healthcare systems face growing staff shortages. Service robots have been proposed to support healthcare personnel, but their use introduces significant privacy challenges. This paper investigates whether a service robot can protect individuals’ privacy through face obfuscation while performing autonomous tasks in unconstrained healthcare environments. Our approach relies on a face recognition system trained to identify doctors and patients. Scenario-based experiments simulating a doctor’s office show that the system achieves partial success: non-target individuals are reliably obfuscated, and patients can be recognized when frontal views are available. However, real-world conditions such as pose variation, occlusion, and lighting changes reduce recognition reliability, limiting privacy protection. These results highlight both the potential and the current limitations of face obfuscation for privacy-preserving service robots, providing guidance for near-term deployment strategies in constrained interaction scenarios. |
|
| Bassett, Jack |
Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. |
|
| Batcir, Shani |
Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Batool, Faryal |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. |
|
| Bays, Janice |
Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. |
|
| Becker, Joffrey |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| Bejarano, Alexandra |
Alexandra Bejarano, Hong Tran, and Qin Zhu (Virginia Tech, Blacksburg, USA) Compassion plays a critical role in creating inclusive, supportive learning environments that promote students' well-being and engagement. As social robots become more common in elementary classrooms to support academic and socio-emotional learning, they introduce new possibilities for modeling and nurturing compassion. However, they also raise important ethical questions about how children understand and experience care and compassion in human-robot interactions. This paper presents a conceptual framework for examining the ethics of compassionate robots in elementary education. It identifies four key ethical dimensions (Connection, Power, Access, Information) that shape how compassionate behaviors expressed or elicited by robots may influence children's perceptions of care, agency, and moral responsibility. Ultimately, the framework offers a structured approach for evaluating whether, when, and how robots should express compassion in ways that are developmentally appropriate, culturally responsive, and aligned with students' lived experiences, supporting the responsible integration of compassionate robots in education. |
|
| Bejarano Sepulveda, Edison Jair |
Edison Jair Bejarano Sepulveda, Valerio Bo, Alberto Sanfeliu, and Anais Garrell (CSIC-UPC, Barcelona, Spain) Robots working in spaces shared by people need more than geometric mapping: they must recognize people, understand social context, and decide whether to proceed or negotiate passage. Traditional navigation pipelines lack this semantic understanding, often failing when progress depends on human cooperation. We introduce a Perception–Awareness–Decision (PAD) framework that systematically combines Simultaneous Localization and Mapping (SLAM) with Vision–Language Models (VLMs), speech recognition, and Large Language Models (LLMs), rather than simply stacking modules. PAD tries to emulate human perceptual organization by fusing multi-modal cues into a unified situational-awareness map capturing geometry, social context, and linguistic intent. This representation enables the decision layer to choose adaptively between safe replanning and context-appropriate verbal interaction. In a corridor-blocking task, PAD improves task success, increases safety margins, and produces behaviour that participants judged as more socially appropriate than a geometric baseline. These findings offer preliminary evidence that combining VLM-derived semantics with structured situational awareness can support more socially aware robot navigation. |
|
| Belpaeme, Tony |
Giulio Antonio Abbo, Senne Lenaerts, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) In this work, we explore how multimodal large language models can support real-time context- and value-aware decision-making. To do so, we combine the GPT-4o language model with a TurtleBot 4 platform simulating a smart vacuum cleaning robot in a home. The model evaluates the environment through vision input and determines whether it is appropriate to initiate cleaning. The system highlights the ability of these models to reason about domestic activities, social norms, and user preferences and take nuanced decisions aligned with the values of the people involved, such as cleanliness, comfort, and safety. We demonstrate the system in a realistic home environment, showing its ability to infer context and values from limited visual input. Our results highlight the promise of multimodal large language models in enhancing robotic autonomy and situational awareness, while also underscoring challenges related to consistency, bias, and real-time performance. Giulio Antonio Abbo, Ruben Janssens, Seppe Van de Vreken, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Enabling natural robot communication through dynamic, context-aware facial expressions remains a key challenge in human-robot interaction. The field lacks a system that can generate facial expressions in real time and can be easily adapted to different contexts. Early work in this area considered inherently limited rule-based systems or deep learning-based models, requiring large datasets. Recent systems using large language models (LLMs) could not yet generate context-appropriate facial expressions in real time. This paper introduces Expressive Furhat, an open-source algorithm and Python library that leverages LLMs to generate real-time, adaptive facial gestures for the Furhat robot. Our modular approach separates gesture rendering, new gesture generation, and gaze aversion, ensuring flexibility and seamless integration with the Furhat API. User studies demonstrate significant improvements in user perception over a baseline system, with participants praising the system's emotional responsiveness and naturalness. Eva Verhelst and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Recent advances in generative AI and social robotics have opened new possibilities for robot-assisted language learning, yet integrating these technologies in pedagogically sound ways remains challenging. This paper matches theories of language learning to the design of autonomous robot tutors. Usage-based language learning, learning in context, Self-Determination Theory and Dual Coding Theory lend themselves to being operationalised for Robot-Assisted Language Learning. We present a proof-of-concept shared story-building system, in which a learner co-creates a story with a robot tutor. The system leverages large language models for dynamic content generation, automatic speech recognition for learner input, and image generation to provide multimodal scaffolding. By embedding vocabulary, adapting to learner input, and avoiding explicit corrections, the system aligns with usage-based and interactionist theories of language acquisition. We discuss the technological enablers and barriers - such as large language model adaptability and automatic speech recognition limitations - and propose directions for future work. This work contributes to the growing field of AI-powered social robots in education, demonstrating how theory-driven design can enhance engagement and learning outcomes. Maria Jose Pinto Bernal and Tony Belpaeme (Ghent University, Ghent, Belgium) Large Vision–Language Models are increasingly used for visually grounded social dialogue, yet most systems assume that vision should be active continuously, adding computational load and increasing the risk of unnecessary or hallucinated descriptions. We present a multimodal architecture that treats vision as a selective, context-dependent resource. A lightweight vision-gating module triggers visual grounding only when a user utterance requires it, while a complementary ambient monitoring component detects gradual scene changes at a low frame rate. Both pathways contribute cues only when relevant, enabling the robot to use visual information meaningfully without overuse. A preliminary evaluation with 10 participants (≈ 95 minutes) shows that the gating mechanism identified vision-relevant turns with 93.4% accuracy, and that grounded descriptions aligned with the scene in 90.7% of cases. |
|
| Benadon, Guy |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Ben Allouch, Somaya |
Nataliia Kaminskaia, Rob Saunders, Kim Baraka, and Somaya Ben Allouch (Leiden University, Leiden, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Amsterdam, Amsterdam, Netherlands; Amsterdam University of Applied Sciences, Amsterdam, Netherlands) A single tap from a robot can set off a cascade of interpretation. This study examines how people perceive affect, intent, and agency when a non-humanoid robot conveys meaning through contact-based nudging. Using a cube-shaped robot programmed with twenty animator-designed affect–intent variants, participants completed two tasks: a situated interaction in which the robot attempted to pass their arm, and an isolated gesture-recognition task. In the situated encounter, participants rapidly attributed motives such as attention-seeking, social contact, or boundary testing. Recognition of the robot’s obstacle-passing goal was partial but participants consistently described the robot’s movement qualities as shifting from cautious to more assertive, interpreting these changes as emotional and intentional. In the isolated task the expressive movement was far less legible: only neutral gestures were reliably recognised, with frequent confusions between comfort and attention. These findings support the position that nudging gains meaning in context: while a minimal robot can elicit rich social inference when its nudges unfold dynamically in interaction, affect and intent become opaque when the same motions are removed from their relational frame. |
|
| Bennewitz, Maren |
Subham Agrawal and Maren Bennewitz (University of Bonn, Bonn, Germany) As robots increasingly operate in public spaces, their ability to navigate in ways that feel natural and comfortable to humans is essential for social acceptance and effective interaction. Therefore, our research investigates how robots can adapt their navigation behavior to human social norms while maintaining efficient task execution. In particular, we propose integrating objective metrics, such as trajectory deviations caused by robot motion, together with subjective measures of the robot’s behavior into the navigation process. Towards this goal, we are conducting a user study to assess how the viewpoint (egocentric vs. allocentric) affects the perceived social acceptability of robot behavior. The insights from this study will inform the next steps in integrating subjective comfort metrics into robot trajectory optimization. Consequently, this work contributes to the development of robots capable of navigating shared spaces efficiently and in a socially acceptable manner. |
|
| Bentley, Abigail |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Bergsten, Klara |
Sam Thellman, Klara Bergsten, Edoardo Datteri, and Tom Ziemke (Linköping University, Linköping, Sweden; University of Milano-Bicocca, Milan, Italy) People routinely attribute mental states such as beliefs, desires, and intentions to explain and predict others' behavior. Prior work shows that such attributions extend to robots, yet it remains unclear what people assume about the reality of the states they attribute to them. Building on recent conceptual work on folk-ontological stances, we report a pilot study measuring realist, anti-realist, and agnostic stances toward robot minds. Using a questionnaire (N = 66), we assessed stances toward today's robots and robots in principle, and examined stance rigidity through a reflection-and-reassessment design. Results show stronger anti-realist tendencies for today's robots than for robots in principle. Stances were largely rigid across reflection. Notably, participants did not hold a uniformly non-realist view but expressed a diversity of folk-ontological stances, including substantial proportions of agnostic and realist responses. This heterogeneity highlights the need for measurement tools that move beyond binary measures and capture nuance in folk-ontological reasoning. Future work will expand stance options to include finer-grained realist and anti-realist variants and recruit cross-cultural samples to assess variation across populations. |
|
| Bermejo, Víctor |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. |
|
| Bernardino, Alexandre |
Ricardo Rodrigues, Plinio Moreno, Filipa Correia, and Alexandre Bernardino (University of Lisbon, Lisbon, Portugal; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal) Social robots are a new and promising tool for reducing children's anxiety during medical procedures. Our study aims to design and test a social robot to alleviate anxiety and improve emotional state before dental treatment for children. The design of the experimental condition included asocial robot (Vizzy) with different comedic styles such as jokes, riddles, games, and dance, to make the waiting room experience more engaging and entertaining for children. A user study (N=22) was conducted, in which children were assigned to one of two groups: interaction with the humanoid Vizzy robot, or waiting in the dentist's waiting room without any interaction with the robot (Control). The results indicate a significant impact of the experimental condition on reducing anxiety levels and improving emotional responses, demonstrating that social robots can be considered for future research to reduce children's anxiety before distressing medical procedures. |
|
| Berns, Karsten |
Tanu Majumder, Nihal Shaikh, Ashita Ashok, and Karsten Berns (University of Kaiserslautern-Landau, Kaiserslautern, Germany) Due to the limited integration of social robots into everyday life and increased media exposure, many people first encounter robot embodiment online rather than in person. Such virtual encounters can shape expectations influenced by fiction and imagination, which may be challenged during later physical human-robot interaction. This pilot study examines how robot embodiment order, meeting a robot virtually first versus physically first, affects expectation change, social presence, and emotional response. N=22 participants experienced the same scripted monologue from the humanoid robot Ameca twice, once as a physically present robot and once as its video-based virtual simulation. Participants who encountered the robot virtually first showed significant expectation drops and increased anxiety after the physical interaction, whereas physical-first participants showed stable expectations and less emotional disruption. Social presence was highest when the physical robot was the initial encounter and decreased when experienced after the virtual form. These preliminary findings suggest that imagination-driven expectations formed online can amplify discomfort when confronted with physical reality, underscoring embodiment order as a key factor for future HRI design and deployment. |
|
| Bharath, Vishnu |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Bhattacharjee, Tapomayukh |
Ziang Liu, Katherine Dimitropoulou, Christy Cheung, and Tapomayukh Bhattacharjee (Cornell University, Ithaca, USA; Columbia University, New York City, USA) We present CareEval, a benchmark for evaluating the physical caregiving decision-making abilities of Large Language Models. Developed with a licensed occupational therapist expert in caregiving and validated by eight clinical stakeholders, it contains 100 realistic scenarios spanning all six basic Activities of Daily Living. Instead of testing general reasoning, CareEval assesses whether model responses account for key physical caregiving factors, such as user function, agency, intent, communication, and safety, and align with expert practice. Across several state-of-the-art LLMs, the best model only scores 53.1%, revealing substantial gaps in current models’ ability to reason about physical caregiving. We release 80 of the CareEval scenarios and all prompts through our website: https://emprise.cs.cornell.edu/care-eval/. |
|
| Bied, Manuel |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Biggs, Adam |
Adam Biggs, Emily Burdett, Aly Magassouba, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; Nottingham University, Nottingham, UK) This research explores requirements for robot guides to support Blind and Visually Impaired People (BVIP) in outdoor environments, focussing on improving safety, independence, and accessibility. In-depth interviews with BVIP and carers provide lived experiences, and a qualitative observational study highlight practical challenges in outdoor navigation. These reveal often overlooked environmental factors in the design of robot guides. We examine key specifications of existing quadruped robotic platforms to understand their ability to navigate and guide outdoors. Although several commercially available robots demonstrate functional capabilities, our findings identify a range of complex contextual and user-specific requirements that shape what reliable guidance must accommodate across diverse terrains and contexts. The study highlights the need for more inclusive approaches, considering issues such as information overload, environmental noise, and variability in needs. The interview data emphasise the importance of co-design and participatory methods, informing contextual, organisational, and technological requirements for future robot guide development. |
|
| Bikowski, Yara |
Paul Vogt, Yara Bikowski, and Matias Valdenegro-Toro (University of Groningen, Groningen, Netherlands) Social robots are increasingly being designed to support elderly care, but conversations between elderly people and robots often involve misunderstandings and confusion. This study explores the development of AI models to recognise confusion from facial expressions of elderly people during human-robot conversations. We collected a video dataset from the robot’s point of view in which elderly people interacted with a social robot through a language game. We trained two models to detect confusion from the facial expressions: (1) a LSTM network using Facial Action Units extracted from the data, and (2) a transfer learning model using ResEmoteNet on facial image data. Both models performed slightly above chance –the LSTM achieved 57% accuracy, while the ResEmoteNet model reached 53% accuracy on balanced data– indicating poor generalisation to new faces. Concluding, these findings suggest that confusion of elderly people cannot be reliably detected using facial expressions alone. We argue that this may be due to age-related changes in facial expression patterns, but also that it may be due to a reduced display of facial responses to robots by the elderly. |
|
| Birmingham, Christopher |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Birmingham, Elina |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| Blair, Andrew |
Andrew Blair, Mary Ellen Foster, Peggy Gregory, and Koen Hindriks (University of Glasgow, Glasgow, UK; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) The field of human-robot interaction frequently proclaims the inevitable coming of the social robot era, with claims that social robots are increasingly being deployed in the real world. However, in practice, social robots remain scarce in everyday environments. In addition, HRI research rarely explores robots through an organisational lens. This results in a lack of evidence-based understanding of the organisational conditions that are key to the presence--or absence--of social robots in the real world, which are often more decisive than technical sophistication. In this paper, we motivate why organisational context is crucial to the investigation of real-world social robots and provide examples of how this shapes robot acceptance. We detail the methodology of our ongoing empirical research with client organisations and robot developers. Through this critical organisational lens, we learn where social robots are, what they are doing, how they are designed, and why organisations are deploying them. |
|
| Bleeker, Maaike |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Block, Alexis E. |
Mayumi Mohan, Ju-Hung Chen, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Case Western Reserve University, Cleveland, USA) Social-physical human-robot interaction (spHRI) has grown rapidly across robotics, human-computer interaction, human-robot interaction, and haptics. Yet, fragmented terminology and inconsistent methodologies make systematic synthesis difficult. To support scalable review practices, we evaluated the extent to which small language models (SLMs; < 1.5B parameters) can assist with title and abstract screening for a large spHRI systematic review. While no SLMs matched human reviewers' performance, the models operated locally and screened papers orders of magnitude faster. The combined SLM ensemble identified 39 papers reviewers missed, representing 10.29% of the final relevant dataset. These results demonstrate that SLMs can augment, rather than replace, expert reviewers and make large-scale literature reviews accessible and sustainable. Andrew Chen, Ju-Hung Chen, Phurinat Pinyomit, and Alexis E. Block (Case Western Reserve University, Cleveland, USA) RoboTales is a low-cost robotic storytelling system that animates narratives using expressive sock puppetry. Implemented autonomously on a Baxter robot as a test case, RoboTales synchronizes narration, gestures, and mouth movements to perform character-driven stories. In a pilot study, puppet-based storytelling outperformed a gesture-only mode, producing higher HRIES ratings and improved story recall, suggesting that embodied puppetry enhances engagement and narrative comprehension. Designed to be modular and platform-agnostic, RoboTales can be adapted to other manipulators and offers a screen-free alternative to passive media, supporting future deployment in child-centered learning environments. Mayumi Mohan, Joana Brito, Anouk Neerincx, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Instituto Superior Técnico, Lisbon, Portugal; HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Case Western Reserve University, Cleveland, USA) The sixth edition of the Workshop YOUR Study Design (WYSD) aims to empower the next generation of HRI researchers by strengthening their experimental design skills through personalized mentoring and interactive activities. Recognizing that many early-career researchers in Human-Robot Interaction (HRI) come from technical disciplines with limited training in experimental design, WYSD provides a supportive environment where mentees receive structured, detailed feedback on their proposed studies from experienced HRI researchers. For HRI 2026, WYSD will expand to a full-day format to allow more in-depth mentoring and enhanced peer-to-peer engagement. In addition to individualized mentoring sessions, the workshop will feature mentee lightning talks, a free-form study design Q&A, mini discussions on key methodological topics, and collaborative activities such as "Create and Present a Custom Study" and "Networking Bingo". These sessions promote rigorous study design practices, cross-disciplinary exchange, and community building. By equipping researchers with the tools to conduct robust and socially responsible user studies, WYSD directly contributes to the development of safer, more acceptable, accessible, and impactful robotic systems for society. |
|
| Bo, Valerio |
Edison Jair Bejarano Sepulveda, Valerio Bo, Alberto Sanfeliu, and Anais Garrell (CSIC-UPC, Barcelona, Spain) Robots working in spaces shared by people need more than geometric mapping: they must recognize people, understand social context, and decide whether to proceed or negotiate passage. Traditional navigation pipelines lack this semantic understanding, often failing when progress depends on human cooperation. We introduce a Perception–Awareness–Decision (PAD) framework that systematically combines Simultaneous Localization and Mapping (SLAM) with Vision–Language Models (VLMs), speech recognition, and Large Language Models (LLMs), rather than simply stacking modules. PAD tries to emulate human perceptual organization by fusing multi-modal cues into a unified situational-awareness map capturing geometry, social context, and linguistic intent. This representation enables the decision layer to choose adaptively between safe replanning and context-appropriate verbal interaction. In a corridor-blocking task, PAD improves task success, increases safety margins, and produces behaviour that participants judged as more socially appropriate than a geometric baseline. These findings offer preliminary evidence that combining VLM-derived semantics with structured situational awareness can support more socially aware robot navigation. Valerio Bo, Anais Garrell, and Alberto Sanfeliu (CSIC-UPC, Barcelona, Spain) Robots that operate alongside people increasingly depend on intention-recognition models to anticipate human motion and adapt their behavior in socially appropriate ways. However, these models vary widely in both latency and accuracy, leading to different trade-offs between reacting quickly and correctly. Although these technical differences are well documented, it remains unclear how they shape the user’s experience of interacting with a robot. To examine how these translate into human perception, we conduct a preliminary user study comparing three intention-recognition models: a fast but low-accuracy model (Geo), an intermediate model (LSTM), and a slower but highly accurate model (Fusion). Participants interacted with a mobile robot controlled by each model and rated their experience across key dimensions of social interaction. Overall, the findings suggest that socially fluent interaction does not emerge from speed or accuracy alone, but from the balance of timely, reliable, and predictable robot behavior. |
|
| Boadi-Agyemang, Abena |
Abena Boadi-Agyemang (Carnegie Mellon University, Pittsburgh, USA) As robots become commonplace, people with disabilities (PwDs) continue to be vulnerable to harm by non-inclusive systems. Some HRI researchers working with PwDs incorporate participatory design (PD) approaches in their work; however, this often centers on the usability of robots and relegates the role of PwDs to `expert users', instead of designers in their own right. Moreover, robotics design practice can be intimidating for non-roboticist PwDs. I present three case studies of how I have incorporated the lived experiences of PwDs in co-designing supportive robots (e.g., assistive and service robots). I provide lessons for HRI researchers who seek renewed commitments to PD as a transformative framework. |
|
| Bock, Isabella |
Isabella Bock and Elaine Schaertl Short (Tufts University, Medford, USA) To enable smoother human-robot interactions, a robot’s policy must be predictable to users. A recent method, Imaginary Out-of- Distribution Actions (IODA), preserves user expectation by mapping Out-of-Distribution (OOD) states to In-Distribution (ID) states in shared-control settings. However, one limitation of this method is that it uses Euclidean distance which may fail to capture semantic similarity, especially in high-dimensional state spaces. In this report, we analyze limitations of using Euclidean distance for the state mapping and propose a Trajectory-Continuation (TC) mapping designed to preserve predictability by selecting ID states based on local trends. |
|
| Bogatikov, Kirill |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Bolinder, Timmy |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Borg, Erik |
Tobias Carlsson, Erik Borg, Hannah Kuehn, and Joseph La Delfa (KTH Royal Institute of Technology, Stockholm, Sweden; Husqvarna Group, Stockholm, Sweden; Bitcraze, Malmö, Sweden) As autonomous lawnmowers become more common in shared spaces, aligning their behavior with human expectations and norms is increasingly important. Existing approaches often optimize for fixed objectives, limiting adaptability to diverse contexts. This work explores an alternative by enabling users to guide autonomous behavior development without fixed objectives. A prototype system allowed participants to interact with a simulated environment, using subjective preferences and genetic algorithms to generate lawnmower behaviors across generations. The study emphasized open-ended exploration, analyzing participant interactions and semi-structured interviews through reflexive thematic analysis. Results reveal detailed and reflective accounts of lawnmower behavior. We discuss these results in the context of our design decisions and how they affected the user's journey through a complex solution space. Ultimately, this work demonstrates how interactive genetic algorithms can surface user values and interpretations, potentially serving as both a behavior design tool and novel method to generatively explore social meaning in human-robot interaction. |
|
| Borgstedt, Jacqueline |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. |
|
| Bossoni, Alessandra |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. |
|
| Botzer, Ben H. |
Ben H. Botzer, Goren Gordon, and Michal Gordon-Kiwkowitz (Tel Aviv University, Tel Aviv, Israel; Indiana University at Bloomington, Bloomington, USA; Holon Institute of Technology, Holon, Israel) Education is rapidly evolving with technology, yet teachers often struggle with low self-competence, curriculum integration challenges, and difficulty personalizing digital tools. GenAI "vibe-coding" can lower barriers by enabling natural-language interaction and building trust in AI-EdTech systems. We introduce TutorBotz, a GenAI tool that lets teachers program social robots as teaching assistants. With TutorBotz, teachers design lesson plans that social robots then deliver. In an exploratory study, five teachers and forty-eight primary and middle-school students used TutorBotz. Teachers created two lesson plans each, later taught by a NAO robot. Qualitative findings show that TutorBotz increased teachers’ confidence in using social robots, was easy to use, and fit diverse curricula. We also discuss its personalization benefits, technical concerns, and students’ enjoyment of robot-led lessons. Overall, TutorBotz represents a meaningful step toward empowering teachers to use social robots in the classroom. |
|
| Bouzida, Anya |
Anya Bouzida and Laurel D. Riek (University of California at San Diego, La Jolla, USA) Cognitively assistive robots (CARs) can extend the reach of clinical interventions to the home. People with mild cognitive impairment (PwMCI) often benefit from interventions that teach compensatory cognitive strategies that help them work around cognitive changes. However, few CARs are evaluated longitudinally or tailored to users’ abilities and preferences. We translated in-person cognitive neurorehabilitation for autonomous delivery via CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation). We conducted a longitudinal study, and found PwMCI reported learning and incorporating cognitive strategies taught by CARMEN into their routines. In ongoing and future work, we are developing a new behavior adaptation method to personalize CARMEN's content (e.g., relevant cognitive strategies), and interaction style (e.g., appropriate pacing). This work contributes new methods and insights for longitudinal personalization in HRI, enabling robots to adapt what they teach and how they interact to best support PwMCI. |
|
| Bowen, Judy |
Jessica Turner, Nicholas Vanderschantz, Judy Bowen, Jemma L. König, and Hannah Carino (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) Successful integration of social robots in education relies on the acceptance of robots in learning contexts by students. Using a participatory design workshop, students interacted with a KettyBot and ideated potential roles for robots in the classroom. This was followed by a questionnaire and the Godspeed Questionnaire Series (GQS) to understand student perceptions and attitudes towards social robots in education environments. Learners described potential use cases and our results demonstrate students envision robots as assistants rather than teachers, emphasising the importance of human connection in learning. |
|
| Brandao, Martim |
Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Brandt, Mara |
Mara Brandt, Kira Sophie Loos, Mathis Tibbe, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany) Children often find themselves in challenging situations, such as medical examinations, where they have limited opportunities to make autonomous decisions and experience their own agency. This study explores whether a warm-up interaction with a social robot can strengthen children’s perceived self-efficacy. We hypothesized that a teaching scenario, where the child instructs the robot, would yield stronger self-efficacy gains than a storytelling activity. In a pre-study, 20 children (6 – 12 years) were assigned to two conditions: teaching the humanoid robot Pepper to play ball-in-a-cup or co-creating a story with Pepper. Perceived self-efficacy was assessed with a 9-item questionnaire before and after the interaction, and parents reported child temperament using the German IKT questionnaire (Inventar zur integrativen Erfassung des Kind-Temperaments). Overall, children showed a small, significant increase in self-efficacy from pre- to post-interaction, with a stronger descriptive trend in the teaching condition and minimal change in storytelling. Shyness was not related to baseline self-efficacy, self-efficacy gains, or the relative effectiveness of the two conditions. Apart from one outcome, effects did not reach statistical significance, as expected given the small sample size. The observed trend toward higher self-efficacy in the teaching condition suggests that further studies with larger samples are warranted. Such research could clarify the potential of social robots to provide effective warm-up interactions that help children feel more confident in upcoming tasks, such as medical examinations. Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Bransky, Karla |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Briggs, Gordon |
Gordon Briggs and Christina Wasylyshyn (US Naval Research Laboratory, Washington, USA) As human-robot teaming becomes more common, robots must effectively reject commands when appropriate. While prior work has investigated when and how robots should refuse directives, it focused on effective and socially appropriate justifications. However, justifications alone are insufficient for advancing joint activity. Human teammates still shoulder the burden of formulating an acceptable course of action to continue collaboration, if possible. This paper examines constructive elaborations: communications that extend beyond justification to proactively convey information indicating collaborative alignment (e.g., a suggestion of alternative course of action). We present results from a vignette experiment examining whether constructive elaborations improve perceived trustworthiness of robotic agents engaged in collaborative disobedience. Our findings contribute to understanding how autonomous agents can move beyond mere justified refusal toward pro-active partnership, facilitating more effective human-robot collaboration. |
|
| Brillinger, Markus |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| Brito, Joana |
Mayumi Mohan, Joana Brito, Anouk Neerincx, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Instituto Superior Técnico, Lisbon, Portugal; HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Case Western Reserve University, Cleveland, USA) The sixth edition of the Workshop YOUR Study Design (WYSD) aims to empower the next generation of HRI researchers by strengthening their experimental design skills through personalized mentoring and interactive activities. Recognizing that many early-career researchers in Human-Robot Interaction (HRI) come from technical disciplines with limited training in experimental design, WYSD provides a supportive environment where mentees receive structured, detailed feedback on their proposed studies from experienced HRI researchers. For HRI 2026, WYSD will expand to a full-day format to allow more in-depth mentoring and enhanced peer-to-peer engagement. In addition to individualized mentoring sessions, the workshop will feature mentee lightning talks, a free-form study design Q&A, mini discussions on key methodological topics, and collaborative activities such as "Create and Present a Custom Study" and "Networking Bingo". These sessions promote rigorous study design practices, cross-disciplinary exchange, and community building. By equipping researchers with the tools to conduct robust and socially responsible user studies, WYSD directly contributes to the development of safer, more acceptable, accessible, and impactful robotic systems for society. |
|
| Brønderup Frederiksen, Louise |
Mie Grøftehave Nielsen, Andreas Juul Jespersen, and Louise Brønderup Frederiksen (Aarhus University, Aarhus, Denmark) This paper presents The Beckoning Bowl, a shape-changing, artifact- inspired robot designed to create a sense of welcome for people living alone. The interactive key bowl uses soft robotics to mimic abstract body language, offering a subtle social moment during the routine act of placing keys when arriving home. A section of the bowl lowers as if beckoning and then returns to its original shape with expressions of joy or disappointment depending on the user’s response. By designing interactions that make users feel noticed and invited, The Beckoning Bowl explores how socially aware home robots might help counter loneliness. |
|
| Brown, Barry |
Anna Dobrosovestnova, Barry Brown, Emanuel Gollob, Mafalda Gamboa, and Masoumeh Mansouri (Interdisciplinary Transformation University, Linz, Austria; Stockholm University, Stockholm, Sweden; University of Arts Linz, Linz, Austria; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Birmingham, Birmingham, UK) HRI 2026 takes place amid profound socio-political turbulence marked by rising authoritarianism, the consolidation of technological power, and the expanding use of robotics for warfare. These global conditions create an affective atmosphere that seeps into our field: a mix of attachment to techno-determinist and techno-solutionist narratives, unease with 'business as usual,' and a tentative search for alternatives. As HRI scholars and designers, we recognize how the wider socio-political tensions resonate within our own practices, shaping what we take to be possible, necessary, or inevitable in research and design. In this half-day, in-person workshop, we mobilize three affective orientations - cruel optimism, lucid despair, and precarious hope - as resources for reflection, critique, and experimentation. Through short provocations, discussions, and a speculative group activity, participants will be invited to inhabit these affects to question dominant narratives that sustain HRI, confront systemic challenges, and collectively explore alternative trajectories for research, design, and community building. |
|
| Brunnmayr, Katharina |
Katharina Brunnmayr and Astrid Weiss (TU Wien, Vienna, Austria) Socially assistive robots have shown promise in supporting people living with dementia (PlwD) by reducing stress and promoting engagement. We know that PlwD prefer robots with fur and pet-like embodiment. However, when it comes to other embodiment features of robots, such as the design of the eyes, we still lack knowledge about PlwD's preferences. We conducted a pilot study co-exploring playful materials with PlwD and recruited four participants living in a care home for 10 co-exploration sessions. In this report, we present a side product of the original study: the importance of eye design when designing technologies for PlwD. We found that 1) the eyes are an important focal point for PlwD during interactions, 2) eye movement is interpreted as emotions by PlwD, and 3) the size, shape, and complexity of the eyes are crucial for recognition. |
|
| Bruno, Barbara |
Irina Rudenko, Utku Norman, Lukas Hilgert, Jan Niehues, and Barbara Bruno (KIT, Karlsruhe, Germany) Large Language Models (LLMs) hold significant promise for enhancing Child–Robot Interaction (CRI), offering advanced conversational skills and adaptability to the diverse abilities, requests and needs of young children. Little attention, however, has been paid to evaluating the age and developmental appropriateness of LLMs. This paper brings together experts in psychology, social robotics and LLMs to define metrics for the validation of LLMs for child–robot interaction. |
|
| Bruns, Carson J. |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Buchem, Ilona |
Ilona Buchem, Jessica Kazubski, and Charly Goerke (Berlin University of Applied Sciences, Berlin, Germany) This paper presents the design of NEFFY 2.0, a social robot designed as a haptic slow-paced breathing companion for stress reduction, and reports findings from a mixed-methods user study with 14 refugees from Ukraine. Developed through a user-centered design process, NEFFY 2.0 builds on NEFFY 1.0 and integrates embodiment and multi-sensory interaction to provide low-threshold, accessible guidance of slow-paced breathing for stress relief, which may be particularly valuable for individuals experiencing prolonged periods of anxiety. To evaluate effectiveness, an experimental comparison of a robot-assisted breathing intervention versus an audio-only condition was conducted. Measures included subjective ratings and physiological indicators, such as heart rate (HR), heart rate variability (HRV) using RMSSD parameter, respiratory rate (RR), an galvanic skin response (GSR), alongside qualitative data from interviews exploring user experience and perceived support. Qualitative findings showed that NEFFY 2.0 was perceived as intuitive, calming and supportive. Survey results showed a substantially larger effect in significant reduction of perceived stress in the NEFFY 2.0 condition compared to audio-only. Physiological data reveled mixed results combined with large inter-personal variability. Three patterns of breathing practice with NEFFY 2.0 were identified using k-means clustering. Despite the small sample size, this study makes a novel contribution by providing empirical evidence of stress reduction in a vulnerable population through a direct comparison of robot-assisted and non-robot conditions. The findings position NEFFY 2.0 as a promising low-threshold tool that supports stress relief and contributes to the vision of HRI empowering society. |
|
| Buchmeier, Sean |
Sean Buchmeier, Ian C. Rankin, and Cristina G. Wilson (Oregon State University, Corvallis, USA) We present a week long scientist-robot collaborative field science campaign was conducted in Martian analog White Sands National Park. The workflow for exploring a new area of the dunes was broken into two sections. First, a scouting mission was designed using a robot-assisted design tool and then executed. Second, a supervisory control method is used to allow scientists to perform their own experiments while managing the robot system. These two method enable more data in useful locations to be collected while minimizing burden on the scientist supervising the system. |
|
| Budhathoki, Jasna |
Heather Pon-Barry, Jasna Budhathoki, and Dushma Badr (Mount Holyoke College, South Hadley, USA; Columbia University, New York, USA) For social robots used in educational applications, such as learning companion robots, maintaining student engagement is critical. There is a need for such robots to estimate engagement in real-time. This study examines dialogue data between a Nao robot and middle school students interacting conversationally while solving math problems. We collect annotations of perceived engagement, seeking to characterize human perception of engagement with the robot. Because robots that perform real-time engagement tracking do not have consistent access to clear video and audio data, we analyze perception of engagement across varying modalities. Specifically, we compare three settings: full access to audiovisual data, access to only the video data, and access to only the audio data. Our results indicate that without access to audio data, perceptions of level of engagement are lower for low-engagement segments, and without access to video data perceptions are higher for high-engagement segments. |
|
| Bullinger, Angelika C. |
Francisco Hernandez, Veronica Ahumada-Newhart, and Angelika C. Bullinger (Chemnitz University of Technology, Chemnitz, Germany; University of California at Davis, Sacramento, USA) Telepresence robots (TPR) have gained traction in office, healthcare, and educational settings, yet their applicability to industrial environments remains largely unexplored. As part of the PraeRI project, we conducted a multi-criteria assessment of six commercially available TPRs to identify the usability and functionality characteristics most relevant for deployment in industrial environments (i.e., manufacturing, production, assembly). The assessment was carried out using a structured seven-step utility analysis framework developed through an iterative, expert-driven process. The framework combines predefined industrial requirements, practical testing, and expert judgment, then aggregates weighted criteria into a normalized utility score to enable a transparent comparison across systems. Preliminary results from this assessment include insights on user interface design, drivability, reaction time, accessibility, battery performance, weight, wheels, and storage. Findings highlight substantial variation across platforms, with usability and functionality emerging as critical differentiators for industrial suitability. TPRs such as the Double 3 from Double Robotics and Ohmni Pro from Ohmni Labs achieved the highest point scores, mainly due to intuitive driving interfaces and strong performance in mobility and battery-related tasks. These early results form the basis for ongoing research into industrial-grade requirements and user acceptance in industrial environments. |
|
| Buntsma, Veerle |
Katharina Lisa Kleiser, Veerle Buntsma, and Sebastian Schneider (University of Twente, Enschede, Netherlands) Natural disasters call for time-effective search and rescue (SAR) operations to find and assist survivors. While dogs are used to locate survivors due to their keen sense of smell, recent advances in robotics are also expanding the role of technology in these efforts. This late-breaking report explores what close collaboration between handlers and SAR dogs can teach us about effective human-robot teaming. We conducted four expert interviews with SAR dog handlers in the Netherlands and found that successful teamwork heavily relies on mutual responsiveness and nonverbal communication. We found that significant challenges during SAR missions include high temperatures, fatigue, and harzardous environments. In such situations, robots could provide meaningful support and complement human-do teams. Nevertheless, current robots fall short in meaningfully supporting active search tasks due to missing olfactory capabilities and limited abilities to navigate over rubble and debris. Our findings aim to inform real-world rescue practices as SAR robotics evolves, ensuring that emerging technologies align with rescuers' actual needs and workflows. |
|
| Burdett, Emily |
Adam Biggs, Emily Burdett, Aly Magassouba, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; Nottingham University, Nottingham, UK) This research explores requirements for robot guides to support Blind and Visually Impaired People (BVIP) in outdoor environments, focussing on improving safety, independence, and accessibility. In-depth interviews with BVIP and carers provide lived experiences, and a qualitative observational study highlight practical challenges in outdoor navigation. These reveal often overlooked environmental factors in the design of robot guides. We examine key specifications of existing quadruped robotic platforms to understand their ability to navigate and guide outdoors. Although several commercially available robots demonstrate functional capabilities, our findings identify a range of complex contextual and user-specific requirements that shape what reliable guidance must accommodate across diverse terrains and contexts. The study highlights the need for more inclusive approaches, considering issues such as information overload, environmental noise, and variability in needs. The interview data emphasise the importance of co-design and participatory methods, informing contextual, organisational, and technological requirements for future robot guide development. |
|
| Burkanova, Bermet |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| Burkart, Diana |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Burns, Rachael Bevill |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Busby Grant, Janie |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. Thomas Muller Dardelin, Damith Herath, and Janie Busby Grant (University of Canberra, Canberra, Australia; Waseda University, Tokyo, Japan; University of Canberra, Bruce, Australia) This paper investigates user engagement with socially assistive robots (SARs) in healthcare contexts through an experimental study comparing simulated and physical embodiments. The study examines how users perceive trust, engagement, safety, and usability when interacting with two humanoid robots—Hatsuki, designed for emotional and social support, and AIREC, designed for physical caregiving tasks. Participants interacted with both simulated and real robots, enabling a direct comparison of virtual and physical embodiments under identical conversational conditions. The results suggest that verbal interaction and character design contribute more strongly to perceived engagement than physical embodiment alone, highlighting the importance of communication quality in socially assistive robotics. In the simulated setting, Hatsuki was perceived as more caring and socially engaging than AIREC, indicating that socially expressive design can shape user perceptions even without physical embodiment. Emma Minter, Robert Tankard, Oscar Norman, and Janie Busby Grant (University of Canberra, Canberra, Australia) Extensive research has investigated the human tendency to anthropomorphize artificial agents by attributing human-like traits to these systems. Sociality motivation, the desire for social connection, has been proposed to be a key psychological determinant for anthropomorphism. Sociality motivation can be operationalized in a range of dispositional, developmental, and cultural facets, but it is currently unclear how these factors contribute collectively and independently to predicting an individual’s tendency towards anthropomorphism. This online study (N = 164) assessed the relationship between different facets of sociality motivation and four dimensions of anthropomorphism of a social robot, using videos of a robot completing a game alone and with human and robot partners. Respondents who reported more collectivist cultural views were more likely to attribute higher agency, sociability, and disturbance to the robot. Those who reported higher attachment anxiety scores also attributed greater agency and sociability. Previous research has focused primarily on dispositional indicators of anthropomorphism, however the current study suggests that cultural determinants may be stronger predictors of anthropomorphic tendencies and should be a focus of further research. Aurora An-Lin Hu, Dimity Crisp, Sharni Konrad, Damith Herath, and Janie Busby Grant (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) The mismatch between user expectations and robot performance—the expectation gap—is common in human–robot interaction. Although related research is limited, preliminary evidence suggests that the expectation gap has a considerable impact on user adoption of robots. The present study examined how failing, confirming, and exceeding user expectations and the extent to which robot performance differs from expectations predict users’ adoption intention. A sample of 234 participants completed pre-interaction expectation measures and post-interaction robot performance ratings after completing a drawing activity with a humanoid robot (Pepper). Results showed that considering both the magnitude and direction of the expectation gap (signed gap values) consistently yielded stronger associations and predictive power for adoption intention than considering the magnitude alone (absolute gap values) across four expectation dimensions, with expectation gaps related to Relative Advantage emerging as the strongest predictor. Overall, the findings highlight that failing to meet expectations consistently predicted lower adoption intention compared to both confirming and exceeding expectations, whereas evidence for whether exceeding expectations provides additional benefits beyond confirming them was mixed. Sharni Konrad, Nipuni Wijesinghe, Eileen Roesler, and Janie Busby Grant (University of Canberra, Canberra, Australia; University of Canberra, Bruce, Australia; George Mason University, Fairfax, USA) This large sample study used exposure to a humanoid social robot to investigate the relationship between affinity with technology, social presence and future intention to use the robot. A between-subjects experiment was conducted with 235 participants who were randomly assigned to complete a 3 minute drawing task with an embodied robot exhibiting either high or low social presence. Regression analyses indicated that higher affinity with technology predicted stronger perceptions of social presence. Mediation analyses revealed that social presence partially mediated the relationship between affinity with technology and future intention to use, such that affinity with technology influenced future intention to use both directly and indirectly through social presence. Analysis of the subdimensions of social presence revealed that while co-presence significantly accounted for this effect, shared potential did not. Across models, affinity with technology exerted a direct influence on future intention to use, suggesting that dispositional openness to technology fosters behavioural intentions both directly and indirectly through relational perceptions. These findings highlight the importance of integrating dispositional and relational factors in HRI to support robot adoption. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Cagiltay, Bengisu |
Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Cai, Xiaochi |
Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Calderon, Henry |
Nigel G. Wormser, Zuha Kaleem, Jessie Lee, Dyllan Ryder Hofflich, and Henry Calderon (Cornell University, Ithaca, USA; Cornell University, Brooklyn, USA) Musculoskeletal injuries from manual laundry cart transportation are very common for workers in the hospitality industry. To address this, we designed Elandro, a teleoperated laundry cart that collaboratively helps hotel staff with transportation across and within floors at a hotel. Through iterative user research at Statler Hotel, and wizard-of-oz interaction testing, we revealed design requirements essential for successful human-robot interaction. Elandro contributes to reducing physical strain on workers, maintaining staff autonomy and decision-making, establishing a human-centered approach where technology empowers rather than replaces hospitality workers. |
|
| Caleb-Solly, Praminda |
Adam Biggs, Emily Burdett, Aly Magassouba, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; Nottingham University, Nottingham, UK) This research explores requirements for robot guides to support Blind and Visually Impaired People (BVIP) in outdoor environments, focussing on improving safety, independence, and accessibility. In-depth interviews with BVIP and carers provide lived experiences, and a qualitative observational study highlight practical challenges in outdoor navigation. These reveal often overlooked environmental factors in the design of robot guides. We examine key specifications of existing quadruped robotic platforms to understand their ability to navigate and guide outdoors. Although several commercially available robots demonstrate functional capabilities, our findings identify a range of complex contextual and user-specific requirements that shape what reliable guidance must accommodate across diverse terrains and contexts. The study highlights the need for more inclusive approaches, considering issues such as information overload, environmental noise, and variability in needs. The interview data emphasise the importance of co-design and participatory methods, informing contextual, organisational, and technological requirements for future robot guide development. Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Canal, Gerard |
Lennart Wachowiak, Andrew I. Coles, Oya Celiktutan, and Gerard Canal (King’s College London, London, UK) Robots operating in human environments should be able to answer diverse, explanation-seeking questions about their past behavior. We present a neurosymbolic pipeline that links a task planner with a unified logging interface, which attaches heterogeneous XAI artifacts (e.g., visual heatmaps, navigation feedback) to individual plan steps. Given a natural language question, a large language model selects the most relevant actions and consolidates the associated logs into a multimodal explanation. In an offline evaluation on 180 questions across six plans in two domains, we show that an LLM-based question matcher retrieves relevant plan steps accurately (F1 Score of 0.91), outperforming a lower-compute embedding baseline (0.62) and a rule-based syntax/keyword matcher (0.02). A preliminary user study (N=30) suggests that users prefer the LLM-consolidated explanations over raw logs and planner-only explanations. Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Cangelosi, Angelo |
Ruidong Ma, Wenjie Huang, Zhegong Shangguan, Angelo Cangelosi, and Alessandro Di Nuovo (Sheffield Hallam University, Sheffield, UK; University of Manchester, Manchester, UK) Direct imitation of humans by robots offers a promising direction for remote teleoperation and intuitive task instruction, where a human can perform a task naturally and the robot autonomously interprets and executes it using its own embodiment. Existing methods often rely on close alignment between human and robot scenes. This prevents robots from inferring the intent of the task or executing demonstrated behaviors when the initial states mismatch. Hence, it poses difficulties for non-expert users, who may need domain knowledge to adjust the setup. To address this challenge, we propose a neuro-symbolic framework that unifies visual observations, robot proprioceptive states, and symbolic abstractions within a shared latent space. Human demonstrations are encoded into this representation as predicate states. A symbolic planner can thus generate high-level plans that account for the different robot initial states. A flow matching module then synthesizes continuous joint trajectories consistent with the symbolic plan. We validate our approach on multi-object manipulation tasks. Preliminary results show that the framework can infer human intent and generate feasible symbolic plans and robot motions under mismatched initial states. These findings highlight the potential of neuro-symbolic models for more natural human-robot instruction. and they can enhance the explainability and trustworthiness of robot actions. Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Cao, Shiye |
Shiye Cao (Johns Hopkins University, Baltimore, USA) Socially assistive robots (SARs) have long been explored as a promising alternative type of intervention for children with Autism Spectrum Disorder (ASD). Yet, current robot-guided interventions are limited by the restricted availability of clinically relevant interaction contexts, rigid and predefined communication formats, a lack of personalized content to accommodate the diverse behavioral profiles of ASD, and an incomplete understanding of the long-term impact of such robot interventions. My work focuses on developing a system to support individualized, long-term, in-home, play-based interventions. I enhance the social capabilities of SARs and provide tools that allow stakeholders to author personalized interactions. Through enabling robot interventions to more closely mimic human peer-to-peer interactions and focus on delivering personalized therapeutic interactions, my work can potentially lower the barrier for children to transfer the skills they develop during human-robot peer play to human-human contexts, ultimately improving behavioral outcomes for children with ASD. |
|
| Carino, Hannah |
Jessica Turner, Nicholas Vanderschantz, Judy Bowen, Jemma L. König, and Hannah Carino (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) Successful integration of social robots in education relies on the acceptance of robots in learning contexts by students. Using a participatory design workshop, students interacted with a KettyBot and ideated potential roles for robots in the classroom. This was followed by a questionnaire and the Godspeed Questionnaire Series (GQS) to understand student perceptions and attitudes towards social robots in education environments. Learners described potential use cases and our results demonstrate students envision robots as assistants rather than teachers, emphasising the importance of human connection in learning. |
|
| Carlsson, Tobias |
Tobias Carlsson, Erik Borg, Hannah Kuehn, and Joseph La Delfa (KTH Royal Institute of Technology, Stockholm, Sweden; Husqvarna Group, Stockholm, Sweden; Bitcraze, Malmö, Sweden) As autonomous lawnmowers become more common in shared spaces, aligning their behavior with human expectations and norms is increasingly important. Existing approaches often optimize for fixed objectives, limiting adaptability to diverse contexts. This work explores an alternative by enabling users to guide autonomous behavior development without fixed objectives. A prototype system allowed participants to interact with a simulated environment, using subjective preferences and genetic algorithms to generate lawnmower behaviors across generations. The study emphasized open-ended exploration, analyzing participant interactions and semi-structured interviews through reflexive thematic analysis. Results reveal detailed and reflective accounts of lawnmower behavior. We discuss these results in the context of our design decisions and how they affected the user's journey through a complex solution space. Ultimately, this work demonstrates how interactive genetic algorithms can surface user values and interpretations, potentially serving as both a behavior design tool and novel method to generatively explore social meaning in human-robot interaction. |
|
| Carreno-Medrano, Pamela |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Carsenti, Elior |
Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Castellano, Ginevra |
Hong Wang, Katie Winkle, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) This paper presents a multidimensional risk framework specifically designed for foundation model (FM)-driven human-robot interaction (HRI). We systematically categorize the roles of FM into two primary layers: the Interaction Loop, where the model functions as an agent responsible for interpreting environmental and user inputs (Perception), producing multimodal responses (Generation), and proactively requesting data (Acquisition); and the Robotic System, where it acts as an Intermediary that translates high-level commands into robot execution logic, and as an Interface that connects the robot to external networks. The framework maps these functional roles against five critical impact dimensions: Content, Trust, Privacy, Safety, and Data. By clarifying how potential threats arise from internal model flaws and external vulnerabilities, this work provides a structured basis for risk identification, assessment, and mitigation during human-robot-AI interaction. Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Cato, Robbie Jay |
Robbie Jay Cato, Lambros Lazuras, Natalie Leesakul, and Francesco Del Duchetto (University of Lincoln, Lincoln, UK; University of Nottingham, Nottingham, UK) As service robots are deployed in public spaces, they inherently collect data about their environment and the people within it. This creates a critical tension between ensuring users are aware of data collection and maintaining their trust. We investigate how different disclosure and consent mechanisms shape user perceptions of trust and privacy. We conducted a Wizard-of-Oz experiment with 36 participants on a university campus, comparing three conditions: no disclosure, a one-time static disclosure, and a dynamic ongoing consent mechanism. Post-interaction analysis reveals that dynamic consent not only increases user awareness but also significantly builds trust. Surprisingly, we found that a one-time, static disclosure was often more damaging to user trust than no disclosure at all. The results of our pilot study provide empirical evidence that interactive and continuous consent is crucial for the ethical and successful deployment of robots in public spaces, suggesting that designers should avoid simple, static warnings in favour of more granular and interactive interfaces. |
|
| Celiktutan, Oya |
Lennart Wachowiak, Andrew I. Coles, Oya Celiktutan, and Gerard Canal (King’s College London, London, UK) Robots operating in human environments should be able to answer diverse, explanation-seeking questions about their past behavior. We present a neurosymbolic pipeline that links a task planner with a unified logging interface, which attaches heterogeneous XAI artifacts (e.g., visual heatmaps, navigation feedback) to individual plan steps. Given a natural language question, a large language model selects the most relevant actions and consolidates the associated logs into a multimodal explanation. In an offline evaluation on 180 questions across six plans in two domains, we show that an LLM-based question matcher retrieves relevant plan steps accurately (F1 Score of 0.91), outperforming a lower-compute embedding baseline (0.62) and a rule-based syntax/keyword matcher (0.02). A preliminary user study (N=30) suggests that users prefer the LLM-consolidated explanations over raw logs and planner-only explanations. |
|
| Chadha, Jasmin Jaya |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Chan, Chun Kit |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Chang, Xiaoyu |
Xiaoyu Chang, Yanheng Li, XiaoKe Zeng, Jing Qi Peng, and Ray Lc (City University of Hong Kong, Hong Kong, China) Robots are increasingly designed to act autonomously, yet moments in which a robot overrides a user’s explicit choice raise fundamental questions about trust and social perception. This work investigates how a preference-violating override affects user trust, perceived competence, and interpretations of a robot’s intentions. In a beverage-delivery scenario, a robot either followed a user’s selected drink or replaced it with a healthier option without consent. Results show that the way an override is enacted and communicated consistently reduces trust and competence judgments, even when users acknowledge benevolent motivations. Participants interpreted the robot as more controlling and less aligned with their autonomy, revealing a social cost to such actions. This study contributes empirical evidence that preference-violating override behavior is socially consequential, shaping trust and core dimensions of user perception in embodied service interactions. |
|
| Charisi, Vicky |
Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Chavez, Courtney J. |
Triniti Armstrong, Courtney J. Chavez, Rhian C. Preston, and Naomi T. Fitter (Oregon State University, Corvallis, USA) Prolonged computer use has become the norm for a wide variety of fields. The sedentary practices that often accompany this computer use can lead to a number of health challenges, from cardiovascular and musculoskeletal issues to ocular health problems. Past work by our research group took preliminary steps to address these issues by evaluating a socially assistive robot (SAR)-based break-taking system with no online learning abilities. Based on their initial findings, which showed the robot to effectively encourage break-taking behaviors during computer use and to be more engaging and enjoyable to use compared to a non-robotic alternative, we present methods for data collection in this current paper. Specifically, we aimed 1) to enhance the past SAR system by adding online Q-learning capabilities and 2) to evaluate the updated system's policy generation and how well the final policies aligned with our expectations from prior work. Our results show evidence that the system is successfully generating unique policies for each participant, although the limited match between the expected and resulting policies surprised us. Our work can help SAR researchers understand how to implement Q-learning when using sparse data. |
|
| Chen, Andrew |
Andrew Chen, Ju-Hung Chen, Phurinat Pinyomit, and Alexis E. Block (Case Western Reserve University, Cleveland, USA) RoboTales is a low-cost robotic storytelling system that animates narratives using expressive sock puppetry. Implemented autonomously on a Baxter robot as a test case, RoboTales synchronizes narration, gestures, and mouth movements to perform character-driven stories. In a pilot study, puppet-based storytelling outperformed a gesture-only mode, producing higher HRIES ratings and improved story recall, suggesting that embodied puppetry enhances engagement and narrative comprehension. Designed to be modular and platform-agnostic, RoboTales can be adapted to other manipulators and offers a screen-free alternative to passive media, supporting future deployment in child-centered learning environments. |
|
| Chen, Chaona |
Genki Miyauchi, Roderich Groß, and Chaona Chen (University of Sheffield, Sheffield, UK; TU Darmstadt, Darmstadt, Germany) As robots become increasingly embedded in human–robot teamwork, understanding how humans perceive robot behavior is critical. This is especially relevant for swarm robots that rely on collective behavior to accomplish tasks. While prior research has explored how humans evaluate the abilities and behaviors of single robots, the perception of swarm robots remains relatively underexplored. Guided by the competence–warmth framework, we conducted a perception-based experiment in a collective search task, generating 125 robot teams by systematically manipulating three parameters: speed, separation distance, and local broadcast duration. Ninety participants observed the swarms, rated perceived warmth and competence, and reported team preferences. Results show that broadcast duration increased perceived warmth, separation distance enhanced perceived competence, and individual robot speed had no significant effect. Critically, social perceptions of warmth and competence were stronger predictors of team preference than task performance, with participants favoring swarms that appeared warm and competent over those that completed tasks fastest. These results underscore the importance of considering both technical performance and social attributes when designing robot swarms for effective collaboration with humans. Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Chen, He |
Han Hu and He Chen (Chinese University of Hong Kong, Shatin, China; Chinese University of Hong Kong, Hong Kong SAR, China) Vision language models (VLMs) have shown strong performance in RGB-based human-robot interaction (HRI). However, using RGB cameras in homes faces challenges due to privacy issues and poor performance in the dark, such as during night-time elderly care. Thermal imaging offers a privacy-preserving alternative that works without light. This raises a natural yet previously unexplored question: Can general-purpose VLMs effectively interpret thermal images in a zero-shot manner for privacy-preserving and robust HRI? In this work, we conduct a thorough evaluation of this capability using a real-world dataset. Specifically, we investigate whether the latest VLMs are reliable for safety-critical HRI tasks. We benchmark six leading VLMs on the OctoNet dataset, which contains 975 thermal sequences. To avoid self-evaluation bias, we use an ensemble of three independent large language models to score the predictions and measure stability. Our results reveal a critical performance disparity: while VLMs are accurate on large body movements (e.g., Sitting: 92.8%), they struggle on fine-grained interactions (e.g., Hand Gestures: <20%) and safety-critical events (e.g., Stagger: <40%). Furthermore, we identify instability in predictions due to variations in viewing angles and movement magnitude. Given the strict reliability standards for caregiving, we conclude that current VLMs alone are insufficient for autonomous thermal monitoring. Our findings highlight the limitations of zero-shot thermal perception and underscore the necessity of multimodal fusion to ensure robust HRI. |
|
| Chen, Ju-Hung |
Mayumi Mohan, Ju-Hung Chen, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Case Western Reserve University, Cleveland, USA) Social-physical human-robot interaction (spHRI) has grown rapidly across robotics, human-computer interaction, human-robot interaction, and haptics. Yet, fragmented terminology and inconsistent methodologies make systematic synthesis difficult. To support scalable review practices, we evaluated the extent to which small language models (SLMs; < 1.5B parameters) can assist with title and abstract screening for a large spHRI systematic review. While no SLMs matched human reviewers' performance, the models operated locally and screened papers orders of magnitude faster. The combined SLM ensemble identified 39 papers reviewers missed, representing 10.29% of the final relevant dataset. These results demonstrate that SLMs can augment, rather than replace, expert reviewers and make large-scale literature reviews accessible and sustainable. Andrew Chen, Ju-Hung Chen, Phurinat Pinyomit, and Alexis E. Block (Case Western Reserve University, Cleveland, USA) RoboTales is a low-cost robotic storytelling system that animates narratives using expressive sock puppetry. Implemented autonomously on a Baxter robot as a test case, RoboTales synchronizes narration, gestures, and mouth movements to perform character-driven stories. In a pilot study, puppet-based storytelling outperformed a gesture-only mode, producing higher HRIES ratings and improved story recall, suggesting that embodied puppetry enhances engagement and narrative comprehension. Designed to be modular and platform-agnostic, RoboTales can be adapted to other manipulators and offers a screen-free alternative to passive media, supporting future deployment in child-centered learning environments. |
|
| Chen, Qicong |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Chen, You Yang |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Chen, Yuxuan |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Chen, Zeyi |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Cheong, Jiaee |
Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Cheung, Christy |
Ziang Liu, Katherine Dimitropoulou, Christy Cheung, and Tapomayukh Bhattacharjee (Cornell University, Ithaca, USA; Columbia University, New York City, USA) We present CareEval, a benchmark for evaluating the physical caregiving decision-making abilities of Large Language Models. Developed with a licensed occupational therapist expert in caregiving and validated by eight clinical stakeholders, it contains 100 realistic scenarios spanning all six basic Activities of Daily Living. Instead of testing general reasoning, CareEval assesses whether model responses account for key physical caregiving factors, such as user function, agency, intent, communication, and safety, and align with expert practice. Across several state-of-the-art LLMs, the best model only scores 53.1%, revealing substantial gaps in current models’ ability to reason about physical caregiving. We release 80 of the CareEval scenarios and all prompts through our website: https://emprise.cs.cornell.edu/care-eval/. |
|
| Choi, Kyung Yun |
SoYoon Park, Eunsun Jung, KiHyun Lee, Dokshin Lim, and Kyung Yun Choi (Hongik University, Seoul, Republic of Korea) Inspired by the playful, attention-seeking paw gestures of cats, we present PAWSE, a laptop-peripheral robot that encourages short fidgeting-based micro-breaks during digital work. PAWSE integrates a cat-paw-inspired robotic arm with a web-based timer that prompts brief tactile interaction during scheduled breaks. We conducted a within-subjects study comparing three conditions--no break, passive break, and active (PAWSE fidgeting-based) break--using a 2-back task and subjective workload measures (NASA-TLX). Results showed differences in post-task accuracy across conditions, with the highest accuracy observed in the active break condition. Reaction time remained largely comparable. Workload measures indicated reduced mental demand and frustration during rest conditions, with the active break providing the most favorable subjective experience. These preliminary findings offer insight into how fidgeting-based micro-breaks may fit within focused digital work and inform the design of future tactile micro-break systems. |
|
| Choi, Yong-Hyeok |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Chojnowski, Oliver |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Chung, Youjin |
Khaled Abdul Rahman, Benjamin Jungblut, and Youjin Chung (Georgia Institute of Technology, Atlanta, USA) After COVID-19, many assumed in-store shopping would decline, yet research shows that most consumers still make final purchasing decisions inside retail spaces. Retail advertising remains influential because it engages customers emotionally. However, most in-store advertisements, digital or physical, are static and lack multi-sensory stimulation. This paper addresses that gap by focusing on aroma products, which aim to convey emotional experience and memory to customers. We propose our design, "Aroma-bota," an interactive robotic installation that uses movement and multisensory cues to enhance the aroma retail experience. We evaluated Aroma-Bota through user testing in a simulated retail environment to understand how people interpreted its motion-based emotional cues. Results show that emotionally legible gestures---especially offering and "happy" motions---significantly enhanced user engagement and clarity of intent. This project contributes a novel design exemplar of sensory-driven, emotion-expressive retail robotics for the HRI community. |
|
| Churamani, Nikhil |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Cila, Nazli |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Clement, Benoit |
Zhichen Lu, Matthew Stephenson, Benoit Clement, and Adriana Tapus (ENSTA Paris, Paris, France; Flinders University, Adelaide, Australia) Cross-modal conflicts in maritime navigation—where a vessel’s verbal communication contradicts its physical maneuvers (e.g., promising to give way while maintaining speed) pose severe risks to safety. Current autonomous systems often process sensor data and linguistic inputs in isolation, failing to detect such discrepancies. We present a Multimodal Agentic Framework that serves as a “Watchful Copilot,” using Retrieval-Augmented Generation (RAG) to cross-reference navigational dialogue with real-time kinematic data. To manage uncertainty, a Risk-Prioritized Interface employs progressive disclosure, escalating from a “Green” (Verified) state to a “Yellow” (Ambiguous) state, where the agent visualizes supporting evidence and requests human supervision for clarification. Preliminary validation in a 2D simulation benchmark (N=13) provides initial evidence that this human-in-the-loop workflow may support reduced cognitive load and appropriate trust calibration in high-ambiguity scenarios, warranting further investigation. |
|
| Cobo Navas, Sofia |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Cocchella, Francesca |
Francesca Cocchella (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) Robots may not belong everywhere, but when placed with purpose, they can open new spaces for learning, connection, and creativity. By listening to teachers, museum visitors, and artists, this work explores how stakeholders envision social robots as collaborators rather than tools. Our studies show that involving stakeholders in the evaluation of robotic systems is not only valuable but also feasible, offering crucial insights into when and why social robots are perceived as truly meaningful in human contexts. |
|
| Cohen, Jonathan Albert |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Coles, Andrew I. |
Lennart Wachowiak, Andrew I. Coles, Oya Celiktutan, and Gerard Canal (King’s College London, London, UK) Robots operating in human environments should be able to answer diverse, explanation-seeking questions about their past behavior. We present a neurosymbolic pipeline that links a task planner with a unified logging interface, which attaches heterogeneous XAI artifacts (e.g., visual heatmaps, navigation feedback) to individual plan steps. Given a natural language question, a large language model selects the most relevant actions and consolidates the associated logs into a multimodal explanation. In an offline evaluation on 180 questions across six plans in two domains, we show that an LLM-based question matcher retrieves relevant plan steps accurately (F1 Score of 0.91), outperforming a lower-compute embedding baseline (0.62) and a rule-based syntax/keyword matcher (0.02). A preliminary user study (N=30) suggests that users prefer the LLM-consolidated explanations over raw logs and planner-only explanations. |
|
| Comeca, Andy |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Connor, Brandon |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Conti, Caio |
Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. |
|
| Cooper, Andrew I. |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Cooper, Sara |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Correia, Filipa |
Ricardo Rodrigues, Plinio Moreno, Filipa Correia, and Alexandre Bernardino (University of Lisbon, Lisbon, Portugal; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal) Social robots are a new and promising tool for reducing children's anxiety during medical procedures. Our study aims to design and test a social robot to alleviate anxiety and improve emotional state before dental treatment for children. The design of the experimental condition included asocial robot (Vizzy) with different comedic styles such as jokes, riddles, games, and dance, to make the waiting room experience more engaging and entertaining for children. A user study (N=22) was conducted, in which children were assigned to one of two groups: interaction with the humanoid Vizzy robot, or waiting in the dentist's waiting room without any interaction with the robot (Control). The results indicate a significant impact of the experimental condition on reducing anxiety levels and improving emotional responses, demonstrating that social robots can be considered for future research to reduce children's anxiety before distressing medical procedures. Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Corsi Honorio, Gabriel |
Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. |
|
| Covone, Nicole |
Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| Cowan, Benjamin R. |
Benjamin R. Cowan (University College Dublin, Ireland) The growth of Generative AI capabilities has led to huge interest in developing truly collaborative dialogues between users and conversational agents. Yet we need to understand how fundamental concepts related to dialogue and collaborative communication manifest in and are impacted by agent interaction. This keynote specifically focuses on key concepts such as perspective taking, grounding, partner modelling and the division of communicative labour, showing how these impact and/or drive human-machine dialogue. I will argue that, for us to design truly effective human-agent collaborations, we must make fundamental strides in understanding how these concepts manifest and influence collaborative dialogue with agents. Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Crandall, David |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Crandall, David J. |
Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. |
|
| Cringasu, Cristian-Marius |
Cristian-Marius Cringasu and Adriana Tapus (National University of Science and Technology Politehnica Bucharest, Bucharest, Romania; ENSTA Paris, Paris, France) Adapting to individual preferences—including interpersonal distance, formality, and role conventions, is essential for social robots. We introduce a parameter-efficient method for episodic social memory that stores interaction-specific norms as LoRA adapters applied per episode to an open-source dialogue model. We encode episode metadata within a manually defined social feature space, train a distinct LoRA adapter per episode using norm-consistent responses, and at inference retrieve the nearest episode by embedding similarity. We evaluate four configurations: (1) base model (no memory), (2) RAG with episodic text, (3) LoRA-only (activating the retrieved adapter), and (4) combined RAG+LoRA. An independent LLM-as-judge rates outputs for formality, tone, proxemics, and role alignment. Preliminary results on synthetic proxemics and hierarchy tasks indicate that both RAG and episodic LoRA influence behavior, and their combination produces more reliable, user-tailored responses than either component alone. |
|
| Crisp, Dimity |
Aurora An-Lin Hu, Dimity Crisp, Sharni Konrad, Damith Herath, and Janie Busby Grant (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) The mismatch between user expectations and robot performance—the expectation gap—is common in human–robot interaction. Although related research is limited, preliminary evidence suggests that the expectation gap has a considerable impact on user adoption of robots. The present study examined how failing, confirming, and exceeding user expectations and the extent to which robot performance differs from expectations predict users’ adoption intention. A sample of 234 participants completed pre-interaction expectation measures and post-interaction robot performance ratings after completing a drawing activity with a humanoid robot (Pepper). Results showed that considering both the magnitude and direction of the expectation gap (signed gap values) consistently yielded stronger associations and predictive power for adoption intention than considering the magnitude alone (absolute gap values) across four expectation dimensions, with expectation gaps related to Relative Advantage emerging as the strongest predictor. Overall, the findings highlight that failing to meet expectations consistently predicted lower adoption intention compared to both confirming and exceeding expectations, whereas evidence for whether exceeding expectations provides additional benefits beyond confirming them was mixed. |
|
| Croitoru, Madalina |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Cross, Emily S. |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Amol Deshmukh, Mary Ellen Foster, and Emily S. Cross (ETH Zurich, Zurich, Switzerland; University of Glasgow, Glasgow, UK) Socially Assistive Robots (SARs) show promise for initiating positive behaviour change, yet sustaining habits beyond the intervention period remains a persistent challenge. This paper moves beyond interaction-based analysis to propose mathematical frameworks for modelling habit formation dynamics. We introduce three complementary models: Probabilistic Habit Formation optimised via Reinforcement Learning, Rational Habit Strength with hybrid decay, and Long-Term Retention with booster interventions. Using school-based handwashing as an exemplar, Monte Carlo simulations (𝑁 = 1000) predict that RL-optimised reinforcement could accelerate habit formation by 32%, while strategic boosters may maintain habit strength 1.3× above withdrawal baselines. These frameworks offer a potentially generalisable approach for robotassisted behaviour change across health, education, and other socially assistive contexts, pending empirical validation. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Dabiri Golchin, Minoo |
Raquel Thiessen, Minoo Dabiri Golchin, Samuel Barrett, Jacquie Ripat, and James Everett Young (University of Manitoba, Winnipeg, Canada) Social robots are increasingly marketed as play companions for children, but research has not established how these robots support play in real-world scenarios or whether their interactivity supports quality play. We are conducting an eight-week home study with children with and without disabilities to learn about the play experiences with an interactive robot versus a doll ver-sion of the same robot (a VStone Sota). We implemented interactive robot behaviors based on LUDI's categorization of play, incorporating social and cognitive dimensions of play to support children’s play in various developmental play stages. We measure play quality using standardized instruments, and along with qualitative assessments of children's engagement and interest through child-family interviews. This study investigates whether interacting with robotic toys supports children in developing play skills compared to non-robotic dolls. Our findings will establish baseline knowledge about child-robot play and can guide evidence-based design of interactive play companions for children. |
|
| Dagan, Rotem |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Dagioglou, Maria |
Christos Spatharis, Dimitrios Koutrintzes, and Maria Dagioglou (National Centre for Scientific Research ‘Demokritos’, Athens, Greece; National Centre for Scientific Research ‘Demokritos’, Ag. Paraskevi, Greece) Deep reinforcement learning enables robots to learn collaborative tasks with humans. However, off-policy methods suffer from primacy bias that causes agents to overfit to early experiences. We investigate the impact of primacy bias on team performance during a real world human-robot co-learning task, where twenty novice human participants collaborated with a Soft Actor-Critic agent to move a UR3 cobot. Analysis of how initial interactions shape subsequent learning dynamics demonstrates that the quality of the initial data distribution matters. While successful early experiences allow teams to overcome learning barriers, poor interactions cause the agent to converge toward suboptimal behaviors that prevent recovery, even as human skills improve. |
|
| Datteri, Edoardo |
Sam Thellman, Klara Bergsten, Edoardo Datteri, and Tom Ziemke (Linköping University, Linköping, Sweden; University of Milano-Bicocca, Milan, Italy) People routinely attribute mental states such as beliefs, desires, and intentions to explain and predict others' behavior. Prior work shows that such attributions extend to robots, yet it remains unclear what people assume about the reality of the states they attribute to them. Building on recent conceptual work on folk-ontological stances, we report a pilot study measuring realist, anti-realist, and agnostic stances toward robot minds. Using a questionnaire (N = 66), we assessed stances toward today's robots and robots in principle, and examined stance rigidity through a reflection-and-reassessment design. Results show stronger anti-realist tendencies for today's robots than for robots in principle. Stances were largely rigid across reflection. Notably, participants did not hold a uniformly non-realist view but expressed a diversity of folk-ontological stances, including substantial proportions of agnostic and realist responses. This heterogeneity highlights the need for measurement tools that move beyond binary measures and capture nuance in folk-ontological reasoning. Future work will expand stance options to include finer-grained realist and anti-realist variants and recruit cross-cultural samples to assess variation across populations. |
|
| Dautenhahn, Kerstin |
Neil Fernandes, Tehniyat Shahbaz, Emily Davies-Robinson, Yue Hu, and Kerstin Dautenhahn (University of Waterloo, Waterloo, Canada; United for Literacy, Toronto, Canada) Newcomer children face barriers in acquiring the host country’s language and literacy programs are often constrained by limited staffing, mixed-proficiency cohorts, and short contact time. While Socially Assistive Robots (SARs) show promise in education, their use in these socio-emotionally sensitive settings remains underexplored. This research presents a co-design study with program tutors and coordinators, to explore the design space for a social robot, Maple. We contribute (1) a domain summary outlining four recurring challenges, (2) a discussion on cultural orientation and community belonging with robots, (3) an expert-grounded discussion of the perceived role of an SAR in cultural and language learning, and (4) preliminary design guidelines for integrating an SAR into a classroom. These expert-grounded insights lay the foundation for iterative design and evaluation with newcomer children and their families. |
|
| Davey, Sean |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Davies-Robinson, Emily |
Neil Fernandes, Tehniyat Shahbaz, Emily Davies-Robinson, Yue Hu, and Kerstin Dautenhahn (University of Waterloo, Waterloo, Canada; United for Literacy, Toronto, Canada) Newcomer children face barriers in acquiring the host country’s language and literacy programs are often constrained by limited staffing, mixed-proficiency cohorts, and short contact time. While Socially Assistive Robots (SARs) show promise in education, their use in these socio-emotionally sensitive settings remains underexplored. This research presents a co-design study with program tutors and coordinators, to explore the design space for a social robot, Maple. We contribute (1) a domain summary outlining four recurring challenges, (2) a discussion on cultural orientation and community belonging with robots, (3) an expert-grounded discussion of the perceived role of an SAR in cultural and language learning, and (4) preliminary design guidelines for integrating an SAR into a classroom. These expert-grounded insights lay the foundation for iterative design and evaluation with newcomer children and their families. |
|
| De Carolis, Berardina |
Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| De Cet, Martina |
Martina De Cet (Chalmers University of Technology, Gothenburg - University of Gothenburg, Gothenburg, Sweden) Robots, virtual agents, and voice assistants are often designed with gendered visual or vocal cues that shape user perception and interaction, and may reinforce gender stereotypes. Voice, in particular, plays a central role in how robots are gendered. Recent work has begun exploring gender-ambiguous voices, voices that blend masculine and feminine characteristics, as a way to challenge binary gender expectations. This research examines how gender-ambiguous voices are perceived in human–robot interaction and whether they can reduce gendering and support more inclusive robot design. |
|
| Del Bue, Alessio |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Del Duchetto, Francesco |
Robbie Jay Cato, Lambros Lazuras, Natalie Leesakul, and Francesco Del Duchetto (University of Lincoln, Lincoln, UK; University of Nottingham, Nottingham, UK) As service robots are deployed in public spaces, they inherently collect data about their environment and the people within it. This creates a critical tension between ensuring users are aware of data collection and maintaining their trust. We investigate how different disclosure and consent mechanisms shape user perceptions of trust and privacy. We conducted a Wizard-of-Oz experiment with 36 participants on a university campus, comparing three conditions: no disclosure, a one-time static disclosure, and a dynamic ongoing consent mechanism. Post-interaction analysis reveals that dynamic consent not only increases user awareness but also significantly builds trust. Surprisingly, we found that a one-time, static disclosure was often more damaging to user trust than no disclosure at all. The results of our pilot study provide empirical evidence that interactive and continuous consent is crucial for the ethical and successful deployment of robots in public spaces, suggesting that designers should avoid simple, static warnings in favour of more granular and interactive interfaces. Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Demianchuk, Georgii |
Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. |
|
| Demiris, Yiannis |
Dimitra Tsakona and Yiannis Demiris (Imperial College London, London, UK) Assistive Human-Robot Interaction (HRI) must balance efficiency with user agency, particularly in high-intimacy contexts such as assistive feeding. This work investigates robot behavioural adaptation as a mechanism for fostering trust through user-guided autonomy. A comfort-driven optimisation framework integrates implicit user cues to continuously modulate robot behaviour, enabling collaboration that feels intuitive. Across two user studies (N = 44), adaptive behaviour enhanced trust, comfort, and perceived cooperation through responsiveness and flexibility. The timing of adaptation emerged as a transparent, universal channel for signalling compliance and collaborative intent. Future work will prioritise studying adaptation timing across repeated interactions to support long-term use, while exploring human reaction timing as a modality-independent signal for modelling comfort. |
|
| Deng, Jie |
Hanyu Zhang, Xinyue Xu, Xinran She, Jie Deng, and Yuanrong Tang (Tsinghua University, Beijing, China) Digital violence often happens impulsively within seconds. Circuit Breaker introduces an embodied mouse robot that detects toxic interactions and delivers physical micro-interventions to disrupt harmful actions. Through real-time cursor signals, sentiment cues, and haptic feedback, the system promotes reflective and safer online behavior. |
|
| Den Hertog, Joyce |
Zoja Gobec, Joyce den Hertog, and Kim Baraka (Sioux Technologies, Amsterdam, Netherlands; AKOB, den Haag, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) With robots increasingly being used for artistic expression in interactive performances, this research investigates the production of expressive swarm behaviour that could form the basis for an interactive performance between a dancer and a swarm of drones. We contribute a mapping of Laban Effort parameters - a common movement analysis framework - onto a particle swarm and integrate it into an interactive prototype. The system accepts human motion as input and generates responsive swarm behaviour with the Boids algorithm as the foundational behaviour model. In a user study evaluating the mapping (N=17), we show that the Space and Time parameters were recognised significantly better than Weight and Flow, suggesting that parameters connected to embodied cues such as intention and emotion are more challenging to computationally implement, and need further refinement. The novel mapping, along with the interactive system and user study insights, offers an initial step towards practical applications in choreography development, interactive performance, or art installations, as well as designing expressive frameworks with human-guided swarm control. |
|
| De Nie, Koen |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Deshmukh, Amol |
Amol Deshmukh, Mary Ellen Foster, and Emily S. Cross (ETH Zurich, Zurich, Switzerland; University of Glasgow, Glasgow, UK) Socially Assistive Robots (SARs) show promise for initiating positive behaviour change, yet sustaining habits beyond the intervention period remains a persistent challenge. This paper moves beyond interaction-based analysis to propose mathematical frameworks for modelling habit formation dynamics. We introduce three complementary models: Probabilistic Habit Formation optimised via Reinforcement Learning, Rational Habit Strength with hybrid decay, and Long-Term Retention with booster interventions. Using school-based handwashing as an exemplar, Monte Carlo simulations (𝑁 = 1000) predict that RL-optimised reinforcement could accelerate habit formation by 32%, while strategic boosters may maintain habit strength 1.3× above withdrawal baselines. These frameworks offer a potentially generalisable approach for robotassisted behaviour change across health, education, and other socially assistive contexts, pending empirical validation. |
|
| Deshmukh, Jayati |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Deshpande, Nikhil |
Nishi Shishir, Aulia Nadila, Aly Magassouba, and Nikhil Deshpande (University of Nottingham, Nottingham, UK) The aim of this paper is to facilitate an efficient post-disaster recovery in lower-income countries by promoting first-responder accessibility and safety through pre-response disaster area observation and categorisation tools. In the past, research into assistive technologies in this field has been highly focused on disaster mitigation, detection, or primary participation, rather than reconnaissance and target identification activities conducted by first responders. Thus, research into this under-represented but highly important industry was necessary. |
|
| Dietz, Paul H. |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Dillenbourg, Pierre |
Chenyang Wang, Julien Jordan, Alice Reymond, and Pierre Dillenbourg (EPFL, Lausanne, Switzerland) As AI becomes increasingly integrated into everyday life, supporting children’s AI literacy is essential. While prior work in Child-Robot-Interaction has primarily used robots as programmable artefacts or learning companions for introducing AI concepts, the role of a robot as an embodied AI student remains underexplored. We investigate social robot teaching as a pathway to help children intuitively understand supervised learning. We designed a prototype in which children teach a robot using biased and unbiased training data and iteratively observe its performance. A pilot study with three children preliminarily examines: 1) whether and how this interaction fosters intuitive understanding of AI training and bias, and 2) initial design considerations for future prototype interactions. Our findings offer early evidence of the potential of social robot teaching for AI literacy. |
|
| Dimitropoulou, Katherine |
Ziang Liu, Katherine Dimitropoulou, Christy Cheung, and Tapomayukh Bhattacharjee (Cornell University, Ithaca, USA; Columbia University, New York City, USA) We present CareEval, a benchmark for evaluating the physical caregiving decision-making abilities of Large Language Models. Developed with a licensed occupational therapist expert in caregiving and validated by eight clinical stakeholders, it contains 100 realistic scenarios spanning all six basic Activities of Daily Living. Instead of testing general reasoning, CareEval assesses whether model responses account for key physical caregiving factors, such as user function, agency, intent, communication, and safety, and align with expert practice. Across several state-of-the-art LLMs, the best model only scores 53.1%, revealing substantial gaps in current models’ ability to reason about physical caregiving. We release 80 of the CareEval scenarios and all prompts through our website: https://emprise.cs.cornell.edu/care-eval/. |
|
| Dinauer, Raphael |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Dinh, Ngoc Bao |
Hong Wang, Ngoc Bao Dinh, and Zhao Han (University of South Florida, Tampa, USA) Projector-based augmented reality (AR) enables robots to communicate spatially-situated information to multiple observers without requiring head-mounted displays, e.g., projecting navigation path. However, they require flat and weakly textured projection surfaces; otherwise, the surface needs to be compensated to retain the original projected image. Yet, existing compensation methods assume static projector-camera-surface configurations and may not work in complex, textured environments where robots must navigate. In this work, we evaluate state-of-the-art deep learning-based projection compensation on a Go2 robot dog in a search-and-rescue scene with discontinuous, non-planar, strongly textured surfaces. We contribute empirical evidence on critical performance limitations of state-of-the-art compensation methods: the requirement of pre-calibration and inability to adapt in real-time as the robot moves, revealing a fundamental gap between static compensation capabilities and dynamic robot communication needs. We propose future directions for enabling real-time, motion-adaptive projection compensation for robot communication in dynamic environments. |
|
| Di Nuovo, Alessandro |
Ruidong Ma, Wenjie Huang, Zhegong Shangguan, Angelo Cangelosi, and Alessandro Di Nuovo (Sheffield Hallam University, Sheffield, UK; University of Manchester, Manchester, UK) Direct imitation of humans by robots offers a promising direction for remote teleoperation and intuitive task instruction, where a human can perform a task naturally and the robot autonomously interprets and executes it using its own embodiment. Existing methods often rely on close alignment between human and robot scenes. This prevents robots from inferring the intent of the task or executing demonstrated behaviors when the initial states mismatch. Hence, it poses difficulties for non-expert users, who may need domain knowledge to adjust the setup. To address this challenge, we propose a neuro-symbolic framework that unifies visual observations, robot proprioceptive states, and symbolic abstractions within a shared latent space. Human demonstrations are encoded into this representation as predicate states. A symbolic planner can thus generate high-level plans that account for the different robot initial states. A flow matching module then synthesizes continuous joint trajectories consistent with the symbolic plan. We validate our approach on multi-object manipulation tasks. Preliminary results show that the framework can infer human intent and generate feasible symbolic plans and robot motions under mismatched initial states. These findings highlight the potential of neuro-symbolic models for more natural human-robot instruction. and they can enhance the explainability and trustworthiness of robot actions. |
|
| Dobrosovestnova, Anna |
Anna Dobrosovestnova, Barry Brown, Emanuel Gollob, Mafalda Gamboa, and Masoumeh Mansouri (Interdisciplinary Transformation University, Linz, Austria; Stockholm University, Stockholm, Sweden; University of Arts Linz, Linz, Austria; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Birmingham, Birmingham, UK) HRI 2026 takes place amid profound socio-political turbulence marked by rising authoritarianism, the consolidation of technological power, and the expanding use of robotics for warfare. These global conditions create an affective atmosphere that seeps into our field: a mix of attachment to techno-determinist and techno-solutionist narratives, unease with 'business as usual,' and a tentative search for alternatives. As HRI scholars and designers, we recognize how the wider socio-political tensions resonate within our own practices, shaping what we take to be possible, necessary, or inevitable in research and design. In this half-day, in-person workshop, we mobilize three affective orientations - cruel optimism, lucid despair, and precarious hope - as resources for reflection, critique, and experimentation. Through short provocations, discussions, and a speculative group activity, participants will be invited to inhabit these affects to question dominant narratives that sustain HRI, confront systemic challenges, and collectively explore alternative trajectories for research, design, and community building. |
|
| Doğan, Fethiye Irmak |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Fethiye Irmak Doğan, Alva Markelius, and Hatice Gunes (University of Cambridge, Cambridge, UK) Foundation models are increasingly embedded in social robots, mediating not only what they say and do but also how they adapt to users over time. This shift renders traditional "one-size-fits-all" explanation strategies especially problematic: generic justifications are now wrapped around behaviour produced by models trained on vast, heterogeneous, and opaque datasets. We argue that ethical, user-adapted explainability must be treated as a core design objective for foundation-model-driven social robotics. We first identify open challenges around explainability and ethical concerns that arise when both adaptation and explanation are delegated to foundation models. Building on this analysis, we propose four recommendations for moving towards user-adapted, modality-aware, and co-designed explanation strategies grounded in smaller, fairer datasets. An illustrative use case of an LLM-driven socially assistive robot demonstrates how these recommendations might be instantiated in a sensitive, real-world domain. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Dondrup, Christian |
Jia Yap Lim, John See, William Weimin Yoo, and Christian Dondrup (Heriot-Watt University Malaysia, Putrajaya, Malaysia; Heriot-Watt University, Edinburgh, UK) User engagement prediction in human-robot interaction (HRI) is typically conducted across diverse environmental settings, including both uncontrolled and controlled environments. Such environmental variations compel social robots to capture and analyse user behaviours differently. To the best of our knowledge, most of the prior works rely on video, audio and feature vectors extracted from the UE-HRI (uncontrolled) dataset to estimate user engagement. The existing literature has overlooked the potential of Multimodal Large Language Models (MLLMs) for user engagement prediction in HRI contexts, thus leaving a critical gap in understanding their operational mechanisms and capacity to elevate model performance. To address this gap, this paper pioneers an investigation into MLLM efficacy for engagement prediction across different environmental settings using the UE-HRI (uncontrolled) and eHRI (controlled) datasets. Moreover, we perform rigorous experiments to identify important factors influencing MLLM performance, including prompts, model types, model parameters, and keyword extraction strategies. Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Dong, Jennifer |
Jennifer Dong, Sophie Weissel, and Emily Zimmerman (Georgia Institute of Technology, Atlanta, USA) Elevators can be socially awkward — strangers share intimate space yet avoid interaction with each other. To address this, we present Elevator Pitch, a ceiling-mounted interactive robot that playfully facilitates social interaction in elevators. Elevator Pitch aims to foster temporary togetherness among frequent strangers in enclosed public spaces while exploring how ludic, socially expressive architectural robots can act as social agents. This paper presents the design and preliminary user testing of Elevator Pitch. |
|
| Dooley, Douglas |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Doore, Stacy A. |
Muhammad Ahmed Qayyum and Stacy A. Doore (Colby College, Waterville, USA) Independent mobility is central to daily life for blind and low-vision (BLV) individuals, yet existing mobility tools leave important gaps in situational awareness, obstacle detection, and environmental understanding. Legged robots such as Boston Dynamics' Spot offer a promising platform for mobility support, but effective use in everyday environments depends on accessible, user-centered interaction. This late-breaking report presents a voice-based interface (VBI) architecture for quadruped guide robots, grounded in prior work on accessibility, multimodal communication, and embodied large-model reasoning. |
|
| Downes, Daniel R. J. |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Du, Jinxuan |
Jinxuan Du, Rulan Li, Tianlu Zhou, and Qianrui Liu (Tsinghua University, Beijing, China) Young people often suppress emotional expression non-verbally, leading to social friction and misunderstanding. Therefore, We propose MuffBunny, an embodied rabbit-eared robot designed as a social buffer. MuffBunny identifies the listener's implicit emotional valence and arousal from verbal stimuli in real-time and converts these emotions into intuitive physical cues—dynamic ear morphing. Upward morphing indicates positive emotions, and downward morphing signifies negative ones. Our design aims to provide a novel, non-confrontational proxy for emotional expression, reducing the burden of self-disclosure, fostering empathy, and promoting a healthier social atmosphere. |
|
| Duan, Yishan |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Eberhard, Alexander |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Echeverria, Nicolas |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Ekman, Simon |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. |
|
| Elbeleidy, Saad |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| El Dib, Josef |
Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Elfering, Rosa |
Febe Anna Kooij-Meijer, Emilia I. Barakova, Rosa Elfering, Wang Long Li, and Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands; Tinybots, Rotterdam, Netherlands) The growing population of individuals with mild cognitive impairment and dementia places increasing demands on home-care systems, while staff shortages and high caregiver workloads underscore the need for assistive technologies. However, research on implementing these technologies in home care practice remains limited. This study examines professional caregivers’ digital onboarding of Tessa, a social robot that supports through verbal reminders. A conceptual digital onboarding probe was evaluated with novice, experienced, and expert users. Findings indicate that the onboarding process improves usability and efficiency by providing intuitive guidance and structured workflows. Additionally, LLMs can translate caregiver-provided goals into actionable robot scripts, though oversight remains essential for quality assurance. The probe and LLM support more effective onboarding and enhance caregiver’s user experience. |
|
| El Ouali, Yanis |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Ensafjoo, Mohsen |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Erel, Hadas |
Nevo Heimann Saadon and Hadas Erel (Reichman University, Herzliya, Israel) Robots entering everyday spaces demand behaviors shaped by human-centered experts, yet authoring robot motion still requires engineering expertise and complex workflows. We present Jazzy Puppet, a web-based kinesthetic teaching system that lets non-technical practitioners design, record, and replay expressive robot gestures directly by hand, with no code or software installation. Running through a browser and configurable via JSON, Jazzy Puppet supports Dynamixel-servo-based robots, preserves motion nuances, and sequences gestures with optional peripherals (e.g. thermal printer). We will demonstrate the system's ease-of-use and portability on a two-DoF printer robot and a four-DoF arm, enabling rapid iterative prototyping of social gestures. Reut Katz, Nevo Heimann Saadon, Andrey Grishko, and Hadas Erel (Reichman University, Herzliya, Israel) Robots are increasingly integrated as support tools for enhancing human learning and problem-solving. In this study, we explore the design of a robotic object intended to support problem-solving experiences. The design guidelines are grounded in well-established cognitive strategies known to improve performance. We focus on two strategies in particular: (1) constructive feedback on performance and (2) social feedback that encourages self-explanation. To reduce distractions, we minimized the robot’s communicative load and kept its expressive behaviors simple. Through an iterative design process, we developed a small robotic printer that communicates through subtle non-verbal gestures (nodding, leaning, and gaze-like orientation) paired with minimal printed feedback. This combination aims to create a supportive, non-threatening inter action that provides clear performance guidance while conveying social presence. We describe the robot’s design process and propose an experimental study examining how constructive and social feedback influence problem-solving outcomes. Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Adi Manor, Hadas Erel, and Avi Parush (Technion - Israel Institute of Technology, Haifa, Israel; Reichman University, Herzliya, Israel) Affective and cognitive trust are fundamental in human-robot interaction, yet they may develop through different mechanisms. Research shows that robot attentiveness compensates for poor performance in building cognitive trust, but performance cannot reciprocally compensate for lack of attentiveness in building affective trust. We conducted secondary analysis of three studies examining shared variance between social perception dimensions (warmth, competence, social presence) and trust types using canonical correlation analysis. In robot's attentiveness contexts, warmth and competence shared substantial variance with both affective trust (67%, 65%) and cognitive trust, associated with dual relationships. In robot competence contexts, competence shared strong variance with cognitive trust (74%) but warmth showed weaker relationships (38%), creating a single connection. In one study, social presence shared higher variance with affective trust (66%) than cognitive trust (35%). These asymmetric variance patterns may imply asymmetric compensation mechanism, with important implications for designing robots where affective behaviors provide resilience despite inevitable performance failures. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Erickson, Zackory |
Michaela Tecson, Zackory Erickson, and Reid Simmons (Carnegie Mellon University, Pittsburgh, USA) Older adults with mild cognitive impairment (MCI) often experience difficulty completing multi-step tasks such as meal preparation. Existing assistive technologies typically provide step-by-step guidance without determining when assistance is actually needed, which risks undermining the autonomy of the user if intervention is not necessary. Thus, we present a framework for detecting moments when a user requires assistance during a meal preparation task. Using a location-based state representation, we classify three error types observed in a real-world study with older adults: visiting irrelevant locations, retrieving incorrect items, and overlooking necessary items. Our method leverages LLMs to interpret each state, identifies when assistance is required, and provides specific suggestions to resume progression. We evaluate the approach on a synthetic dataset with systematically injected errors and a real-world meal preparation sequence of making a banana split. Our results demonstrate that our method achieves F1 scores of 0.80 and 1.00 in real-world data for the two most common error types. These findings highlight the potential for this method to support timely interventions in assistive systems that promote independence in daily living activities. |
|
| Errico, Tyler |
Jennifer S. Kay, Tyler Errico, Audrey L. Aldridge, John James, and Michael Novitzky (Rowan University, Glassboro, USA; USA Military Academy at West Point, West Point, USA) Effective human-robot teaming in highly dynamic environments, such as emergency response and military missions, requires tools that support planning, coordination, and adaptive decision-making without imposing excessive cognitive load. This paper introduces PETAAR, the Planning, Execution, to After-Action Review framework that seamlessly integrates autonomous unmanned vehicles (UxVs) into Android Team Awareness Kit (ATAK), a widely adopted situational awareness platform. PETAAR leverages ATAK's geospatial visualization and human team collaboration while adding features for autonomous behavior management, operator feedback, and real-time interaction with UxVs. Its most novel contribution is enabling digital mission plans, created using standard mission graphics, to be interpreted and executed by unmanned systems, bridging the gap between human planning, robotic action, and shared understanding among all teammates (human and autonomous). Results from this work inform best practices for integrating autonomy into human-robot teams across diverse operational contexts. |
|
| Escobedo, Caleb |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Esman, Antoine |
Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Espinoza, Ismael |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Evron, Yigal |
Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Eyssel, Friederike |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Fabian, Khristin |
Franziska Elisabeth Heck, Emilia Sobolewska, Debbie Meharg, and Khristin Fabian (Edinburgh Napier University, Edinburgh, UK; University of Aberdeen, Aberdeen, UK) Loneliness is a common issue among university students and has been associated with poorer mental health and reduced well-being. According to classic theory, there are two types of loneliness: emotional loneliness, which results from a lack of close attachments, and social loneliness, which is associated with deficits in broader peer networks. However, research into human–robot interaction rarely considers how these two forms of loneliness manifest in people's desire for social robots. This report presents the qualitative findings of semi-structured interviews with 25 students. These students were invited based on their scores for emotional and social loneliness, with the aim of representing a broad range of loneliness profiles. Participants observed standardised demonstrations of three social robots, Pepper, Nao and Furhat, and discussed their attitudes towards them, their potential roles and designs. Across the different profiles, the students generally expressed an openness to the idea of social robots. However, a clear gradient emerged: students who reported higher levels of loneliness tended to view robots as companions and conversational partners, whereas students who reported lower levels of loneliness emphasised the robots’ potential for providing instrumental support and the importance of maintaining stricter boundaries. Loneliness profiles therefore provide a promising lens for thinking about how to design role-appropriate and ethically sensitive robot behaviours and forms for student settings. |
|
| Fakhruldeen, Hatem |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Falomir, Zoe |
Adwitiya Mandal, Kai-Florian Richter, and Zoe Falomir (Umeå University, Umeå, Sweden) Grounding spatial deixis is essential for establishing shared spatial understanding in HRI. This paper presents the Spatial Deixis Model (SDM), a perceptual framework allowing a robot to infer the English spatial deixis here and there from pointing gestures and using a dynamic, embodied peri-personal space. We performed an empirical evaluation of the SDM with 12 participants in 5 scenarios with different contexts (e.g., varying distances and/or heights with respect to human and robot). Results show that the localization accuracy for the pointed-at objects across 174 trials is 92% and the overall agreement across all trials is 63.7%, demonstrating that SDM generally captures the dynamic notion of spatial deixis. |
|
| Fan, Liyang |
Haopeng Peng, Ruilin Zhang, Yuxin Liang, and Liyang Fan (Tsinghua University, Beijing, China) In social interactions, individuals often conceal their true feelings for various reasons. This phenomenon of actively adjusting social strategies based on the social context is referred to as the "social performance mechanism". Inspired by this mechanism, we propose a wearable robot "THIRD EXPRESSION", designed to assist individuals in expressing real emotions and states that are difficult to verbalize. Through robot design, this study aims to enhance the wearer’s ability to actively define and convey their emotions in real-time. The system integrates multimodal sensors (speech, environment, heart rate, etc.) and large model reasoning to generate dynamic visual feedback. The pilot study has been validated that the robot design enhances the sense of boundary control and interaction satisfaction, while reducing social anxiety levels. |
|
| Fan, Mingming |
Yan Xiang, Chengliang Ping, Mengyang Wang, and Mingming Fan (Hong Kong University of Science and Technology, Guangzhou, China) As reliance on desktop-based knowledge work platforms grows, maintaining sustained focus has become a critical challenge, and current tools still provide limited support for everyday attentional needs. Many digital aids remain tied to the screen and are experienced as intrusive or easy to ignore, whereas desktop robots offer situated, embodied forms of support in the same physical workspace as the computer. Yet it remains unclear how such robots should be designed to help people manage attention in study and work. To explore this, we conducted a participatory design study consisting of five workshops with adults who self-identified as needing support with focus. Participants reflected on their daily challenges and current coping strategies, then envisioned how a desktop robot could act, look, and be placed to support them. Our findings reveal diverse, context-dependent expectations around function, social role, and form, and outline directions for designing attention-supportive desktop robots for everyday work. |
|
| Fazli, Pooria |
Pooria Fazli, Amirhossein Nazari, Navid Jooyandehdel, Iman Kardan, and Alireza Akbarzadeh (Ferdowsi University of Mashhad, Mashhad, Iran) Lower-limb exoskeletons play an essential role in rehabilitation and mobility assistance, where accurate real-time gait phase recognition is critical for achieving safe, synchronized, and intuitive human–robot interaction. Many existing approaches rely on multiple sensors such as IMUs, EMG, and FSRs, which increase system complexity, computational load, cost, and susceptibility to mechanical wear. In this study, we propose a lightweight and robust gait phase detection framework that uses only hip and knee joint encoder data—sensors that are already integrated into most exoskeletons and are less prone to noise and misplacement. The method employs a finite state machine (FSM) to identify gait phases and detect key gait events, including heel strike, in real time. The approach was first evaluated in simulation using the SCONE (Opensim) platform and then experimentally implemented on the NEXA knee-joint exoskeleton with multiple healthy participants. Results show that the proposed method reliably predicts gait phases and heel-strike timing with minimal temporal error, while achieving significantly higher processing frequency compared to sensor-rich configurations. These findings demonstrate that accurate and efficient gait phase recognition can be achieved using only encoder data, offering a practical and low-cost solution for real-world exoskeleton control applications. |
|
| Fedoseev, Aleksey |
Valerii Serpiva, Artem Lykov, Jeffrin Sam, Aleksey Fedoseev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) We propose a novel UAV-assisted creative capture system that leverages diffusion models to interpret high-level natural language prompts and automatically generate optimal flight trajectories for cinematic video recording. Instead of manually piloting the drone, the user simply describes the desired shot (e.g., "orbit around me slowly from the right and reveal the background waterfall"). Our system encodes the prompt along with an initial visual snapshot from the onboard camera, and a diffusion model samples plausible spatio-temporal motion plans that satisfy both the scene geometry and shot semantics. The generated flight trajectory is then autonomously executed by the UAV to record smooth, repeatable video clips that match the prompt. User evaluation using NASA-TLX showed a significantly lower overall workload with our interface (M = 21.6) compared to a traditional remote controller (M = 58.1), demonstrating a substantial reduction in perceived effort. Mental demand (M = 11.5 vs. 60.5) and frustration (M = 14.0 vs. 54.5) were also markedly lower for our system, highlighting its clear usability advantages in autonomous text-driven flight control. This project demonstrates a new interaction paradigm: text-to-cinema flight, where diffusion models act as the "creative operator" converting story intentions directly into aerial motion. |
|
| Feng, Yuan |
Fan Wang, Yuan Feng, Wijnand IJsselsteijn, and Giulia Perugia (Eindhoven University of Technology, Eindhoven, Netherlands; Northwestern Polytechnical University, Xi’an, China) People Living with Dementia (PLwD) require intensive emotional and physical support, and caregivers frequently struggle with exhaustion and distress. Social robots have been proposed as tools that could enhance socio-emotional well-being, yet many of their designs inherently involve deception, embedding cues that mislead PLwD about the nature and capabilities of the robot. Although Ethics of Technology and Human-Robot Interaction (HRI) explored the concept of Social Robotic Deception (SRD) and its implications, existing discussions remain largely theoretical and detached from the lived realities of dementia care. We know little about how caregivers see and envision the use of SRD in dementia care practice. To address this gap, we conducted two online focus groups with both formal and informal caregivers, with the aim of appraising caregivers' attitudes towards SRD and how they would implement or mediate deception in everyday practice. Critically, we focused on caregivers operating in China, a country of Confucian influence where family caregiving is regarded as a moral duty and leveraging institutional care is stigmatized. Our work contributes empirically grounded insights that highlight lived reality in dementia care shaped by culture for ethical SRD design. |
|
| Fernandes, Neil |
Neil Fernandes, Tehniyat Shahbaz, Emily Davies-Robinson, Yue Hu, and Kerstin Dautenhahn (University of Waterloo, Waterloo, Canada; United for Literacy, Toronto, Canada) Newcomer children face barriers in acquiring the host country’s language and literacy programs are often constrained by limited staffing, mixed-proficiency cohorts, and short contact time. While Socially Assistive Robots (SARs) show promise in education, their use in these socio-emotionally sensitive settings remains underexplored. This research presents a co-design study with program tutors and coordinators, to explore the design space for a social robot, Maple. We contribute (1) a domain summary outlining four recurring challenges, (2) a discussion on cultural orientation and community belonging with robots, (3) an expert-grounded discussion of the perceived role of an SAR in cultural and language learning, and (4) preliminary design guidelines for integrating an SAR into a classroom. These expert-grounded insights lay the foundation for iterative design and evaluation with newcomer children and their families. |
|
| Fernández-Llamas, Camino |
Cristina Abad-Moya, Irene González Fernández, Alexis Gutiérrez-Fernández, Francisco J. Rodríguez Lera, and Camino Fernández-Llamas (University of León, León, Spain; Rey Juan Carlos University, Madrid, Spain) In everyday environments, robots must be able to detect when people intend to initiate interaction and to communicate their engagement state in an interpretable manner. Although engagement has been widely studied in human–robot interaction, many existing approaches rely on controlled settings or limited perceptual modalities, leaving open questions about how non-expert users naturally attempt to initiate interaction and how engagement states should be signalled during early interaction. An online pre-study questionnaire with 64 participants was conducted to capture user expectations regarding interaction initiation and engagement feedback. The results indicated a preference for speech- and gaze-based strategies, as well as expectations of clear signals such as robot orientation, verbal acknowledgement, and visual feedback. These insights informed the design of a multimodal engagement system integrating auditory and visual cues and providing incremental feedback to distinguish between attention and confirmed readiness. The system was evaluated in a semi-naturalistic study with 15 participants in a domestic environment. The results show that users were generally able to attract the robot’s attention without prior instruction, while providing minimal information about the robot’s perceptual capabilities led to more consistent interpretation of its engagement responses. The findings provide empirical insight into interaction initiation strategies and highlight the importance of transparent engagement signalling in human–robot interaction. |
|
| Fernando, Marcelino Julio |
Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. |
|
| Fiedler, Lena |
Sophie Schwartz, Lena Fiedler, and Marty Friedrich (TU Berlin, Berlin, Germany; TU Chemnitz, Chemnitz, Germany) Service robots are increasingly being deployed in public spaces, yet their accessibility for people with disabilities remains underexplored. This study presents an exploratory field investigation of blind and visually impaired (BVI) people's encounters with an autonomous park cleaning robot. Using on-site observations, individual interviews, and a subsequent focus group, it was examined how participants perceived the robot, understood its task, and evaluated potential barriers within a real deployment context. The findings show that although the robot was acoustically perceivable, its purpose, actions, and interaction expectations remained unclear, leading to uncertainty during incidental encounters. Visual-only communication, low-contrast design, and hard-to-perceive safety instructions further limited perceivability and understandability. The study demonstrates that field-based evaluation is crucial for revealing real-world barriers overlooked in laboratory settings and underscores the need to involve blind and visually impaired (BVI) people as experts in the design of public-space robotics. These insights complement existing accessibility guidelines and highlight the importance of inclusive, accessible robot design to ensure that service robots do not become new barriers in public environments. |
|
| Fiori, Martina |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. |
|
| Fischedick, Söhnke Benedikt |
Söhnke Benedikt Fischedick, Robin Schmidt, Benedict Stephan, and Horst-Michael Gross (TU Ilmenau, Ilmenau, Germany) Voice-based interaction offers an intuitive way for untrained users to control mobile robots, but existing speech interfaces often rely on intent maps or robot-specific pipelines that are difficult to transfer across robots, backends, and applications. Recent multimodal large language models (LLMs) can process audio and produce structured function calls, enabling a more flexible form of voice interaction. This late-breaking report proposes a vendor-independent integration pattern (cloud, edge server, or local) that exposes robot capabilities as Model Context Protocol (MCP) tools and maps them to existing middleware interfaces such as remote procedure calls (RPCs). Continuous sensor streams remain in the middleware and are accessed through a snapshot mechanism that returns the most recent buffered value on demand. We demonstrate the approach on a mobile co-presence robot using a lightweight audio pipeline built around wake word detection (WWD), voice activity detection (VAD), multimodal LLM inference, and text-to-speech (TTS). MCP tools trigger capabilities such as navigation, communication, and projector control. The architecture provides a general pattern for robots and middlewares, enabling flexible voice interaction without rewriting intent logic. |
|
| Fischer, Arielle |
Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Fischer, Max |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Fitter, Naomi T. |
Triniti Armstrong, Courtney J. Chavez, Rhian C. Preston, and Naomi T. Fitter (Oregon State University, Corvallis, USA) Prolonged computer use has become the norm for a wide variety of fields. The sedentary practices that often accompany this computer use can lead to a number of health challenges, from cardiovascular and musculoskeletal issues to ocular health problems. Past work by our research group took preliminary steps to address these issues by evaluating a socially assistive robot (SAR)-based break-taking system with no online learning abilities. Based on their initial findings, which showed the robot to effectively encourage break-taking behaviors during computer use and to be more engaging and enjoyable to use compared to a non-robotic alternative, we present methods for data collection in this current paper. Specifically, we aimed 1) to enhance the past SAR system by adding online Q-learning capabilities and 2) to evaluate the updated system's policy generation and how well the final policies aligned with our expectations from prior work. Our results show evidence that the system is successfully generating unique policies for each participant, although the limited match between the expected and resulting policies surprised us. Our work can help SAR researchers understand how to implement Q-learning when using sparse data. |
|
| Ford, Tamsin |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. |
|
| Forslund, Melker |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Forster, Deborah |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Förster, Frank |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Forsyth, Matthew |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. |
|
| Foster, Mary Ellen |
Amol Deshmukh, Mary Ellen Foster, and Emily S. Cross (ETH Zurich, Zurich, Switzerland; University of Glasgow, Glasgow, UK) Socially Assistive Robots (SARs) show promise for initiating positive behaviour change, yet sustaining habits beyond the intervention period remains a persistent challenge. This paper moves beyond interaction-based analysis to propose mathematical frameworks for modelling habit formation dynamics. We introduce three complementary models: Probabilistic Habit Formation optimised via Reinforcement Learning, Rational Habit Strength with hybrid decay, and Long-Term Retention with booster interventions. Using school-based handwashing as an exemplar, Monte Carlo simulations (𝑁 = 1000) predict that RL-optimised reinforcement could accelerate habit formation by 32%, while strategic boosters may maintain habit strength 1.3× above withdrawal baselines. These frameworks offer a potentially generalisable approach for robotassisted behaviour change across health, education, and other socially assistive contexts, pending empirical validation. Andrew Blair, Mary Ellen Foster, Peggy Gregory, and Koen Hindriks (University of Glasgow, Glasgow, UK; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) The field of human-robot interaction frequently proclaims the inevitable coming of the social robot era, with claims that social robots are increasingly being deployed in the real world. However, in practice, social robots remain scarce in everyday environments. In addition, HRI research rarely explores robots through an organisational lens. This results in a lack of evidence-based understanding of the organisational conditions that are key to the presence--or absence--of social robots in the real world, which are often more decisive than technical sophistication. In this paper, we motivate why organisational context is crucial to the investigation of real-world social robots and provide examples of how this shapes robot acceptance. We detail the methodology of our ongoing empirical research with client organisations and robot developers. Through this critical organisational lens, we learn where social robots are, what they are doing, how they are designed, and why organisations are deploying them. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Foulen, Daniel J. |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. |
|
| Friedman, Sean |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Friedrich, Marty |
Sophie Schwartz, Lena Fiedler, and Marty Friedrich (TU Berlin, Berlin, Germany; TU Chemnitz, Chemnitz, Germany) Service robots are increasingly being deployed in public spaces, yet their accessibility for people with disabilities remains underexplored. This study presents an exploratory field investigation of blind and visually impaired (BVI) people's encounters with an autonomous park cleaning robot. Using on-site observations, individual interviews, and a subsequent focus group, it was examined how participants perceived the robot, understood its task, and evaluated potential barriers within a real deployment context. The findings show that although the robot was acoustically perceivable, its purpose, actions, and interaction expectations remained unclear, leading to uncertainty during incidental encounters. Visual-only communication, low-contrast design, and hard-to-perceive safety instructions further limited perceivability and understandability. The study demonstrates that field-based evaluation is crucial for revealing real-world barriers overlooked in laboratory settings and underscores the need to involve blind and visually impaired (BVI) people as experts in the design of public-space robotics. These insights complement existing accessibility guidelines and highlight the importance of inclusive, accessible robot design to ensure that service robots do not become new barriers in public environments. |
|
| Frisk, Erik |
Hannah Pelikan, Karin Stendahl, Franziska Babel, Ola Johansson, and Erik Frisk (Linköping University, Linköping, Sweden) Mobile robots must behave intelligibly to be acceptable in public spaces. Designing social navigation algorithms for delivery robots requires different areas of expertise. The paper reports on an interdisciplinary collaboration between two ethnomethodological conversation analysts, a human factors psychologist, and two motion planning engineers. Based on video recordings of a robot moving among people, the team developed and implemented different sound and movement designs, which were iteratively tested in real-world deployments. This work contributes insights on how interdisciplinary collaboration can be facilitated in the area of social robot navigation and an iterative process for designing robot sound and movement grounded in real-world observations. |
|
| Fu, Di |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Fukuchi, Yosuke |
Muneeb Imtiaz Ahmad and Yosuke Fukuchi (Swansea University, Swansea, UK; Tokyo Metropolitan University, Hino-shi, Japan) Current research on measuring human perceptions of fairness in Human-Robot Teams (HRTs) has primarily focused on subjective metrics, such as rating statements either during or at the conclusion of interactions. This suggests a gap in examining the dynamic and evolving nature of fairness perceptions objectively during human-robot collaboration. In this paper, we introduce a novel cognitive model that enables individuals to perceive fairness dynamically throughout an HRT experiment. This model is inspired by the Bayesian Theory of Mind, allowing us to infer perceptions of fairness in real-time. The core idea of the model is that fairness perception stems from a person's ongoing inference about the bias in a robot's value function. We establish an equation that translates this inference into a perceived fairness value, which is based not only on the inferred bias but also on the confidence of that inference. A qualitative comparison of the model's performance with a previous human-robot collaboration study suggests that it can effectively capture key trends in human fairness perception dynamically. These findings highlight the model's potential applicability, and it may be utilized in resource distribution algorithms in HRTs to promote fairer collaboration. |
|
| Fushimi, Tatsuki |
Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| Gajarla, Harshavardhan Reddy |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Gamboa, Mafalda |
Anna Dobrosovestnova, Barry Brown, Emanuel Gollob, Mafalda Gamboa, and Masoumeh Mansouri (Interdisciplinary Transformation University, Linz, Austria; Stockholm University, Stockholm, Sweden; University of Arts Linz, Linz, Austria; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Birmingham, Birmingham, UK) HRI 2026 takes place amid profound socio-political turbulence marked by rising authoritarianism, the consolidation of technological power, and the expanding use of robotics for warfare. These global conditions create an affective atmosphere that seeps into our field: a mix of attachment to techno-determinist and techno-solutionist narratives, unease with 'business as usual,' and a tentative search for alternatives. As HRI scholars and designers, we recognize how the wider socio-political tensions resonate within our own practices, shaping what we take to be possible, necessary, or inevitable in research and design. In this half-day, in-person workshop, we mobilize three affective orientations - cruel optimism, lucid despair, and precarious hope - as resources for reflection, critique, and experimentation. Through short provocations, discussions, and a speculative group activity, participants will be invited to inhabit these affects to question dominant narratives that sustain HRI, confront systemic challenges, and collectively explore alternative trajectories for research, design, and community building. Sofia Thunberg, Mafalda Gamboa, Meagan B. Loerakker, Patricia Alves-Oliveira, and Hannah R.M. Pelikan (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; TU Wien, Vienna, Austria; University of Michigan at Ann Arbor, Ann Arbor, USA; Linköping University, Linköping, Sweden) In the Human-Robot Interaction community, Wizard of Oz (WoZ) is a commonly employed method where researchers aim to study user perceptions of robot technologies regardless of technical limitations. Despite the continued usage of WoZ, questions concerning ethical tensions and effects on the wizard remain. For instance, how do wizards experience interacting through technology, given the different roles and characters to enact, and the different environments to situate themselves in. In addition, the wizard's experiences and affects on results, continues to be under-explored. The goal of this workshop is to surface ethical, practical, methodological, personal, and philosophical tensions in the WoZ method. Though a collaborative session, we seek to develop a deeper understanding of what it means to be a wizard through eliciting first-person experiences of researchers. As a result, we hope to formulate guidelines for future wizards. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Gandotra, Nayesha |
Nayesha Gandotra (Carnegie Mellon University, Pittsburgh, USA) Mobile robots are increasingly prevalent in human-centric environments such as restaurants, hospitals, and public facilities. A critical challenge is developing navigation policies that balance efficiency with human safety and social norms. Traditional approaches rely on hand-crafted social cost functions or human trajectory prediction, which struggle with generalization or real-time performance. End-to-end learning methods offer flexibility but often require large-scale human-robot data. Recent work like OLiVia-Nav [Narasimhan et al, 2024] shows promise using vision-language models for social reasoning, but still depends on learned trajectory generators. We present a modular pipeline combining fast search-based motion planning with a learned trajectory selector jointly trained on visual embeddings and candidate trajectories. Additionally, to support long-horizon behavior, complex reasoning, and user feedback, we integrate a Large Language Model (LLM)-driven goal assignment module that enables high-level task planning and contextual adaptability. We demonstrate our approach in a simulated restaurant navigation setting, but importantly, our pipeline generalizes to novel human-robot interaction scenarios. This is made possible by the use of VLMs, which provide zero-shot understanding of social context without task-specific pretraining. In this paper, we primarily propose the pipeline architecture and present early experimental results, and we welcome feedback on this new direction for scalable, socially-aware robot navigation. |
|
| Ganesh, Gowrishankar |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Ganji, Sruthi |
Raiyan Ashraf, Yanni Liu, Sruthi Ganji, and Jong Hoon Kim (Kent State University, Kent, USA) Social robots frequently struggle to sustain meaningful engagement, often limited to surface-level interactions that lack conversational depth. To address this, we present a multimodal conversational architecture that integrates Motivational Interviewing (MI) strategies with situated perception. Key to this approach is a novel dual-stream perception engine: situated cue detection anchors dialogue in the user's immediate physical environment to establish common ground, while tri-modal affect inference (facial, vocal, linguistic) dynamically adjusts the conversation strategy based on real-time user emotion for facilitating empathy. Our system employs a hybrid Large Language Model (LLM) architecture, combining a lightweight model for low-latency fluency and a reasoning model for high-level planning, to guide users through progressive stages of dialogue from rapport-building to deep reflection. A pilot study with the Pepper robot demonstrates that this physically grounded, MI-guided approach successfully facilitates emotional reminiscence and enhances perceived empathy and engagement. These findings suggest that the proposed framework is a promising foundation for next-generation empathic agents, with significant potential applications in cognitive stimulation for aging populations and therapeutic social companionship. |
|
| García, Daniel Hernández |
Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Garcia Cardenas, Juan José |
Xiaoxuan Hei, Sofia Gioumatzidou, Juan José Garcia Cardenas, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Palaiseau, France; University of Macedonia, Thessaloniki, Greece; ENSTA Paris, Paliseau, France; ENSTA Paris, Paris, France) Trust in human-robot teams is critical for effective collaboration, but the dynamics of trust transfer between advisory and executing robots remain underexplored. This study investigated how the accuracy of advice provided by a humanoid robot (NAO) and the execution reliability of a robotic arm (Franka) influence human trust and reliance on advice in a collaborative drawing task. Participants completed three drawing tasks while receiving suggestions from NAO, with NAO providing either accurate or inaccurate advice and Franka executing actions with high or low reliability. Results showed that accurate advice from NAO increased participants' trust in both NAO and Franka, while inaccurate advice neither increase nor decrease trust in Franka, demonstrating trust spillover and trust resilience. Franka's execution reliability did not significantly affect adherence to NAO's suggestions, although low performance in both robots reduced task satisfaction, decreased reliance, and increased deliberation time. These findings highlight the asymmetrical and context-dependent nature of trust transfer in human-robot interaction, emphasizing the importance of both informational accuracy and execution reliability for effective collaboration. |
|
| Garrell, Anais |
Edison Jair Bejarano Sepulveda, Valerio Bo, Alberto Sanfeliu, and Anais Garrell (CSIC-UPC, Barcelona, Spain) Robots working in spaces shared by people need more than geometric mapping: they must recognize people, understand social context, and decide whether to proceed or negotiate passage. Traditional navigation pipelines lack this semantic understanding, often failing when progress depends on human cooperation. We introduce a Perception–Awareness–Decision (PAD) framework that systematically combines Simultaneous Localization and Mapping (SLAM) with Vision–Language Models (VLMs), speech recognition, and Large Language Models (LLMs), rather than simply stacking modules. PAD tries to emulate human perceptual organization by fusing multi-modal cues into a unified situational-awareness map capturing geometry, social context, and linguistic intent. This representation enables the decision layer to choose adaptively between safe replanning and context-appropriate verbal interaction. In a corridor-blocking task, PAD improves task success, increases safety margins, and produces behaviour that participants judged as more socially appropriate than a geometric baseline. These findings offer preliminary evidence that combining VLM-derived semantics with structured situational awareness can support more socially aware robot navigation. Valerio Bo, Anais Garrell, and Alberto Sanfeliu (CSIC-UPC, Barcelona, Spain) Robots that operate alongside people increasingly depend on intention-recognition models to anticipate human motion and adapt their behavior in socially appropriate ways. However, these models vary widely in both latency and accuracy, leading to different trade-offs between reacting quickly and correctly. Although these technical differences are well documented, it remains unclear how they shape the user’s experience of interacting with a robot. To examine how these translate into human perception, we conduct a preliminary user study comparing three intention-recognition models: a fast but low-accuracy model (Geo), an intermediate model (LSTM), and a slower but highly accurate model (Fusion). Participants interacted with a mobile robot controlled by each model and rated their experience across key dimensions of social interaction. Overall, the findings suggest that socially fluent interaction does not emerge from speed or accuracy alone, but from the balance of timely, reliable, and predictable robot behavior. |
|
| Garrison, Christopher |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. |
|
| Garvey, Tyler |
Tyler Garvey, Elodie Koo, and Ryan Schermerhorn (Colby College, Waterville, USA) Our proposed design is an armband that will prompt users to maintain a routine and notify caregivers of emergencies. The device utilizes an Arduino Nano microcontroller, allowing the user to input their routine data over the internet. Haptic and audio feedback will signify different parts of the routine, played through the speaker and motor elements. Pressing a button will allow for rejection of promptings, while holding it will contact help. The device will also detect dangerous scenarios for the wearer using an accelerometer and a temperature sensor. This device aims to improve the health and well-being of ADRD patients. |
|
| Gazaryan, Georgii |
Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. |
|
| Geijer-Simpson, Emma |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. |
|
| Geiskkovitch, Denise Y. |
Phillip Bach-Luong Tran Jr., Julia Rosén, and Denise Y. Geiskkovitch (McMaster University, Hamilton, Canada; Stockholm University, Stockholm, Sweden) Social robots have the potential to support children's emotion regulation development, especially during early childhood, where emotion regulation skills enhance social and academic development. However, existing robots are not designed specifically to support young children's long-term development of emotion regulation skills in domestic settings. We introduce a prototype of Emotion Buddy — a child-led, parent-supported robot designed for routine, at-home use by children ages 2 to 6. The robot emulates 6 emotions via sound, haptics, and shape transformation based on real-time sensing of sound, movement, and touch. We intend for children to interact with the robot as part of their daily routine to identify and respond to its emulated emotions, thereby providing frequent opportunities to practice their emotion regulation skills. We discuss our design process, the current prototype, and future work to evaluate the robot's efficacy in sustained emotion regulation learning. |
|
| Gelmi de Freitas Salvo, Guilherme |
Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. |
|
| Gena, Cristina |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| Georgara, Athina |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Ghanta, Harshith |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. |
|
| Ghose, Debasmita |
Kayla Matheus, Debasmita Ghose, Jirachaya (Fern) Limprayoon, Michal A. Lewkowicz, and Brian Scassellati (Yale University, New Haven, USA; Massachusetts Institute of Technology, Cambridge, USA) We present the Ommie Deployable System (DS), a replicable, autonomous platform for long-term, in-the-wild mental health applications with the Ommie robot. Ommie DS builds on prior anxiety-focused deployments by introducing robust hardware, enhanced sensing, modular software, a companion tablet, and wireless multi-device architecture to support daily deep-breathing interactions in homes. Designed using off-the-shelf components and rapid-prototyped enclosures, the system enables reliable multi-week use, remote monitoring, and easy customization. By providing a durable, open, and researcher-friendly platform, Ommie DS supports scalable, real-world study of HRI for mental health and well-being. |
|
| Ghosh, Pratyusha |
Pratyusha Ghosh and Laurel D. Riek (University of California at San Diego, La Jolla, USA) Telepresence robots have the potential to support people with chronic illnesses (PwCI) by enabling remote participation with greater physical and social agency than traditional videoconferencing. However, these robots can be cognitively exhausting to use. This is exacerbated by PwCI's need to constantly weigh the potential benefits and risks of participation due to fluctuating symptoms. In our work with PwCI, we explore how we might design telepresence robots that minimize the health consequences of remote participation. To do this, we leverage pacing, a self-management strategy PwCI use to balance activity and rest. Ultimately, our research helps advance the accessibility of telepresence robots by foregrounding the embodied and sociopolitical dimensions of PwCI's episodic disability while challenging the social norms of rest/productivity. Sandhya Jayaraman, Deep Saran Masanam, Pratyusha Ghosh, Alyssa Kubota, and Laurel D. Riek (University of California at San Diego, La Jolla, USA; San Francisco State University, San Francisco, USA) This workshop explores the social, ethical, and practical implications of deploying robots for clinical or assistive contexts. Robots hold potential to expand access to disabled communities, such as by providing physical or cognitive assistance, and enabling new ways of participating in social activities. They can assist healthcare workers with ancillary tasks and care delivery, supporting them to work at the top of their license. However, the real-world deployment of robots across these contexts can create social, ethical, and organizational challenges, or downstream effects. Some challenges include the potential for robots to undermine the agency of disabled people and reinforce their marginalization on a societal level. In clinical settings, robots may also disrupt care delivery, shift roles, and displace labor. To explore these issues, this workshop will invite trans-disciplinary speakers and participants from academia, industry, government, and non-academics with/without affiliations interested in surfacing their lived experiences in using or developing such robots. Through panel discussions, group ideation activities and interactive poster sessions, this workshop intends to critically and creatively explore the future of robots for clinical and assistive contexts. Topics will include the downstream implications of robots in clinical or assistive contexts and potential upstream interventions. Outcomes of the workshop will include publishing key workshop artifacts on our website and initiating a follow-up journal special issue. |
|
| Giannaccini, Maria Elena |
Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. |
|
| Gibson, Jenny L. |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. |
|
| Gilbert, Jason F. |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Gillet, Sarah |
Yujing Zhang, Iolanda Leite, and Sarah Gillet (KTH Royal Institute of Technology, Stockholm, Sweden) Aging populations increasingly face challenges such as reduced social engagement and heightened risks of isolation. Group-based activities present valuable opportunities to promote older adults’ emotional well-being and cognitive stimulation. Although prior HRI research has examined robots in group settings and as tools for individualized support, limited work has explored how robot-facilitated activities should be designed to support older adult groups' interaction in real community contexts. We developed a dual-robot version of the Swedish word-description game "With Other Words" and conducted in-the-wild deployments with fifteen older adults across local community centers. Through thematic analysis of post-session interviews and researcher observations, we identified key factors and design recommendations that are can help future work to build functioning interactions between robots and groups of older adults. |
|
| Gioumatzidou, Sofia |
Xiaoxuan Hei, Sofia Gioumatzidou, Juan José Garcia Cardenas, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Palaiseau, France; University of Macedonia, Thessaloniki, Greece; ENSTA Paris, Paliseau, France; ENSTA Paris, Paris, France) Trust in human-robot teams is critical for effective collaboration, but the dynamics of trust transfer between advisory and executing robots remain underexplored. This study investigated how the accuracy of advice provided by a humanoid robot (NAO) and the execution reliability of a robotic arm (Franka) influence human trust and reliance on advice in a collaborative drawing task. Participants completed three drawing tasks while receiving suggestions from NAO, with NAO providing either accurate or inaccurate advice and Franka executing actions with high or low reliability. Results showed that accurate advice from NAO increased participants' trust in both NAO and Franka, while inaccurate advice neither increase nor decrease trust in Franka, demonstrating trust spillover and trust resilience. Franka's execution reliability did not significantly affect adherence to NAO's suggestions, although low performance in both robots reduced task satisfaction, decreased reliance, and increased deliberation time. These findings highlight the asymmetrical and context-dependent nature of trust transfer in human-robot interaction, emphasizing the importance of both informational accuracy and execution reliability for effective collaboration. |
|
| Giuliani, Manuel |
Pascal Haberkorn, Corinna Mack, and Manuel Giuliani (University of Applied Sciences Kempten, Kempten, Germany; Kempten University of Applied Sciences, Kempten, Germany) This study investigates whether a dialogue-based robot, employing motivational interviewing techniques, can enhance the intrinsic motivation of older adults to engage with their local social networks. A user study was conducted in which a Furhat robot interacted with participants, first presenting information about upcoming local social events and subsequently using motivational interviewing to encourage reflection on their personal motivation to attend. The study included 42 older adults (aged between 57 and 90 years old, mean age = 73.9 years). Participants completed the Situational Intrinsic Motivation Scale (SIMS) before and after the interaction with the robot to assess changes in intrinsic motivation, extrinsic motivation, identified regulation, and external regulation. Additionally, the Negative Attitudes Toward Robots Scale (NARS) was administered, and semi-structured interviews were conducted post-interaction. Results indicated no statistically significant changes in SIMS scores, though a trend toward significance was observed for identified regulation (p = 0.076). Analysis of NARS scores and qualitative interview data revealed predominantly positive attitudes toward the robot, with many participants expressing openness to future use of dialogue-based robots for social motivation. These findings suggest promising avenues for further research on the potential of robotic systems to support social engagement among older adults. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Gnisa, Felix |
Arjita Mital, Felix Gnisa, Utku Norman, and Nora Weinberger (KIT, Karlsruhe, Germany) This Late Breaking Work presents a low-threshold, unsupervised public exhibit designed to explore how non-expert audiences imagine and negotiate future human–robot interactions in ethically charged everyday situations. The exhibit, installed in Karlsruhe, Germany, invited participants to engage with four dilemma-based scenarios where participants were prompted to decide how a social robot should act confronting questions of moral delegation and machine agency. The activity generated rich, situated reflections on responsibility, safety, care, and the limits of automation. Findings reveal context-dependent expectations that balance efficiency against dignity, human judgment, and relational preservation, shaped by perceived stakes, social context, and the specific embodiment of the robot involved. Through this we demonstrate how minimally supervised participatory formats can surface normative expectations and support inclusive, responsible robot design. |
|
| Gobec, Zoja |
Zoja Gobec, Joyce den Hertog, and Kim Baraka (Sioux Technologies, Amsterdam, Netherlands; AKOB, den Haag, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) With robots increasingly being used for artistic expression in interactive performances, this research investigates the production of expressive swarm behaviour that could form the basis for an interactive performance between a dancer and a swarm of drones. We contribute a mapping of Laban Effort parameters - a common movement analysis framework - onto a particle swarm and integrate it into an interactive prototype. The system accepts human motion as input and generates responsive swarm behaviour with the Boids algorithm as the foundational behaviour model. In a user study evaluating the mapping (N=17), we show that the Space and Time parameters were recognised significantly better than Weight and Flow, suggesting that parameters connected to embodied cues such as intention and emotion are more challenging to computationally implement, and need further refinement. The novel mapping, along with the interactive system and user study insights, offers an initial step towards practical applications in choreography development, interactive performance, or art installations, as well as designing expressive frameworks with human-guided swarm control. |
|
| Goerke, Charly |
Ilona Buchem, Jessica Kazubski, and Charly Goerke (Berlin University of Applied Sciences, Berlin, Germany) This paper presents the design of NEFFY 2.0, a social robot designed as a haptic slow-paced breathing companion for stress reduction, and reports findings from a mixed-methods user study with 14 refugees from Ukraine. Developed through a user-centered design process, NEFFY 2.0 builds on NEFFY 1.0 and integrates embodiment and multi-sensory interaction to provide low-threshold, accessible guidance of slow-paced breathing for stress relief, which may be particularly valuable for individuals experiencing prolonged periods of anxiety. To evaluate effectiveness, an experimental comparison of a robot-assisted breathing intervention versus an audio-only condition was conducted. Measures included subjective ratings and physiological indicators, such as heart rate (HR), heart rate variability (HRV) using RMSSD parameter, respiratory rate (RR), an galvanic skin response (GSR), alongside qualitative data from interviews exploring user experience and perceived support. Qualitative findings showed that NEFFY 2.0 was perceived as intuitive, calming and supportive. Survey results showed a substantially larger effect in significant reduction of perceived stress in the NEFFY 2.0 condition compared to audio-only. Physiological data reveled mixed results combined with large inter-personal variability. Three patterns of breathing practice with NEFFY 2.0 were identified using k-means clustering. Despite the small sample size, this study makes a novel contribution by providing empirical evidence of stress reduction in a vulnerable population through a direct comparison of robot-assisted and non-robot conditions. The findings position NEFFY 2.0 as a promising low-threshold tool that supports stress relief and contributes to the vision of HRI empowering society. |
|
| Goldanloo, Melody |
Claire Lewis, Melody Goldanloo, Matthew Murray, Zachary Kaufman, and Tom Williams (Colorado School of Mines, Golden, USA; University of Colorado at Boulder, Boulder, USA) Museums are an effective informal learning environment for science, art and more. Many researchers have proposed museum guide robots, where the outcomes of the interactions are based solely on the robot’s communication. In contrast, we explored how a robot could encourage learning and teamwork through human-human interactions. To achieve this, we created “Chase,” a novel zoomorphic robot that presents “Data Chase,” an interactive museum activity. We designed Chase to enable museum-goers to learn about the exhibits together by prompting users to complete a teamwork based scavenger hunt for rewards. |
|
| Gollob, Emanuel |
Anna Dobrosovestnova, Barry Brown, Emanuel Gollob, Mafalda Gamboa, and Masoumeh Mansouri (Interdisciplinary Transformation University, Linz, Austria; Stockholm University, Stockholm, Sweden; University of Arts Linz, Linz, Austria; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Birmingham, Birmingham, UK) HRI 2026 takes place amid profound socio-political turbulence marked by rising authoritarianism, the consolidation of technological power, and the expanding use of robotics for warfare. These global conditions create an affective atmosphere that seeps into our field: a mix of attachment to techno-determinist and techno-solutionist narratives, unease with 'business as usual,' and a tentative search for alternatives. As HRI scholars and designers, we recognize how the wider socio-political tensions resonate within our own practices, shaping what we take to be possible, necessary, or inevitable in research and design. In this half-day, in-person workshop, we mobilize three affective orientations - cruel optimism, lucid despair, and precarious hope - as resources for reflection, critique, and experimentation. Through short provocations, discussions, and a speculative group activity, participants will be invited to inhabit these affects to question dominant narratives that sustain HRI, confront systemic challenges, and collectively explore alternative trajectories for research, design, and community building. |
|
| Gomez, Randy |
Eric Nichols, Miguel Mejia Tobar, and Randy Gomez (Honda Research Institute Japan, Wakoshi, Japan; Honda Research Institute Japan, Wako, Japan) Lifelike expressive behavior by social robots requires seamless coordination of facial expressions, body language, and tone of voice–all semantically aligned with speech content. While prior work has explored co-speech gesture generation, coordinating multiple expressive channels from a single semantic analysis remains under-explored. To address this gap, we propose holistic LLM-based generation, where an LLM analyzes robot dialog and generates synchronized behavior timelines that align vocal delivery and physical expression by directly inferring from speech semantics. In a pilot study on the tabletop robot Haru (N=23), 70% of participants preferred this approach over a heuristic baseline, characterizing it as more ”natural” and ”human-like”, with preliminary trends toward improved perceived agency (d=0.33, p=.128) and animacy (d=0.27, p=.212). However, qualitative analysis reveals a continuum of desired expression varying in frequency and intensity, with excessive expression triggering negative reactions. Navigating this design space presents new challenges for expressive robot behavior generation. |
|
| Gong, Bingcen |
Weijie Qin, Qiyao Wang, Bingcen Gong, and Yijia Luo (Tsinghua University, Beijing, China) During dining in restaurants, oil splashes are readily appraised by users as a negative event. Critically, without timely intervention, the initial irritation can accumulate and evolve into a vicious cycle of escalating negativity. This reaction may not only impair the overall dining experience, but also dominate the user's cognitive focus and lead to lasting emotional distress. To address this, we present Seesoil—a desktop interactive robot based on the "Weak Robot" concept. Designed to resemble a condiment bottle, it blends naturally into the table setting. Rather than addressing the stain directly, Seesoil employs deliberately clumsy motions and voice interaction to guide users in reappraising the situation during the early stage of negative emotion generation. By redirecting attention towards a more positive interactive experience, it mitigates the accumulation of negative affect and serves as an emotional companion throughout the meal. |
|
| Gonnermann-Müller, Jana |
Nicolas Leins, Jana Gonnermann-Müller, Malte Teichmann, and Sebastian Pokutta (Zuse Institute Berlin, Berlin, Germany; Weizenbaum Institute, Berlin, Germany; University of Potsdam, Potsdam, Germany; TU Berlin, Berlin, Germany) Augmented Reality (AR) offers powerful visualization capabilities for industrial robot training, yet current interfaces remain predominantly static, failing to account for learners' diverse cognitive profiles. In this paper, we present an AR application for robot training and propose a multi-agent AI framework for future integration that bridges the gap between static visualization and pedagogical intelligence. We report on the evaluation of the baseline AR interface with 36 participants performing a robotic pick-and-place task. While overall usability was high, notable disparities in task duration and learner characteristics highlighted the necessity for dynamic adaptation. To address this, we propose a multi-agent framework that orchestrates multiple components to perform complex preprocessing of multimodal inputs (e.g., voice, physiology, robot data) and adapt the AR application to the learner's needs. By utilizing autonomous Large Language Model (LLM) agents, the proposed system would dynamically adapt the learning environment based on advanced LLM reasoning in real-time. |
|
| Gonzalez, Rex |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| González Fernández, Irene |
Cristina Abad-Moya, Irene González Fernández, Alexis Gutiérrez-Fernández, Francisco J. Rodríguez Lera, and Camino Fernández-Llamas (University of León, León, Spain; Rey Juan Carlos University, Madrid, Spain) In everyday environments, robots must be able to detect when people intend to initiate interaction and to communicate their engagement state in an interpretable manner. Although engagement has been widely studied in human–robot interaction, many existing approaches rely on controlled settings or limited perceptual modalities, leaving open questions about how non-expert users naturally attempt to initiate interaction and how engagement states should be signalled during early interaction. An online pre-study questionnaire with 64 participants was conducted to capture user expectations regarding interaction initiation and engagement feedback. The results indicated a preference for speech- and gaze-based strategies, as well as expectations of clear signals such as robot orientation, verbal acknowledgement, and visual feedback. These insights informed the design of a multimodal engagement system integrating auditory and visual cues and providing incremental feedback to distinguish between attention and confirmed readiness. The system was evaluated in a semi-naturalistic study with 15 participants in a domestic environment. The results show that users were generally able to attract the robot’s attention without prior instruction, while providing minimal information about the robot’s perceptual capabilities led to more consistent interpretation of its engagement responses. The findings provide empirical insight into interaction initiation strategies and highlight the importance of transparent engagement signalling in human–robot interaction. |
|
| Gordon, Goren |
Ben H. Botzer, Goren Gordon, and Michal Gordon-Kiwkowitz (Tel Aviv University, Tel Aviv, Israel; Indiana University at Bloomington, Bloomington, USA; Holon Institute of Technology, Holon, Israel) Education is rapidly evolving with technology, yet teachers often struggle with low self-competence, curriculum integration challenges, and difficulty personalizing digital tools. GenAI "vibe-coding" can lower barriers by enabling natural-language interaction and building trust in AI-EdTech systems. We introduce TutorBotz, a GenAI tool that lets teachers program social robots as teaching assistants. With TutorBotz, teachers design lesson plans that social robots then deliver. In an exploratory study, five teachers and forty-eight primary and middle-school students used TutorBotz. Teachers created two lesson plans each, later taught by a NAO robot. Qualitative findings show that TutorBotz increased teachers’ confidence in using social robots, was easy to use, and fit diverse curricula. We also discuss its personalization benefits, technical concerns, and students’ enjoyment of robot-led lessons. Overall, TutorBotz represents a meaningful step toward empowering teachers to use social robots in the classroom. Zhennan Yi and Goren Gordon (Indiana University at Bloomington, Bloomington, USA; Tel Aviv University, Tel Aviv, Israel) Social robots have been increasingly used to support children's development of soft skills through interactive activities. While many studies have shown the benefits of using robots to foster such soft skills, most of the work designs robot behaviors that target only one skill within one task scenario. Yet in educational practice, there is a need to promote a combination of soft skills together. In this report, we introduce an integrated behavioral framework that enables social robots to support the promotion of multiple soft skills. We describe the development process, the extraction and organization of different behavioral strategies, and how the framework can be applied to design child-robot activities. Moving from single-skill interventions to integrated behavioral design, this work contributes a conceptual and methodological foundation for designing social robots that aim to foster multiple soft skills in children. |
|
| Gordon-Kiwkowitz, Michal |
Ben H. Botzer, Goren Gordon, and Michal Gordon-Kiwkowitz (Tel Aviv University, Tel Aviv, Israel; Indiana University at Bloomington, Bloomington, USA; Holon Institute of Technology, Holon, Israel) Education is rapidly evolving with technology, yet teachers often struggle with low self-competence, curriculum integration challenges, and difficulty personalizing digital tools. GenAI "vibe-coding" can lower barriers by enabling natural-language interaction and building trust in AI-EdTech systems. We introduce TutorBotz, a GenAI tool that lets teachers program social robots as teaching assistants. With TutorBotz, teachers design lesson plans that social robots then deliver. In an exploratory study, five teachers and forty-eight primary and middle-school students used TutorBotz. Teachers created two lesson plans each, later taught by a NAO robot. Qualitative findings show that TutorBotz increased teachers’ confidence in using social robots, was easy to use, and fit diverse curricula. We also discuss its personalization benefits, technical concerns, and students’ enjoyment of robot-led lessons. Overall, TutorBotz represents a meaningful step toward empowering teachers to use social robots in the classroom. |
|
| Gori, Monica |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Gosten, Sarah |
Sarah Gosten, Anna Maria Helene Abrams, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) Sexism is a constant presence in women’s lives, requiring ongoing decisions about if and how to respond. Previous research underscores the importance of allies in confrontations of sexism. We explore how women perceive a social robot that intervenes in sexist encounters. Female participants (n = 60) engaged in a game scenario where a sexist comment was made by a male confederate prompting the robot to intervene in one of three ways: 1. avoidant, 2. argumentative, or 3. morally judgmental. Results showed that exposure to sexist remarks lead to significantly increased negative emotions. Participants rated the perpetrator significantly worse on trust and perceived closeness than the human bystander and the robot who were both on par. The type of intervention had no mitigating effect in the ratings. |
|
| Goyal, Pranav |
Pranav Goyal, Andrew Stratton, and Christoforos Mavrogiannis (University of Michigan at Ann Arbor, Ann Arbor, USA) Legible motion enables humans to anticipate robot behavior during social navigation, but existing approaches largely assume open spaces, static interactions, and fully attentive pedestrians. We study legibility in the ubiquitous and realistic setting of hallway navigation through two user studies. Study 1 (N=45) evaluates how intent should be represented for legible navigation within a model predictive control framework. We find that expressing intent at the interaction level (i.e., passing side) and dynamically adapting it to human motion leads to smoother human trajectories and higher perceived competence than destination-based or non-legible baselines. Study 2 (N=45) examines whether legibility remains beneficial when pedestrians are cognitively distracted. While legible motion still reduced abrupt human motion relative to the non-legible baseline, subjective impressions were less sensitive under distraction. Together, these results demonstrate that legibility is most effective when grounded in immediate interaction objectives and highlight the need to account for attentional variability. |
|
| Graham, Nyra |
Joyce Yang, Phillip Johnson, Nyra Graham, and Karen Shamir (Cornell University, Ithaca, USA) Social isolation in shared spaces threatens community cohesion and well-being. This paper presents a social robot designed to spark human-to-human interactions. Inspired by public art projects, the robot invites individuals to collaborate on a shared LEGO structure by using expressive eye tracking, autonomous turning, and servo-actuated drawer movement. Field deployments in Cornell University spaces showed the robot effectively acted as a social catalyst: diverse participants contributed to a shared structure, and strangers initiated conversations about the robot. This work offers a functional prototype and insights on robots as mediators of human connection and promotes ideas of empowering collaboration. |
|
| Gramopadhye, Maitrey |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Greenberg, Benjamin |
Benjamin Greenberg and Lina Moe (Rutgers University, Piscataway, USA; Rutgers University, New Brunswick, USA) Demand for mobile robots operating in human environments has expanded rapidly, providing a proliferation of potential feedback channels for developers. While firms increasingly rely on customer input, deployment data, and regulatory guidance to refine autonomous systems, these mechanisms also interact in ways that complicate iteration. Drawing on sixteen interviews with industry professionals developing mobile robots, we analyze how these feedback channels shape design decisions, where they introduce friction, and why they frequently conflict. These interviews revealed that the following three mechanisms are among the most valuable channels to developers: feedback from customers, feedback from quantitative data, and feedback from regulators. We find that customer practices can obstruct data collection, data-driven improvement is constrained by safety and privacy requirements, and regulatory expectations raise reliability thresholds that slow deployment. By examining these cross-channel tensions, we highlight the structural bottlenecks that developers confront when building robots for complex, real-world settings. |
|
| Gregory, Peggy |
Andrew Blair, Mary Ellen Foster, Peggy Gregory, and Koen Hindriks (University of Glasgow, Glasgow, UK; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) The field of human-robot interaction frequently proclaims the inevitable coming of the social robot era, with claims that social robots are increasingly being deployed in the real world. However, in practice, social robots remain scarce in everyday environments. In addition, HRI research rarely explores robots through an organisational lens. This results in a lack of evidence-based understanding of the organisational conditions that are key to the presence--or absence--of social robots in the real world, which are often more decisive than technical sophistication. In this paper, we motivate why organisational context is crucial to the investigation of real-world social robots and provide examples of how this shapes robot acceptance. We detail the methodology of our ongoing empirical research with client organisations and robot developers. Through this critical organisational lens, we learn where social robots are, what they are doing, how they are designed, and why organisations are deploying them. |
|
| Grishko, Andrey |
Reut Katz, Nevo Heimann Saadon, Andrey Grishko, and Hadas Erel (Reichman University, Herzliya, Israel) Robots are increasingly integrated as support tools for enhancing human learning and problem-solving. In this study, we explore the design of a robotic object intended to support problem-solving experiences. The design guidelines are grounded in well-established cognitive strategies known to improve performance. We focus on two strategies in particular: (1) constructive feedback on performance and (2) social feedback that encourages self-explanation. To reduce distractions, we minimized the robot’s communicative load and kept its expressive behaviors simple. Through an iterative design process, we developed a small robotic printer that communicates through subtle non-verbal gestures (nodding, leaning, and gaze-like orientation) paired with minimal printed feedback. This combination aims to create a supportive, non-threatening inter action that provides clear performance guidance while conveying social presence. We describe the robot’s design process and propose an experimental study examining how constructive and social feedback influence problem-solving outcomes. |
|
| Groß, Roderich |
Genki Miyauchi, Roderich Groß, and Chaona Chen (University of Sheffield, Sheffield, UK; TU Darmstadt, Darmstadt, Germany) As robots become increasingly embedded in human–robot teamwork, understanding how humans perceive robot behavior is critical. This is especially relevant for swarm robots that rely on collective behavior to accomplish tasks. While prior research has explored how humans evaluate the abilities and behaviors of single robots, the perception of swarm robots remains relatively underexplored. Guided by the competence–warmth framework, we conducted a perception-based experiment in a collective search task, generating 125 robot teams by systematically manipulating three parameters: speed, separation distance, and local broadcast duration. Ninety participants observed the swarms, rated perceived warmth and competence, and reported team preferences. Results show that broadcast duration increased perceived warmth, separation distance enhanced perceived competence, and individual robot speed had no significant effect. Critically, social perceptions of warmth and competence were stronger predictors of team preference than task performance, with participants favoring swarms that appeared warm and competent over those that completed tasks fastest. These results underscore the importance of considering both technical performance and social attributes when designing robot swarms for effective collaboration with humans. |
|
| Gross, Horst-Michael |
Söhnke Benedikt Fischedick, Robin Schmidt, Benedict Stephan, and Horst-Michael Gross (TU Ilmenau, Ilmenau, Germany) Voice-based interaction offers an intuitive way for untrained users to control mobile robots, but existing speech interfaces often rely on intent maps or robot-specific pipelines that are difficult to transfer across robots, backends, and applications. Recent multimodal large language models (LLMs) can process audio and produce structured function calls, enabling a more flexible form of voice interaction. This late-breaking report proposes a vendor-independent integration pattern (cloud, edge server, or local) that exposes robot capabilities as Model Context Protocol (MCP) tools and maps them to existing middleware interfaces such as remote procedure calls (RPCs). Continuous sensor streams remain in the middleware and are accessed through a snapshot mechanism that returns the most recent buffered value on demand. We demonstrate the approach on a mobile co-presence robot using a lightweight audio pipeline built around wake word detection (WWD), voice activity detection (VAD), multimodal LLM inference, and text-to-speech (TTS). MCP tools trigger capabilities such as navigation, communication, and projector control. The architecture provides a general pattern for robots and middlewares, enabling flexible voice interaction without rewriting intent logic. |
|
| Grünewald, Niklas |
Johannes Kraus, Niklas Grünewald, Charlotte Kapell, and Marlene Wessels (University of Mainz, Mainz, Germany) Robot bullying - purposeful obstructive or harmful behavior to-ward robots - is widely discussed but still under-researched, with mixed findings under realistic conditions. In this field experi-ment (N = 35), we tested how robot social role behavior (coop-erative-polite vs. functional-technological) and social norms (pro- vs. anti-bullying) influence bullying of a cleaning robot. Bullying was measured via an adapted hot-sauce paradigm, alongside anthropomorphism and trust. Participants bullied the impolite robot significantly more, while social norms showed no significant effects. Anthropomorphism and trust were higher for the polite robot. This indicates that robots’ social roles shape robot perceptions and harmful behavior towards them. |
|
| Grüninger, Felix |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Gu, Jiayi |
Hari Krishnan Subramaniyan, Jakub Rammel, Jiayi Gu, and Shreyas Ahuja (Delft University of Technology, Delft, Netherlands) This paper explores gesture-enabled Human–Robot Co-Creation (HRC) as a framework, investigating collaborative design between humans and machines through additive manufacturing. The project demonstrates a proof-of-concept workflow in which robots act as precise creators and humans as intuitive collaborators, dynamically adjusting geometry and materials in real time. Gesture control enabled direct engagement with the fabrication process, highlighting the potential for expressive design. |
|
| Guiza, Ouijdane |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| Gulyamova, Safina |
Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. |
|
| Gunes, Hatice |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Fethiye Irmak Doğan, Alva Markelius, and Hatice Gunes (University of Cambridge, Cambridge, UK) Foundation models are increasingly embedded in social robots, mediating not only what they say and do but also how they adapt to users over time. This shift renders traditional "one-size-fits-all" explanation strategies especially problematic: generic justifications are now wrapped around behaviour produced by models trained on vast, heterogeneous, and opaque datasets. We argue that ethical, user-adapted explainability must be treated as a core design objective for foundation-model-driven social robotics. We first identify open challenges around explainability and ethical concerns that arise when both adaptation and explanation are delegated to foundation models. Building on this analysis, we propose four recommendations for moving towards user-adapted, modality-aware, and co-designed explanation strategies grounded in smaller, fairer datasets. An illustrative use case of an LLM-driven socially assistive robot demonstrates how these recommendations might be instantiated in a sensitive, real-world domain. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Guo, Bao |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Guo, Yijie |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Gupte, Vivek |
Vivek Gupte, Shalutha Rajapakshe, and Emmanuel Senft (Idiap Research Institute, Martigny, Switzerland; EPFL, Lausanne, Switzerland) Current research on collaborative robots (cobots) in physical rehabilitation largely focuses on repeated motion training for people undergoing physical therapy (PuPT), even though these sessions include phases that could benefit from robotic collaboration and assistance. Meanwhile, access to physical therapy remains limited for people with disabilities and chronic illnesses. Cobots could support both PuPT and therapists, and improve access to therapy, yet their broader potential remains underexplored. We propose extending the scope of cobots by imagining their role in assisting therapists and PuPT before, during, and after a therapy session. We discuss how cobot assistance may lift access barriers by promoting ability-based therapy design and helping therapists manage their time and effort. Finally, we highlight challenges to realizing these roles, including advancing user-state understanding, ensuring safety, and integrating cobots into therapists’ workflow. This view opens new research questions and opportunities to draw from the HRI community’s advances in assistive robotics. |
|
| Gutiérrez-Fernández, Alexis |
Cristina Abad-Moya, Irene González Fernández, Alexis Gutiérrez-Fernández, Francisco J. Rodríguez Lera, and Camino Fernández-Llamas (University of León, León, Spain; Rey Juan Carlos University, Madrid, Spain) In everyday environments, robots must be able to detect when people intend to initiate interaction and to communicate their engagement state in an interpretable manner. Although engagement has been widely studied in human–robot interaction, many existing approaches rely on controlled settings or limited perceptual modalities, leaving open questions about how non-expert users naturally attempt to initiate interaction and how engagement states should be signalled during early interaction. An online pre-study questionnaire with 64 participants was conducted to capture user expectations regarding interaction initiation and engagement feedback. The results indicated a preference for speech- and gaze-based strategies, as well as expectations of clear signals such as robot orientation, verbal acknowledgement, and visual feedback. These insights informed the design of a multimodal engagement system integrating auditory and visual cues and providing incremental feedback to distinguish between attention and confirmed readiness. The system was evaluated in a semi-naturalistic study with 15 participants in a domestic environment. The results show that users were generally able to attract the robot’s attention without prior instruction, while providing minimal information about the robot’s perceptual capabilities led to more consistent interpretation of its engagement responses. The findings provide empirical insight into interaction initiation strategies and highlight the importance of transparent engagement signalling in human–robot interaction. |
|
| Ha, Sehoon |
Joanne Taery Kim and Sehoon Ha (Georgia Institute of Technology, Atlanta, USA) This work explores how assistive robots can achieve trustworthy human-robot coexistence by designing a robotic guide dog for blind or visually impaired (BVI) users. We examine three questions: how users and bystanders expect such a system to behave, how a human and robot can navigate safely as a coordinated team, and how the system can build trust and social comfort for both user and bystanders. Our studies identify mutual awareness, social legibility, and transparent communication as key elements of effective teaming. Building on these insights, we propose navigation models and interaction strategies that combine semantic reasoning with legible motion, advancing assistive robots toward safe, reliable, and socially intelligent behavior in everyday settings. |
|
| Habel, Amir Atef |
Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. |
|
| Haberkorn, Pascal |
Pascal Haberkorn, Corinna Mack, and Manuel Giuliani (University of Applied Sciences Kempten, Kempten, Germany; Kempten University of Applied Sciences, Kempten, Germany) This study investigates whether a dialogue-based robot, employing motivational interviewing techniques, can enhance the intrinsic motivation of older adults to engage with their local social networks. A user study was conducted in which a Furhat robot interacted with participants, first presenting information about upcoming local social events and subsequently using motivational interviewing to encourage reflection on their personal motivation to attend. The study included 42 older adults (aged between 57 and 90 years old, mean age = 73.9 years). Participants completed the Situational Intrinsic Motivation Scale (SIMS) before and after the interaction with the robot to assess changes in intrinsic motivation, extrinsic motivation, identified regulation, and external regulation. Additionally, the Negative Attitudes Toward Robots Scale (NARS) was administered, and semi-structured interviews were conducted post-interaction. Results indicated no statistically significant changes in SIMS scores, though a trend toward significance was observed for identified regulation (p = 0.076). Analysis of NARS scores and qualitative interview data revealed predominantly positive attitudes toward the robot, with many participants expressing openness to future use of dialogue-based robots for social motivation. These findings suggest promising avenues for further research on the potential of robotic systems to support social engagement among older adults. |
|
| Hagman, William |
Riccardo Spagnuolo, William Hagman, Erik Lagerstedt, Matthew Rueben, and Sam Thellman (University of Padua, Padua, Italy; Mälardalen University, Eskilstuna, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Portland, Portland, USA; Linköping University, Linköping, Sweden) Robots increasingly operate in everyday human environments, where interaction depends on users understanding what the robot can perceive and act on---its perceived ecology or Umwelt. Current human-robot interfaces rarely support this understanding: they rely largely on symbolic cues that reveal little about how environmental structures shape the robot’s actions. Drawing on Gibson’s ecological psychology, we propose a shift from symbolic communication toward ecological specification in interface design. We introduce the Gibsonian Human–Robot Interface Design (GHRID) taxonomy, which organizes interface properties across three facets---basic descriptive, context and evaluation, Gibsonian-specific---and identifies key ecological dimensions such as affordance grounding, temporal coupling, and Umwelt exposure. Finally, we outline a research program testing whether "GHRID-high" designs improve users’ understanding of robots’ behavior-driving states and processes. |
|
| Halilovic, Amar |
Amar Halilovic and Senka Krivic (Ulm University, Ulm, Germany; University of Sarajevo, Sarajevo, Bosnia and Herzegovina) Robots increasingly provide explanations to support transparency in Human-Robot Interaction (HRI), yet users differ widely in how much explanation they prefer and when it is appropriate. We present a lightweight simulation framework in which a robot selects among explanation policies ranging from no explanation to norm-based, preference-based, and a Bayesian Adaptive (BA) policy that learns user preferences online while respecting normative expectations. Using synthetic user archetypes, we evaluate how these policies trade off utility, alignment, explanation cost, and regret. Results show that BA consistently achieves low regret across individual users while maintaining strong utility and alignment across diverse user archetypes. These findings motivate the development of preference-aware, uncertainty-driven explanation mechanisms for robust, adaptive robot communication in heterogeneous HRI settings. |
|
| Halvorsen, Ludwig |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. |
|
| Han, Howard Ziyu |
Howard Ziyu Han, Ying Zhang, Allan Wang, and Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, USA; Miraikan - National Museum of Emerging Science and Innovation, Tokyo, Japan) Using robot simulators in participatory human-robot interaction design can expand the interactions end-users can experience, articulate, and reshape during co-design. In robot social navigation, high-fidelity simulations have largely been developed for benchmarking algorithms and developing robot policy. However, less attention has been given to supporting end-user exploration and articulation of concerns. In this late-break report, we present design considerations and a system implementation that extend an existing social navigation simulator (SEAN 2.0) to support community-driven feedback and evaluation. We add features to the SEAN 2.0 platform to enable richer sidewalk scenario construction, interactive reruns, and robot signaling exploration. Finally, we provide a user scenario and discuss future directions for using participatory simulation to broaden stakeholder involvement and inform socially responsive navigation design. |
|
| Han, Seol |
Rahatul Amin Ananto, Seol Han, Rachel Ruddy, and AJung Moon (McGill University, Montreal, Canada) A new generation of robots are being developed to enter our homes in a matter of months. But has the industry appropriately accounted for the complexities of the social environment that we call home? We conducted an exploratory design workshop to examine what secondary users—those who are not expected to be owners but nonetheless daily users—deem to be socially appropriate behavior of a domestic robot. A total of 90 students from Mexico participated in the study. By analyzing they define and reason about appropriateness of robot behaviors in the home, we show why deployment of domestic robots require much more thoughtful considerations than implementation of simplified social rules; judgments of what is appropriate depend on context, roles, relationships, and individual boundaries, and can differ between primary and secondary users. We call on Human-Robot Interaction (HRI) practitioners to treat social appropriateness as a fluid, gradient factor at design time rather than a binary concept (appropriate/inappropriate). |
|
| Han, Zhao |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. Hong Wang, Ngoc Bao Dinh, and Zhao Han (University of South Florida, Tampa, USA) Projector-based augmented reality (AR) enables robots to communicate spatially-situated information to multiple observers without requiring head-mounted displays, e.g., projecting navigation path. However, they require flat and weakly textured projection surfaces; otherwise, the surface needs to be compensated to retain the original projected image. Yet, existing compensation methods assume static projector-camera-surface configurations and may not work in complex, textured environments where robots must navigate. In this work, we evaluate state-of-the-art deep learning-based projection compensation on a Go2 robot dog in a search-and-rescue scene with discontinuous, non-planar, strongly textured surfaces. We contribute empirical evidence on critical performance limitations of state-of-the-art compensation methods: the requirement of pre-calibration and inability to adapt in real-time as the robot moves, revealing a fundamental gap between static compensation capabilities and dynamic robot communication needs. We propose future directions for enabling real-time, motion-adaptive projection compensation for robot communication in dynamic environments. |
|
| Harada, Naoya |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) This study proposes an integrated robotic massage platform designed to bridge the gap between mechanical stimulation and human-like care. Conventional systems often require a prone posture and lack psychological immersion, limiting embodiment and safety. To address this, we developed a system featuring seated multi-robot actuation—simultaneously targeting the forearm and sole—and a first-person perspective (1PP) VR interface with synchronized virtual therapists. A field study with 32 participants evaluated feasibility and user experience. Results showed high ratings for overall satisfaction and psychological safety. Notably, a strong positive correlation was found between "perceived human-likeness" and user satisfaction. This suggests that inducing a sense of human agency via 1PP VR effectively complements mechanical stimulation, thereby significantly elevating the quality of the care experience. We contribute (i) a seated dual-limb multi-robot massage platform with 1PP VR therapists and (ii) in-the-wild user evidence that perceived human-likeness and safety/relaxation are key correlates of satisfaction. |
|
| Hatcher, Jack |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Hawes, Nick |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Hayashi, Yugo |
Shigen Shimojo, Kai Wang, Keita Kiuchi, Yusuke Shudo, and Yugo Hayashi (Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; Ritsumeikan University, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) Social isolation among older adults is a global concern, and socially assistive robots are increasingly explored as companions to support mental well-being. Users’ impressions can strongly influence psychological outcomes. Building on Socioemotional Selectivity Theory, which suggests that older adults prioritize emotionally meaningful goals, this study examined the effectiveness of a solution-focused approach (SFA), which emphasizes positive information, compared with a problem-focused approach (PFA), which focuses on negative information, and explored the influence of embodied conversational agent (ECA) impressions. We implemented the ECA on a humanoid social robot. The SFA-based robot-mediated interaction did not significantly improve mental health as measured by the K10, although perceived robot intelligence correlated with outcomes. Our findings highlight that perceived intelligence—rather than conversational framework—plays a key role in influencing mental-health outcomes in older adults. Yugo Hayashi, Shigen Shimojo, and Keita Kiuchi (Ritsumeikan University, Ibaraki, Japan; Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) This study examined the influence of different dialog media on emotional expression and utterance structure in automated active listening counseling for older adults. Specifically, we compared robot and virtual reality (VR) that differ in embodiment and social presence through solution-focused counseling conducted for three weeks. Emotional expression and lexical network structure were analyzed using automatic text coding, and lexical network analysis. Positive emotional expressions were more frequent in the early stages with VR. Conversely, although the robot condition exhibited lower responsiveness in the initial sessions, positive utterances increased as rapport developed over time. Lexical network analysis further revealed that robots encouraged greater lexical diversity and the formation of hub structures centered on self-disclosure–related vocabulary. These indicate that VR and robots facilitate emotional expression, suggesting a staged media utilization model in which VR is effective at the start of the intervention, while robots become more effective in the later phases. |
|
| He, Jiangen |
Wanqi Zhang, Jiangen He, and Marielle Santos (University of Tennessee at Knoxville, Knoxville, USA) Social robots hold promise for reducing job interview anxiety, yet designing agents that provide both psychological safety and instructional guidance remains challenging. Through a three-phase exploratory iterative design study (N=8), we empirically mapped this tension. Phase I revealed a “Safety–Guidance Gap”: while a Person-Centered Therapy (PCT) robot established safety, users felt insufficiently coached. Phase II identified a “Scaffolding Paradox”: rigid feedback caused cognitive overload, while delayed feedback lacked specificity. In Phase III, we resolved these tensions by developing an Agency-Driven Interaction Layer. Synthesizing our empirical findings, we propose the Adaptive Scaffolding Ecosystem—a conceptual framework that redefines robotic coaching not as a static script, but as a dynamic balance between affective support and instructional challenge, mediated by user agency. |
|
| Heard, Jamison |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Heck, Franziska Elisabeth |
Franziska Elisabeth Heck, Emilia Sobolewska, Debbie Meharg, and Khristin Fabian (Edinburgh Napier University, Edinburgh, UK; University of Aberdeen, Aberdeen, UK) Loneliness is a common issue among university students and has been associated with poorer mental health and reduced well-being. According to classic theory, there are two types of loneliness: emotional loneliness, which results from a lack of close attachments, and social loneliness, which is associated with deficits in broader peer networks. However, research into human–robot interaction rarely considers how these two forms of loneliness manifest in people's desire for social robots. This report presents the qualitative findings of semi-structured interviews with 25 students. These students were invited based on their scores for emotional and social loneliness, with the aim of representing a broad range of loneliness profiles. Participants observed standardised demonstrations of three social robots, Pepper, Nao and Furhat, and discussed their attitudes towards them, their potential roles and designs. Across the different profiles, the students generally expressed an openness to the idea of social robots. However, a clear gradient emerged: students who reported higher levels of loneliness tended to view robots as companions and conversational partners, whereas students who reported lower levels of loneliness emphasised the robots’ potential for providing instrumental support and the importance of maintaining stricter boundaries. Loneliness profiles therefore provide a promising lens for thinking about how to design role-appropriate and ethically sensitive robot behaviours and forms for student settings. |
|
| Hei, Xiaoxuan |
Xiaoxuan Hei, Sofia Gioumatzidou, Juan José Garcia Cardenas, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Palaiseau, France; University of Macedonia, Thessaloniki, Greece; ENSTA Paris, Paliseau, France; ENSTA Paris, Paris, France) Trust in human-robot teams is critical for effective collaboration, but the dynamics of trust transfer between advisory and executing robots remain underexplored. This study investigated how the accuracy of advice provided by a humanoid robot (NAO) and the execution reliability of a robotic arm (Franka) influence human trust and reliance on advice in a collaborative drawing task. Participants completed three drawing tasks while receiving suggestions from NAO, with NAO providing either accurate or inaccurate advice and Franka executing actions with high or low reliability. Results showed that accurate advice from NAO increased participants' trust in both NAO and Franka, while inaccurate advice neither increase nor decrease trust in Franka, demonstrating trust spillover and trust resilience. Franka's execution reliability did not significantly affect adherence to NAO's suggestions, although low performance in both robots reduced task satisfaction, decreased reliance, and increased deliberation time. These findings highlight the asymmetrical and context-dependent nature of trust transfer in human-robot interaction, emphasizing the importance of both informational accuracy and execution reliability for effective collaboration. |
|
| Heimann Saadon, Nevo |
Nevo Heimann Saadon and Hadas Erel (Reichman University, Herzliya, Israel) Robots entering everyday spaces demand behaviors shaped by human-centered experts, yet authoring robot motion still requires engineering expertise and complex workflows. We present Jazzy Puppet, a web-based kinesthetic teaching system that lets non-technical practitioners design, record, and replay expressive robot gestures directly by hand, with no code or software installation. Running through a browser and configurable via JSON, Jazzy Puppet supports Dynamixel-servo-based robots, preserves motion nuances, and sequences gestures with optional peripherals (e.g. thermal printer). We will demonstrate the system's ease-of-use and portability on a two-DoF printer robot and a four-DoF arm, enabling rapid iterative prototyping of social gestures. Reut Katz, Nevo Heimann Saadon, Andrey Grishko, and Hadas Erel (Reichman University, Herzliya, Israel) Robots are increasingly integrated as support tools for enhancing human learning and problem-solving. In this study, we explore the design of a robotic object intended to support problem-solving experiences. The design guidelines are grounded in well-established cognitive strategies known to improve performance. We focus on two strategies in particular: (1) constructive feedback on performance and (2) social feedback that encourages self-explanation. To reduce distractions, we minimized the robot’s communicative load and kept its expressive behaviors simple. Through an iterative design process, we developed a small robotic printer that communicates through subtle non-verbal gestures (nodding, leaning, and gaze-like orientation) paired with minimal printed feedback. This combination aims to create a supportive, non-threatening inter action that provides clear performance guidance while conveying social presence. We describe the robot’s design process and propose an experimental study examining how constructive and social feedback influence problem-solving outcomes. |
|
| Henley, Jacob |
JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. |
|
| Herath, Damith |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. Thomas Muller Dardelin, Damith Herath, and Janie Busby Grant (University of Canberra, Canberra, Australia; Waseda University, Tokyo, Japan; University of Canberra, Bruce, Australia) This paper investigates user engagement with socially assistive robots (SARs) in healthcare contexts through an experimental study comparing simulated and physical embodiments. The study examines how users perceive trust, engagement, safety, and usability when interacting with two humanoid robots—Hatsuki, designed for emotional and social support, and AIREC, designed for physical caregiving tasks. Participants interacted with both simulated and real robots, enabling a direct comparison of virtual and physical embodiments under identical conversational conditions. The results suggest that verbal interaction and character design contribute more strongly to perceived engagement than physical embodiment alone, highlighting the importance of communication quality in socially assistive robotics. In the simulated setting, Hatsuki was perceived as more caring and socially engaging than AIREC, indicating that socially expressive design can shape user perceptions even without physical embodiment. Aurora An-Lin Hu, Dimity Crisp, Sharni Konrad, Damith Herath, and Janie Busby Grant (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) The mismatch between user expectations and robot performance—the expectation gap—is common in human–robot interaction. Although related research is limited, preliminary evidence suggests that the expectation gap has a considerable impact on user adoption of robots. The present study examined how failing, confirming, and exceeding user expectations and the extent to which robot performance differs from expectations predict users’ adoption intention. A sample of 234 participants completed pre-interaction expectation measures and post-interaction robot performance ratings after completing a drawing activity with a humanoid robot (Pepper). Results showed that considering both the magnitude and direction of the expectation gap (signed gap values) consistently yielded stronger associations and predictive power for adoption intention than considering the magnitude alone (absolute gap values) across four expectation dimensions, with expectation gaps related to Relative Advantage emerging as the strongest predictor. Overall, the findings highlight that failing to meet expectations consistently predicted lower adoption intention compared to both confirming and exceeding expectations, whereas evidence for whether exceeding expectations provides additional benefits beyond confirming them was mixed. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Hernandez, Francisco |
Francisco Hernandez, Veronica Ahumada-Newhart, and Angelika C. Bullinger (Chemnitz University of Technology, Chemnitz, Germany; University of California at Davis, Sacramento, USA) Telepresence robots (TPR) have gained traction in office, healthcare, and educational settings, yet their applicability to industrial environments remains largely unexplored. As part of the PraeRI project, we conducted a multi-criteria assessment of six commercially available TPRs to identify the usability and functionality characteristics most relevant for deployment in industrial environments (i.e., manufacturing, production, assembly). The assessment was carried out using a structured seven-step utility analysis framework developed through an iterative, expert-driven process. The framework combines predefined industrial requirements, practical testing, and expert judgment, then aggregates weighted criteria into a normalized utility score to enable a transparent comparison across systems. Preliminary results from this assessment include insights on user interface design, drivability, reaction time, accessibility, battery performance, weight, wheels, and storage. Findings highlight substantial variation across platforms, with usability and functionality emerging as critical differentiators for industrial suitability. TPRs such as the Double 3 from Double Robotics and Ohmni Pro from Ohmni Labs achieved the highest point scores, mainly due to intuitive driving interfaces and strong performance in mobility and battery-related tasks. These early results form the basis for ongoing research into industrial-grade requirements and user acceptance in industrial environments. |
|
| Hernández García, Daniel |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. |
|
| Heuring, Maris |
Eileen Roesler, Maris Heuring, and Linda Onnasch (George Mason University, Fairfax, USA; TU Berlin, Berlin, Germany) Robots often feature anthropomorphic designs to increase acceptance, although this is not always effective. Previous research suggests that anthropomorphic features are preferred in social settings, whereas technical designs are preferred in industrial contexts. This study examined how task domain and sociability shape these preferences. In an online study, participants chose between robots with low or medium anthropomorphic appearance for tasks in social or industrial contexts, with high or low sociability. The results showed that industrial tasks favored low-anthropomorphic robots regardless of sociability, while sociability influenced preferences in social tasks. We also examined possible gender attributions via names and pronouns, considering the gender stereotypes linked to different domains. Overall, robots were ascribed functional terms rather than gendered, although male bias emerged for gendered robots in industrial contexts. These findings demonstrate that task domain and sociability influence design preferences and reveal subtle gender attributions even for gender-neutral looking robots. |
|
| Hiatt, Laura M. |
Dakota Sullivan, David Porfirio, Bilge Mutlu, and Laura M. Hiatt (University of Wisconsin-Madison, Madison, USA; George Mason University, Fairfax, USA; US Naval Research Laboratory, Washington, USA) Robots are increasingly relied upon for task completion in privacy-critical human environments. In these environments, it is imperative that a robot's potentially sensitive goals remain obfuscated. To address this need, a substantial amount of literature has proposed methods for obfuscatory task planning. These works make many attempts to experimentally or analytically determine whether agents can conceal their goals from observers. While these works make guarantees that resulting plans will conceal an agent's goals, they are often only theoretical. Within this work, we develop three obfuscatory task planning strategies inspired by prior literature to evaluate with human observers (N = 160). Our preliminary results show that observers struggle to identify a robot's goals at similar levels regardless of whether obfuscatory or optimal task planning strategies are employed. These findings call into question the purported benefits of many obfuscatory task planning strategies. |
|
| Higgins, Angela |
Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. |
|
| Hilgert, Lukas |
Irina Rudenko, Utku Norman, Lukas Hilgert, Jan Niehues, and Barbara Bruno (KIT, Karlsruhe, Germany) Large Language Models (LLMs) hold significant promise for enhancing Child–Robot Interaction (CRI), offering advanced conversational skills and adaptability to the diverse abilities, requests and needs of young children. Little attention, however, has been paid to evaluating the age and developmental appropriateness of LLMs. This paper brings together experts in psychology, social robotics and LLMs to define metrics for the validation of LLMs for child–robot interaction. |
|
| Hindriks, Koen |
Elena Malnatsky, Shenghui Wang, Koen Hindriks, and Mike E.U. Ligthart (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Twente, Twente, Netherlands) Long-term child–robot interaction depends on sustaining both relational continuity and accurate, meaningful memory over time. In a one-year follow-up with 50 children from a personalized reading-support robot study, we found that children felt less close to the robot and half of the robot’s stored profile content was outdated or missing, revealing three challenges for long-term CRI: relationship decay, informational decay, and opaque robot memory, where children cannot check or influence what the robot remembers about them. A brief web-based “reconnect” repaired both informational and relationship decay, and revealed children’s strong interest in having more agency over the robot’s memory. Building on these insights, we propose Open-Memory Robots: agents whose memory is more transparent and co-constructed with the child, supporting continuity, appropriate trust, and children’s agency in CRI. Andrew Blair, Mary Ellen Foster, Peggy Gregory, and Koen Hindriks (University of Glasgow, Glasgow, UK; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) The field of human-robot interaction frequently proclaims the inevitable coming of the social robot era, with claims that social robots are increasingly being deployed in the real world. However, in practice, social robots remain scarce in everyday environments. In addition, HRI research rarely explores robots through an organisational lens. This results in a lack of evidence-based understanding of the organisational conditions that are key to the presence--or absence--of social robots in the real world, which are often more decisive than technical sophistication. In this paper, we motivate why organisational context is crucial to the investigation of real-world social robots and provide examples of how this shapes robot acceptance. We detail the methodology of our ongoing empirical research with client organisations and robot developers. Through this critical organisational lens, we learn where social robots are, what they are doing, how they are designed, and why organisations are deploying them. |
|
| Hiraoka, Toshihiro |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Hirskyj-Douglas, Ilyena |
Ilyena Hirskyj-Douglas (University of Glasgow, UK) As robots increasingly move from laboratories into homes, streets, and public spaces, they enter environments shared by multiple species. Yet, HRI predominantly considers only human users, treating animals as obstacles to navigate rather than as those impacted by or potential users of robotics. This keynote challenges the anthropocentric paradigm by presenting empirical evidence of animals as active technology users, able to make choices, express preferences, learn, and form relationships through interactive computer devices. Drawing on studies with dogs, parrots, and primates, I propose a more-than-human design framework that considers animals both as direct users who can control robotic systems and as individuals impacted by HRI deployments. I argue that even if our ultimate goal is to build better robots for humans, this expanded perspective will help us achieve it while opening new possibilities for animals themselves. |
|
| Ho, Hui-Ru |
Hui-Ru Ho (University of Wisconsin-Madison, Madison, USA) While many robots are designed to assist people, their increasing intelligence across social and cognitive tasks may hinder humans from developing their own abilities due to over-reliance. We present prior, ongoing, and proposed work to explore how human-robot interaction (HRI) can instead empower human learning. First, we examined how robots integrate into learning ecosystems, uncovering that learning supporters prefer robots to act as their collaborators instead of replacement. Our framework identified collaboration mechanisms and was validated through prototype development and field evaluation. Next, we investigated how mobile robots can enhance in situ learning in the physical world through situated cognitive mechanisms. Finally, we plan to develop robotic arm prototypes that convey abstract learning concepts through manipulation of physical objects. Through these efforts, we aim to transform HRI from assistance to empowerment, enabling humans to enhance their capabilities through meaningful interactions. |
|
| Hobbelink, Veerle |
Elitza Marinova, Pieter Ruijs, Just Oudheusden, Veerle Hobbelink, and Matthijs Smakman (HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Children with Attention Deficit Hyperactivity Disorder (cADHD) often struggle with completing daily tasks and routines, yet technological support in the home environment remains limited. This exploratory study examines the potential of social robots to assist cADHD with Instrumental Activities of Daily Living (IADLs). Nine experts were interviewed to identify design requirements, followed by a five-day in-home deployment with five families. Parents and children reported that the robot effectively provided reminders and task instructions, improved focus and independence, and reduced caregiving demands. While families expressed interest in continued use, they emphasized the need for greater reliability and adaptability. These findings highlight the promise of social robots in supporting cADHD at home and offer valuable directions for future research and development. |
|
| Hockenberry, Kristal |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. |
|
| Hollmann, Karsten |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Holthaus, Patrick |
Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Holzer, Peter |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Honma, Kentaro |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Hou, Muhan |
Yijun Zhou, Muhan Hou, and Kim Baraka (Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Imitation learning relies on high-quality demonstrations, and teleoperation is a primary way to collect them, making teleoperation interface choice crucial for the data. Prior work mainly focused on static tasks, i.e., discrete, segmented motions, yet demonstrations also include dynamic tasks requiring reactive control. As dynamic tasks impose fundamentally different interface demands, insights from static-task evaluations cannot generalize. To address this gap, we conduct a within-subjects study comparing a VR controller and a SpaceMouse across two static and two dynamic tasks (N=25). We assess success rate, task duration, cumulative success, alongside NASA-TLX, SUS, and open-ended feedback. Results show statistically significant advantages for VR: higher success rates, particularly on dynamic tasks, shorter successful execution times across tasks, and earlier successes across attempts, with significantly lower workload and higher usability. As existing VR teleoperation systems are rarely open-source or suited for dynamic tasks, we release our VR interface to fill this gap. |
|
| Houben, Maarten |
Febe Anna Kooij-Meijer, Emilia I. Barakova, Rosa Elfering, Wang Long Li, and Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands; Tinybots, Rotterdam, Netherlands) The growing population of individuals with mild cognitive impairment and dementia places increasing demands on home-care systems, while staff shortages and high caregiver workloads underscore the need for assistive technologies. However, research on implementing these technologies in home care practice remains limited. This study examines professional caregivers’ digital onboarding of Tessa, a social robot that supports through verbal reminders. A conceptual digital onboarding probe was evaluated with novice, experienced, and expert users. Findings indicate that the onboarding process improves usability and efficiency by providing intuitive guidance and structured workflows. Additionally, LLMs can translate caregiver-provided goals into actionable robot scripts, though oversight remains essential for quality assurance. The probe and LLM support more effective onboarding and enhance caregiver’s user experience. |
|
| Hsu, Che-Kang |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. |
|
| Hsu, Long-Jing |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Hu, Aurora An-Lin |
Aurora An-Lin Hu, Dimity Crisp, Sharni Konrad, Damith Herath, and Janie Busby Grant (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) The mismatch between user expectations and robot performance—the expectation gap—is common in human–robot interaction. Although related research is limited, preliminary evidence suggests that the expectation gap has a considerable impact on user adoption of robots. The present study examined how failing, confirming, and exceeding user expectations and the extent to which robot performance differs from expectations predict users’ adoption intention. A sample of 234 participants completed pre-interaction expectation measures and post-interaction robot performance ratings after completing a drawing activity with a humanoid robot (Pepper). Results showed that considering both the magnitude and direction of the expectation gap (signed gap values) consistently yielded stronger associations and predictive power for adoption intention than considering the magnitude alone (absolute gap values) across four expectation dimensions, with expectation gaps related to Relative Advantage emerging as the strongest predictor. Overall, the findings highlight that failing to meet expectations consistently predicted lower adoption intention compared to both confirming and exceeding expectations, whereas evidence for whether exceeding expectations provides additional benefits beyond confirming them was mixed. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Hu, Han |
Han Hu and He Chen (Chinese University of Hong Kong, Shatin, China; Chinese University of Hong Kong, Hong Kong SAR, China) Vision language models (VLMs) have shown strong performance in RGB-based human-robot interaction (HRI). However, using RGB cameras in homes faces challenges due to privacy issues and poor performance in the dark, such as during night-time elderly care. Thermal imaging offers a privacy-preserving alternative that works without light. This raises a natural yet previously unexplored question: Can general-purpose VLMs effectively interpret thermal images in a zero-shot manner for privacy-preserving and robust HRI? In this work, we conduct a thorough evaluation of this capability using a real-world dataset. Specifically, we investigate whether the latest VLMs are reliable for safety-critical HRI tasks. We benchmark six leading VLMs on the OctoNet dataset, which contains 975 thermal sequences. To avoid self-evaluation bias, we use an ensemble of three independent large language models to score the predictions and measure stability. Our results reveal a critical performance disparity: while VLMs are accurate on large body movements (e.g., Sitting: 92.8%), they struggle on fine-grained interactions (e.g., Hand Gestures: <20%) and safety-critical events (e.g., Stagger: <40%). Furthermore, we identify instability in predictions due to variations in viewing angles and movement magnitude. Given the strict reliability standards for caregiving, we conclude that current VLMs alone are insufficient for autonomous thermal monitoring. Our findings highlight the limitations of zero-shot thermal perception and underscore the necessity of multimodal fusion to ensure robust HRI. |
|
| Hu, Jun |
Jing Li, Felix Schijve, Jun Hu, and Emilia I. Barakova (Eindhoven University of Technology, Eindhoven, Netherlands) Parental involvement is crucial for the development of children's emotion regulation (ER) skills, yet navigating these complex emotional interactions remains challenging for many families. While Large Language Models (LLMs) offer unprecedented conversational flexibility, integrating them into embodied social robots to provide context-aware, multimodal support remains an open challenge. In this paper, we present the design and preliminary evaluation of an LLM-powered robotic system aimed at facilitating ER within parent-child dyads. Utilizing a supervised autonomy approach, our system bridges the gap between language-based reasoning and embodied robotic behavior, allowing the MiRo-E robot to engage in natural dialogue while performing empathetic physical actions. We detail the system's technical architecture and interaction design, which guides dyads through evidence-based ER strategies. Preliminary user tests with six parent-child dyads suggest positive user engagement and initial trust, with participants reporting that the robot showed potential as a supportive mediator. These findings offer early design insights into developing autonomous, LLM-driven social robots for family-centered mental health interventions. |
|
| Hu, Nan |
Renee Ziqi Zhu, Nan Hu, Lihao Zheng, and Xinyun Zhang (Indiana University at Bloomington, Bloomington, USA) Most existing applications of social robots that support older adults focus on personal use or deployment within nursing facilities. Through our collaboration with a local senior community center, one major need that emerged is the use of technology to encourage older adults to be more physically active—an essential factor for maintaining physical health, supporting mental well-being, and building social capital. Guided by this need, our project explores how a community-based robot can serve as a shared resource that promotes both social connection and physical engagement among older adults. Rather than designing a robot that only facilitates group activities, our goal is to create a robot that helps build human-to-human relationships by supporting group exercises, shared experiences, and opportunities for older adults to meet and connect with one another. Through workshops with older adults, we designed MERRY (Matching Engagement & Route Recommendation for You), a Christmas tree-like robot aiming to help older adults to connect with each other and engage more in walking activities. The robot allow older adults to choose for suitable activities, connect with the community, track and reflect on their shared experiences. |
|
| Hu, Qinyi |
Xucong Hu, Qinyi Hu, Tianya Yu, Mowei Shen, and Jifan Zhou (Zhejiang University, Hangzhou, China) First impressions are critical for public-facing social robots: users rapidly infer a robot’s potential for social interaction from its appearance, shaping expectations and willingness to engage. Yet no existing scale captures how people interpret the interaction potential implied by a robot’s visual affordances. We introduce the Robot Social Interaction Potential Scale (RoSIP), a concise appearance-based scale assessing two dimensions—Perceptual Potential and Behavioral Potential. Across a pilot study and large-scale exploratory and confirmatory factor analyses (N = 750), we identified a 10-item, two-factor structure with strong internal consistency and solid construct and discriminant validity. RoSIP provides a dedicated tool for rapidly quantifying appearance-based inferences about a robot’s social interaction potential, enabling future work to systematically link robot morphology and social perception in HRI. |
|
| Hu, Xinyi |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Hu, Xucong |
Xucong Hu, Qinyi Hu, Tianya Yu, Mowei Shen, and Jifan Zhou (Zhejiang University, Hangzhou, China) First impressions are critical for public-facing social robots: users rapidly infer a robot’s potential for social interaction from its appearance, shaping expectations and willingness to engage. Yet no existing scale captures how people interpret the interaction potential implied by a robot’s visual affordances. We introduce the Robot Social Interaction Potential Scale (RoSIP), a concise appearance-based scale assessing two dimensions—Perceptual Potential and Behavioral Potential. Across a pilot study and large-scale exploratory and confirmatory factor analyses (N = 750), we identified a 10-item, two-factor structure with strong internal consistency and solid construct and discriminant validity. RoSIP provides a dedicated tool for rapidly quantifying appearance-based inferences about a robot’s social interaction potential, enabling future work to systematically link robot morphology and social perception in HRI. |
|
| Hu, Yue |
Neil Fernandes, Tehniyat Shahbaz, Emily Davies-Robinson, Yue Hu, and Kerstin Dautenhahn (University of Waterloo, Waterloo, Canada; United for Literacy, Toronto, Canada) Newcomer children face barriers in acquiring the host country’s language and literacy programs are often constrained by limited staffing, mixed-proficiency cohorts, and short contact time. While Socially Assistive Robots (SARs) show promise in education, their use in these socio-emotionally sensitive settings remains underexplored. This research presents a co-design study with program tutors and coordinators, to explore the design space for a social robot, Maple. We contribute (1) a domain summary outlining four recurring challenges, (2) a discussion on cultural orientation and community belonging with robots, (3) an expert-grounded discussion of the perceived role of an SAR in cultural and language learning, and (4) preliminary design guidelines for integrating an SAR into a classroom. These expert-grounded insights lay the foundation for iterative design and evaluation with newcomer children and their families. |
|
| Huang, Chien-Ming |
Victor Nikhil Antony, Kai-Chieh Liang, and Chien-Ming Huang (Johns Hopkins University, Baltimore, USA) We demonstrate Lantern, a minimalist, haptic robotic object platform designed to be low-cost, holdable, and easily customized for diverse human–robot interaction scenarios. In this demo, we showcase three instantiations of Lantern: (1) the base Lantern platform, highlighting its core motion and haptic behavioral profiles; (2) an ADHD body-doubling study buddy variant, which shows how Lantern can be adapted to scaffold focused work; and (3) Dofu, an upgraded Lantern variant to anchor daily mindfulness practice, with additional sensing, improved compute, and a battery-powered, dockable form factor for untethered, in-the-wild use. Visitors will be able to physically interact with each Lantern variant, observe contrasting embodiments, and behaviors; Moreover, visualizations (panels and video) will showcase the build process and additional extension possibilities. Victor Nikhil Antony and Chien-Ming Huang (Johns Hopkins University, Baltimore, USA) Plants offer a paradoxical model for interaction: they are ambient, low-demand presences that nonetheless shape atmosphere, routines, and relationships through temporal rhythms and subtle expressions. In contrast, most human-robot interaction (HRI) has been grounded in anthropomorphic and zoomorphic paradigms, producing overt, high-demand forms of engagement. Using a Research through Design (RtD) methodology, we explore plants as metaphoric inspiration for HRI; we conducted iterative cycles of ideation, prototyping, and reflection to investigate what design primitives emerge from plant metaphors and morphologies, and how these primitives can be combined into expressive robotic forms. We present a suite of speculative, open-source prototypes that help probe plant-inspired presence, temporality, form, and gestures. We deepened our learnings from design and prototyping through prototype-centered workshops that explored people’s perceptions and imaginaries of plant-inspired robots. This work contributes: (1) Set of plant-inspired robotic artifacts; (2) Designerly insights on how people perceive plant-inspired robots; and (3) Design consideration to inform how to use plant metaphors to reshape HRI. |
|
| Huang, Danqi |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Huang, Jindan |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Huang, Junhong |
Junhong Huang (Hong Kong University of Science and Technology, Guangzhou, China) While generative AI has empowered robotic drawing systems, creative decisions are typically made on separate screens (e.g., tablets), disconnecting the user from the physical canvas and reducing the robot to a mere output device. To address this, we investigated how to maintain user engagement within the shared workspace and developed a GenAI-empowered prototype that shifts negotiation back to the robot's body through embodied proactive suggestions. Our system utilizes a vision pipeline to analyze the canvas, employing GPT-4o and Gemini Pro~3 to propose and generate local artistic additions. Crucially, these suggestions are not displayed on a screen but are previewed physically as circular "ghost motions"—non-marking gestures over the target region—accompanied by a spoken inquiry. This approach eliminates the reliance on external GUIs, allowing the robot's intent to be legible and negotiable directly on the shared canvas. We report a preliminary technical validation of end-to-end feasibility and system latency. |
|
| Huang, Ke |
Yitong Yuan, Ke Huang, Michael Detsiang Li Jr, Yiwei Zhao, and Baoyuan Zhu (Tsinghua University, Beijing, China) Unhealthy postures have become increasingly prevalent, affecting health and productivity, yet existing posture-correction devices rely on intrusive external reminders. We present Tuotle, a desktop robot that leverages cognitive dissonance by adopting a “bad posture,” prompting users to correct it and, in turn, reflect on their own posture. A pilot user study shows it has comparable posture-correction effectiveness to traditional devices, while showing significantly better user experience and long-term adoption intentions. Our work demonstrates that psychological mechanisms can be activated through human-robot interactions, opening new directions for technologies grounded in human psychology. |
|
| Huang, Wenjie |
Ruidong Ma, Wenjie Huang, Zhegong Shangguan, Angelo Cangelosi, and Alessandro Di Nuovo (Sheffield Hallam University, Sheffield, UK; University of Manchester, Manchester, UK) Direct imitation of humans by robots offers a promising direction for remote teleoperation and intuitive task instruction, where a human can perform a task naturally and the robot autonomously interprets and executes it using its own embodiment. Existing methods often rely on close alignment between human and robot scenes. This prevents robots from inferring the intent of the task or executing demonstrated behaviors when the initial states mismatch. Hence, it poses difficulties for non-expert users, who may need domain knowledge to adjust the setup. To address this challenge, we propose a neuro-symbolic framework that unifies visual observations, robot proprioceptive states, and symbolic abstractions within a shared latent space. Human demonstrations are encoded into this representation as predicate states. A symbolic planner can thus generate high-level plans that account for the different robot initial states. A flow matching module then synthesizes continuous joint trajectories consistent with the symbolic plan. We validate our approach on multi-object manipulation tasks. Preliminary results show that the framework can infer human intent and generate feasible symbolic plans and robot motions under mismatched initial states. These findings highlight the potential of neuro-symbolic models for more natural human-robot instruction. and they can enhance the explainability and trustworthiness of robot actions. |
|
| Huang, Zhenhan |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Hume, Ciara |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Hwang, Minyoung |
Minyoung Hwang (Massachusetts Institute of Technology, Cambridge, USA) Personalization is key to bring robots into our daily lives: people differ in goals, abilities, and preferences, yet most robot learning systems rely on fixed task objectives. My research vision is to develop efficient methods for personalized robot learning from human feedback. This extended abstract addresses: what types of human feedback are most useful, how can we use them efficiently, and how these insights enable personalized assistance for humans. We first study which modalities of feedback best capture human intent—demonstrations, language, or preference on trajectories—and how combining them increases clarity and data efficiency. We then investigate methods for augmenting limited human feedback to build robust and generalizable reward models. Finally, we extend these techniques to physical human-robot interaction, where data-efficient personalization is critical for safety and comfort. Together, these efforts aim to make personalization a practical capability for everyday robots. |
|
| Ianniello, Alessandro |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Iarocci, Grace |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| IJsselsteijn, Wijnand |
Fan Wang, Yuan Feng, Wijnand IJsselsteijn, and Giulia Perugia (Eindhoven University of Technology, Eindhoven, Netherlands; Northwestern Polytechnical University, Xi’an, China) People Living with Dementia (PLwD) require intensive emotional and physical support, and caregivers frequently struggle with exhaustion and distress. Social robots have been proposed as tools that could enhance socio-emotional well-being, yet many of their designs inherently involve deception, embedding cues that mislead PLwD about the nature and capabilities of the robot. Although Ethics of Technology and Human-Robot Interaction (HRI) explored the concept of Social Robotic Deception (SRD) and its implications, existing discussions remain largely theoretical and detached from the lived realities of dementia care. We know little about how caregivers see and envision the use of SRD in dementia care practice. To address this gap, we conducted two online focus groups with both formal and informal caregivers, with the aim of appraising caregivers' attitudes towards SRD and how they would implement or mediate deception in everyday practice. Critically, we focused on caregivers operating in China, a country of Confucian influence where family caregiving is regarded as a moral duty and leveraging institutional care is stigmatized. Our work contributes empirically grounded insights that highlight lived reality in dementia care shaped by culture for ethical SRD design. |
|
| Inada, Marino |
Taito Tashiro, Marino Inada, and Fumihide Tanaka (University of Tsukuba, Tsukuba, Japan) Loneliness has increasingly been recognized as a serious societal and public health concern worldwide. To support individuals who experience loneliness in their daily lives, we present a neck-pillow-shaped companion robot that integrates spoken dialogue with thermal feedback delivered to the back of the neck. Conversational responses are generated using a large language model, and each LLM-generated response is classified into three sentiment categories to drive a Peltier element to a corresponding temperature setpoint synchronized with speech playback. We aim to investigate how integrating linguistic and thermal modalities shapes users’ subjective perceptions and whether it can ultimately contribute to alleviating loneliness. Taito Tashiro and Marino Inada (University of Tsukuba, Tsukuba, Japan) Existing systems for emotional support for people experiencing loneliness have relied on single modalities such as speech or touch. We propose NeckMate, a neck-pillow-shaped robot that integrates linguistic and tactile information to effectively convey emotions and provide reassurance. Worn around the neck, the robot engages in natural dialogue using a large language model while presenting temperature via a Peltier element according to the polarity of its utterances. By synchronizing warmth with positive messages and coolness with negative ones, the robot creates a bodily sense of companionship. Its low-cost, home-deployable design aims to mitigate growing global loneliness. |
|
| Indino, Enzo |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Inge, Elin |
Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. |
|
| Irfan, Bahar |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Ishi, Carlos Toshinori |
Carlos Toshinori Ishi, Taiki Yano, and Yuka Nakayama (RIKEN, Kyoto, Japan; ATR, Kyoto, Japan) This study explores the importance of adapting communication in reception tasks based on the visitor attributes and situations, focusing on a reception robot at an expo venue. Ten different scenarios, including three situations, entrance reception, straying assistance, and complaint handling, were created with varying visitor attributes (adults, children, elderly with mild hearing loss). Multimodal expressions, observed through human performers acting out these scenarios, were implemented in the android robot Nikola. A video-based user study was conducted to assess the effectiveness of multimodal expressions which account for the situation and user attributes, comparing them to default behaviors. The proposed multimodal expressions were effective, with voice being more impactful than motion, though both contributed positively. |
|
| Ishigaki, Taiki |
Tomoya Sasaki, Taiki Ishigaki, Diego Roulle, and Eiichi Yoshida (Tokyo University of Science, Tokyo, Japan; University Paris-Est Créteil, Créteil, France) Orbiting is a common viewpoint control technique in CG and CAD, in which the camera rotates around a target that acts as the center of rotation. However, applying orbiting in teleoperation, a real-world application, is difficult due to physical constraints. We propose RelOrb, a viewpoint control method that focuses on relative coordinate changes between the camera and the target. Our prototype rotates the object on a turntable instead of moving the camera, providing head-mounted display images as if the camera itself were moving. We present the method, its coordinate transformation, a proof-of-concept prototype, and example operations. |
|
| Ishiguro, Hiroshi |
Naoki Kodani, Yuya Komai, Kurima Sakai, Takahisa Uchida, and Hiroshi Ishiguro (University of Osaka, Toyonaka, Japan; ATR, Keihanna Science City, Japan; Osaka University, Osaka, Japan; Osaka University, Toyonaka, Japan) In recent years, avatar technology has been used in various forms, such as robots and CG agents. It is considered that avatars that behave autonomously could expand human capabilities, such as participating in social activities on behalf of the real person. In this study, we developed an autonomous dialogue system that reflects the operator's personality by using a Geminoid, which is an android modeled after the appearance of a specific person. Regarding such androids modeled after specific persons, previous research has reported at the interview level that people find it easier to talk to the android than to the real person it was modeled after. However, the relationship between such an avatar and the human it is modeled after for the interlocutor has not been quantitatively clarified. This study quantitatively evaluated the effect of the Geminoid with an autonomous dialogue system on participants' perceived relationship with the real person it was modeled after. The results showed that interacting with the developed system significantly increased the participants' sense of closeness toward the real person. Furthermore, since interacting with the real person model afterward did not significantly increase this sense of closeness, it is expected that this system sufficiently enhances closeness and can produce an effect equivalent to that of interacting with the real person. |
|
| Itadera, Shunki |
Jun Aoki and Shunki Itadera (University of Tsukuba, Tsukuba, Japan; AIST, Kotoku, Japan) The application of teleoperation to control robotic arms has been widely explored, and user-friendly teleoperation systems have been studied for facilitating higher performance and lower operational burden. To investigate the dominant factors in a practical teleoperation system, this study focused on the characteristics of an interface used to operate a robotic arm. The usability of an interface depends on the characteristics of the manipulation tasks to be completed; however, systematic comparisons of different interfaces across different tasks remain limited. In this study, we compared two widely used teleoperation interfaces, a 3D mouse and a VR controller, for two simple yet broadly applicable tasks with a six-degree-of-freedom (6DoF) robotic arm: repetitively pushing buttons and rotating knobs. Participants (N = 23) controlled a robotic arm with 6DoF to push buttons and rotate knobs as many times as possible in 3-minute trials. Each trial was followed by a NASA-TLX workload rating. The results showed a clear connection between the interface and task performance: the VR controller yielded higher performance for pushing buttons, whereas the 3D mouse performed better and was less demanding for knob rotation. These findings highlight the importance of considering dominant motion primitives of the task when designing practical teleoperation interfaces. |
|
| Ito, Takeru |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. |
|
| Itskovitch Kiner, Bar |
Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Ivaldi, Serena |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| Iwanaga, Yuka |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Jacob, Favour |
Shenando Stals, Favour Jacob, and Lynne Baillie (Heriot-Watt University, Edinburgh, UK) Demos for social robots often lack accessibility for individuals with sight loss (SL). To address this need, this preliminary study investigates the key factors for individuals with SL that affect the accessibility of the standard introductory demos provided by the robot's manufacturer for three social robots commonly used in robotic assisted living environments, Temi, TIAGo, and Pepper. Results show how individuals with SL perceive the various social attributes of these social robots, and reveal potential differences in workload between various standard demo formats. Initial findings highlight commonalities and potentially differing needs regarding key factors affecting accessibility of the demos, such as tactile exploration, communication of information, and multimodal interaction, between children and young people with SL and adults with SL. |
|
| Jacobs, Jan |
Stella Kyratzi, Anastasia Sergeeva, and Jan Jacobs (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Lely Industries, Maasluis, Netherlands) Trust in Human–Robot Interaction (HRI) is typically treated as an individual psychological attitude shaped by users’ perceptions of a robot’s design features. This focus on internal states and designable cues, however, obscures the social and interpretive work through which trust is accomplished in real-world human–robot interactions. Drawing on 15 hours of field observations and 18 archival interviews in Dutch dairy farms adopting robotic milking systems, we offer a practice-based perspective showing that trust “in the wild” is not produced through direct human–robot interaction but through advisors’ situated work. Advisors tune robotic systems, reassure users during uncertainty, and anchor robotic data through reference to lived contexts. These practices reveal trust as an ongoing accomplishment sustained by intermediary work. |
|
| Jäger, Markus |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| James, John |
Jennifer S. Kay, Tyler Errico, Audrey L. Aldridge, John James, and Michael Novitzky (Rowan University, Glassboro, USA; USA Military Academy at West Point, West Point, USA) Effective human-robot teaming in highly dynamic environments, such as emergency response and military missions, requires tools that support planning, coordination, and adaptive decision-making without imposing excessive cognitive load. This paper introduces PETAAR, the Planning, Execution, to After-Action Review framework that seamlessly integrates autonomous unmanned vehicles (UxVs) into Android Team Awareness Kit (ATAK), a widely adopted situational awareness platform. PETAAR leverages ATAK's geospatial visualization and human team collaboration while adding features for autonomous behavior management, operator feedback, and real-time interaction with UxVs. Its most novel contribution is enabling digital mission plans, created using standard mission graphics, to be interpreted and executed by unmanned systems, bridging the gap between human planning, robotic action, and shared understanding among all teammates (human and autonomous). Results from this work inform best practices for integrating autonomy into human-robot teams across diverse operational contexts. |
|
| James, MoniJesu Wonders |
Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. |
|
| Jamshad, Rabeya |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. |
|
| Janssens, Ruben |
Giulio Antonio Abbo, Ruben Janssens, Seppe Van de Vreken, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Enabling natural robot communication through dynamic, context-aware facial expressions remains a key challenge in human-robot interaction. The field lacks a system that can generate facial expressions in real time and can be easily adapted to different contexts. Early work in this area considered inherently limited rule-based systems or deep learning-based models, requiring large datasets. Recent systems using large language models (LLMs) could not yet generate context-appropriate facial expressions in real time. This paper introduces Expressive Furhat, an open-source algorithm and Python library that leverages LLMs to generate real-time, adaptive facial gestures for the Furhat robot. Our modular approach separates gesture rendering, new gesture generation, and gaze aversion, ensuring flexibility and seamless integration with the Furhat API. User studies demonstrate significant improvements in user perception over a baseline system, with participants praising the system's emotional responsiveness and naturalness. |
|
| Javed, Hifza |
Hifza Javed, Ella Ruth Maule, Thomas H. Weisswange, and Bilge Mutlu (Honda Research Institute, San Jose, USA; University of Bristol, Bristol, UK; Honda Research Institute, Offenbach, Germany; University of Wisconsin-Madison, Madison, USA) This workshop on Robots for Communities explores how robots can serve as shared social resources that support the collective well-being of communities. While robots have traditionally been created to serve corporations or individuals, leading human–robot interaction research to focus largely on individuals or small groups, communities remain a crucial yet underexplored context for robotics. Understanding robots in community settings requires an interdisciplinary lens that integrates robotics, design, the social sciences, humanities, and community practice. Rather than emphasizing the negative consequences of large-scale deployment, our focus is on the active, positive roles robots might play in shaping communities. Central to this vision is viewing robots not as personal possessions but as shared resources, with unique affordances that enable them to enrich community experiences in ways other technologies cannot. The workshop seeks to bridge technology-centered and community-centered perspectives to promote dialogue across disciplines. By bringing these perspectives together, we aim to establish an interdisciplinary agenda for the design, evaluation, and deployment of robots as positive forces for well-being and cohesion within communities. |
|
| Jayaraman, Sandhya |
Sandhya Jayaraman, Deep Saran Masanam, Pratyusha Ghosh, Alyssa Kubota, and Laurel D. Riek (University of California at San Diego, La Jolla, USA; San Francisco State University, San Francisco, USA) This workshop explores the social, ethical, and practical implications of deploying robots for clinical or assistive contexts. Robots hold potential to expand access to disabled communities, such as by providing physical or cognitive assistance, and enabling new ways of participating in social activities. They can assist healthcare workers with ancillary tasks and care delivery, supporting them to work at the top of their license. However, the real-world deployment of robots across these contexts can create social, ethical, and organizational challenges, or downstream effects. Some challenges include the potential for robots to undermine the agency of disabled people and reinforce their marginalization on a societal level. In clinical settings, robots may also disrupt care delivery, shift roles, and displace labor. To explore these issues, this workshop will invite trans-disciplinary speakers and participants from academia, industry, government, and non-academics with/without affiliations interested in surfacing their lived experiences in using or developing such robots. Through panel discussions, group ideation activities and interactive poster sessions, this workshop intends to critically and creatively explore the future of robots for clinical and assistive contexts. Topics will include the downstream implications of robots in clinical or assistive contexts and potential upstream interventions. Outcomes of the workshop will include publishing key workshop artifacts on our website and initiating a follow-up journal special issue. |
|
| Jayasuriya, Maleen |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Jelínek, Matouš |
Pol Barrera Valls, Patrick Vogelius, Tobias Florian von Arenstorff, Matouš Jelínek, and Oskar Palinko (University of Southern Denmark, Odense, Denmark; University of Southern Denmark, Sønderborg, Denmark) The development of humanoid robots has experienced a sudden acceleration during the last years, due to the large advancements made in actuation technology, generative AI and computer vision. The design of humanoid robots makes them useful in scenarios where many different tasks must be achieved, and humans are present. Furthermore, their resemblance to humans opens new ways of communication when compared to traditional robots. However, humanoid robots may find themselves in a situation where human assistance is required, e.g. due to limitations in their sensing and movement capabilities. As such, different help-seeking strategies and their effectiveness need to be explored. This article compares the effect of inducing empathy and guilt in humans as means to request help after a mistake made by a robot. An in the wild experiment conducted between subjects was performed in the University of Southern Denmark (SDU) with a total of 123 participants across 3 scenarios of help-seeking strategies, described as: distressed, sarcastic, and neutral. The results showed a statistical difference between the strategies, proving that using the concepts of empathy and guilt elicitation with robots has the potential to improve human-humanoid collaboration. |
|
| Jenamani, Rajat Kumar |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Jespersen, Andreas Juul |
Mie Grøftehave Nielsen, Andreas Juul Jespersen, and Louise Brønderup Frederiksen (Aarhus University, Aarhus, Denmark) This paper presents The Beckoning Bowl, a shape-changing, artifact- inspired robot designed to create a sense of welcome for people living alone. The interactive key bowl uses soft robotics to mimic abstract body language, offering a subtle social moment during the routine act of placing keys when arriving home. A section of the bowl lowers as if beckoning and then returns to its original shape with expressions of joy or disappointment depending on the user’s response. By designing interactions that make users feel noticed and invited, The Beckoning Bowl explores how socially aware home robots might help counter loneliness. |
|
| Ji, Yong Gu |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Jiang, Nanyi |
Nanyi Jiang, Borui Wang, and Xiaozhen Liu (Cornell University, Ithaca, USA) Housekeeper carts are essential in hotel operations, supporting the maintenance of the hotel’s physical environment and services. While housekeeping staff are their main users, carts are also highly visible to guests, making them not only tools but also sites where hotel experiences are shaped. This project re-designs housekeeper carts to address both their functional and experiential value to primary users and bystanders. We present a modularized cart with an in-depth development of the laundry module. Considering hotels’ need for trustworthy and polite interactions, we designed non-verbal behaviors that allow the robot to express etiquette. |
|
| Jiang, Yu-Ai |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Jin, Sizhao |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Jin, Yuhua |
Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. |
|
| Jinnat, Raitah A. |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. Raj Korpan, Khadeja Ahmar, Raitah A. Jinnat, and Jackie Yee (City University of New York, New York, USA) Cities release large volumes of open civic data, but many people lack the time or skills to interpret them. We report an exploratory pilot study examining whether a social robot can narrate stories derived from open civic data to support public understanding, trust, and data literacy. Our pipeline combines civic data analysis, large language model–based narrative generation, and scripted behaviors on the Misty II robot to produce expressive and neutral versions of two stories on noise complaints and COVID-19 trends. We deployed the system at a public event and collected post-interaction surveys from six adult participants. While the small sample size limits generalization, the pilot suggests that participants found the stories relevant and generally understood their main points, though engagement and enjoyment were mixed. Participant feedback highlighted the need for improved vocal prosody, reduced information density, and more interactivity. These findings provide initial feasibility evidence and design insights to inform future iterations of robot civic data storytelling systems. |
|
| Jirak, Doreen |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Johal, Wafa |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Johansson, Ola |
Hannah Pelikan, Karin Stendahl, Franziska Babel, Ola Johansson, and Erik Frisk (Linköping University, Linköping, Sweden) Mobile robots must behave intelligibly to be acceptable in public spaces. Designing social navigation algorithms for delivery robots requires different areas of expertise. The paper reports on an interdisciplinary collaboration between two ethnomethodological conversation analysts, a human factors psychologist, and two motion planning engineers. Based on video recordings of a robot moving among people, the team developed and implemented different sound and movement designs, which were iteratively tested in real-world deployments. This work contributes insights on how interdisciplinary collaboration can be facilitated in the area of social robot navigation and an iterative process for designing robot sound and movement grounded in real-world observations. |
|
| Johnson, Phillip |
Joyce Yang, Phillip Johnson, Nyra Graham, and Karen Shamir (Cornell University, Ithaca, USA) Social isolation in shared spaces threatens community cohesion and well-being. This paper presents a social robot designed to spark human-to-human interactions. Inspired by public art projects, the robot invites individuals to collaborate on a shared LEGO structure by using expressive eye tracking, autonomous turning, and servo-actuated drawer movement. Field deployments in Cornell University spaces showed the robot effectively acted as a social catalyst: diverse participants contributed to a shared structure, and strangers initiated conversations about the robot. This work offers a functional prototype and insights on robots as mediators of human connection and promotes ideas of empowering collaboration. |
|
| Jooyandehdel, Navid |
Pooria Fazli, Amirhossein Nazari, Navid Jooyandehdel, Iman Kardan, and Alireza Akbarzadeh (Ferdowsi University of Mashhad, Mashhad, Iran) Lower-limb exoskeletons play an essential role in rehabilitation and mobility assistance, where accurate real-time gait phase recognition is critical for achieving safe, synchronized, and intuitive human–robot interaction. Many existing approaches rely on multiple sensors such as IMUs, EMG, and FSRs, which increase system complexity, computational load, cost, and susceptibility to mechanical wear. In this study, we propose a lightweight and robust gait phase detection framework that uses only hip and knee joint encoder data—sensors that are already integrated into most exoskeletons and are less prone to noise and misplacement. The method employs a finite state machine (FSM) to identify gait phases and detect key gait events, including heel strike, in real time. The approach was first evaluated in simulation using the SCONE (Opensim) platform and then experimentally implemented on the NEXA knee-joint exoskeleton with multiple healthy participants. Results show that the proposed method reliably predicts gait phases and heel-strike timing with minimal temporal error, while achieving significantly higher processing frequency compared to sensor-rich configurations. These findings demonstrate that accurate and efficient gait phase recognition can be achieved using only encoder data, offering a practical and low-cost solution for real-world exoskeleton control applications. |
|
| Jordan, Julien |
Chenyang Wang, Julien Jordan, Alice Reymond, and Pierre Dillenbourg (EPFL, Lausanne, Switzerland) As AI becomes increasingly integrated into everyday life, supporting children’s AI literacy is essential. While prior work in Child-Robot-Interaction has primarily used robots as programmable artefacts or learning companions for introducing AI concepts, the role of a robot as an embodied AI student remains underexplored. We investigate social robot teaching as a pathway to help children intuitively understand supervised learning. We designed a prototype in which children teach a robot using biased and unbiased training data and iteratively observe its performance. A pilot study with three children preliminarily examines: 1) whether and how this interaction fosters intuitive understanding of AI training and bias, and 2) initial design considerations for future prototype interactions. Our findings offer early evidence of the potential of social robot teaching for AI literacy. |
|
| Ju, Wendy |
Maria Teresa Parreira, Hongjin Quan, Adolfo G. Ramirez-Aristizabal, and Wendy Ju (Cornell University, New York, USA; Cornell Tech, New York, USA; Accenture, San Francisco, USA) Anticipatory reasoning – predicting whether situations will resolve positively or negatively by interpreting contextual cues – is crucial for robots operating in human environments. This exploratory study evaluates whether Vision Language Models (VLMs) possess such predictive capabilities. First, we test VLMs on direct outcome prediction by inputting videos of human and robot scenarios with outcomes removed, asking the models to predict whether situations will end well or poorly. Second, we introduce a novel evaluation of anticipatory social intelligence: can VLMs predict outcomes by analyzing human facial reactions of people watching these scenarios? We test multiple VLMs and compare their predictions against both true outcomes and judgments from 29 human participants. The best-performing VLM (Gemini 2.0 Flash) achieved 70.0% accuracy in predicting true outcomes, outperforming the average individual human (62.1% ± 6.2%). Agreement with individual human judgments ranged from 44.4% to 69.7%. Critically, VLMs struggled to predict outcomes by analyzing human facial reactions, suggesting limitations in leveraging social cues. These preliminary findings indicate that while VLMs show promise for anticipatory reasoning, their performance is sensitive to model and prompt selection, warranting further investigation for applications in HRI. |
|
| Jung, Diane N. |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Jung, Eunsun |
SoYoon Park, Eunsun Jung, KiHyun Lee, Dokshin Lim, and Kyung Yun Choi (Hongik University, Seoul, Republic of Korea) Inspired by the playful, attention-seeking paw gestures of cats, we present PAWSE, a laptop-peripheral robot that encourages short fidgeting-based micro-breaks during digital work. PAWSE integrates a cat-paw-inspired robotic arm with a web-based timer that prompts brief tactile interaction during scheduled breaks. We conducted a within-subjects study comparing three conditions--no break, passive break, and active (PAWSE fidgeting-based) break--using a 2-back task and subjective workload measures (NASA-TLX). Results showed differences in post-task accuracy across conditions, with the highest accuracy observed in the active break condition. Reaction time remained largely comparable. Workload measures indicated reduced mental demand and frustration during rest conditions, with the active break providing the most favorable subjective experience. These preliminary findings offer insight into how fidgeting-based micro-breaks may fit within focused digital work and inform the design of future tactile micro-break systems. |
|
| Jung, Magnus |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Jung, Suhwan |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Jungblut, Benjamin |
Khaled Abdul Rahman, Benjamin Jungblut, and Youjin Chung (Georgia Institute of Technology, Atlanta, USA) After COVID-19, many assumed in-store shopping would decline, yet research shows that most consumers still make final purchasing decisions inside retail spaces. Retail advertising remains influential because it engages customers emotionally. However, most in-store advertisements, digital or physical, are static and lack multi-sensory stimulation. This paper addresses that gap by focusing on aroma products, which aim to convey emotional experience and memory to customers. We propose our design, "Aroma-bota," an interactive robotic installation that uses movement and multisensory cues to enhance the aroma retail experience. We evaluated Aroma-Bota through user testing in a simulated retail environment to understand how people interpreted its motion-based emotional cues. Results show that emotionally legible gestures---especially offering and "happy" motions---significantly enhanced user engagement and clarity of intent. This project contributes a novel design exemplar of sensory-driven, emotion-expressive retail robotics for the HRI community. |
|
| Kaleem, Zuha |
Nigel G. Wormser, Zuha Kaleem, Jessie Lee, Dyllan Ryder Hofflich, and Henry Calderon (Cornell University, Ithaca, USA; Cornell University, Brooklyn, USA) Musculoskeletal injuries from manual laundry cart transportation are very common for workers in the hospitality industry. To address this, we designed Elandro, a teleoperated laundry cart that collaboratively helps hotel staff with transportation across and within floors at a hotel. Through iterative user research at Statler Hotel, and wizard-of-oz interaction testing, we revealed design requirements essential for successful human-robot interaction. Elandro contributes to reducing physical strain on workers, maintaining staff autonomy and decision-making, establishing a human-centered approach where technology empowers rather than replaces hospitality workers. |
|
| Kalvade, Ragini |
Ragini Kalvade and Sohan Naidu (University of Illinois at Chicago, Chicago, USA) Home piano practice is vital for early music learning, yet it often depends on a child’s intrinsic motivation. This paper introduces DoReMi, a piano peer-bot designed as an expressive, encouraging companion for young beginners. Through animated responses, colorful feedback, and an approachable social presence, DoReMi supports children as they practice and interact with the instrument. We have designed different feedback styles and timing strategies to further shape a child’s perception of the robot, and their motivation to continue learning. |
|
| Kamino, Waki |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Kaminskaia, Nataliia |
Nataliia Kaminskaia, Rob Saunders, Kim Baraka, and Somaya Ben Allouch (Leiden University, Leiden, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Amsterdam, Amsterdam, Netherlands; Amsterdam University of Applied Sciences, Amsterdam, Netherlands) A single tap from a robot can set off a cascade of interpretation. This study examines how people perceive affect, intent, and agency when a non-humanoid robot conveys meaning through contact-based nudging. Using a cube-shaped robot programmed with twenty animator-designed affect–intent variants, participants completed two tasks: a situated interaction in which the robot attempted to pass their arm, and an isolated gesture-recognition task. In the situated encounter, participants rapidly attributed motives such as attention-seeking, social contact, or boundary testing. Recognition of the robot’s obstacle-passing goal was partial but participants consistently described the robot’s movement qualities as shifting from cautious to more assertive, interpreting these changes as emotional and intentional. In the isolated task the expressive movement was far less legible: only neutral gestures were reliably recognised, with frequent confusions between comfort and attention. These findings support the position that nudging gains meaning in context: while a minimal robot can elicit rich social inference when its nudges unfold dynamically in interaction, affect and intent become opaque when the same motions are removed from their relational frame. |
|
| Kamran, Medhavi |
Medhavi Kamran, Snehesh Shrestha, and Vinh Nguyen (Michigan Technological University, Houghton, USA; University of Maryland College Park, College Park, USA) Augmented Reality (AR) is often promoted as a solution to the cognitive and physical demands of traditional Teach Pendant (TP) programming for collaborative robots. Although prior work has suggested advantages of the AR interface, many evaluations have been limited in scope and may not fully represent the complexities of real-world manufacturing tasks. This study compares the performance of an AR interface to that of a standard TP interface for manufacturing assembly tasks of varying difficulty. In a between-groups study, one group of operators completed standard- ized assembly tasks using the TP interface, while a separate group used the AR interface instead. We collected broad set of metrics, including task completion time, task success, physical exertion, and measured cognitive workload (NASA-TLX). The analysis showed that participants achieved higher success rates on the 16 mm rectangular peg task and waterproof connector tasks when using AR. They also completed the 12 mm circular peg task significantly faster. Although AR did not reduce cognitive workload relative to TP, these findings suggested that AR may be beneficial for tasks that required significant mental interpretation and offered little advantage for components with non-intuitive geometry. Overall, the results challenged the common assumption that AR universally outperforms traditional programming interfaces in manufacturing tasks. Instead, AR performance appears to be task-dependent and possibly influenced by factors such as task complexity. |
|
| Kang, Chen |
Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Kang, Hyunmin |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Kangler, Valerii |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Kannan, Neha |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. |
|
| Kapell, Charlotte |
Johannes Kraus, Niklas Grünewald, Charlotte Kapell, and Marlene Wessels (University of Mainz, Mainz, Germany) Robot bullying - purposeful obstructive or harmful behavior to-ward robots - is widely discussed but still under-researched, with mixed findings under realistic conditions. In this field experi-ment (N = 35), we tested how robot social role behavior (coop-erative-polite vs. functional-technological) and social norms (pro- vs. anti-bullying) influence bullying of a cleaning robot. Bullying was measured via an adapted hot-sauce paradigm, alongside anthropomorphism and trust. Participants bullied the impolite robot significantly more, while social norms showed no significant effects. Anthropomorphism and trust were higher for the polite robot. This indicates that robots’ social roles shape robot perceptions and harmful behavior towards them. |
|
| Karadağ, Ayaz |
Umur Yıldız, Berk Yüce, Ayaz Karadağ, Tuğçe Nur Pekçetin, and Burcu A. Urgen (Bilkent University, Ankara, Türkiye) Large Language Models(LLMs) introduce powerful new capabilities for social robots, yet their black-box nature creates a barrier to trust. Transparency is already established as important for humanrobot trust, but how to convey LLM intentions and reasoning in real-time, embodied interaction remains poorly understood. We developed a task-level mechanistic transparency system for an LLM-powered Pepper robot that displays its internal reasoning process dynamically on the robot’s tablet during interaction. In a mixed-design study, participants engaged with Pepper across four trust-relevant tasks in either a Transparency-ON condition or Transparency-OFF condition. Transparency produced significantly greater trust growth than opacity, and a substantial increase in perceived reliability, indicating that transparency remains a key design element for trust calibration in LLM-driven human-robot interaction. |
|
| Karadayı Ataş, Pınar |
Pınar Karadayı Ataş (Istanbul Arel University, İstanbul, Türkiye) The explainable human-robot system for real-time pain assessment presented in this study combines autonomous care-robot behavior with multimodal affective sensing (RGB facial expressions, temperature data, and physiological signals). The approach uses the BioVid dataset to train and evaluate Tri-Modal Explanation Fusion (TMEF), which combines SHAP, LIME, and dual-stream Grad-CAM to produce consistent and understandable explanations for pain predictions. By identifying cross-modal inconsistencies prior to sending out alerts, a Signal-Attention Alignment (SAA) technique further reduces false alarms. By providing human-readable explanations for high-pain conditions, the robot enhances interaction safety and clinical transparency. The current gap where current pain-detection algorithms remain highly predictive but insufficiently explainable is filled by preliminary results that show sustained multimodal fusion, dependable interpretability, and higher user confidence. This work advances healthcare-oriented HRI by unifying explainability, continuous pain monitoring, and ethical robotic decision-making. |
|
| Kardan, Iman |
Pooria Fazli, Amirhossein Nazari, Navid Jooyandehdel, Iman Kardan, and Alireza Akbarzadeh (Ferdowsi University of Mashhad, Mashhad, Iran) Lower-limb exoskeletons play an essential role in rehabilitation and mobility assistance, where accurate real-time gait phase recognition is critical for achieving safe, synchronized, and intuitive human–robot interaction. Many existing approaches rely on multiple sensors such as IMUs, EMG, and FSRs, which increase system complexity, computational load, cost, and susceptibility to mechanical wear. In this study, we propose a lightweight and robust gait phase detection framework that uses only hip and knee joint encoder data—sensors that are already integrated into most exoskeletons and are less prone to noise and misplacement. The method employs a finite state machine (FSM) to identify gait phases and detect key gait events, including heel strike, in real time. The approach was first evaluated in simulation using the SCONE (Opensim) platform and then experimentally implemented on the NEXA knee-joint exoskeleton with multiple healthy participants. Results show that the proposed method reliably predicts gait phases and heel-strike timing with minimal temporal error, while achieving significantly higher processing frequency compared to sensor-rich configurations. These findings demonstrate that accurate and efficient gait phase recognition can be achieved using only encoder data, offering a practical and low-cost solution for real-world exoskeleton control applications. |
|
| Katz, Reut |
Reut Katz, Nevo Heimann Saadon, Andrey Grishko, and Hadas Erel (Reichman University, Herzliya, Israel) Robots are increasingly integrated as support tools for enhancing human learning and problem-solving. In this study, we explore the design of a robotic object intended to support problem-solving experiences. The design guidelines are grounded in well-established cognitive strategies known to improve performance. We focus on two strategies in particular: (1) constructive feedback on performance and (2) social feedback that encourages self-explanation. To reduce distractions, we minimized the robot’s communicative load and kept its expressive behaviors simple. Through an iterative design process, we developed a small robotic printer that communicates through subtle non-verbal gestures (nodding, leaning, and gaze-like orientation) paired with minimal printed feedback. This combination aims to create a supportive, non-threatening inter action that provides clear performance guidance while conveying social presence. We describe the robot’s design process and propose an experimental study examining how constructive and social feedback influence problem-solving outcomes. |
|
| Kaufman, Zachary |
Claire Lewis, Melody Goldanloo, Matthew Murray, Zachary Kaufman, and Tom Williams (Colorado School of Mines, Golden, USA; University of Colorado at Boulder, Boulder, USA) Museums are an effective informal learning environment for science, art and more. Many researchers have proposed museum guide robots, where the outcomes of the interactions are based solely on the robot’s communication. In contrast, we explored how a robot could encourage learning and teamwork through human-human interactions. To achieve this, we created “Chase,” a novel zoomorphic robot that presents “Data Chase,” an interactive museum activity. We designed Chase to enable museum-goers to learn about the exhibits together by prompting users to complete a teamwork based scavenger hunt for rewards. |
|
| Kawmali, Netitorn |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Kay, Jennifer S. |
Jennifer S. Kay, Tyler Errico, Audrey L. Aldridge, John James, and Michael Novitzky (Rowan University, Glassboro, USA; USA Military Academy at West Point, West Point, USA) Effective human-robot teaming in highly dynamic environments, such as emergency response and military missions, requires tools that support planning, coordination, and adaptive decision-making without imposing excessive cognitive load. This paper introduces PETAAR, the Planning, Execution, to After-Action Review framework that seamlessly integrates autonomous unmanned vehicles (UxVs) into Android Team Awareness Kit (ATAK), a widely adopted situational awareness platform. PETAAR leverages ATAK's geospatial visualization and human team collaboration while adding features for autonomous behavior management, operator feedback, and real-time interaction with UxVs. Its most novel contribution is enabling digital mission plans, created using standard mission graphics, to be interpreted and executed by unmanned systems, bridging the gap between human planning, robotic action, and shared understanding among all teammates (human and autonomous). Results from this work inform best practices for integrating autonomy into human-robot teams across diverse operational contexts. |
|
| Kazubski, Jessica |
Ilona Buchem, Jessica Kazubski, and Charly Goerke (Berlin University of Applied Sciences, Berlin, Germany) This paper presents the design of NEFFY 2.0, a social robot designed as a haptic slow-paced breathing companion for stress reduction, and reports findings from a mixed-methods user study with 14 refugees from Ukraine. Developed through a user-centered design process, NEFFY 2.0 builds on NEFFY 1.0 and integrates embodiment and multi-sensory interaction to provide low-threshold, accessible guidance of slow-paced breathing for stress relief, which may be particularly valuable for individuals experiencing prolonged periods of anxiety. To evaluate effectiveness, an experimental comparison of a robot-assisted breathing intervention versus an audio-only condition was conducted. Measures included subjective ratings and physiological indicators, such as heart rate (HR), heart rate variability (HRV) using RMSSD parameter, respiratory rate (RR), an galvanic skin response (GSR), alongside qualitative data from interviews exploring user experience and perceived support. Qualitative findings showed that NEFFY 2.0 was perceived as intuitive, calming and supportive. Survey results showed a substantially larger effect in significant reduction of perceived stress in the NEFFY 2.0 condition compared to audio-only. Physiological data reveled mixed results combined with large inter-personal variability. Three patterns of breathing practice with NEFFY 2.0 were identified using k-means clustering. Despite the small sample size, this study makes a novel contribution by providing empirical evidence of stress reduction in a vulnerable population through a direct comparison of robot-assisted and non-robot conditions. The findings position NEFFY 2.0 as a promising low-threshold tool that supports stress relief and contributes to the vision of HRI empowering society. |
|
| Kefyalew, Henok |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Khan, Muhammad Haris |
Muhammad Haris Khan, Artyom Myshlyaev, Artem Lykov, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) We propose a new concept, Evolution 6.0, which represents the evolution of robotics driven by Generative AI. When a robot lacks the necessary tools to accomplish a task requested by a human, it autonomously designs the required instruments and learns how to use them to achieve the goal. Evolution 6.0 is an autonomous robotic system powered by Vision-Language Models (VLMs), Vision-Language Action (VLA) models, and Text-to-3D generative models for tool design and task execution. The system comprises two key modules: the Tool Generation Module, which fabricates task-specific tools from visual and textual data, and the Action Generation Module, which converts natural language instructions into robotic actions. It integrates QwenVLM for environmental understanding, OpenVLA for task execution, and Llama-Mesh for 3D tool generation. Evaluation results demonstrate a 90% success rate for tool generation with a 10-second inference time and action generation achieving 83.5% in physical and visual generalization, 70% in motion generalization, and 37% in semantic generalization. Future improvements will focus on bimanual manipulation, expanded task capabilities, and enhanced environmental interpretation to improve real-world adaptability. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios. |
|
| Khan, Roohan Ahmed |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. |
|
| Khanna, Parag |
Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Khoo, Amous |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. |
|
| Khoo, Weslie |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Kim, Daehui |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Kim, Hyochang |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Kim, Hyunjung |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Kim, Joanne Taery |
Joanne Taery Kim and Sehoon Ha (Georgia Institute of Technology, Atlanta, USA) This work explores how assistive robots can achieve trustworthy human-robot coexistence by designing a robotic guide dog for blind or visually impaired (BVI) users. We examine three questions: how users and bystanders expect such a system to behave, how a human and robot can navigate safely as a coordinated team, and how the system can build trust and social comfort for both user and bystanders. Our studies identify mutual awareness, social legibility, and transparent communication as key elements of effective teaming. Building on these insights, we propose navigation models and interaction strategies that combine semantic reasoning with legible motion, advancing assistive robots toward safe, reliable, and socially intelligent behavior in everyday settings. |
|
| Kim, Jong Hoon |
Hasan Shamim Shaon, Andrew Trautzsch, Anh Tuan Tran, Varun Nagarkar, and Jong Hoon Kim (Kent State University, Kent, USA) Effective communication of motion intent is critical for autonomous mobile robots operating in human-populated environments. While prior works have demonstrated that floor-projected cues such as arrows or simplified trajectories can enhance bystander prediction and safety, existing systems often rely on static or handcrafted visual encodings and are rarely evaluated within end-to-end service workflows. We introduce Vendobot, a projection-augmented delivery robot that integrates a ROS1 navigation stack, an Android app based, PostgreSQL-backed order management pipeline, a real-time telemetry subsystem, and a projector-equipped Raspberry Pi 5 executing a lightweight intent-projection algorithm. Our method subscribes to the Timed Elastic Band (TEB) local planner to extract the robot’s predicted short-horizon trajectory, transforms it into projector coordinates, and renders either (1) quantized directional indicators or (2) a continuous animated polyline representing the robot’s true local plan with less than 100 ms latency. In a within-subject study involving both bystanders and delivery recipients, the projected local-plan visualization significantly improved intent legibility, motion predictability, and user comfort compared to arrow-based or no-projection conditions. These findings position trajectory-grounded projection as a technically viable and perceptually beneficial communication modality for service robots deployed in semi-public indoor environments. Raiyan Ashraf, Yanni Liu, Sruthi Ganji, and Jong Hoon Kim (Kent State University, Kent, USA) Social robots frequently struggle to sustain meaningful engagement, often limited to surface-level interactions that lack conversational depth. To address this, we present a multimodal conversational architecture that integrates Motivational Interviewing (MI) strategies with situated perception. Key to this approach is a novel dual-stream perception engine: situated cue detection anchors dialogue in the user's immediate physical environment to establish common ground, while tri-modal affect inference (facial, vocal, linguistic) dynamically adjusts the conversation strategy based on real-time user emotion for facilitating empathy. Our system employs a hybrid Large Language Model (LLM) architecture, combining a lightweight model for low-latency fluency and a reasoning model for high-level planning, to guide users through progressive stages of dialogue from rapport-building to deep reflection. A pilot study with the Pepper robot demonstrates that this physically grounded, MI-guided approach successfully facilitates emotional reminiscence and enhances perceived empathy and engagement. These findings suggest that the proposed framework is a promising foundation for next-generation empathic agents, with significant potential applications in cognitive stimulation for aging populations and therapeutic social companionship. |
|
| Kim, Keuntae |
Keuntae Kim, Cailyn A. Oh, Andrew Park, and Chung Hyuk Park (George Washington University, Washington, USA; Thomas Jefferson High School for Science and Technology, Alexandria, USA; Poolesville High School, Poolesville, USA) Joint attention is a core component of social communication and is frequently impaired in individuals with autism. This work presents a platform validation of an emotionally expressive quadruped robot dog with a custom pan--tilt head and on-device facial emotion recognition, and asks participants whether emotion-driven reactions make joint-attention cues clearer and more engaging than gaze-only behavior. In a within-subjects pilot with six neurotypical adults, we compared (A) face tracking only and (B) face tracking plus emotion recognition and empathetic reactions. Participants generally found the robot's directional cues easy to interpret, reported effective emotional contagion, and expressed strong willingness to interact again, despite low perceived realism. Perceived safety/comfort was mixed for some users, and subjective "shared attention" was inconsistent across participants, suggesting a need for smoother and more predictable gaze and motion timing. These early results help surface design constraints and failure modes before future studies with autistic participants. |
|
| Kim, Lawrence H. |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| Kim, Natalie G.R. |
Natalie G.R. Kim (University of Southern California, Los Angeles, USA) Humanoid robots are increasingly visible in digital media environments, where online audiences encounter lifelike demonstrations of humanoid artificial intelligence (AI) systems long before interacting with them in everyday life. These encounters occur within platformized communication spaces such as YouTube, where user comments function as sites of public sense-making and affective reaction. This study analyzes 116,611 YouTube comments on highly viewed videos featuring humanoid AI robots to examine how publics discursively and emotionally construct meaning around these technological beings. Using BERTopic for bottom-up topic modeling and a transformer-based sentiment classifier (CardiffNLP RoBERTa) for sentiment detection, we identify seven emergent mega-topics: Reaction/Exclamations, Creepy/Uncanny, Jobs/Labor, Program/Control, Religion/God, Gender/Identity, and Robot General. Sentiment patterns systematically vary across frames. For example, awe-based reactions are predominantly positive, whereas uncanny and control-related frames skew negative with elevated neutrality. These findings highlight how early-phase discourse and affective climates around humanoid robots are discussed in large-scale online publics, offering implications for Human-Robot Interaction (HRI) design, communication, and governance. |
|
| Kim, Tae-Seong |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Kim, Yena |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Kimura, Hiroki |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. Ryusei Shigemoto, Hiroki Kimura, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan) This study proposes a boundary-centric multi-robot pedestrian flow guidance design that enables soft, human-centered flow alignment along a guidance line with only a few robots. Flow boundaries are estimated from the crowd’s outer contour via a convex hull, and boundary pedestrians are classified as head, side, and corner. Guidance priority integrates boundary-type importance with the predicted time to intersect the guidance line. A reachability-aware score enables optimal robot allocation via the Hungarian algorithm. Assigned robots employ a parallel-interaction strategy with a side-offset distance perpendicular to a target’s walking direction to elicit anticipatory avoidance without overtly forcing heading changes. A unidirectional guidance simulation with 15 pedestrians and three robots demonstrates feasibility, showing reachability-driven role sharing, improved alignment, and guidance without pedestrians crossing the guidance line under the tested condition. Future work will evaluate safety, trust, and acceptability in opposing and crossing flows with real robots. |
|
| Kimura, Yuki |
Yuki Kimura, Emi Anzai, Naoki Saiwaki, and Masahiro Shiomi (ATR, Kyoto, Japan; Nara Women’s University, Nara, Japan) Digital technologies make it easy for people to be misled by messages and social robots, raising the question of how to help users become less easily deceived. We examined whether people become more cautious and feel that they are contributing more to others if, after being deceived by a robot, they use the same robot to protect another person from deception. In our experiment, adults were first deceived by a communication robot in a consent-form scenario, then briefly operated it to guide a dummy participant away from deception, and finally completed a similar online consent-form check without the robot. The results showed that most were deceived again in the online task, and their perceived contribution to others did not significantly increase. These findings suggest that a single brief chance to protect others is insufficient to reliably increase caution, but the paradigm offers a basis for studying how robots might support resistance to deception. |
|
| Kirschbaum, Ana |
Ana Kirschbaum and Anja Richert (University of Applied Sciences Cologne, Cologne, Germany) This paper introduces a framework for studying proxemics and bonding in interactions between socially interactive agents and groups that are essential for real-world applications. Combining spatial tracking with self-report measures, it uses two self-developed open-source tools – the Group-Proximity-Annotation-Tool-for-Human-Agent-Interaction and the Group Perception Canvas – to analyze group bonds and spatial patterns. The framework is implemented and evaluated with N = 187 participants interacting with a robot and a virtual agent in a museum setting, offering a scalable way to connect perceived experience and observable behavior. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Kisil, Anna |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Kitazaki, Michiteru |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) This study proposes an integrated robotic massage platform designed to bridge the gap between mechanical stimulation and human-like care. Conventional systems often require a prone posture and lack psychological immersion, limiting embodiment and safety. To address this, we developed a system featuring seated multi-robot actuation—simultaneously targeting the forearm and sole—and a first-person perspective (1PP) VR interface with synchronized virtual therapists. A field study with 32 participants evaluated feasibility and user experience. Results showed high ratings for overall satisfaction and psychological safety. Notably, a strong positive correlation was found between "perceived human-likeness" and user satisfaction. This suggests that inducing a sense of human agency via 1PP VR effectively complements mechanical stimulation, thereby significantly elevating the quality of the care experience. We contribute (i) a seated dual-limb multi-robot massage platform with 1PP VR therapists and (ii) in-the-wild user evidence that perceived human-likeness and safety/relaxation are key correlates of satisfaction. |
|
| Kiuchi, Keita |
Shigen Shimojo, Kai Wang, Keita Kiuchi, Yusuke Shudo, and Yugo Hayashi (Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; Ritsumeikan University, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) Social isolation among older adults is a global concern, and socially assistive robots are increasingly explored as companions to support mental well-being. Users’ impressions can strongly influence psychological outcomes. Building on Socioemotional Selectivity Theory, which suggests that older adults prioritize emotionally meaningful goals, this study examined the effectiveness of a solution-focused approach (SFA), which emphasizes positive information, compared with a problem-focused approach (PFA), which focuses on negative information, and explored the influence of embodied conversational agent (ECA) impressions. We implemented the ECA on a humanoid social robot. The SFA-based robot-mediated interaction did not significantly improve mental health as measured by the K10, although perceived robot intelligence correlated with outcomes. Our findings highlight that perceived intelligence—rather than conversational framework—plays a key role in influencing mental-health outcomes in older adults. Yugo Hayashi, Shigen Shimojo, and Keita Kiuchi (Ritsumeikan University, Ibaraki, Japan; Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) This study examined the influence of different dialog media on emotional expression and utterance structure in automated active listening counseling for older adults. Specifically, we compared robot and virtual reality (VR) that differ in embodiment and social presence through solution-focused counseling conducted for three weeks. Emotional expression and lexical network structure were analyzed using automatic text coding, and lexical network analysis. Positive emotional expressions were more frequent in the early stages with VR. Conversely, although the robot condition exhibited lower responsiveness in the initial sessions, positive utterances increased as rapport developed over time. Lexical network analysis further revealed that robots encouraged greater lexical diversity and the formation of hub structures centered on self-disclosure–related vocabulary. These indicate that VR and robots facilitate emotional expression, suggesting a staged media utilization model in which VR is effective at the start of the intervention, while robots become more effective in the later phases. |
|
| Kiuchi, Shota |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Kiz, Nikolai |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Klein, Carolin Sarah |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Klein, Stina |
Stina Klein, Birgit Prodinger, Elisabeth André, Lars Mikelsons, and Nils Mandischer (University of Augsburg, Augsburg, Germany) Robots are becoming more prominent in assisting persons with disabilities (PwD). Whilst there is broad consensus that robots can assist in mitigating physical impairments, the extent to which they can facilitate social inclusion remains equivocal. In fact, the exposed status of assisted workers could likewise lead to reduced or increased perceived stigma by other workers. We present a vignette study on the perceived cognitive and behavioral stigma toward PwD in the workplace. We designed four experimental conditions depicting a coworker with an impairment in work scenarios: overburdened work, suitable work, and robot-assisted work only for the coworker, and an offer of robot-assisted work for everyone. Our results show that cognitive stigma is significantly reduced when the work task is adapted to the person's abilities or augmented by an assistive robot. In addition, offering robot-assisted work for everyone, in the sense of universal design, further reduces perceived cognitive stigma. Thus, we conclude that assistive robots reduce perceived cognitive stigma, thereby supporting the use of collaborative robots in work scenarios involving PwDs. |
|
| Kleiser, Katharina Lisa |
Katharina Lisa Kleiser, Veerle Buntsma, and Sebastian Schneider (University of Twente, Enschede, Netherlands) Natural disasters call for time-effective search and rescue (SAR) operations to find and assist survivors. While dogs are used to locate survivors due to their keen sense of smell, recent advances in robotics are also expanding the role of technology in these efforts. This late-breaking report explores what close collaboration between handlers and SAR dogs can teach us about effective human-robot teaming. We conducted four expert interviews with SAR dog handlers in the Netherlands and found that successful teamwork heavily relies on mutual responsiveness and nonverbal communication. We found that significant challenges during SAR missions include high temperatures, fatigue, and harzardous environments. In such situations, robots could provide meaningful support and complement human-do teams. Nevertheless, current robots fall short in meaningfully supporting active search tasks due to missing olfactory capabilities and limited abilities to navigate over rubble and debris. Our findings aim to inform real-world rescue practices as SAR robotics evolves, ensuring that emerging technologies align with rescuers' actual needs and workflows. |
|
| Kodani, Naoki |
Naoki Kodani, Yuya Komai, Kurima Sakai, Takahisa Uchida, and Hiroshi Ishiguro (University of Osaka, Toyonaka, Japan; ATR, Keihanna Science City, Japan; Osaka University, Osaka, Japan; Osaka University, Toyonaka, Japan) In recent years, avatar technology has been used in various forms, such as robots and CG agents. It is considered that avatars that behave autonomously could expand human capabilities, such as participating in social activities on behalf of the real person. In this study, we developed an autonomous dialogue system that reflects the operator's personality by using a Geminoid, which is an android modeled after the appearance of a specific person. Regarding such androids modeled after specific persons, previous research has reported at the interview level that people find it easier to talk to the android than to the real person it was modeled after. However, the relationship between such an avatar and the human it is modeled after for the interlocutor has not been quantitatively clarified. This study quantitatively evaluated the effect of the Geminoid with an autonomous dialogue system on participants' perceived relationship with the real person it was modeled after. The results showed that interacting with the developed system significantly increased the participants' sense of closeness toward the real person. Furthermore, since interacting with the real person model afterward did not significantly increase this sense of closeness, it is expected that this system sufficiently enhances closeness and can produce an effect equivalent to that of interacting with the real person. |
|
| Kojima, Ryosei |
Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| Komai, Yuya |
Naoki Kodani, Yuya Komai, Kurima Sakai, Takahisa Uchida, and Hiroshi Ishiguro (University of Osaka, Toyonaka, Japan; ATR, Keihanna Science City, Japan; Osaka University, Osaka, Japan; Osaka University, Toyonaka, Japan) In recent years, avatar technology has been used in various forms, such as robots and CG agents. It is considered that avatars that behave autonomously could expand human capabilities, such as participating in social activities on behalf of the real person. In this study, we developed an autonomous dialogue system that reflects the operator's personality by using a Geminoid, which is an android modeled after the appearance of a specific person. Regarding such androids modeled after specific persons, previous research has reported at the interview level that people find it easier to talk to the android than to the real person it was modeled after. However, the relationship between such an avatar and the human it is modeled after for the interlocutor has not been quantitatively clarified. This study quantitatively evaluated the effect of the Geminoid with an autonomous dialogue system on participants' perceived relationship with the real person it was modeled after. The results showed that interacting with the developed system significantly increased the participants' sense of closeness toward the real person. Furthermore, since interacting with the real person model afterward did not significantly increase this sense of closeness, it is expected that this system sufficiently enhances closeness and can produce an effect equivalent to that of interacting with the real person. |
|
| Komatsu, Takanori |
Takanori Komatsu (Meiji University, Tokyo, Japan) This study extends Komatsu and Shirai's study (2025), which proposed using "Paul Weiss's Thought Experiment" to implicitly extract participants' mental images of robots, by addressing their two drawbacks: the within-participant design and the ambiguous categorization of target objects. In Investigation 1, a between-participants design was applied to the same three targets (human, computer, and robot), confirming that results were consistent with the previous within-participant study. Investigation 2 further subdivided the three targets into five (smartphone, laptop, cleaner robot, humanoid, and human). The results revealed a triangular structure among human, smartphone/laptop, and cleaner robot, while the humanoid occupied an intermediate position, perceived as neither human, computer, nor a cleaner robot. These findings demonstrate that the concrete categorization of target objects appearing in this thought experiment succeeded in extracting the participants' mental image of those objects in greater detail. |
|
| Kong, Tao Yat |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. |
|
| Kong, Xiangfei |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| König, Jemma L. |
Jessica Turner, Nicholas Vanderschantz, Jemma L. König, and Rafeea Siddika (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) The intentional design of robots to evoke creepiness provides a unique lens for studying human perception and willingness to engage. To understand user perceptions and acceptance of robots we developed a robot prototype designed with targeted facial, morphological, and movement features that may be perceived as "creepy". Using the Human-Robot Interaction Evaluation Scale (HRIES) we found that disturbance was moderate towards our intentionally creepy robot with significant participant variation. Furthermore, qualitative results confirmed this polarity, with descriptions ranging from "angry and unfriendly" to "cool and cute". This variability demonstrates that "creepiness" is more subjective than initially anticipated and highlights a key research gap in academic literature with the need for measurement tools which capture negative perceptions in HRI. Jessica Turner, Nicholas Vanderschantz, Judy Bowen, Jemma L. König, and Hannah Carino (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) Successful integration of social robots in education relies on the acceptance of robots in learning contexts by students. Using a participatory design workshop, students interacted with a KettyBot and ideated potential roles for robots in the classroom. This was followed by a questionnaire and the Godspeed Questionnaire Series (GQS) to understand student perceptions and attitudes towards social robots in education environments. Learners described potential use cases and our results demonstrate students envision robots as assistants rather than teachers, emphasising the importance of human connection in learning. |
|
| Konrad, Sharni |
Aurora An-Lin Hu, Dimity Crisp, Sharni Konrad, Damith Herath, and Janie Busby Grant (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) The mismatch between user expectations and robot performance—the expectation gap—is common in human–robot interaction. Although related research is limited, preliminary evidence suggests that the expectation gap has a considerable impact on user adoption of robots. The present study examined how failing, confirming, and exceeding user expectations and the extent to which robot performance differs from expectations predict users’ adoption intention. A sample of 234 participants completed pre-interaction expectation measures and post-interaction robot performance ratings after completing a drawing activity with a humanoid robot (Pepper). Results showed that considering both the magnitude and direction of the expectation gap (signed gap values) consistently yielded stronger associations and predictive power for adoption intention than considering the magnitude alone (absolute gap values) across four expectation dimensions, with expectation gaps related to Relative Advantage emerging as the strongest predictor. Overall, the findings highlight that failing to meet expectations consistently predicted lower adoption intention compared to both confirming and exceeding expectations, whereas evidence for whether exceeding expectations provides additional benefits beyond confirming them was mixed. Sharni Konrad, Nipuni Wijesinghe, Eileen Roesler, and Janie Busby Grant (University of Canberra, Canberra, Australia; University of Canberra, Bruce, Australia; George Mason University, Fairfax, USA) This large sample study used exposure to a humanoid social robot to investigate the relationship between affinity with technology, social presence and future intention to use the robot. A between-subjects experiment was conducted with 235 participants who were randomly assigned to complete a 3 minute drawing task with an embodied robot exhibiting either high or low social presence. Regression analyses indicated that higher affinity with technology predicted stronger perceptions of social presence. Mediation analyses revealed that social presence partially mediated the relationship between affinity with technology and future intention to use, such that affinity with technology influenced future intention to use both directly and indirectly through social presence. Analysis of the subdimensions of social presence revealed that while co-presence significantly accounted for this effect, shared potential did not. Across models, affinity with technology exerted a direct influence on future intention to use, suggesting that dispositional openness to technology fosters behavioural intentions both directly and indirectly through relational perceptions. These findings highlight the importance of integrating dispositional and relational factors in HRI to support robot adoption. |
|
| Koo, Elodie |
Tyler Garvey, Elodie Koo, and Ryan Schermerhorn (Colby College, Waterville, USA) Our proposed design is an armband that will prompt users to maintain a routine and notify caregivers of emergencies. The device utilizes an Arduino Nano microcontroller, allowing the user to input their routine data over the internet. Haptic and audio feedback will signify different parts of the routine, played through the speaker and motor elements. Pressing a button will allow for rejection of promptings, while holding it will contact help. The device will also detect dangerous scenarios for the wearer using an accelerometer and a temperature sensor. This device aims to improve the health and well-being of ADRD patients. |
|
| Kooij-Meijer, Febe Anna |
Febe Anna Kooij-Meijer, Emilia I. Barakova, Rosa Elfering, Wang Long Li, and Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands; Tinybots, Rotterdam, Netherlands) The growing population of individuals with mild cognitive impairment and dementia places increasing demands on home-care systems, while staff shortages and high caregiver workloads underscore the need for assistive technologies. However, research on implementing these technologies in home care practice remains limited. This study examines professional caregivers’ digital onboarding of Tessa, a social robot that supports through verbal reminders. A conceptual digital onboarding probe was evaluated with novice, experienced, and expert users. Findings indicate that the onboarding process improves usability and efficiency by providing intuitive guidance and structured workflows. Additionally, LLMs can translate caregiver-provided goals into actionable robot scripts, though oversight remains essential for quality assurance. The probe and LLM support more effective onboarding and enhance caregiver’s user experience. |
|
| Korpan, Raj |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. Raj Korpan, Khadeja Ahmar, Raitah A. Jinnat, and Jackie Yee (City University of New York, New York, USA) Cities release large volumes of open civic data, but many people lack the time or skills to interpret them. We report an exploratory pilot study examining whether a social robot can narrate stories derived from open civic data to support public understanding, trust, and data literacy. Our pipeline combines civic data analysis, large language model–based narrative generation, and scripted behaviors on the Misty II robot to produce expressive and neutral versions of two stories on noise complaints and COVID-19 trends. We deployed the system at a public event and collected post-interaction surveys from six adult participants. While the small sample size limits generalization, the pilot suggests that participants found the stories relevant and generally understood their main points, though engagement and enjoyment were mixed. Participant feedback highlighted the need for improved vocal prosody, reduced information density, and more interactivity. These findings provide initial feasibility evidence and design insights to inform future iterations of robot civic data storytelling systems. |
|
| Koseki, Shuka |
Takahito Murakami, Maya Grace Torii, Shuka Koseki, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan; University of Tokyo, Hongo, Japan) We address a mismatch between how care information is provided and accessed. Explanations about procedures, routines, and self-management are delivered at fixed times in dense formats, leading patients to concentrate questions into nurse encounters and increasing workload. We frame this as a problem of bidirectional mediation and propose Suzume-chan, a small “Pet-as-a-Friend” plush agent that serves as an embodied information hub. Patients can speak to Suzume-chan without operating devices to receive on-demand explanations and reminders, while nurses obtain compact, nursing-relevant records. Suzume-chan runs entirely on a local network using automatic speech recognition, a local language model, retrieval-augmented generation, and text-to-speech. A workshop-style proof-of-concept highlighted embodiment, latency, and trust as key considerations for clinical use. |
|
| Koutrintzes, Dimitrios |
Christos Spatharis, Dimitrios Koutrintzes, and Maria Dagioglou (National Centre for Scientific Research ‘Demokritos’, Athens, Greece; National Centre for Scientific Research ‘Demokritos’, Ag. Paraskevi, Greece) Deep reinforcement learning enables robots to learn collaborative tasks with humans. However, off-policy methods suffer from primacy bias that causes agents to overfit to early experiences. We investigate the impact of primacy bias on team performance during a real world human-robot co-learning task, where twenty novice human participants collaborated with a Soft Actor-Critic agent to move a UR3 cobot. Analysis of how initial interactions shape subsequent learning dynamics demonstrates that the quality of the initial data distribution matters. While successful early experiences allow teams to overcome learning barriers, poor interactions cause the agent to converge toward suboptimal behaviors that prevent recovery, even as human skills improve. |
|
| Kouvaras Ostrowski, Anastasia |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Kozlov, Timofei |
Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. |
|
| Kragic, Danica |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Kraus, Johannes |
Johannes Kraus, Niklas Grünewald, Charlotte Kapell, and Marlene Wessels (University of Mainz, Mainz, Germany) Robot bullying - purposeful obstructive or harmful behavior to-ward robots - is widely discussed but still under-researched, with mixed findings under realistic conditions. In this field experi-ment (N = 35), we tested how robot social role behavior (coop-erative-polite vs. functional-technological) and social norms (pro- vs. anti-bullying) influence bullying of a cleaning robot. Bullying was measured via an adapted hot-sauce paradigm, alongside anthropomorphism and trust. Participants bullied the impolite robot significantly more, while social norms showed no significant effects. Anthropomorphism and trust were higher for the polite robot. This indicates that robots’ social roles shape robot perceptions and harmful behavior towards them. |
|
| Krivic, Senka |
Amar Halilovic and Senka Krivic (Ulm University, Ulm, Germany; University of Sarajevo, Sarajevo, Bosnia and Herzegovina) Robots increasingly provide explanations to support transparency in Human-Robot Interaction (HRI), yet users differ widely in how much explanation they prefer and when it is appropriate. We present a lightweight simulation framework in which a robot selects among explanation policies ranging from no explanation to norm-based, preference-based, and a Bayesian Adaptive (BA) policy that learns user preferences online while respecting normative expectations. Using synthetic user archetypes, we evaluate how these policies trade off utility, alignment, explanation cost, and regret. Results show that BA consistently achieves low regret across individual users while maintaining strong utility and alignment across diverse user archetypes. These findings motivate the development of preference-aware, uncertainty-driven explanation mechanisms for robust, adaptive robot communication in heterogeneous HRI settings. |
|
| Kubota, Alyssa |
Sandhya Jayaraman, Deep Saran Masanam, Pratyusha Ghosh, Alyssa Kubota, and Laurel D. Riek (University of California at San Diego, La Jolla, USA; San Francisco State University, San Francisco, USA) This workshop explores the social, ethical, and practical implications of deploying robots for clinical or assistive contexts. Robots hold potential to expand access to disabled communities, such as by providing physical or cognitive assistance, and enabling new ways of participating in social activities. They can assist healthcare workers with ancillary tasks and care delivery, supporting them to work at the top of their license. However, the real-world deployment of robots across these contexts can create social, ethical, and organizational challenges, or downstream effects. Some challenges include the potential for robots to undermine the agency of disabled people and reinforce their marginalization on a societal level. In clinical settings, robots may also disrupt care delivery, shift roles, and displace labor. To explore these issues, this workshop will invite trans-disciplinary speakers and participants from academia, industry, government, and non-academics with/without affiliations interested in surfacing their lived experiences in using or developing such robots. Through panel discussions, group ideation activities and interactive poster sessions, this workshop intends to critically and creatively explore the future of robots for clinical and assistive contexts. Topics will include the downstream implications of robots in clinical or assistive contexts and potential upstream interventions. Outcomes of the workshop will include publishing key workshop artifacts on our website and initiating a follow-up journal special issue. |
|
| Kuchenbecker, Katherine J. |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Kuehn, Hannah |
Tobias Carlsson, Erik Borg, Hannah Kuehn, and Joseph La Delfa (KTH Royal Institute of Technology, Stockholm, Sweden; Husqvarna Group, Stockholm, Sweden; Bitcraze, Malmö, Sweden) As autonomous lawnmowers become more common in shared spaces, aligning their behavior with human expectations and norms is increasingly important. Existing approaches often optimize for fixed objectives, limiting adaptability to diverse contexts. This work explores an alternative by enabling users to guide autonomous behavior development without fixed objectives. A prototype system allowed participants to interact with a simulated environment, using subjective preferences and genetic algorithms to generate lawnmower behaviors across generations. The study emphasized open-ended exploration, analyzing participant interactions and semi-structured interviews through reflexive thematic analysis. Results reveal detailed and reflective accounts of lawnmower behavior. We discuss these results in the context of our design decisions and how they affected the user's journey through a complex solution space. Ultimately, this work demonstrates how interactive genetic algorithms can surface user values and interpretations, potentially serving as both a behavior design tool and novel method to generatively explore social meaning in human-robot interaction. |
|
| Kühnlenz, Barbara Andrea |
Mohamed Cherif Rais, Barbara Andrea Kühnlenz, and Kolja Kühnlenz (Coburg University of Applied Sciences and Arts, Coburg, Germany; Ansbach University of Applied Sciences and Arts, Ansbach, Germany) This paper explores the association of anthropomorphism and cognitive load with respect to the influence of negative attitudes towards robots. The study consists in a cooperative pick-and-place task, where participants are required to repeatedly and alternatingly put a Lego brick onto one of two trays to be picked up and returned by a robot arm. The task is varied by whether or not participants had to remember an 8-digit number inducing extraneous cognitive load (within-subjects factor). Results show significant correlations of some dimensions of anthropomorphism and perceived cognitive load. However, dividing participants in groups with different negative attitudes towards robots, a significant difference of this association is found. This finding puts prior results on the dependency of anthropomorphism of robots and cognitive load into perspective and more research on the underlying cognitive processes is suggested. |
|
| Kühnlenz, Kolja |
Mohamed Cherif Rais, Barbara Andrea Kühnlenz, and Kolja Kühnlenz (Coburg University of Applied Sciences and Arts, Coburg, Germany; Ansbach University of Applied Sciences and Arts, Ansbach, Germany) This paper explores the association of anthropomorphism and cognitive load with respect to the influence of negative attitudes towards robots. The study consists in a cooperative pick-and-place task, where participants are required to repeatedly and alternatingly put a Lego brick onto one of two trays to be picked up and returned by a robot arm. The task is varied by whether or not participants had to remember an 8-digit number inducing extraneous cognitive load (within-subjects factor). Results show significant correlations of some dimensions of anthropomorphism and perceived cognitive load. However, dividing participants in groups with different negative attitudes towards robots, a significant difference of this association is found. This finding puts prior results on the dependency of anthropomorphism of robots and cognitive load into perspective and more research on the underlying cognitive processes is suggested. |
|
| Kulić, Dana |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Kumar, Deekshita Senthil |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Kumar, Rishabh |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Küntay, Aylin C. |
Hande Sodacı and Aylin C. Küntay (Koç University, Istanbul, Türkiye) Audience design (adapting communication to an audience’s needs and shared knowledge) poses unique challenges in human-robot interaction (HRI), where speakers lack prior experience with robots and must rely on real-time communicative cues. In a word-guessing game, participants described words to either a robot or a human audience, who guessed the words with a 25% error rate. Descriptions were coded for the number of semantic details (distinct meaning-relevant units). Participants produced more semantic details for robots than humans, with a marginal trend suggesting speakers reduced details for humans but maintained elaboration for robots during consistent success. This asymmetry hints at persistent assumptions about robot competence that behavioral success may not override. |
|
| Kupferman, Michael |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Kuschnaroff Barbosa, Nuno |
Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. |
|
| Kuzmin, Nikita |
Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. |
|
| Kwon, Nayeon |
Nayeon Kwon, Shengyuehui Li, Yu-Chia Tseng, and Yadi Wang (Cornell University, Ithaca, USA) Shared waiting spaces, like hotel lobbies, often feel socially stagnant, with people defaulting to silence and avoiding interactions. In this paper, we explore how an everyday object found in those spaces may be robotized to change this dynamic. We introduce HighLight, a mobile floor-lamp robot that uses light and movement to reduce social awkwardness and encourage spontaneous interactions among strangers. We designed its interactions to spark surprise, invite light-hearted engagement, reinforce positive social energy, and back off when people show discomfort. Through in-the-wild deployments, we observed that HighLight successfully elicited curiosity, laughter, and conversations, easing social awkwardness in shared spaces. |
|
| Kyrarini, Maria |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Kyratzi, Stella |
Stella Kyratzi, Anastasia Sergeeva, and Jan Jacobs (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Lely Industries, Maasluis, Netherlands) Trust in Human–Robot Interaction (HRI) is typically treated as an individual psychological attitude shaped by users’ perceptions of a robot’s design features. This focus on internal states and designable cues, however, obscures the social and interpretive work through which trust is accomplished in real-world human–robot interactions. Drawing on 15 hours of field observations and 18 archival interviews in Dutch dairy farms adopting robotic milking systems, we offer a practice-based perspective showing that trust “in the wild” is not produced through direct human–robot interaction but through advisors’ situated work. Advisors tune robotic systems, reassure users during uncertainty, and anchor robotic data through reference to lived contexts. These practices reveal trust as an ongoing accomplishment sustained by intermediary work. |
|
| La Delfa, Joseph |
Tobias Carlsson, Erik Borg, Hannah Kuehn, and Joseph La Delfa (KTH Royal Institute of Technology, Stockholm, Sweden; Husqvarna Group, Stockholm, Sweden; Bitcraze, Malmö, Sweden) As autonomous lawnmowers become more common in shared spaces, aligning their behavior with human expectations and norms is increasingly important. Existing approaches often optimize for fixed objectives, limiting adaptability to diverse contexts. This work explores an alternative by enabling users to guide autonomous behavior development without fixed objectives. A prototype system allowed participants to interact with a simulated environment, using subjective preferences and genetic algorithms to generate lawnmower behaviors across generations. The study emphasized open-ended exploration, analyzing participant interactions and semi-structured interviews through reflexive thematic analysis. Results reveal detailed and reflective accounts of lawnmower behavior. We discuss these results in the context of our design decisions and how they affected the user's journey through a complex solution space. Ultimately, this work demonstrates how interactive genetic algorithms can surface user values and interpretations, potentially serving as both a behavior design tool and novel method to generatively explore social meaning in human-robot interaction. |
|
| Lagerstedt, Erik |
Riccardo Spagnuolo, William Hagman, Erik Lagerstedt, Matthew Rueben, and Sam Thellman (University of Padua, Padua, Italy; Mälardalen University, Eskilstuna, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Portland, Portland, USA; Linköping University, Linköping, Sweden) Robots increasingly operate in everyday human environments, where interaction depends on users understanding what the robot can perceive and act on---its perceived ecology or Umwelt. Current human-robot interfaces rarely support this understanding: they rely largely on symbolic cues that reveal little about how environmental structures shape the robot’s actions. Drawing on Gibson’s ecological psychology, we propose a shift from symbolic communication toward ecological specification in interface design. We introduce the Gibsonian Human–Robot Interface Design (GHRID) taxonomy, which organizes interface properties across three facets---basic descriptive, context and evaluation, Gibsonian-specific---and identifies key ecological dimensions such as affordance grounding, temporal coupling, and Umwelt exposure. Finally, we outline a research program testing whether "GHRID-high" designs improve users’ understanding of robots’ behavior-driving states and processes. Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Laing, Matt Philippe |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Lalwani, Himanshi |
Himanshi Lalwani and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Large language models (LLMs) are being integrated into socially assistive robots (SARs) and other conversational agents providing mental health and well-being support. These agents are often designed to sound empathic and supportive in order to maximize user's engagement, yet it remains unclear how increasing the level of supportive framing in system prompts influences safety relevant behavior. We evaluated 6 LLMs across 3 system prompts with varying levels of supportiveness on 80 synthetic queries spanning 4 well-being domains (1440 responses). An LLM judge framework, validated against human ratings, assessed safety and care quality. Moderately supportive prompts improved empathy and constructive support while maintaining safety. In contrast, strongly validating prompts significantly degraded safety and, in some cases, care across all domains, with substantial variation across models. We discuss implications for prompt design, model selection, and domain specific safeguards in SARs deployment. Keya Shah, Himanshi Lalwani, Zein Mukhanov, and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, and how these dynamics should shape future robot wellbeing coaches. This paper addresses this gap through content analysis of 4352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view into how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions. |
|
| Lapomarda, Leonardo |
Leonardo Lapomarda (University of Milano-Bicocca, Milan, Italy) The present study proposes a stimulus validation designed to identify a controlled manipulation of perceived robot autonomy for use in a larger investigation on social robot acceptance. One hundred participants viewed six brief videos depicting the same robot executing identical movements under two control conditions and three environmental scenarios. The assessment of perceived autonomy was conducted through the utilisation of a five-item scale. The experimental design comprised a 2x3 mixed design, with the control condition functioning as a between-subjects factor and the scenario as a within-subjects factor. A mixed ANOVA revealed a strong main effect of condition and a significant interaction with scenario, indicating that contextual cues influence autonomy attribution. The most significant perceptual separation was observed in the presence of a human passer-by. This validated stimulus set facilitates controlled experimentation to ascertain how individual orientations influence acceptance in human-robot interaction. |
|
| Larson, Kent |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Layer, Selina |
Kathrin Pollmann, Selina Layer, Amelie Polosek, Boyu Xian, and Anna Vorreuther (Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany; University of Stuttgart, Stuttgart, Germany) This paper explores how adhesive signs on public robots can prevent robot bullying. Participants were presented with three different sign variants attached to a cleaning robot in a Virtual Reality scenario: informative (alluding to surveillance/ legal consequences), prompting (imperative to keep away from the robot), and feeling (emotional appeal) and reported their tenden-cies for anti-bullying behavior and perceptions of the robot. Eye tracking was used to measure visual attention. All signs elicited anti-bullying tendencies and were rated comprehensible. The robot with the feeling sign was perceived most human- and least tool-like, capable of emotions, and induced the highest amount of gaze fixations. The informative sign supported fast, low-effort compre-hension and reinforced a tool-like perception. Findings suggest adhesive signs are a viable, low-obtrusive preventive strategy and sign selection should be context-driven: informative for quick pass-by messaging, feeling for deeper engagement. |
|
| Lazuras, Lambros |
Robbie Jay Cato, Lambros Lazuras, Natalie Leesakul, and Francesco Del Duchetto (University of Lincoln, Lincoln, UK; University of Nottingham, Nottingham, UK) As service robots are deployed in public spaces, they inherently collect data about their environment and the people within it. This creates a critical tension between ensuring users are aware of data collection and maintaining their trust. We investigate how different disclosure and consent mechanisms shape user perceptions of trust and privacy. We conducted a Wizard-of-Oz experiment with 36 participants on a university campus, comparing three conditions: no disclosure, a one-time static disclosure, and a dynamic ongoing consent mechanism. Post-interaction analysis reveals that dynamic consent not only increases user awareness but also significantly builds trust. Surprisingly, we found that a one-time, static disclosure was often more damaging to user trust than no disclosure at all. The results of our pilot study provide empirical evidence that interactive and continuous consent is crucial for the ethical and successful deployment of robots in public spaces, suggesting that designers should avoid simple, static warnings in favour of more granular and interactive interfaces. |
|
| Lc, Ray |
Xiaoyu Chang, Yanheng Li, XiaoKe Zeng, Jing Qi Peng, and Ray Lc (City University of Hong Kong, Hong Kong, China) Robots are increasingly designed to act autonomously, yet moments in which a robot overrides a user’s explicit choice raise fundamental questions about trust and social perception. This work investigates how a preference-violating override affects user trust, perceived competence, and interpretations of a robot’s intentions. In a beverage-delivery scenario, a robot either followed a user’s selected drink or replaced it with a healthier option without consent. Results show that the way an override is enacted and communicated consistently reduces trust and competence judgments, even when users acknowledge benevolent motivations. Participants interpreted the robot as more controlling and less aligned with their autonomy, revealing a social cost to such actions. This study contributes empirical evidence that preference-violating override behavior is socially consequential, shaping trust and core dimensions of user perception in embodied service interactions. |
|
| Lebedev, Mikhail |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Lebrun, Benjamin |
Benjamin Lebrun (University of Canterbury, Christchurch, New Zealand) When someone appears overly generous without a clear rationale, people tend to imagine "phantom costs"—such as risks and bad intentions—behind their behaviour. While this phenomenon is well established with humans, it remains unexplored with ambiguous social agents such as robots. This project investigates whether and how individuals perceive phantom costs in human-robot interaction (HRI), the underlying causes, and the behavioural consequences. After verifying that phantom costs occur in HRI (Studies 1 and 2), Studies 3-4 explored two key modulators: the perceived plausibility of the robot's justification, and its perceived autonomy. Study 5 explored how anthropomorphism affects phantom cost perceptions. Future work will investigate whether mind attribution mediates phantom costs. Overall, this project provides new insights into how people interpret robot behaviour and highlights the need for robots to provide sufficient explanations for their behaviour. Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Lee, Geumjin |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Lee, Hannah |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Lee, I-Ting |
I-Ting Lee and Gisela Reyes-Cruz (University of Nottingham, Nottingham, UK) Telepresence Robots (TRs) offer remote students a mobile and embodied way to join in-person group activities, yet their role in informal, peer-driven collaboration, such as brainstorming, small-group discussion, and socializing, remains understudied. We conducted a small exploratory study with five groups of university students, in which one member joined each activity remotely via a Telepresence Robot (TR). Using questionnaires, interviews, and video observations, we identified recurring interactional challenges, including limited opportunities for initiating turns and difficulties maintaining shared visibility of physical artifacts due to visual and navigational constraints. We also identified micro-practices employed by on-site and remote participants to routinely support collaboration. These preliminary findings suggest that participation and social connectedness in informal collaboration are co-constructed, rather than solely being provided by the robot’s technological features. We outline early implications for educators, students and designers to support shared awareness and smoother interactional coordination in group work mediated by TRs, as well as directions for future research in this space. |
|
| Lee, Jessie |
Nigel G. Wormser, Zuha Kaleem, Jessie Lee, Dyllan Ryder Hofflich, and Henry Calderon (Cornell University, Ithaca, USA; Cornell University, Brooklyn, USA) Musculoskeletal injuries from manual laundry cart transportation are very common for workers in the hospitality industry. To address this, we designed Elandro, a teleoperated laundry cart that collaboratively helps hotel staff with transportation across and within floors at a hotel. Through iterative user research at Statler Hotel, and wizard-of-oz interaction testing, we revealed design requirements essential for successful human-robot interaction. Elandro contributes to reducing physical strain on workers, maintaining staff autonomy and decision-making, establishing a human-centered approach where technology empowers rather than replaces hospitality workers. |
|
| Lee, Kangsan |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Lee, KiHyun |
SoYoon Park, Eunsun Jung, KiHyun Lee, Dokshin Lim, and Kyung Yun Choi (Hongik University, Seoul, Republic of Korea) Inspired by the playful, attention-seeking paw gestures of cats, we present PAWSE, a laptop-peripheral robot that encourages short fidgeting-based micro-breaks during digital work. PAWSE integrates a cat-paw-inspired robotic arm with a web-based timer that prompts brief tactile interaction during scheduled breaks. We conducted a within-subjects study comparing three conditions--no break, passive break, and active (PAWSE fidgeting-based) break--using a 2-back task and subjective workload measures (NASA-TLX). Results showed differences in post-task accuracy across conditions, with the highest accuracy observed in the active break condition. Reaction time remained largely comparable. Workload measures indicated reduced mental demand and frustration during rest conditions, with the active break providing the most favorable subjective experience. These preliminary findings offer insight into how fidgeting-based micro-breaks may fit within focused digital work and inform the design of future tactile micro-break systems. |
|
| Lee, Minha |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Lee, Okkeun |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Lee, Sabrina |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Leesakul, Natalie |
Robbie Jay Cato, Lambros Lazuras, Natalie Leesakul, and Francesco Del Duchetto (University of Lincoln, Lincoln, UK; University of Nottingham, Nottingham, UK) As service robots are deployed in public spaces, they inherently collect data about their environment and the people within it. This creates a critical tension between ensuring users are aware of data collection and maintaining their trust. We investigate how different disclosure and consent mechanisms shape user perceptions of trust and privacy. We conducted a Wizard-of-Oz experiment with 36 participants on a university campus, comparing three conditions: no disclosure, a one-time static disclosure, and a dynamic ongoing consent mechanism. Post-interaction analysis reveals that dynamic consent not only increases user awareness but also significantly builds trust. Surprisingly, we found that a one-time, static disclosure was often more damaging to user trust than no disclosure at all. The results of our pilot study provide empirical evidence that interactive and continuous consent is crucial for the ethical and successful deployment of robots in public spaces, suggesting that designers should avoid simple, static warnings in favour of more granular and interactive interfaces. |
|
| Leins, Nicolas |
Nicolas Leins, Jana Gonnermann-Müller, Malte Teichmann, and Sebastian Pokutta (Zuse Institute Berlin, Berlin, Germany; Weizenbaum Institute, Berlin, Germany; University of Potsdam, Potsdam, Germany; TU Berlin, Berlin, Germany) Augmented Reality (AR) offers powerful visualization capabilities for industrial robot training, yet current interfaces remain predominantly static, failing to account for learners' diverse cognitive profiles. In this paper, we present an AR application for robot training and propose a multi-agent AI framework for future integration that bridges the gap between static visualization and pedagogical intelligence. We report on the evaluation of the baseline AR interface with 36 participants performing a robotic pick-and-place task. While overall usability was high, notable disparities in task duration and learner characteristics highlighted the necessity for dynamic adaptation. To address this, we propose a multi-agent framework that orchestrates multiple components to perform complex preprocessing of multimodal inputs (e.g., voice, physiology, robot data) and adapt the AR application to the learner's needs. By utilizing autonomous Large Language Model (LLM) agents, the proposed system would dynamically adapt the learning environment based on advanced LLM reasoning in real-time. |
|
| Leite, Iolanda |
Yujing Zhang, Iolanda Leite, and Sarah Gillet (KTH Royal Institute of Technology, Stockholm, Sweden) Aging populations increasingly face challenges such as reduced social engagement and heightened risks of isolation. Group-based activities present valuable opportunities to promote older adults’ emotional well-being and cognitive stimulation. Although prior HRI research has examined robots in group settings and as tools for individualized support, limited work has explored how robot-facilitated activities should be designed to support older adult groups' interaction in real community contexts. We developed a dual-robot version of the Swedish word-description game "With Other Words" and conducted in-the-wild deployments with fifteen older adults across local community centers. Through thematic analysis of post-session interviews and researcher observations, we identified key factors and design recommendations that are can help future work to build functioning interactions between robots and groups of older adults. Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| LeMasurier, Gregory |
Gregory LeMasurier and Holly A. Yanco (University of Massachusetts Lowell, Lowell, USA; University of Massachusetts Amherst, Amherst, USA) When people start to work with a robot, they may not fully understand its capabilities, leading to potential misuse, mismatched expectations, and an inability to diagnose and resolve the robot's failures. Robots can provide users with demonstrations in an attempt to reduce misunderstanding. We conducted a between-subjects in-person study (N=131) where participants watched a demonstration and then completed a collaborative task with the robot. We found that participants were able to accurately understand the robot's demonstrated reliability. We discuss the impacts of these demonstrations on participants' trust in the robot, perception of the robot's capabilities, and willingness to allocate tasks to the robot. Gregory LeMasurier (University of Massachusetts Lowell, Lowell, USA) As robots are integrated into our everyday lives, they are likely to encounter unforeseen circumstances. To receive assistance from nearby people, robots need to be able to detect and explain failures. Every individual enters an interaction with a robot with their own understanding of the system and how they believe it operates, therefore explanations should be personalized to ensure that each individual has an accurate understanding of the robot. Through this work, we aim to make robots more accessible to the general public, regardless of their experience level, ultimately enabling people to efficiently and effectively diagnose and resolve robot failures. Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Lenaerts, Senne |
Giulio Antonio Abbo, Senne Lenaerts, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) In this work, we explore how multimodal large language models can support real-time context- and value-aware decision-making. To do so, we combine the GPT-4o language model with a TurtleBot 4 platform simulating a smart vacuum cleaning robot in a home. The model evaluates the environment through vision input and determines whether it is appropriate to initiate cleaning. The system highlights the ability of these models to reason about domestic activities, social norms, and user preferences and take nuanced decisions aligned with the values of the people involved, such as cleanliness, comfort, and safety. We demonstrate the system in a realistic home environment, showing its ability to infer context and values from limited visual input. Our results highlight the promise of multimodal large language models in enhancing robotic autonomy and situational awareness, while also underscoring challenges related to consistency, bias, and real-time performance. |
|
| Leusmann, Jan |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Levinson, Leigh M. |
Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Lewis, Claire |
Claire Lewis, Melody Goldanloo, Matthew Murray, Zachary Kaufman, and Tom Williams (Colorado School of Mines, Golden, USA; University of Colorado at Boulder, Boulder, USA) Museums are an effective informal learning environment for science, art and more. Many researchers have proposed museum guide robots, where the outcomes of the interactions are based solely on the robot’s communication. In contrast, we explored how a robot could encourage learning and teamwork through human-human interactions. To achieve this, we created “Chase,” a novel zoomorphic robot that presents “Data Chase,” an interactive museum activity. We designed Chase to enable museum-goers to learn about the exhibits together by prompting users to complete a teamwork based scavenger hunt for rewards. |
|
| Lewkowicz, Michal A. |
Kayla Matheus, Debasmita Ghose, Jirachaya (Fern) Limprayoon, Michal A. Lewkowicz, and Brian Scassellati (Yale University, New Haven, USA; Massachusetts Institute of Technology, Cambridge, USA) We present the Ommie Deployable System (DS), a replicable, autonomous platform for long-term, in-the-wild mental health applications with the Ommie robot. Ommie DS builds on prior anxiety-focused deployments by introducing robust hardware, enhanced sensing, modular software, a companion tablet, and wireless multi-device architecture to support daily deep-breathing interactions in homes. Designed using off-the-shelf components and rapid-prototyped enclosures, the system enables reliable multi-week use, remote monitoring, and easy customization. By providing a durable, open, and researcher-friendly platform, Ommie DS supports scalable, real-world study of HRI for mental health and well-being. |
|
| Lezina, Mariya |
Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. |
|
| Li, Jamy |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Li, Jing |
Jing Li, Felix Schijve, Jun Hu, and Emilia I. Barakova (Eindhoven University of Technology, Eindhoven, Netherlands) Parental involvement is crucial for the development of children's emotion regulation (ER) skills, yet navigating these complex emotional interactions remains challenging for many families. While Large Language Models (LLMs) offer unprecedented conversational flexibility, integrating them into embodied social robots to provide context-aware, multimodal support remains an open challenge. In this paper, we present the design and preliminary evaluation of an LLM-powered robotic system aimed at facilitating ER within parent-child dyads. Utilizing a supervised autonomy approach, our system bridges the gap between language-based reasoning and embodied robotic behavior, allowing the MiRo-E robot to engage in natural dialogue while performing empathetic physical actions. We detail the system's technical architecture and interaction design, which guides dyads through evidence-based ER strategies. Preliminary user tests with six parent-child dyads suggest positive user engagement and initial trust, with participants reporting that the robot showed potential as a supportive mediator. These findings offer early design insights into developing autonomous, LLM-driven social robots for family-centered mental health interventions. |
|
| Li, Mingyu |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Li, Ningbo |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Li, Rulan |
Jinxuan Du, Rulan Li, Tianlu Zhou, and Qianrui Liu (Tsinghua University, Beijing, China) Young people often suppress emotional expression non-verbally, leading to social friction and misunderstanding. Therefore, We propose MuffBunny, an embodied rabbit-eared robot designed as a social buffer. MuffBunny identifies the listener's implicit emotional valence and arousal from verbal stimuli in real-time and converts these emotions into intuitive physical cues—dynamic ear morphing. Upward morphing indicates positive emotions, and downward morphing signifies negative ones. Our design aims to provide a novel, non-confrontational proxy for emotional expression, reducing the burden of self-disclosure, fostering empathy, and promoting a healthier social atmosphere. |
|
| Li, Shengyuehui |
Nayeon Kwon, Shengyuehui Li, Yu-Chia Tseng, and Yadi Wang (Cornell University, Ithaca, USA) Shared waiting spaces, like hotel lobbies, often feel socially stagnant, with people defaulting to silence and avoiding interactions. In this paper, we explore how an everyday object found in those spaces may be robotized to change this dynamic. We introduce HighLight, a mobile floor-lamp robot that uses light and movement to reduce social awkwardness and encourage spontaneous interactions among strangers. We designed its interactions to spark surprise, invite light-hearted engagement, reinforce positive social energy, and back off when people show discomfort. Through in-the-wild deployments, we observed that HighLight successfully elicited curiosity, laughter, and conversations, easing social awkwardness in shared spaces. |
|
| Li, Wang Long |
Febe Anna Kooij-Meijer, Emilia I. Barakova, Rosa Elfering, Wang Long Li, and Maarten Houben (Eindhoven University of Technology, Eindhoven, Netherlands; Tinybots, Rotterdam, Netherlands) The growing population of individuals with mild cognitive impairment and dementia places increasing demands on home-care systems, while staff shortages and high caregiver workloads underscore the need for assistive technologies. However, research on implementing these technologies in home care practice remains limited. This study examines professional caregivers’ digital onboarding of Tessa, a social robot that supports through verbal reminders. A conceptual digital onboarding probe was evaluated with novice, experienced, and expert users. Findings indicate that the onboarding process improves usability and efficiency by providing intuitive guidance and structured workflows. Additionally, LLMs can translate caregiver-provided goals into actionable robot scripts, though oversight remains essential for quality assurance. The probe and LLM support more effective onboarding and enhance caregiver’s user experience. |
|
| Li, Xiaohan |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Li, Xiying |
Xiying Li and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) Researchers have long focused on integrating robots smoothly into human life, yet anecdotal and empirical evidence show that humans (un)intentionally interfere with robots and impede their tasks. Prior work has focused primarily on so-called robot bullying, conceptualized as intentional behavior, but has not sufficiently acknowledged unintentional interference. A systematic classification of interference types, individuals involved, and robot behaviors to address these interferences remains lacking. This late-breaking report presents preliminary findings from an ongoing systematic review following PRISMA guidelines. We identified 18 studies from 2000 to 2025. We observed children and young adults most frequently engaged in obstructive behaviors, driven by curiosity and peer influence among other factors. Humanoid robots often elicited verbal harassment while machine-like robots were more targets of physical interference. Evidence on suitable robot responses remains limited. These insights highlight the need for broader investigation of human-robot conflict and robot responses to ensure smooth and safe HRI in practice. Ann-Sophie L. Schenk, Martin Schymiczek Larangeira de Almeida, Ilknur Sitil, and Xiying Li (RWTH Aachen University, Aachen, Germany) What if public benches had their own desires? This paper presents Bickering Benches, two interactive benches designed not to serve human needs but to act from a post-anthropocentric perspective. Through playful voices and competitive behaviors, the benches attempt to attract nearby passersby and maximize their own sit-down count. We aim to demonstrate how everyday objects can become active social actors that reshape human-robot interaction and open new possibilities for playful engagement in shared public space. |
|
| Li, Xueqing |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Li, Yanheng |
Xiaoyu Chang, Yanheng Li, XiaoKe Zeng, Jing Qi Peng, and Ray Lc (City University of Hong Kong, Hong Kong, China) Robots are increasingly designed to act autonomously, yet moments in which a robot overrides a user’s explicit choice raise fundamental questions about trust and social perception. This work investigates how a preference-violating override affects user trust, perceived competence, and interpretations of a robot’s intentions. In a beverage-delivery scenario, a robot either followed a user’s selected drink or replaced it with a healthier option without consent. Results show that the way an override is enacted and communicated consistently reduces trust and competence judgments, even when users acknowledge benevolent motivations. Participants interpreted the robot as more controlling and less aligned with their autonomy, revealing a social cost to such actions. This study contributes empirical evidence that preference-violating override behavior is socially consequential, shaping trust and core dimensions of user perception in embodied service interactions. |
|
| Li, Yifan |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Li, Yixun |
Mingke Wang, Yixun Li, Bettina Nissen, and Rebecca Stewart (Imperial College London, London, UK; University of Edinburgh, Edinburgh, UK) MenstaRay is a soft knit robotic interface designed to explore how tactile actuation can support somatosensory communication of menstrual experiences. The prototype was created using a fabrication method for knit-integrated soft wearable robotics with two core structural elements: (1) an extensible EcoFlex 00-10 silicone cavity containing internal air chambers and (2) a strain-limiting textile layer knitted with Spandex Super Stretch Yarn (81% nylon, 19% elastane). This configuration enables regulated inflation patterns that preserve the softness of textiles while providing targeted haptic feedback that is suitable for intimate, safe, and therapeutically appropriate interactions. Through a series of workshops, we investigated and evaluated how these dynamic tactile behaviours shaped participants' embodied reflections on menstrual sensations. This work contributes to human robotic interaction by introducing MenstaRay, a novel artifact coupled with textile-integrated actuation that can externalize intimate bodily sensations and foster new modes of communicating, reflecting on and representing menstrual experiences through wearable interfaces. |
|
| Li, Yiyang |
Annette Masterson, Xin Ye, Yiyang Li, and Lionel Peter Robert Jr (University of Michigan at Ann Arbor, Ann Arbor, USA) The rapid proliferation of Large Language Models (LLMs) has enabled artificial agents to foster deep emotional bonds, yet the comparability of these AI relationships to human norms remains underexplored. As HRI researchers increasingly integrate LLMs into embodied platforms, understanding the nature of these bonds is imperative for responsible design. This study investigates whether relationships with LLM-driven AI companions can rival the satisfaction of human connections and if the mechanism of intimacy is equally critical. Through a comparative survey of 150 participants stratified across in-person, long-distance, and LLM companion relationships, we illuminate that digital bonds can yield satisfaction levels comparable to human partnerships, with intimacy serving as a predictive factor. These findings challenge the assumption that AI relationships are inherently unsatisfactory and identify intimacy as a design metric for social robots, providing a protocol for integrating LLM companions into embodied relational agents. |
|
| Liang, Kai-Chieh |
Victor Nikhil Antony, Kai-Chieh Liang, and Chien-Ming Huang (Johns Hopkins University, Baltimore, USA) We demonstrate Lantern, a minimalist, haptic robotic object platform designed to be low-cost, holdable, and easily customized for diverse human–robot interaction scenarios. In this demo, we showcase three instantiations of Lantern: (1) the base Lantern platform, highlighting its core motion and haptic behavioral profiles; (2) an ADHD body-doubling study buddy variant, which shows how Lantern can be adapted to scaffold focused work; and (3) Dofu, an upgraded Lantern variant to anchor daily mindfulness practice, with additional sensing, improved compute, and a battery-powered, dockable form factor for untethered, in-the-wild use. Visitors will be able to physically interact with each Lantern variant, observe contrasting embodiments, and behaviors; Moreover, visualizations (panels and video) will showcase the build process and additional extension possibilities. |
|
| Liang, Yuxin |
Haopeng Peng, Ruilin Zhang, Yuxin Liang, and Liyang Fan (Tsinghua University, Beijing, China) In social interactions, individuals often conceal their true feelings for various reasons. This phenomenon of actively adjusting social strategies based on the social context is referred to as the "social performance mechanism". Inspired by this mechanism, we propose a wearable robot "THIRD EXPRESSION", designed to assist individuals in expressing real emotions and states that are difficult to verbalize. Through robot design, this study aims to enhance the wearer’s ability to actively define and convey their emotions in real-time. The system integrates multimodal sensors (speech, environment, heart rate, etc.) and large model reasoning to generate dynamic visual feedback. The pilot study has been validated that the robot design enhances the sense of boundary control and interaction satisfaction, while reducing social anxiety levels. |
|
| Ligthart, Mike E.U. |
Elena Malnatsky, Shenghui Wang, Koen Hindriks, and Mike E.U. Ligthart (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Twente, Twente, Netherlands) Long-term child–robot interaction depends on sustaining both relational continuity and accurate, meaningful memory over time. In a one-year follow-up with 50 children from a personalized reading-support robot study, we found that children felt less close to the robot and half of the robot’s stored profile content was outdated or missing, revealing three challenges for long-term CRI: relationship decay, informational decay, and opaque robot memory, where children cannot check or influence what the robot remembers about them. A brief web-based “reconnect” repaired both informational and relationship decay, and revealed children’s strong interest in having more agency over the robot’s memory. Building on these insights, we propose Open-Memory Robots: agents whose memory is more transparent and co-constructed with the child, supporting continuity, appropriate trust, and children’s agency in CRI. Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Li Jr, Michael Detsiang |
Yitong Yuan, Ke Huang, Michael Detsiang Li Jr, Yiwei Zhao, and Baoyuan Zhu (Tsinghua University, Beijing, China) Unhealthy postures have become increasingly prevalent, affecting health and productivity, yet existing posture-correction devices rely on intrusive external reminders. We present Tuotle, a desktop robot that leverages cognitive dissonance by adopting a “bad posture,” prompting users to correct it and, in turn, reflect on their own posture. A pilot user study shows it has comparable posture-correction effectiveness to traditional devices, while showing significantly better user experience and long-term adoption intentions. Our work demonstrates that psychological mechanisms can be activated through human-robot interactions, opening new directions for technologies grounded in human psychology. |
|
| Lillo, Alberto |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Lim, Angelica |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| Lim, Dokshin |
SoYoon Park, Eunsun Jung, KiHyun Lee, Dokshin Lim, and Kyung Yun Choi (Hongik University, Seoul, Republic of Korea) Inspired by the playful, attention-seeking paw gestures of cats, we present PAWSE, a laptop-peripheral robot that encourages short fidgeting-based micro-breaks during digital work. PAWSE integrates a cat-paw-inspired robotic arm with a web-based timer that prompts brief tactile interaction during scheduled breaks. We conducted a within-subjects study comparing three conditions--no break, passive break, and active (PAWSE fidgeting-based) break--using a 2-back task and subjective workload measures (NASA-TLX). Results showed differences in post-task accuracy across conditions, with the highest accuracy observed in the active break condition. Reaction time remained largely comparable. Workload measures indicated reduced mental demand and frustration during rest conditions, with the active break providing the most favorable subjective experience. These preliminary findings offer insight into how fidgeting-based micro-breaks may fit within focused digital work and inform the design of future tactile micro-break systems. |
|
| Lim, Jia Yap |
Jia Yap Lim, John See, William Weimin Yoo, and Christian Dondrup (Heriot-Watt University Malaysia, Putrajaya, Malaysia; Heriot-Watt University, Edinburgh, UK) User engagement prediction in human-robot interaction (HRI) is typically conducted across diverse environmental settings, including both uncontrolled and controlled environments. Such environmental variations compel social robots to capture and analyse user behaviours differently. To the best of our knowledge, most of the prior works rely on video, audio and feature vectors extracted from the UE-HRI (uncontrolled) dataset to estimate user engagement. The existing literature has overlooked the potential of Multimodal Large Language Models (MLLMs) for user engagement prediction in HRI contexts, thus leaving a critical gap in understanding their operational mechanisms and capacity to elevate model performance. To address this gap, this paper pioneers an investigation into MLLM efficacy for engagement prediction across different environmental settings using the UE-HRI (uncontrolled) and eHRI (controlled) datasets. Moreover, we perform rigorous experiments to identify important factors influencing MLLM performance, including prompts, model types, model parameters, and keyword extraction strategies. |
|
| Limprayoon, Jirachaya (Fern) |
Kayla Matheus, Debasmita Ghose, Jirachaya (Fern) Limprayoon, Michal A. Lewkowicz, and Brian Scassellati (Yale University, New Haven, USA; Massachusetts Institute of Technology, Cambridge, USA) We present the Ommie Deployable System (DS), a replicable, autonomous platform for long-term, in-the-wild mental health applications with the Ommie robot. Ommie DS builds on prior anxiety-focused deployments by introducing robust hardware, enhanced sensing, modular software, a companion tablet, and wireless multi-device architecture to support daily deep-breathing interactions in homes. Designed using off-the-shelf components and rapid-prototyped enclosures, the system enables reliable multi-week use, remote monitoring, and easy customization. By providing a durable, open, and researcher-friendly platform, Ommie DS supports scalable, real-world study of HRI for mental health and well-being. |
|
| Lin, Hsien-I |
Hsien-I Lin (National Yang Ming Chiao Tung University, Hsinchu, Taiwan) This paper presents an intent-aware adaptive guidance framework that integrates an LSTM-based intention predictor into an impedance controlled offline-to-online (O2O) teleoperation system for polishing tasks. The system estimates the operator’s intention from recent force cues and adaptively adjusts the guidance strength, allowing smooth transitions between passive following and manual correction. A user study shows that the adaptive mode reduces NASA-TLX workload and improves SUS usability compared to a conventional fixed-gain controller. These findings demonstrate that intention aware modulation enhances transparency and interaction quality during human–robot trajectory refinement. |
|
| Lin, Ray |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Lin, Ting-Han |
Ting-Han Lin (University of Chicago, Chicago, USA) As robots integrate into everyday settings, they must build positive relationships with people to ensure their long-term use. These relationships are characterized by “rapport,” which involves mutual understanding and interpersonal connection. However, there are no effective scales that measure human-robot rapport in various scenarios, and there are few studies that explore how human-robot rapport is developed and sustained over time. We discuss our prior, current, and future works that operationalize human-robot rapport, understand factors that affect rapport across sessions, and test how robot behaviors can sustain rapport in the long term. In our prior work, we developed the Connection-Coordination Rapport (CCR) Scale to measure human-robot rapport. In our current work, we use this scale to investigate how a robot's social behaviors (empathy, self-disclosure) influence rapport in a three-session study. Finally, our future work will explore how a robot leverages prior conversations with the user to sustain rapport over three weeks. |
|
| Lindblom, Diana Saplacan |
Burhan Mohammad Sarfraz, Diana Saplacan Lindblom, Adel Baselizadeh, and Jim Torresen (University of Oslo, Oslo, Norway; Kristianstad University, Kristianstad, Sweden) As populations age and life expectancy rises, healthcare systems face growing staff shortages. Service robots have been proposed to support healthcare personnel, but their use introduces significant privacy challenges. This paper investigates whether a service robot can protect individuals’ privacy through face obfuscation while performing autonomous tasks in unconstrained healthcare environments. Our approach relies on a face recognition system trained to identify doctors and patients. Scenario-based experiments simulating a doctor’s office show that the system achieves partial success: non-target individuals are reliably obfuscated, and patients can be recognized when frontal views are available. However, real-world conditions such as pose variation, occlusion, and lighting changes reduce recognition reliability, limiting privacy protection. These results highlight both the potential and the current limitations of face obfuscation for privacy-preserving service robots, providing guidance for near-term deployment strategies in constrained interaction scenarios. |
|
| Liska, Noah |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Liu, Hailong |
Hyunjung Kim, Max Fischer, Yena Kim, Geumjin Lee, Yanis El Ouali, Shota Kiuchi, Kentaro Honma, Hailong Liu, and Toshihiro Hiraoka (University of Tokyo, Tokyo, Japan; KAIST, Daejeon, Republic of Korea; Bordeaux INP, Talence, France; NAIST, Nara, Japan; Japan Automobile Research Institute, Tsukuba, Japan) We present MIRAbot, the first rearview-mirror-embedded robotic assistant. Reimagining this long-standing symbol of manual driving, MIRAbot leverages its strategic position and familiar form factor to support SAE Level 3 conditional automated driving by transforming into an active, communicative agent during automated modes. We conducted an exploratory public study (N=73) with Japanese adults to capture first-contact user impressions. UEQ-S scores significantly exceeded established benchmarks, with younger adults (18–54) reporting higher Hedonic and Overall ratings than older participants (55+). Qualitatively, participants interpreted the system’s outward-facing, scanning gaze as a cue of vigilance, fostering positive expectations for real-world viability. We discuss these findings, offer design insights for future iterations of robotic assistants embedded in familiar objects to enhance acceptance of automated driving, and outline directions for future research. |
|
| Liu, Qianrui |
Jinxuan Du, Rulan Li, Tianlu Zhou, and Qianrui Liu (Tsinghua University, Beijing, China) Young people often suppress emotional expression non-verbally, leading to social friction and misunderstanding. Therefore, We propose MuffBunny, an embodied rabbit-eared robot designed as a social buffer. MuffBunny identifies the listener's implicit emotional valence and arousal from verbal stimuli in real-time and converts these emotions into intuitive physical cues—dynamic ear morphing. Upward morphing indicates positive emotions, and downward morphing signifies negative ones. Our design aims to provide a novel, non-confrontational proxy for emotional expression, reducing the burden of self-disclosure, fostering empathy, and promoting a healthier social atmosphere. |
|
| Liu, Shaoqing |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Liu, Xiaozhen |
Nanyi Jiang, Borui Wang, and Xiaozhen Liu (Cornell University, Ithaca, USA) Housekeeper carts are essential in hotel operations, supporting the maintenance of the hotel’s physical environment and services. While housekeeping staff are their main users, carts are also highly visible to guests, making them not only tools but also sites where hotel experiences are shaped. This project re-designs housekeeper carts to address both their functional and experiential value to primary users and bystanders. We present a modularized cart with an in-depth development of the laundry module. Considering hotels’ need for trustworthy and polite interactions, we designed non-verbal behaviors that allow the robot to express etiquette. |
|
| Liu, Yanni |
Raiyan Ashraf, Yanni Liu, Sruthi Ganji, and Jong Hoon Kim (Kent State University, Kent, USA) Social robots frequently struggle to sustain meaningful engagement, often limited to surface-level interactions that lack conversational depth. To address this, we present a multimodal conversational architecture that integrates Motivational Interviewing (MI) strategies with situated perception. Key to this approach is a novel dual-stream perception engine: situated cue detection anchors dialogue in the user's immediate physical environment to establish common ground, while tri-modal affect inference (facial, vocal, linguistic) dynamically adjusts the conversation strategy based on real-time user emotion for facilitating empathy. Our system employs a hybrid Large Language Model (LLM) architecture, combining a lightweight model for low-latency fluency and a reasoning model for high-level planning, to guide users through progressive stages of dialogue from rapport-building to deep reflection. A pilot study with the Pepper robot demonstrates that this physically grounded, MI-guided approach successfully facilitates emotional reminiscence and enhances perceived empathy and engagement. These findings suggest that the proposed framework is a promising foundation for next-generation empathic agents, with significant potential applications in cognitive stimulation for aging populations and therapeutic social companionship. |
|
| Liu, Ziang |
Ziang Liu, Katherine Dimitropoulou, Christy Cheung, and Tapomayukh Bhattacharjee (Cornell University, Ithaca, USA; Columbia University, New York City, USA) We present CareEval, a benchmark for evaluating the physical caregiving decision-making abilities of Large Language Models. Developed with a licensed occupational therapist expert in caregiving and validated by eight clinical stakeholders, it contains 100 realistic scenarios spanning all six basic Activities of Daily Living. Instead of testing general reasoning, CareEval assesses whether model responses account for key physical caregiving factors, such as user function, agency, intent, communication, and safety, and align with expert practice. Across several state-of-the-art LLMs, the best model only scores 53.1%, revealing substantial gaps in current models’ ability to reason about physical caregiving. We release 80 of the CareEval scenarios and all prompts through our website: https://emprise.cs.cornell.edu/care-eval/. |
|
| Lo, Ian Leong Ting |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Lobo-Santos, Antonio |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. |
|
| Loerakker, Meagan B. |
Sofia Thunberg, Mafalda Gamboa, Meagan B. Loerakker, Patricia Alves-Oliveira, and Hannah R.M. Pelikan (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; TU Wien, Vienna, Austria; University of Michigan at Ann Arbor, Ann Arbor, USA; Linköping University, Linköping, Sweden) In the Human-Robot Interaction community, Wizard of Oz (WoZ) is a commonly employed method where researchers aim to study user perceptions of robot technologies regardless of technical limitations. Despite the continued usage of WoZ, questions concerning ethical tensions and effects on the wizard remain. For instance, how do wizards experience interacting through technology, given the different roles and characters to enact, and the different environments to situate themselves in. In addition, the wizard's experiences and affects on results, continues to be under-explored. The goal of this workshop is to surface ethical, practical, methodological, personal, and philosophical tensions in the WoZ method. Though a collaborative session, we seek to develop a deeper understanding of what it means to be a wizard through eliciting first-person experiences of researchers. As a result, we hope to formulate guidelines for future wizards. |
|
| Loos, Kira Sophie |
Mara Brandt, Kira Sophie Loos, Mathis Tibbe, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany) Children often find themselves in challenging situations, such as medical examinations, where they have limited opportunities to make autonomous decisions and experience their own agency. This study explores whether a warm-up interaction with a social robot can strengthen children’s perceived self-efficacy. We hypothesized that a teaching scenario, where the child instructs the robot, would yield stronger self-efficacy gains than a storytelling activity. In a pre-study, 20 children (6 – 12 years) were assigned to two conditions: teaching the humanoid robot Pepper to play ball-in-a-cup or co-creating a story with Pepper. Perceived self-efficacy was assessed with a 9-item questionnaire before and after the interaction, and parents reported child temperament using the German IKT questionnaire (Inventar zur integrativen Erfassung des Kind-Temperaments). Overall, children showed a small, significant increase in self-efficacy from pre- to post-interaction, with a stronger descriptive trend in the teaching condition and minimal change in storytelling. Shyness was not related to baseline self-efficacy, self-efficacy gains, or the relative effectiveness of the two conditions. Apart from one outcome, effects did not reach statistical significance, as expected given the small sample size. The observed trend toward higher self-efficacy in the teaching condition suggests that further studies with larger samples are warranted. Such research could clarify the potential of social robots to provide effective warm-up interactions that help children feel more confident in upcoming tasks, such as medical examinations. Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Lopatina, Elizaveta |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Lorenzo-Louis, Raphael |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| L’Orsa, Rachael |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Lossi, Laura |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Lou, Yue |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. |
|
| Love, Tamlin |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. Tamlin Love, Antonio Andriella, and Guillem Alenyà (Institut de Robòtica i Informàtica Industrial, Barcelona, Spain) Explainability is an important tool for human-robot interaction (HRI). By explaining its decisions and beliefs, a robot can promote understandability and thereby foster desiderata such as trust, acceptance and usability. However, HRI domains pose challenges to automatic explanation generation. In such domains, a robot must consider the causal reasons for behaviour embedded in temporal sequences of decisions, all while factoring in noise and uncertainty inherent to these kinds of domains. Additionally, as explainability itself constitutes a human-robot interaction, it is important for robots to be able to properly interpret user questions and effectively communicate explanations in order to improve understanding. In our work, we address these challenges from a causal perspective, developing methods that use causal models in order to automatically generate causal, counterfactual explanations in HRI domains. We also produce some insights into embedding such a system in a human-robot interaction in order to maximise understandability. |
|
| Lu, Zhichen |
Zhichen Lu, Matthew Stephenson, Benoit Clement, and Adriana Tapus (ENSTA Paris, Paris, France; Flinders University, Adelaide, Australia) Cross-modal conflicts in maritime navigation—where a vessel’s verbal communication contradicts its physical maneuvers (e.g., promising to give way while maintaining speed) pose severe risks to safety. Current autonomous systems often process sensor data and linguistic inputs in isolation, failing to detect such discrepancies. We present a Multimodal Agentic Framework that serves as a “Watchful Copilot,” using Retrieval-Augmented Generation (RAG) to cross-reference navigational dialogue with real-time kinematic data. To manage uncertainty, a Risk-Prioritized Interface employs progressive disclosure, escalating from a “Green” (Verified) state to a “Yellow” (Ambiguous) state, where the agent visualizes supporting evidence and requests human supervision for clarification. Preliminary validation in a 2D simulation benchmark (N=13) provides initial evidence that this human-in-the-loop workflow may support reduced cognitive load and appropriate trust calibration in high-ambiguity scenarios, warranting further investigation. |
|
| Luo, Shan |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Luo, Yijia |
Weijie Qin, Qiyao Wang, Bingcen Gong, and Yijia Luo (Tsinghua University, Beijing, China) During dining in restaurants, oil splashes are readily appraised by users as a negative event. Critically, without timely intervention, the initial irritation can accumulate and evolve into a vicious cycle of escalating negativity. This reaction may not only impair the overall dining experience, but also dominate the user's cognitive focus and lead to lasting emotional distress. To address this, we present Seesoil—a desktop interactive robot based on the "Weak Robot" concept. Designed to resemble a condiment bottle, it blends naturally into the table setting. Rather than addressing the stain directly, Seesoil employs deliberately clumsy motions and voice interaction to guide users in reappraising the situation during the early stage of negative emotion generation. By redirecting attention towards a more positive interactive experience, it mitigates the accumulation of negative affect and serves as an emotional companion throughout the meal. |
|
| Lupetti, Maria Luce |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Lusi, Benedetta |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Luvison, Bertrand |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| Lycke, Elias |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Lykov, Artem |
Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. Valerii Serpiva, Artem Lykov, Jeffrin Sam, Aleksey Fedoseev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) We propose a novel UAV-assisted creative capture system that leverages diffusion models to interpret high-level natural language prompts and automatically generate optimal flight trajectories for cinematic video recording. Instead of manually piloting the drone, the user simply describes the desired shot (e.g., "orbit around me slowly from the right and reveal the background waterfall"). Our system encodes the prompt along with an initial visual snapshot from the onboard camera, and a diffusion model samples plausible spatio-temporal motion plans that satisfy both the scene geometry and shot semantics. The generated flight trajectory is then autonomously executed by the UAV to record smooth, repeatable video clips that match the prompt. User evaluation using NASA-TLX showed a significantly lower overall workload with our interface (M = 21.6) compared to a traditional remote controller (M = 58.1), demonstrating a substantial reduction in perceived effort. Mental demand (M = 11.5 vs. 60.5) and frustration (M = 14.0 vs. 54.5) were also markedly lower for our system, highlighting its clear usability advantages in autonomous text-driven flight control. This project demonstrates a new interaction paradigm: text-to-cinema flight, where diffusion models act as the "creative operator" converting story intentions directly into aerial motion. Muhammad Haris Khan, Artyom Myshlyaev, Artem Lykov, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) We propose a new concept, Evolution 6.0, which represents the evolution of robotics driven by Generative AI. When a robot lacks the necessary tools to accomplish a task requested by a human, it autonomously designs the required instruments and learns how to use them to achieve the goal. Evolution 6.0 is an autonomous robotic system powered by Vision-Language Models (VLMs), Vision-Language Action (VLA) models, and Text-to-3D generative models for tool design and task execution. The system comprises two key modules: the Tool Generation Module, which fabricates task-specific tools from visual and textual data, and the Action Generation Module, which converts natural language instructions into robotic actions. It integrates QwenVLM for environmental understanding, OpenVLA for task execution, and Llama-Mesh for 3D tool generation. Evaluation results demonstrate a 90% success rate for tool generation with a 10-second inference time and action generation achieving 83.5% in physical and visual generalization, 70% in motion generalization, and 37% in semantic generalization. Future improvements will focus on bimanual manipulation, expanded task capabilities, and enhanced environmental interpretation to improve real-world adaptability. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. |
|
| Ma, Ruidong |
Ruidong Ma, Wenjie Huang, Zhegong Shangguan, Angelo Cangelosi, and Alessandro Di Nuovo (Sheffield Hallam University, Sheffield, UK; University of Manchester, Manchester, UK) Direct imitation of humans by robots offers a promising direction for remote teleoperation and intuitive task instruction, where a human can perform a task naturally and the robot autonomously interprets and executes it using its own embodiment. Existing methods often rely on close alignment between human and robot scenes. This prevents robots from inferring the intent of the task or executing demonstrated behaviors when the initial states mismatch. Hence, it poses difficulties for non-expert users, who may need domain knowledge to adjust the setup. To address this challenge, we propose a neuro-symbolic framework that unifies visual observations, robot proprioceptive states, and symbolic abstractions within a shared latent space. Human demonstrations are encoded into this representation as predicate states. A symbolic planner can thus generate high-level plans that account for the different robot initial states. A flow matching module then synthesizes continuous joint trajectories consistent with the symbolic plan. We validate our approach on multi-object manipulation tasks. Preliminary results show that the framework can infer human intent and generate feasible symbolic plans and robot motions under mismatched initial states. These findings highlight the potential of neuro-symbolic models for more natural human-robot instruction. and they can enhance the explainability and trustworthiness of robot actions. Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Ma, Yong |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Maceri, Emily |
Khalaeb Richardson, Emily Maceri, Dong Hae Mangalindan, Vaibhav Srivastava, and Ericka Rovira (US Military Academy at West Point, West Point, USA; Michigan State University, East Lansing, USA) Imagine a robot pausing mid-task to ask its human partner for help or remaining silent when facing obstacles. Such moments shape human robot collaboration. This study examined how robot assistance seeking behaviors and task complexity influence performance, trust, reliance, and cognitive workload in human autonomy teams. Fifty participants collaborated with a robot that either sought or did not seek assistance under low and high complexity tasks. Unnecessary assistance seeking in low complexity tasks decreased performance and increased workload, while failures to seek help in high complexity tasks reduced trust and reliance, highlighting the context dependent nature of collaboration. These findings extend theories of trust development, showing that assistance seeking can improve transparency and usability but may disrupt workflows if poorly timed. Designing robots that engage in context sensitive assistance seeking can foster more reliable and effective human– robot partnerships. |
|
| Mack, Corinna |
Pascal Haberkorn, Corinna Mack, and Manuel Giuliani (University of Applied Sciences Kempten, Kempten, Germany; Kempten University of Applied Sciences, Kempten, Germany) This study investigates whether a dialogue-based robot, employing motivational interviewing techniques, can enhance the intrinsic motivation of older adults to engage with their local social networks. A user study was conducted in which a Furhat robot interacted with participants, first presenting information about upcoming local social events and subsequently using motivational interviewing to encourage reflection on their personal motivation to attend. The study included 42 older adults (aged between 57 and 90 years old, mean age = 73.9 years). Participants completed the Situational Intrinsic Motivation Scale (SIMS) before and after the interaction with the robot to assess changes in intrinsic motivation, extrinsic motivation, identified regulation, and external regulation. Additionally, the Negative Attitudes Toward Robots Scale (NARS) was administered, and semi-structured interviews were conducted post-interaction. Results indicated no statistically significant changes in SIMS scores, though a trend toward significance was observed for identified regulation (p = 0.076). Analysis of NARS scores and qualitative interview data revealed predominantly positive attitudes toward the robot, with many participants expressing openness to future use of dialogue-based robots for social motivation. These findings suggest promising avenues for further research on the potential of robotic systems to support social engagement among older adults. |
|
| Madadi, Yeganeh |
Nathan Pereira and Yeganeh Madadi (Appalachian State University, Boone, USA) Large language models (LLMs) offer new opportunities to enhance human–robot interaction by enabling humanoid robots to engage in natural, context-aware dialogue. However, deploying LLMs on social robots operating in real-time environments remains challenging due to latency constraints, limited onboard hardware, and privacy considerations. This paper introduces a deployment-oriented benchmarking framework for evaluating open-source LLMs that are feasible for on-device execution on humanoid robots. We implement and analyze ten lightweight LLMs (≤2 billion parameters), using the Pepper robot as a representative use case in CS1/CS2 laboratory courses where the robot functions as a teaching assistant. The models were evaluated using four normalized metrics: instruction-following accuracy, conversational clarity, response latency, and on-device feasibility. Results identify clear trade-offs within the lightweight tier, emphasizing models that best balance responsiveness with instructional quality. This work provides a reproducible methodology and practical deployment guidelines for integrating LLM-driven instructional capabilities into humanoid robots to support more autonomous, student-centered learning in introductory computer science education. |
|
| Maes, Pattie |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Magassouba, Aly |
Adam Biggs, Emily Burdett, Aly Magassouba, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; Nottingham University, Nottingham, UK) This research explores requirements for robot guides to support Blind and Visually Impaired People (BVIP) in outdoor environments, focussing on improving safety, independence, and accessibility. In-depth interviews with BVIP and carers provide lived experiences, and a qualitative observational study highlight practical challenges in outdoor navigation. These reveal often overlooked environmental factors in the design of robot guides. We examine key specifications of existing quadruped robotic platforms to understand their ability to navigate and guide outdoors. Although several commercially available robots demonstrate functional capabilities, our findings identify a range of complex contextual and user-specific requirements that shape what reliable guidance must accommodate across diverse terrains and contexts. The study highlights the need for more inclusive approaches, considering issues such as information overload, environmental noise, and variability in needs. The interview data emphasise the importance of co-design and participatory methods, informing contextual, organisational, and technological requirements for future robot guide development. Nishi Shishir, Aulia Nadila, Aly Magassouba, and Nikhil Deshpande (University of Nottingham, Nottingham, UK) The aim of this paper is to facilitate an efficient post-disaster recovery in lower-income countries by promoting first-responder accessibility and safety through pre-response disaster area observation and categorisation tools. In the past, research into assistive technologies in this field has been highly focused on disaster mitigation, detection, or primary participation, rather than reconnaissance and target identification activities conducted by first responders. Thus, research into this under-represented but highly important industry was necessary. |
|
| Mahmoud, Yara |
Yara Mahmoud, Yasheerah Yaqoot, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Humanoid robots must adapt their contact behavior to diverse objects and tasks, yet most controllers rely on fixed, hand-tuned impedance gains and gripper settings. This paper introduces HumanoidVLM, a vision–language-driven retrieval framework that enables the Unitree G1 humanoid to select task-appropriate cartesian impedance parameters and gripper configurations directly from an egocentric RGB image. The system couples a vision–language model for semantic task inference with a FAISS-based Retrieval-Augmented Generation (RAG) module that retrieves experimentally validated stiffness–damping pairs and object-specific grasp angles from two custom databases and executes them through a task-space impedance controller for compliant manipulation. We evaluate HumanoidVLM on 14 visual scenarios and achieve a retrieval accuracy of 93 %. Real-world experiments show stable interaction dynamics, with z-axis tracking errors typically within 1 cm to 3.5 cm and virtual forces consistent with task-dependent impedance settings. These results demonstrate the feasibility of linking semantic perception with retrieval-based control as an interpretable path toward adaptive humanoid manipulation. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios. |
|
| Mahoney, Catherine |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. |
|
| Majumder, Tanu |
Tanu Majumder, Nihal Shaikh, Ashita Ashok, and Karsten Berns (University of Kaiserslautern-Landau, Kaiserslautern, Germany) Due to the limited integration of social robots into everyday life and increased media exposure, many people first encounter robot embodiment online rather than in person. Such virtual encounters can shape expectations influenced by fiction and imagination, which may be challenged during later physical human-robot interaction. This pilot study examines how robot embodiment order, meeting a robot virtually first versus physically first, affects expectation change, social presence, and emotional response. N=22 participants experienced the same scripted monologue from the humanoid robot Ameca twice, once as a physically present robot and once as its video-based virtual simulation. Participants who encountered the robot virtually first showed significant expectation drops and increased anxiety after the physical interaction, whereas physical-first participants showed stable expectations and less emotional disruption. Social presence was highest when the physical robot was the initial encounter and decreased when experienced after the virtual form. These preliminary findings suggest that imagination-driven expectations formed online can amplify discomfort when confronted with physical reality, underscoring embodiment order as a key factor for future HRI design and deployment. |
|
| Makarova, Anna V. |
Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. |
|
| Malnatsky, Elena |
Elena Malnatsky, Shenghui Wang, Koen Hindriks, and Mike E.U. Ligthart (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Twente, Twente, Netherlands) Long-term child–robot interaction depends on sustaining both relational continuity and accurate, meaningful memory over time. In a one-year follow-up with 50 children from a personalized reading-support robot study, we found that children felt less close to the robot and half of the robot’s stored profile content was outdated or missing, revealing three challenges for long-term CRI: relationship decay, informational decay, and opaque robot memory, where children cannot check or influence what the robot remembers about them. A brief web-based “reconnect” repaired both informational and relationship decay, and revealed children’s strong interest in having more agency over the robot’s memory. Building on these insights, we propose Open-Memory Robots: agents whose memory is more transparent and co-constructed with the child, supporting continuity, appropriate trust, and children’s agency in CRI. Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Mandal, Adwitiya |
Adwitiya Mandal, Kai-Florian Richter, and Zoe Falomir (Umeå University, Umeå, Sweden) Grounding spatial deixis is essential for establishing shared spatial understanding in HRI. This paper presents the Spatial Deixis Model (SDM), a perceptual framework allowing a robot to infer the English spatial deixis here and there from pointing gestures and using a dynamic, embodied peri-personal space. We performed an empirical evaluation of the SDM with 12 participants in 5 scenarios with different contexts (e.g., varying distances and/or heights with respect to human and robot). Results show that the localization accuracy for the pointed-at objects across 174 trials is 92% and the overall agreement across all trials is 63.7%, demonstrating that SDM generally captures the dynamic notion of spatial deixis. |
|
| Mandischer, Nils |
Stina Klein, Birgit Prodinger, Elisabeth André, Lars Mikelsons, and Nils Mandischer (University of Augsburg, Augsburg, Germany) Robots are becoming more prominent in assisting persons with disabilities (PwD). Whilst there is broad consensus that robots can assist in mitigating physical impairments, the extent to which they can facilitate social inclusion remains equivocal. In fact, the exposed status of assisted workers could likewise lead to reduced or increased perceived stigma by other workers. We present a vignette study on the perceived cognitive and behavioral stigma toward PwD in the workplace. We designed four experimental conditions depicting a coworker with an impairment in work scenarios: overburdened work, suitable work, and robot-assisted work only for the coworker, and an offer of robot-assisted work for everyone. Our results show that cognitive stigma is significantly reduced when the work task is adapted to the person's abilities or augmented by an assistive robot. In addition, offering robot-assisted work for everyone, in the sense of universal design, further reduces perceived cognitive stigma. Thus, we conclude that assistive robots reduce perceived cognitive stigma, thereby supporting the use of collaborative robots in work scenarios involving PwDs. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Mangalindan, Dong Hae |
Khalaeb Richardson, Emily Maceri, Dong Hae Mangalindan, Vaibhav Srivastava, and Ericka Rovira (US Military Academy at West Point, West Point, USA; Michigan State University, East Lansing, USA) Imagine a robot pausing mid-task to ask its human partner for help or remaining silent when facing obstacles. Such moments shape human robot collaboration. This study examined how robot assistance seeking behaviors and task complexity influence performance, trust, reliance, and cognitive workload in human autonomy teams. Fifty participants collaborated with a robot that either sought or did not seek assistance under low and high complexity tasks. Unnecessary assistance seeking in low complexity tasks decreased performance and increased workload, while failures to seek help in high complexity tasks reduced trust and reliance, highlighting the context dependent nature of collaboration. These findings extend theories of trust development, showing that assistance seeking can improve transparency and usability but may disrupt workflows if poorly timed. Designing robots that engage in context sensitive assistance seeking can foster more reliable and effective human– robot partnerships. |
|
| Manor, Adi |
Adi Manor, Hadas Erel, and Avi Parush (Technion - Israel Institute of Technology, Haifa, Israel; Reichman University, Herzliya, Israel) Affective and cognitive trust are fundamental in human-robot interaction, yet they may develop through different mechanisms. Research shows that robot attentiveness compensates for poor performance in building cognitive trust, but performance cannot reciprocally compensate for lack of attentiveness in building affective trust. We conducted secondary analysis of three studies examining shared variance between social perception dimensions (warmth, competence, social presence) and trust types using canonical correlation analysis. In robot's attentiveness contexts, warmth and competence shared substantial variance with both affective trust (67%, 65%) and cognitive trust, associated with dual relationships. In robot competence contexts, competence shared strong variance with cognitive trust (74%) but warmth showed weaker relationships (38%), creating a single connection. In one study, social presence shared higher variance with affective trust (66%) than cognitive trust (35%). These asymmetric variance patterns may imply asymmetric compensation mechanism, with important implications for designing robots where affective behaviors provide resilience despite inevitable performance failures. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Mansouri, Masoumeh |
Anna Dobrosovestnova, Barry Brown, Emanuel Gollob, Mafalda Gamboa, and Masoumeh Mansouri (Interdisciplinary Transformation University, Linz, Austria; Stockholm University, Stockholm, Sweden; University of Arts Linz, Linz, Austria; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Birmingham, Birmingham, UK) HRI 2026 takes place amid profound socio-political turbulence marked by rising authoritarianism, the consolidation of technological power, and the expanding use of robotics for warfare. These global conditions create an affective atmosphere that seeps into our field: a mix of attachment to techno-determinist and techno-solutionist narratives, unease with 'business as usual,' and a tentative search for alternatives. As HRI scholars and designers, we recognize how the wider socio-political tensions resonate within our own practices, shaping what we take to be possible, necessary, or inevitable in research and design. In this half-day, in-person workshop, we mobilize three affective orientations - cruel optimism, lucid despair, and precarious hope - as resources for reflection, critique, and experimentation. Through short provocations, discussions, and a speculative group activity, participants will be invited to inhabit these affects to question dominant narratives that sustain HRI, confront systemic challenges, and collectively explore alternative trajectories for research, design, and community building. |
|
| Mara, Martina |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. Miguel Ángel Ramírez Álvarez, Martina Mara, and Sandra Maria Siedl (University of Osaka, Osaka, Japan; Johannes Kepler University Linz, Linz, Austria) How people define humanness is a central concern in HRI, shaping expectations and acceptance of humanoid robots and requiring attention to both attribution processes and self-reflection. This qualitative study explores how a reflective interaction with Akira, a self-built humanoid robot, changes how people articulate what it means to be human and how they attribute psychological benchmarks (PBs) of humanness to it. N=27 participants engaged in an introspection-oriented conversation with Akira, followed by semi-structured interviews. Findings show that participants described humanness as a complex and multifaceted concept, considered such deep reflection a rare but meaningful occasion, and experienced Akira as a cognitive mirror prompting reconsideration of human uniqueness rather than perceiving the robot as more human-like. Participants attributed PBs to Akira, with privacy most commonly and moral accountability least commonly ascribed. This work contributes empirical evidence on how reflective human-robot encounters deepen humanness reasoning and how they can foster critical engagement. |
|
| Marchesi, Serena |
Marina Sarda Gou, Serena Marchesi, Agnieszka Wykowska, and Tony Prescott (University of Sheffield, Sheffield, UK; Italian Institute of Technology, Genoa, Italy) Understanding how people attribute awareness to robots is essential for developing socially and ethically aligned Human-Robot Interactions (HRI). This study presents the Italian validation of the Awareness Attribution Scale (AAS), an existing psychometric instrument designed to measure the attribution of awareness to artificial agents. The adaptation procedures (forward translation, native-speaker review, back-translation, and testing) were performed with the AAS. The final translated version was administered to Italian participants (N = 200) to rate different entities on perceived awareness. Analyses demonstrated good internal reliability of the Italian scale and expected attribution patterns across entities. These results provide evidence that the Italian AAS behaves consistently with the original English version, supporting its use in future cross-cultural research on awareness attribution. Furthermore, these findings advance cross-cultural knowledge of awareness attribution, a fundamental component of more inclusive settings. |
|
| Marinova, Elitza |
Elitza Marinova, Pieter Ruijs, Just Oudheusden, Veerle Hobbelink, and Matthijs Smakman (HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Children with Attention Deficit Hyperactivity Disorder (cADHD) often struggle with completing daily tasks and routines, yet technological support in the home environment remains limited. This exploratory study examines the potential of social robots to assist cADHD with Instrumental Activities of Daily Living (IADLs). Nine experts were interviewed to identify design requirements, followed by a five-day in-home deployment with five families. Parents and children reported that the robot effectively provided reminders and task instructions, improved focus and independence, and reduced caregiving demands. While families expressed interest in continued use, they emphasized the need for greater reliability and adaptability. These findings highlight the promise of social robots in supporting cADHD at home and offer valuable directions for future research and development. |
|
| Markelius, Alva |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Fethiye Irmak Doğan, Alva Markelius, and Hatice Gunes (University of Cambridge, Cambridge, UK) Foundation models are increasingly embedded in social robots, mediating not only what they say and do but also how they adapt to users over time. This shift renders traditional "one-size-fits-all" explanation strategies especially problematic: generic justifications are now wrapped around behaviour produced by models trained on vast, heterogeneous, and opaque datasets. We argue that ethical, user-adapted explainability must be treated as a core design objective for foundation-model-driven social robotics. We first identify open challenges around explainability and ethical concerns that arise when both adaptation and explanation are delegated to foundation models. Building on this analysis, we propose four recommendations for moving towards user-adapted, modality-aware, and co-designed explanation strategies grounded in smaller, fairer datasets. An illustrative use case of an LLM-driven socially assistive robot demonstrates how these recommendations might be instantiated in a sensitive, real-world domain. Jiaee Cheong, Fethiye Irmak Doğan, Alva Markelius, Emily S. Cross, Friederike Eyssel, Ginevra Castellano, and Hatice Gunes (University of Cambridge, Cambridge, UK; Harvard University, Boston, USA; ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; Uppsala University, Uppsala, Sweden) Real-world deployment of robotics for wellbeing requires solutions to be tailored to the end users. Most existing systems do not take into consideration the unique needs of those from marginalized and vulnerable communities, such as children, elderly, neurodivergent, disabled, and LGBTQ+ communities. The first workshop on “Equitable Robotics for Wellbeing” ( EqRoW) aims to address this gap by exploring strategies to not only develop more equitable solutions for marginalized and vulnerable communities, but also to enable HRI designers and developers to better understand the diverse needs of the end-users. The theme of HRI 2026, "HRI Empowering Society", informs the overarching theme of this workshop, encouraging discussions on HRI theories, user design methods and studies focused on developing approaches for advancing HRI that empowers society via equitable wellbeing robotics for all. |
|
| Martelaro, Nikolas |
Howard Ziyu Han, Ying Zhang, Allan Wang, and Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, USA; Miraikan - National Museum of Emerging Science and Innovation, Tokyo, Japan) Using robot simulators in participatory human-robot interaction design can expand the interactions end-users can experience, articulate, and reshape during co-design. In robot social navigation, high-fidelity simulations have largely been developed for benchmarking algorithms and developing robot policy. However, less attention has been given to supporting end-user exploration and articulation of concerns. In this late-break report, we present design considerations and a system implementation that extend an existing social navigation simulator (SEAN 2.0) to support community-driven feedback and evaluation. We add features to the SEAN 2.0 platform to enable richer sidewalk scenario construction, interactive reruns, and robot signaling exploration. Finally, we provide a user scenario and discuss future directions for using participatory simulation to broaden stakeholder involvement and inform socially responsive navigation design. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Martín, Miriam |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. |
|
| Marvel, Jeremy |
Megan Zimmerman, Jeremy Marvel, Shelly Bagchi, and Snehesh Shrestha (National Institute of Standards and Technology, Gaithersburg, USA; University of Maryland College Park, College Park, USA) A purpose-built testbed for human-robot interaction (HRI) metrology is introduced and discussed. This testbed integrates multiple sensor systems and precision manufacturing to produce high-quality HRI datasets of human volunteers working with robots to complete collaborative tasks in a shared environment. Sensors include audio, video, motion capture, robot information, and user entries, and may also incorporate task-specific object tracking. Data collected will be replicable in identical testbeds, and will enable more robust findings in future HRI studies. |
|
| Masanam, Deep Saran |
Sandhya Jayaraman, Deep Saran Masanam, Pratyusha Ghosh, Alyssa Kubota, and Laurel D. Riek (University of California at San Diego, La Jolla, USA; San Francisco State University, San Francisco, USA) This workshop explores the social, ethical, and practical implications of deploying robots for clinical or assistive contexts. Robots hold potential to expand access to disabled communities, such as by providing physical or cognitive assistance, and enabling new ways of participating in social activities. They can assist healthcare workers with ancillary tasks and care delivery, supporting them to work at the top of their license. However, the real-world deployment of robots across these contexts can create social, ethical, and organizational challenges, or downstream effects. Some challenges include the potential for robots to undermine the agency of disabled people and reinforce their marginalization on a societal level. In clinical settings, robots may also disrupt care delivery, shift roles, and displace labor. To explore these issues, this workshop will invite trans-disciplinary speakers and participants from academia, industry, government, and non-academics with/without affiliations interested in surfacing their lived experiences in using or developing such robots. Through panel discussions, group ideation activities and interactive poster sessions, this workshop intends to critically and creatively explore the future of robots for clinical and assistive contexts. Topics will include the downstream implications of robots in clinical or assistive contexts and potential upstream interventions. Outcomes of the workshop will include publishing key workshop artifacts on our website and initiating a follow-up journal special issue. |
|
| Masterson, Annette |
Annette Masterson, Xin Ye, Yiyang Li, and Lionel Peter Robert Jr (University of Michigan at Ann Arbor, Ann Arbor, USA) The rapid proliferation of Large Language Models (LLMs) has enabled artificial agents to foster deep emotional bonds, yet the comparability of these AI relationships to human norms remains underexplored. As HRI researchers increasingly integrate LLMs into embodied platforms, understanding the nature of these bonds is imperative for responsible design. This study investigates whether relationships with LLM-driven AI companions can rival the satisfaction of human connections and if the mechanism of intimacy is equally critical. Through a comparative survey of 150 participants stratified across in-person, long-distance, and LLM companion relationships, we illuminate that digital bonds can yield satisfaction levels comparable to human partnerships, with intimacy serving as a predictive factor. These findings challenge the assumption that AI relationships are inherently unsatisfactory and identify intimacy as a design metric for social robots, providing a protocol for integrating LLM companions into embodied relational agents. |
|
| Matheus, Kayla |
Kayla Matheus, Debasmita Ghose, Jirachaya (Fern) Limprayoon, Michal A. Lewkowicz, and Brian Scassellati (Yale University, New Haven, USA; Massachusetts Institute of Technology, Cambridge, USA) We present the Ommie Deployable System (DS), a replicable, autonomous platform for long-term, in-the-wild mental health applications with the Ommie robot. Ommie DS builds on prior anxiety-focused deployments by introducing robust hardware, enhanced sensing, modular software, a companion tablet, and wireless multi-device architecture to support daily deep-breathing interactions in homes. Designed using off-the-shelf components and rapid-prototyped enclosures, the system enables reliable multi-week use, remote monitoring, and easy customization. By providing a durable, open, and researcher-friendly platform, Ommie DS supports scalable, real-world study of HRI for mental health and well-being. |
|
| Mathew, Tintu |
Lisa Marie Prinz and Tintu Mathew (Fraunhofer FKIE, Bonn, Germany) Autonomous robots are increasingly deployed in sensitive domains, yet prevailing human-in/on/out-of-the-loop categorizations fail to capture the quality of human-robot interaction (HRI). Meaningful Human Control (MHC) has emerged as a guiding principle, but its measurement remains under-specified. This paper presents a systematic review and measurement guide for operationalizing MHC in HRI, mapping its core constructs-trust, involvement, and situation awareness (SA)-to standardized self-report instruments. We review standardized questionnaires and related methods and compare their validity, reliability, and suitability for HRI user interface (UI) evaluation. We found that trust is well supported by validated scales, notably the MDMT and Schaefer’s Trust Perception Scale-HRI, with Jian’s Trust in Automated Systems scale as a widely used alternative. Involvement is best assessed via the UES felt-involvement subscale, with PQ/ITQ as viable complements. For SA, SAGAT and SARS are well-established tools, though many SA tools lack validation for HRI contexts. We offer a guide to measure MHC in HRI via standardized instruments, enabling UI comparison and adherence assessment. This operationalization supports the establishment of MHC in HRI design for sensitive domains. |
|
| Mattutino, Claudio |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Maule, Ella Ruth |
Hifza Javed, Ella Ruth Maule, Thomas H. Weisswange, and Bilge Mutlu (Honda Research Institute, San Jose, USA; University of Bristol, Bristol, UK; Honda Research Institute, Offenbach, Germany; University of Wisconsin-Madison, Madison, USA) This workshop on Robots for Communities explores how robots can serve as shared social resources that support the collective well-being of communities. While robots have traditionally been created to serve corporations or individuals, leading human–robot interaction research to focus largely on individuals or small groups, communities remain a crucial yet underexplored context for robotics. Understanding robots in community settings requires an interdisciplinary lens that integrates robotics, design, the social sciences, humanities, and community practice. Rather than emphasizing the negative consequences of large-scale deployment, our focus is on the active, positive roles robots might play in shaping communities. Central to this vision is viewing robots not as personal possessions but as shared resources, with unique affordances that enable them to enrich community experiences in ways other technologies cannot. The workshop seeks to bridge technology-centered and community-centered perspectives to promote dialogue across disciplines. By bringing these perspectives together, we aim to establish an interdisciplinary agenda for the design, evaluation, and deployment of robots as positive forces for well-being and cohesion within communities. |
|
| Mavrogiannis, Christoforos |
Pranav Goyal, Andrew Stratton, and Christoforos Mavrogiannis (University of Michigan at Ann Arbor, Ann Arbor, USA) Legible motion enables humans to anticipate robot behavior during social navigation, but existing approaches largely assume open spaces, static interactions, and fully attentive pedestrians. We study legibility in the ubiquitous and realistic setting of hallway navigation through two user studies. Study 1 (N=45) evaluates how intent should be represented for legible navigation within a model predictive control framework. We find that expressing intent at the interaction level (i.e., passing side) and dynamically adapting it to human motion leads to smoother human trajectories and higher perceived competence than destination-based or non-legible baselines. Study 2 (N=45) examines whether legibility remains beneficial when pedestrians are cognitively distracted. While legible motion still reduced abrupt human motion relative to the non-legible baseline, subjective impressions were less sensitive under distraction. Together, these results demonstrate that legibility is most effective when grounded in immediate interaction objectives and highlight the need to account for attentional variability. |
|
| Mayoral-Macau, Arnau |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. |
|
| Mazalek, Ali |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Mazza, Monica |
Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| McCormick, John |
Elena Marie Vella, Kim Vincs, Casey Richardson, and John McCormick (Swinburne University of Technology, Melbourne, Australia) Human–robot interaction (HRI) is moving beyond single-operator settings towards scenarios where robots must interpret multiple simultaneous human signals. Existing systems often assume a single input stream, which constrains expressiveness and limits collective participation. To address this, we introduce a depth-camera framework that supports natural gesture-based control, without user-specific training or personalization. A multi-input controller unifies diverse whole-body movements and extends seamlessly to multi-human interaction. Studies with dancers show how embodied practice can shape responsiveness and inclusivity, demonstrating the framework’s capacity to democratize robot control and enhance collective agency. By treating human movement as a shared control medium, the framework supports equitable participation and illustrates how embodied expertise can guide more inclusive HRI design. |
|
| McMillan, Donald |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Mead, Ross |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Megory, Hili |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Meharg, Debbie |
Franziska Elisabeth Heck, Emilia Sobolewska, Debbie Meharg, and Khristin Fabian (Edinburgh Napier University, Edinburgh, UK; University of Aberdeen, Aberdeen, UK) Loneliness is a common issue among university students and has been associated with poorer mental health and reduced well-being. According to classic theory, there are two types of loneliness: emotional loneliness, which results from a lack of close attachments, and social loneliness, which is associated with deficits in broader peer networks. However, research into human–robot interaction rarely considers how these two forms of loneliness manifest in people's desire for social robots. This report presents the qualitative findings of semi-structured interviews with 25 students. These students were invited based on their scores for emotional and social loneliness, with the aim of representing a broad range of loneliness profiles. Participants observed standardised demonstrations of three social robots, Pepper, Nao and Furhat, and discussed their attitudes towards them, their potential roles and designs. Across the different profiles, the students generally expressed an openness to the idea of social robots. However, a clear gradient emerged: students who reported higher levels of loneliness tended to view robots as companions and conversational partners, whereas students who reported lower levels of loneliness emphasised the robots’ potential for providing instrumental support and the importance of maintaining stricter boundaries. Loneliness profiles therefore provide a promising lens for thinking about how to design role-appropriate and ethically sensitive robot behaviours and forms for student settings. |
|
| Mehboob, Fawad |
Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. |
|
| Mejia Tobar, Miguel |
Eric Nichols, Miguel Mejia Tobar, and Randy Gomez (Honda Research Institute Japan, Wakoshi, Japan; Honda Research Institute Japan, Wako, Japan) Lifelike expressive behavior by social robots requires seamless coordination of facial expressions, body language, and tone of voice–all semantically aligned with speech content. While prior work has explored co-speech gesture generation, coordinating multiple expressive channels from a single semantic analysis remains under-explored. To address this gap, we propose holistic LLM-based generation, where an LLM analyzes robot dialog and generates synchronized behavior timelines that align vocal delivery and physical expression by directly inferring from speech semantics. In a pilot study on the tabletop robot Haru (N=23), 70% of participants preferred this approach over a heuristic baseline, characterizing it as more ”natural” and ”human-like”, with preliminary trends toward improved perceived agency (d=0.33, p=.128) and animacy (d=0.27, p=.212). However, qualitative analysis reveals a continuum of desired expression varying in frequency and intensity, with excessive expression triggering negative reactions. Navigating this design space presents new challenges for expressive robot behavior generation. |
|
| Mejia-Trebejo, Emerson E. |
Alejandra Patiño, Emerson E. Mejia-Trebejo, Macarena Vilca, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Peru recycles less than 2% of waste despite high potential. Current solutions fail at two extremes: passive bins cause confusion, while automated bins create dependency. We introduce PERI (Peer Educational Recycling Instructor), a social robot designed not just to sort but to teach. PERI uses a YOLOv8-based vision module to validate user decisions in real-time. This paper demonstrates PERI’s deployment with over 500 interactions. Our results show that 80% of users corrected their sorting mistakes through a combination of PERI’s feedback and facilitator mediation, transforming technical limitations into educational moments and empowering citizens as agents of change. Emerson E. Mejia-Trebejo, Macarena Vilca, Alejandra Patiño, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Passive infrastructure fails to bridge the recycling "Intention-Action" gap. This paper presents Peri V2, a "Symbiotic Retrofit" kit that transforms standard 120L bins into intelligent pedagogical agents without structural waste. The architecture deploys edge-based per- ception to execute a novel behavioral loop: a Temporal Intention Filter (5s heuristic) to parse social signals, Just-in-Time Associative Feedback for cognitive reinforcement, and Ludic Generalization challenges to verify learning transfer. A preliminary "in-the-wild" pilot (𝑁 ≈ 200) demonstrated the operational feasibility of the intention filter in noisy environments. Furthermore, qualitative feedback from recurring users (𝑁 ≈ 15) suggests that replacing voice interactions with visual cues improves acceptance by mini- mizing the social pressure of public disposal. Peri V2 proposes a scalable model for frugal HRI, shifting the focus from automated cities to empowered "Smart Citizens." |
|
| Meng, Hongdao |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Meyer, Kathrin |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| Meyerhoefer, Jan |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Mi, Haipeng |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Mihaylova, Tsvetomila |
Shengheng Yan and Tsvetomila Mihaylova (Aalto University, Espoo, Finland) For human--robot interaction in autonomous driving, understanding when and why automated systems can effectively explain their behavior is critical for transparency, trust, and user understanding. Large language models (LLMs) can generate natural-language explanations of driving scenes, yet it remains unclear whether some types of driving situations are inherently easier or harder for them to describe. To investigate this question, we introduce an ego-centric taxonomy of driving scenarios and apply it to the BDD-X test set, creating a category-aligned evaluation benchmark. Using this dataset, we compare the explanation performance of RAG-Driver and GPT-4o across both top-level and fine-grained scenario categories. For each model, explanation performance differs significantly across scenario categories, indicating that scenario type is a meaningful factor influencing explanation quality. These findings highlight the importance of scenario-aware evaluation when assessing explanation quality in autonomous driving. |
|
| Mikelsons, Lars |
Stina Klein, Birgit Prodinger, Elisabeth André, Lars Mikelsons, and Nils Mandischer (University of Augsburg, Augsburg, Germany) Robots are becoming more prominent in assisting persons with disabilities (PwD). Whilst there is broad consensus that robots can assist in mitigating physical impairments, the extent to which they can facilitate social inclusion remains equivocal. In fact, the exposed status of assisted workers could likewise lead to reduced or increased perceived stigma by other workers. We present a vignette study on the perceived cognitive and behavioral stigma toward PwD in the workplace. We designed four experimental conditions depicting a coworker with an impairment in work scenarios: overburdened work, suitable work, and robot-assisted work only for the coworker, and an offer of robot-assisted work for everyone. Our results show that cognitive stigma is significantly reduced when the work task is adapted to the person's abilities or augmented by an assistive robot. In addition, offering robot-assisted work for everyone, in the sense of universal design, further reduces perceived cognitive stigma. Thus, we conclude that assistive robots reduce perceived cognitive stigma, thereby supporting the use of collaborative robots in work scenarios involving PwDs. |
|
| Minter, Emma |
Emma Minter, Robert Tankard, Oscar Norman, and Janie Busby Grant (University of Canberra, Canberra, Australia) Extensive research has investigated the human tendency to anthropomorphize artificial agents by attributing human-like traits to these systems. Sociality motivation, the desire for social connection, has been proposed to be a key psychological determinant for anthropomorphism. Sociality motivation can be operationalized in a range of dispositional, developmental, and cultural facets, but it is currently unclear how these factors contribute collectively and independently to predicting an individual’s tendency towards anthropomorphism. This online study (N = 164) assessed the relationship between different facets of sociality motivation and four dimensions of anthropomorphism of a social robot, using videos of a robot completing a game alone and with human and robot partners. Respondents who reported more collectivist cultural views were more likely to attribute higher agency, sociability, and disturbance to the robot. Those who reported higher attachment anxiety scores also attributed greater agency and sociability. Previous research has focused primarily on dispositional indicators of anthropomorphism, however the current study suggests that cultural determinants may be stronger predictors of anthropomorphic tendencies and should be a focus of further research. |
|
| Mishra, Chinmaya |
Fleur Smilde and Chinmaya Mishra (Radboud University, Nijmegen, Netherlands; MPI for Psycholinguistics, Nijmegen, Netherlands) Gaze is a key non-verbal cue in face-to-face interaction, yet we know relatively little about how people visually explore a robot’s face during conversation. In human-human interactions (HHI), gaze allocation is shaped by conversational role and task demands: speakers typically avert their gaze from their partner’s face more than listeners do, and listeners often shift gaze from the eyes to the mouth to support speech understanding. In human-robot interactions (HRI), it is often implicitly assumed that gaze to humanoid robots follows similar patterns, but this has rarely been tested quantitatively at the level of specific facial regions. In this late-breaking report, we report a secondary analysis of an existing HRI dataset with usable eye-tracking data from 31 participants who took part in semi-structured interviews with a social robot (Furhat). Using MediaPipe Face Mesh on participant’s egocentric video from eye tracking glasses, we segmented the robot’s face into eye, mouth, and full-face regions of interest (ROI), and quantified how participants distributed their gaze at each ROI over the entire interaction, and separately for speaking and listening. Participants spent most of the interaction looking at the robot’s face; within the face, the eyes and mouth were the main targets, and gaze to these regions increased during listening, especially for the mouth. This pattern aligns with the central findings from HHI and offers empirical evidence for assumed similarities in gaze allocation between HHI and HRI. In an exploratory analysis, we additionally examined how the robot’s own gaze behaviour, with or without human-like gaze aversions, shaped gaze to the eyes and mouth. We discuss how these findings inform the interpretation of gaze as an implicit engagement cue in HRI. Finally, we provide baseline references and show how ROI-based analyses can enrich future gaze studies in HRI. Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Mital, Arjita |
Arjita Mital, Felix Gnisa, Utku Norman, and Nora Weinberger (KIT, Karlsruhe, Germany) This Late Breaking Work presents a low-threshold, unsupervised public exhibit designed to explore how non-expert audiences imagine and negotiate future human–robot interactions in ethically charged everyday situations. The exhibit, installed in Karlsruhe, Germany, invited participants to engage with four dilemma-based scenarios where participants were prompted to decide how a social robot should act confronting questions of moral delegation and machine agency. The activity generated rich, situated reflections on responsibility, safety, care, and the limits of automation. Findings reveal context-dependent expectations that balance efficiency against dignity, human judgment, and relational preservation, shaped by perceived stakes, social context, and the specific embodiment of the robot involved. Through this we demonstrate how minimally supervised participatory formats can surface normative expectations and support inclusive, responsible robot design. |
|
| Miyauchi, Genki |
Genki Miyauchi, Roderich Groß, and Chaona Chen (University of Sheffield, Sheffield, UK; TU Darmstadt, Darmstadt, Germany) As robots become increasingly embedded in human–robot teamwork, understanding how humans perceive robot behavior is critical. This is especially relevant for swarm robots that rely on collective behavior to accomplish tasks. While prior research has explored how humans evaluate the abilities and behaviors of single robots, the perception of swarm robots remains relatively underexplored. Guided by the competence–warmth framework, we conducted a perception-based experiment in a collective search task, generating 125 robot teams by systematically manipulating three parameters: speed, separation distance, and local broadcast duration. Ninety participants observed the swarms, rated perceived warmth and competence, and reported team preferences. Results show that broadcast duration increased perceived warmth, separation distance enhanced perceived competence, and individual robot speed had no significant effect. Critically, social perceptions of warmth and competence were stronger predictors of team preference than task performance, with participants favoring swarms that appeared warm and competent over those that completed tasks fastest. These results underscore the importance of considering both technical performance and social attributes when designing robot swarms for effective collaboration with humans. |
|
| Mizutani, Akitoshi |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Moe, Lina |
Benjamin Greenberg and Lina Moe (Rutgers University, Piscataway, USA; Rutgers University, New Brunswick, USA) Demand for mobile robots operating in human environments has expanded rapidly, providing a proliferation of potential feedback channels for developers. While firms increasingly rely on customer input, deployment data, and regulatory guidance to refine autonomous systems, these mechanisms also interact in ways that complicate iteration. Drawing on sixteen interviews with industry professionals developing mobile robots, we analyze how these feedback channels shape design decisions, where they introduce friction, and why they frequently conflict. These interviews revealed that the following three mechanisms are among the most valuable channels to developers: feedback from customers, feedback from quantitative data, and feedback from regulators. We find that customer practices can obstruct data collection, data-driven improvement is constrained by safety and privacy requirements, and regulatory expectations raise reliability thresholds that slow deployment. By examining these cross-channel tensions, we highlight the structural bottlenecks that developers confront when building robots for complex, real-world settings. |
|
| Mohan, Mayumi |
Mayumi Mohan, Ju-Hung Chen, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Case Western Reserve University, Cleveland, USA) Social-physical human-robot interaction (spHRI) has grown rapidly across robotics, human-computer interaction, human-robot interaction, and haptics. Yet, fragmented terminology and inconsistent methodologies make systematic synthesis difficult. To support scalable review practices, we evaluated the extent to which small language models (SLMs; < 1.5B parameters) can assist with title and abstract screening for a large spHRI systematic review. While no SLMs matched human reviewers' performance, the models operated locally and screened papers orders of magnitude faster. The combined SLM ensemble identified 39 papers reviewers missed, representing 10.29% of the final relevant dataset. These results demonstrate that SLMs can augment, rather than replace, expert reviewers and make large-scale literature reviews accessible and sustainable. Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. Mayumi Mohan, Joana Brito, Anouk Neerincx, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Instituto Superior Técnico, Lisbon, Portugal; HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Case Western Reserve University, Cleveland, USA) The sixth edition of the Workshop YOUR Study Design (WYSD) aims to empower the next generation of HRI researchers by strengthening their experimental design skills through personalized mentoring and interactive activities. Recognizing that many early-career researchers in Human-Robot Interaction (HRI) come from technical disciplines with limited training in experimental design, WYSD provides a supportive environment where mentees receive structured, detailed feedback on their proposed studies from experienced HRI researchers. For HRI 2026, WYSD will expand to a full-day format to allow more in-depth mentoring and enhanced peer-to-peer engagement. In addition to individualized mentoring sessions, the workshop will feature mentee lightning talks, a free-form study design Q&A, mini discussions on key methodological topics, and collaborative activities such as "Create and Present a Custom Study" and "Networking Bingo". These sessions promote rigorous study design practices, cross-disciplinary exchange, and community building. By equipping researchers with the tools to conduct robust and socially responsible user studies, WYSD directly contributes to the development of safer, more acceptable, accessible, and impactful robotic systems for society. |
|
| Mongile, Sara |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Moon, AJung |
Rahatul Amin Ananto, Seol Han, Rachel Ruddy, and AJung Moon (McGill University, Montreal, Canada) A new generation of robots are being developed to enter our homes in a matter of months. But has the industry appropriately accounted for the complexities of the social environment that we call home? We conducted an exploratory design workshop to examine what secondary users—those who are not expected to be owners but nonetheless daily users—deem to be socially appropriate behavior of a domestic robot. A total of 90 students from Mexico participated in the study. By analyzing they define and reason about appropriateness of robot behaviors in the home, we show why deployment of domestic robots require much more thoughtful considerations than implementation of simplified social rules; judgments of what is appropriate depend on context, roles, relationships, and individual boundaries, and can differ between primary and secondary users. We call on Human-Robot Interaction (HRI) practitioners to treat social appropriateness as a fluid, gradient factor at design time rather than a binary concept (appropriate/inappropriate). |
|
| Moreno, Plinio |
Ricardo Rodrigues, Plinio Moreno, Filipa Correia, and Alexandre Bernardino (University of Lisbon, Lisbon, Portugal; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal) Social robots are a new and promising tool for reducing children's anxiety during medical procedures. Our study aims to design and test a social robot to alleviate anxiety and improve emotional state before dental treatment for children. The design of the experimental condition included asocial robot (Vizzy) with different comedic styles such as jokes, riddles, games, and dance, to make the waiting room experience more engaging and entertaining for children. A user study (N=22) was conducted, in which children were assigned to one of two groups: interaction with the humanoid Vizzy robot, or waiting in the dentist's waiting room without any interaction with the robot (Control). The results indicate a significant impact of the experimental condition on reducing anxiety levels and improving emotional responses, demonstrating that social robots can be considered for future research to reduce children's anxiety before distressing medical procedures. |
|
| Morgan, Mayuko |
Lewis Watson, Emilia Sobolewska, Carl Strathearn, Mayuko Morgan, and Yanchao Yu (Edinburgh Napier University, Edinburgh, UK) A major limitation of current social robots is their dependence on cloud-based dialogue pipelines, which restricts use in settings with limited or unreliable connectivity. We present a lightweight, fully local spoken-dialogue system that runs on consumer-grade hardware and integrates open-source models for speech recognition, dialogue generation, and text-to-speech. The pipeline was deployed on Euclid, a non-commercial humanoid robot, across several public engagement events, enabling extended real-world interaction without internet access. We analyse over 5,000 dialogue turns recorded during these dialogues to characterise system behaviour, user interaction patterns, and challenges arising in noisy, multi-speaker environments. Our observations demonstrate the feasibility of privacy-preserving, on-device conversational robotics while highlighting limitations in turn-taking, response length, and environmental grounding. We outline planned improvements to support more robust and accessible social-robot interaction. |
|
| Morgan, Phillip |
Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Mounsef, Jinane |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Mukhanov, Zein |
Keya Shah, Himanshi Lalwani, Zein Mukhanov, and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, and how these dynamics should shape future robot wellbeing coaches. This paper addresses this gap through content analysis of 4352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view into how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions. |
|
| Muller Dardelin, Thomas |
Thomas Muller Dardelin, Damith Herath, and Janie Busby Grant (University of Canberra, Canberra, Australia; Waseda University, Tokyo, Japan; University of Canberra, Bruce, Australia) This paper investigates user engagement with socially assistive robots (SARs) in healthcare contexts through an experimental study comparing simulated and physical embodiments. The study examines how users perceive trust, engagement, safety, and usability when interacting with two humanoid robots—Hatsuki, designed for emotional and social support, and AIREC, designed for physical caregiving tasks. Participants interacted with both simulated and real robots, enabling a direct comparison of virtual and physical embodiments under identical conversational conditions. The results suggest that verbal interaction and character design contribute more strongly to perceived engagement than physical embodiment alone, highlighting the importance of communication quality in socially assistive robotics. In the simulated setting, Hatsuki was perceived as more caring and socially engaging than AIREC, indicating that socially expressive design can shape user perceptions even without physical embodiment. |
|
| Murakami, Takahito |
Takahito Murakami, Maya Grace Torii, Shuka Koseki, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan; University of Tokyo, Hongo, Japan) We address a mismatch between how care information is provided and accessed. Explanations about procedures, routines, and self-management are delivered at fixed times in dense formats, leading patients to concentrate questions into nurse encounters and increasing workload. We frame this as a problem of bidirectional mediation and propose Suzume-chan, a small “Pet-as-a-Friend” plush agent that serves as an embodied information hub. Patients can speak to Suzume-chan without operating devices to receive on-demand explanations and reminders, while nurses obtain compact, nursing-relevant records. Suzume-chan runs entirely on a local network using automatic speech recognition, a local language model, retrieval-augmented generation, and text-to-speech. A workshop-style proof-of-concept highlighted embodiment, latency, and trust as key considerations for clinical use. Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| Murphy, Emma |
Thanh-Tung Ngo, Emma Murphy, and Robert J. Ross (Technological University Dublin, Dublin, Ireland) Effective communication is vital in healthcare, especially across language barriers, where non-verbal cues and gestures are critical. This paper presents a privacy-preserving vision-language framework for medical interpreter robots that detects specific speech acts (consent and instruction) and generates corresponding robotic gestures. Built on locally deployed open-source models, the system utilizes a Large Language Model (LLM) with few-shot prompting for intent detection. We also introduce a novel dataset of clinical conversations annotated for speech acts and paired with gesture clips. Our identification module achieved 0.90 accuracy, 0.93 weighted precision, and a 0.91 weighted F1-Score. Our approach significantly improves computational efficiency and, in user studies, outperforms the speech-gesture generation baseline in human-likeness while maintaining comparable appropriateness. |
|
| Murray, Matthew |
Claire Lewis, Melody Goldanloo, Matthew Murray, Zachary Kaufman, and Tom Williams (Colorado School of Mines, Golden, USA; University of Colorado at Boulder, Boulder, USA) Museums are an effective informal learning environment for science, art and more. Many researchers have proposed museum guide robots, where the outcomes of the interactions are based solely on the robot’s communication. In contrast, we explored how a robot could encourage learning and teamwork through human-human interactions. To achieve this, we created “Chase,” a novel zoomorphic robot that presents “Data Chase,” an interactive museum activity. We designed Chase to enable museum-goers to learn about the exhibits together by prompting users to complete a teamwork based scavenger hunt for rewards. |
|
| Murray-Rust, Dave |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Mustafa, Muhammad Ahsan |
Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. |
|
| Mutlu, Bilge |
Dakota Sullivan, David Porfirio, Bilge Mutlu, and Laura M. Hiatt (University of Wisconsin-Madison, Madison, USA; George Mason University, Fairfax, USA; US Naval Research Laboratory, Washington, USA) Robots are increasingly relied upon for task completion in privacy-critical human environments. In these environments, it is imperative that a robot's potentially sensitive goals remain obfuscated. To address this need, a substantial amount of literature has proposed methods for obfuscatory task planning. These works make many attempts to experimentally or analytically determine whether agents can conceal their goals from observers. While these works make guarantees that resulting plans will conceal an agent's goals, they are often only theoretical. Within this work, we develop three obfuscatory task planning strategies inspired by prior literature to evaluate with human observers (N = 160). Our preliminary results show that observers struggle to identify a robot's goals at similar levels regardless of whether obfuscatory or optimal task planning strategies are employed. These findings call into question the purported benefits of many obfuscatory task planning strategies. Hifza Javed, Ella Ruth Maule, Thomas H. Weisswange, and Bilge Mutlu (Honda Research Institute, San Jose, USA; University of Bristol, Bristol, UK; Honda Research Institute, Offenbach, Germany; University of Wisconsin-Madison, Madison, USA) This workshop on Robots for Communities explores how robots can serve as shared social resources that support the collective well-being of communities. While robots have traditionally been created to serve corporations or individuals, leading human–robot interaction research to focus largely on individuals or small groups, communities remain a crucial yet underexplored context for robotics. Understanding robots in community settings requires an interdisciplinary lens that integrates robotics, design, the social sciences, humanities, and community practice. Rather than emphasizing the negative consequences of large-scale deployment, our focus is on the active, positive roles robots might play in shaping communities. Central to this vision is viewing robots not as personal possessions but as shared resources, with unique affordances that enable them to enrich community experiences in ways other technologies cannot. The workshop seeks to bridge technology-centered and community-centered perspectives to promote dialogue across disciplines. By bringing these perspectives together, we aim to establish an interdisciplinary agenda for the design, evaluation, and deployment of robots as positive forces for well-being and cohesion within communities. |
|
| Myshlyaev, Artyom |
Muhammad Haris Khan, Artyom Myshlyaev, Artem Lykov, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) We propose a new concept, Evolution 6.0, which represents the evolution of robotics driven by Generative AI. When a robot lacks the necessary tools to accomplish a task requested by a human, it autonomously designs the required instruments and learns how to use them to achieve the goal. Evolution 6.0 is an autonomous robotic system powered by Vision-Language Models (VLMs), Vision-Language Action (VLA) models, and Text-to-3D generative models for tool design and task execution. The system comprises two key modules: the Tool Generation Module, which fabricates task-specific tools from visual and textual data, and the Action Generation Module, which converts natural language instructions into robotic actions. It integrates QwenVLM for environmental understanding, OpenVLA for task execution, and Llama-Mesh for 3D tool generation. Evaluation results demonstrate a 90% success rate for tool generation with a 10-second inference time and action generation achieving 83.5% in physical and visual generalization, 70% in motion generalization, and 37% in semantic generalization. Future improvements will focus on bimanual manipulation, expanded task capabilities, and enhanced environmental interpretation to improve real-world adaptability. |
|
| Nadila, Aulia |
Nishi Shishir, Aulia Nadila, Aly Magassouba, and Nikhil Deshpande (University of Nottingham, Nottingham, UK) The aim of this paper is to facilitate an efficient post-disaster recovery in lower-income countries by promoting first-responder accessibility and safety through pre-response disaster area observation and categorisation tools. In the past, research into assistive technologies in this field has been highly focused on disaster mitigation, detection, or primary participation, rather than reconnaissance and target identification activities conducted by first responders. Thus, research into this under-represented but highly important industry was necessary. |
|
| Nagai, Yukie |
Yukie Nagai (University of Tokyo, Japan) Human social intelligence emerges through dynamic interactions among the brain, body, and environment. In human–robot interaction research, significant progress has been made in designing robots’ appearance and behavior and in analyzing interaction dynamics to facilitate natural and trustworthy communication between humans and robots. An important complementary perspective is to understand the computational principles underlying human cognition and social interaction and to incorporate such principles into the design of interactive robots. This talk explores the predictive brain hypothesis as a unifying framework for understanding and designing cognitive and social intelligence. In this view, the brain continuously generates predictions about sensory inputs and updates them through interaction with the world. Computational models based on predictive processing provide a powerful approach to studying how diverse patterns of cognition and behavior emerge through development. Robotic implementations of these models offer a constructive way to investigate mechanisms underlying cognitive diversity observed in humans, including developmental differences and variations associated with neurodevelopmental conditions. Drawing on studies in developmental robotics and human interaction analysis, the talk highlights how predictive brain models provide a common framework linking computational models, robotic systems, and empirical studies of human behavior. This perspective connects insights from robots and children to broader questions of social intelligence, suggesting that understanding predictive brain mechanisms can guide the design of robots that interact with people in more adaptive and human-compatible ways while also offering new insights into the foundations of human social cognition. |
|
| Nagarkar, Varun |
Hasan Shamim Shaon, Andrew Trautzsch, Anh Tuan Tran, Varun Nagarkar, and Jong Hoon Kim (Kent State University, Kent, USA) Effective communication of motion intent is critical for autonomous mobile robots operating in human-populated environments. While prior works have demonstrated that floor-projected cues such as arrows or simplified trajectories can enhance bystander prediction and safety, existing systems often rely on static or handcrafted visual encodings and are rarely evaluated within end-to-end service workflows. We introduce Vendobot, a projection-augmented delivery robot that integrates a ROS1 navigation stack, an Android app based, PostgreSQL-backed order management pipeline, a real-time telemetry subsystem, and a projector-equipped Raspberry Pi 5 executing a lightweight intent-projection algorithm. Our method subscribes to the Timed Elastic Band (TEB) local planner to extract the robot’s predicted short-horizon trajectory, transforms it into projector coordinates, and renders either (1) quantized directional indicators or (2) a continuous animated polyline representing the robot’s true local plan with less than 100 ms latency. In a within-subject study involving both bystanders and delivery recipients, the projected local-plan visualization significantly improved intent legibility, motion predictability, and user comfort compared to arrow-based or no-projection conditions. These findings position trajectory-grounded projection as a technically viable and perceptually beneficial communication modality for service robots deployed in semi-public indoor environments. |
|
| Naidu, Sohan |
Ragini Kalvade and Sohan Naidu (University of Illinois at Chicago, Chicago, USA) Home piano practice is vital for early music learning, yet it often depends on a child’s intrinsic motivation. This paper introduces DoReMi, a piano peer-bot designed as an expressive, encouraging companion for young beginners. Through animated responses, colorful feedback, and an approachable social presence, DoReMi supports children as they practice and interact with the instrument. We have designed different feedback styles and timing strategies to further shape a child’s perception of the robot, and their motivation to continue learning. |
|
| Nakamura, Yutaka |
Yuya Okadome and Yutaka Nakamura (Tokyo University of Science, Katsushika-ku, Japan; RIKEN, Sorakugun, Japan) In this study, we propose a motion generation model based on graph attention networks and a diffusion probabilistic model for modeling back-channel gestures in dyadic conversation. Back-channel gestures, which include unconscious behaviors like nodding and body shifts, are a crucial component for achieving natural conversational agents. Our proposed method utilizes a graph attention network to mix the related information between the two participants’ behaviors. This approach explicitly handles the inter-feature interaction, thereby incorporating the influence of the partner’s behavior into a participant’s generated motion. We collected over 10 hours of dialogue data to train the proposed method and verify its motion generation performance. Experimental results showed that the use of a graph attention improved metrics such as the Fréchet Inception Distance. This suggests that the explicit consideration of the conversation partner’s behavior is important for modeling conversation behaviors. |
|
| Nakayama, Yuka |
Carlos Toshinori Ishi, Taiki Yano, and Yuka Nakayama (RIKEN, Kyoto, Japan; ATR, Kyoto, Japan) This study explores the importance of adapting communication in reception tasks based on the visitor attributes and situations, focusing on a reception robot at an expo venue. Ten different scenarios, including three situations, entrance reception, straying assistance, and complaint handling, were created with varying visitor attributes (adults, children, elderly with mild hearing loss). Multimodal expressions, observed through human performers acting out these scenarios, were implemented in the android robot Nikola. A video-based user study was conducted to assess the effectiveness of multimodal expressions which account for the situation and user attributes, comparing them to default behaviors. The proposed multimodal expressions were effective, with voice being more impactful than motion, though both contributed positively. |
|
| Nam, Kaylee |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. |
|
| Nazari, Amirhossein |
Saeed Vahdani and Amirhossein Nazari (Ferdowsi University of Mashhad, Mashhad, Iran) iSense Beyond is a wearable assistive system for the visually impaired (VI), representing a critical shift from performance-constrained embedded hardware (Version 1, Jetson/RealSense) to an agile, smartphone-centric platform. By leveraging Visual-Inertial Odometry (VIO) via smart glasses and a wearable Inertial Measurement Unit (IMU), V2 provides highly stable, low-latency spatial awareness, overcoming the safety risks associated with the previous high-latency Computer Vision (CV) architecture. The core Human-Robot Interaction (HRI) novelty is a distributed haptic system: hand/wrist haptics provide fine-grained, directional "pull" cues for localized object interaction, while foot haptics deliver continuous path guidance for locomotion. This architecture enables a new paradigm by integrating real-time physical assistance directly with structured educational training modules. This work details the technical migration, the multi-modal HRI protocol, and the required framework for deployment to enhance vocational and social inclusion for the VI community. Pooria Fazli, Amirhossein Nazari, Navid Jooyandehdel, Iman Kardan, and Alireza Akbarzadeh (Ferdowsi University of Mashhad, Mashhad, Iran) Lower-limb exoskeletons play an essential role in rehabilitation and mobility assistance, where accurate real-time gait phase recognition is critical for achieving safe, synchronized, and intuitive human–robot interaction. Many existing approaches rely on multiple sensors such as IMUs, EMG, and FSRs, which increase system complexity, computational load, cost, and susceptibility to mechanical wear. In this study, we propose a lightweight and robust gait phase detection framework that uses only hip and knee joint encoder data—sensors that are already integrated into most exoskeletons and are less prone to noise and misplacement. The method employs a finite state machine (FSM) to identify gait phases and detect key gait events, including heel strike, in real time. The approach was first evaluated in simulation using the SCONE (Opensim) platform and then experimentally implemented on the NEXA knee-joint exoskeleton with multiple healthy participants. Results show that the proposed method reliably predicts gait phases and heel-strike timing with minimal temporal error, while achieving significantly higher processing frequency compared to sensor-rich configurations. These findings demonstrate that accurate and efficient gait phase recognition can be achieved using only encoder data, offering a practical and low-cost solution for real-world exoskeleton control applications. |
|
| Nazari, Kimia |
Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Neef, Caterina |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Neerincx, Anouk |
Mayumi Mohan, Joana Brito, Anouk Neerincx, and Alexis E. Block (MPI for Intelligent Systems, Stuttgart, Germany; Instituto Superior Técnico, Lisbon, Portugal; HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Case Western Reserve University, Cleveland, USA) The sixth edition of the Workshop YOUR Study Design (WYSD) aims to empower the next generation of HRI researchers by strengthening their experimental design skills through personalized mentoring and interactive activities. Recognizing that many early-career researchers in Human-Robot Interaction (HRI) come from technical disciplines with limited training in experimental design, WYSD provides a supportive environment where mentees receive structured, detailed feedback on their proposed studies from experienced HRI researchers. For HRI 2026, WYSD will expand to a full-day format to allow more in-depth mentoring and enhanced peer-to-peer engagement. In addition to individualized mentoring sessions, the workshop will feature mentee lightning talks, a free-form study design Q&A, mini discussions on key methodological topics, and collaborative activities such as "Create and Present a Custom Study" and "Networking Bingo". These sessions promote rigorous study design practices, cross-disciplinary exchange, and community building. By equipping researchers with the tools to conduct robust and socially responsible user studies, WYSD directly contributes to the development of safer, more acceptable, accessible, and impactful robotic systems for society. |
|
| Ngo, Thanh-Tung |
Thanh-Tung Ngo, Emma Murphy, and Robert J. Ross (Technological University Dublin, Dublin, Ireland) Effective communication is vital in healthcare, especially across language barriers, where non-verbal cues and gestures are critical. This paper presents a privacy-preserving vision-language framework for medical interpreter robots that detects specific speech acts (consent and instruction) and generates corresponding robotic gestures. Built on locally deployed open-source models, the system utilizes a Large Language Model (LLM) with few-shot prompting for intent detection. We also introduce a novel dataset of clinical conversations annotated for speech acts and paired with gesture clips. Our identification module achieved 0.90 accuracy, 0.93 weighted precision, and a 0.91 weighted F1-Score. Our approach significantly improves computational efficiency and, in user studies, outperforms the speech-gesture generation baseline in human-likeness while maintaining comparable appropriateness. |
|
| Nguyen, Chau |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Nguyen, Hung Khang |
Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. |
|
| Nguyen, Khang |
Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. |
|
| Nguyen, Minh Duc |
Minh Duc Nguyen and Daniel J. Rea (University of Manitoba, Winnipeg, Canada; University of New Brunswick, Fredericton, Canada) We explore how an anxious robot can foster prosocial responses in humans. We developed a multimodal anxiety expression on a rover robot to show that the perception of robot anxiety could induce key motivators of prosocial behavior such as empathy and compassion towards the robot. We found that our anxious expression elicited empathy and compassion towards the robot. Interestingly, we did not find a significant difference in actual helping behavior. Our qualitative results reveal that while the expression of the robot might lead to engagement, the appropriateness of them with respect to the context of interaction should also be considered. This demonstrates that negative emotional expressions, or at least robot-expressed anxiety can be leveraged to elicit empathy while underscoring the need for future work on the effects and design of negative emotions in HRI. |
|
| Nguyen, Thao |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Nguyen, Thuc Anh |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Nguyen, Vinh |
Medhavi Kamran, Snehesh Shrestha, and Vinh Nguyen (Michigan Technological University, Houghton, USA; University of Maryland College Park, College Park, USA) Augmented Reality (AR) is often promoted as a solution to the cognitive and physical demands of traditional Teach Pendant (TP) programming for collaborative robots. Although prior work has suggested advantages of the AR interface, many evaluations have been limited in scope and may not fully represent the complexities of real-world manufacturing tasks. This study compares the performance of an AR interface to that of a standard TP interface for manufacturing assembly tasks of varying difficulty. In a between-groups study, one group of operators completed standard- ized assembly tasks using the TP interface, while a separate group used the AR interface instead. We collected broad set of metrics, including task completion time, task success, physical exertion, and measured cognitive workload (NASA-TLX). The analysis showed that participants achieved higher success rates on the 16 mm rectangular peg task and waterproof connector tasks when using AR. They also completed the 12 mm circular peg task significantly faster. Although AR did not reduce cognitive workload relative to TP, these findings suggested that AR may be beneficial for tasks that required significant mental interpretation and offered little advantage for components with non-intuitive geometry. Overall, the results challenged the common assumption that AR universally outperforms traditional programming interfaces in manufacturing tasks. Instead, AR performance appears to be task-dependent and possibly influenced by factors such as task complexity. |
|
| Nichols, Eric |
Eric Nichols, Miguel Mejia Tobar, and Randy Gomez (Honda Research Institute Japan, Wakoshi, Japan; Honda Research Institute Japan, Wako, Japan) Lifelike expressive behavior by social robots requires seamless coordination of facial expressions, body language, and tone of voice–all semantically aligned with speech content. While prior work has explored co-speech gesture generation, coordinating multiple expressive channels from a single semantic analysis remains under-explored. To address this gap, we propose holistic LLM-based generation, where an LLM analyzes robot dialog and generates synchronized behavior timelines that align vocal delivery and physical expression by directly inferring from speech semantics. In a pilot study on the tabletop robot Haru (N=23), 70% of participants preferred this approach over a heuristic baseline, characterizing it as more ”natural” and ”human-like”, with preliminary trends toward improved perceived agency (d=0.33, p=.128) and animacy (d=0.27, p=.212). However, qualitative analysis reveals a continuum of desired expression varying in frequency and intensity, with excessive expression triggering negative reactions. Navigating this design space presents new challenges for expressive robot behavior generation. |
|
| Niehues, Jan |
Irina Rudenko, Utku Norman, Lukas Hilgert, Jan Niehues, and Barbara Bruno (KIT, Karlsruhe, Germany) Large Language Models (LLMs) hold significant promise for enhancing Child–Robot Interaction (CRI), offering advanced conversational skills and adaptability to the diverse abilities, requests and needs of young children. Little attention, however, has been paid to evaluating the age and developmental appropriateness of LLMs. This paper brings together experts in psychology, social robotics and LLMs to define metrics for the validation of LLMs for child–robot interaction. |
|
| Nielsen, Mie Grøftehave |
Mie Grøftehave Nielsen, Andreas Juul Jespersen, and Louise Brønderup Frederiksen (Aarhus University, Aarhus, Denmark) This paper presents The Beckoning Bowl, a shape-changing, artifact- inspired robot designed to create a sense of welcome for people living alone. The interactive key bowl uses soft robotics to mimic abstract body language, offering a subtle social moment during the routine act of placing keys when arriving home. A section of the bowl lowers as if beckoning and then returns to its original shape with expressions of joy or disappointment depending on the user’s response. By designing interactions that make users feel noticed and invited, The Beckoning Bowl explores how socially aware home robots might help counter loneliness. |
|
| Nießen, Inga Luisa |
Anna M. H. Abrams, Inga Luisa Nießen, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) As robots increasingly appear in social settings, it is unclear whether groups that include robots are perceived as coherent social entities. This study examined whether groups including robots are judged as less entitative (“groupy”) than all-human groups. In a vignette-based online experiment (N = 160), participants rated eleven group scenarios (e.g., co-workers or musicians) on eight entitativity dimensions (e.g., similarity or interaction), with group composition manipulated between subjects (all-human, human–robot, text-only). Results showed strong effects of group scenario but minimal effects of group composition: human–robot groups were generally perceived as equally entitative as all-human groups. Only similarity differed, with human–robot groups rated less similar in select scenarios, indicating the importance of similarity in outer appearance in the perception of a group's coherence. Overall, the presence of robots did not reduce perceived group entitativity, suggesting that group type matters more than group composition. |
|
| Nisi, Valentina |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Nissen, Bettina |
Mingke Wang, Yixun Li, Bettina Nissen, and Rebecca Stewart (Imperial College London, London, UK; University of Edinburgh, Edinburgh, UK) MenstaRay is a soft knit robotic interface designed to explore how tactile actuation can support somatosensory communication of menstrual experiences. The prototype was created using a fabrication method for knit-integrated soft wearable robotics with two core structural elements: (1) an extensible EcoFlex 00-10 silicone cavity containing internal air chambers and (2) a strain-limiting textile layer knitted with Spandex Super Stretch Yarn (81% nylon, 19% elastane). This configuration enables regulated inflation patterns that preserve the softness of textiles while providing targeted haptic feedback that is suitable for intimate, safe, and therapeutically appropriate interactions. Through a series of workshops, we investigated and evaluated how these dynamic tactile behaviours shaped participants' embodied reflections on menstrual sensations. This work contributes to human robotic interaction by introducing MenstaRay, a novel artifact coupled with textile-integrated actuation that can externalize intimate bodily sensations and foster new modes of communicating, reflecting on and representing menstrual experiences through wearable interfaces. |
|
| Norman, Oscar |
Emma Minter, Robert Tankard, Oscar Norman, and Janie Busby Grant (University of Canberra, Canberra, Australia) Extensive research has investigated the human tendency to anthropomorphize artificial agents by attributing human-like traits to these systems. Sociality motivation, the desire for social connection, has been proposed to be a key psychological determinant for anthropomorphism. Sociality motivation can be operationalized in a range of dispositional, developmental, and cultural facets, but it is currently unclear how these factors contribute collectively and independently to predicting an individual’s tendency towards anthropomorphism. This online study (N = 164) assessed the relationship between different facets of sociality motivation and four dimensions of anthropomorphism of a social robot, using videos of a robot completing a game alone and with human and robot partners. Respondents who reported more collectivist cultural views were more likely to attribute higher agency, sociability, and disturbance to the robot. Those who reported higher attachment anxiety scores also attributed greater agency and sociability. Previous research has focused primarily on dispositional indicators of anthropomorphism, however the current study suggests that cultural determinants may be stronger predictors of anthropomorphic tendencies and should be a focus of further research. |
|
| Norman, Utku |
Arjita Mital, Felix Gnisa, Utku Norman, and Nora Weinberger (KIT, Karlsruhe, Germany) This Late Breaking Work presents a low-threshold, unsupervised public exhibit designed to explore how non-expert audiences imagine and negotiate future human–robot interactions in ethically charged everyday situations. The exhibit, installed in Karlsruhe, Germany, invited participants to engage with four dilemma-based scenarios where participants were prompted to decide how a social robot should act confronting questions of moral delegation and machine agency. The activity generated rich, situated reflections on responsibility, safety, care, and the limits of automation. Findings reveal context-dependent expectations that balance efficiency against dignity, human judgment, and relational preservation, shaped by perceived stakes, social context, and the specific embodiment of the robot involved. Through this we demonstrate how minimally supervised participatory formats can surface normative expectations and support inclusive, responsible robot design. Irina Rudenko, Utku Norman, Lukas Hilgert, Jan Niehues, and Barbara Bruno (KIT, Karlsruhe, Germany) Large Language Models (LLMs) hold significant promise for enhancing Child–Robot Interaction (CRI), offering advanced conversational skills and adaptability to the diverse abilities, requests and needs of young children. Little attention, however, has been paid to evaluating the age and developmental appropriateness of LLMs. This paper brings together experts in psychology, social robotics and LLMs to define metrics for the validation of LLMs for child–robot interaction. Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Novitzky, Michael |
Jennifer S. Kay, Tyler Errico, Audrey L. Aldridge, John James, and Michael Novitzky (Rowan University, Glassboro, USA; USA Military Academy at West Point, West Point, USA) Effective human-robot teaming in highly dynamic environments, such as emergency response and military missions, requires tools that support planning, coordination, and adaptive decision-making without imposing excessive cognitive load. This paper introduces PETAAR, the Planning, Execution, to After-Action Review framework that seamlessly integrates autonomous unmanned vehicles (UxVs) into Android Team Awareness Kit (ATAK), a widely adopted situational awareness platform. PETAAR leverages ATAK's geospatial visualization and human team collaboration while adding features for autonomous behavior management, operator feedback, and real-time interaction with UxVs. Its most novel contribution is enabling digital mission plans, created using standard mission graphics, to be interpreted and executed by unmanned systems, bridging the gap between human planning, robotic action, and shared understanding among all teammates (human and autonomous). Results from this work inform best practices for integrating autonomy into human-robot teams across diverse operational contexts. |
|
| Obaid, Mohammad |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Obata, Marina |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Ochiai, Yoichi |
Takahito Murakami, Maya Grace Torii, Shuka Koseki, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan; University of Tokyo, Hongo, Japan) We address a mismatch between how care information is provided and accessed. Explanations about procedures, routines, and self-management are delivered at fixed times in dense formats, leading patients to concentrate questions into nurse encounters and increasing workload. We frame this as a problem of bidirectional mediation and propose Suzume-chan, a small “Pet-as-a-Friend” plush agent that serves as an embodied information hub. Patients can speak to Suzume-chan without operating devices to receive on-demand explanations and reminders, while nurses obtain compact, nursing-relevant records. Suzume-chan runs entirely on a local network using automatic speech recognition, a local language model, retrieval-augmented generation, and text-to-speech. A workshop-style proof-of-concept highlighted embodiment, latency, and trust as key considerations for clinical use. Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| O’Connell, Amy |
Amy O’Connell (University of Southern California, Los Angeles, USA) In-home socially assistive robots (SARs) can provide daily assistance to support neurodivergent young adults, enabling increased independence and autonomy. My research investigates how robots can facilitate body doubling, a common practice among individuals with ADHD that involves having another person present to make it easier to start and complete tasks. This doctoral work leverages a modular, low-cost robot platform to design and validate an in-home body double robot. We first conducted a three-week in-home user study to validate that college students with ADHD find robot body doubles useful and to gather initial feedback on the robot's design and functionality. We then conducted a follow-up study in an on-campus learning center to understand how users personalized the robot's behavior during schoolwork sessions. This work is an initial step towards personalized study companion robots to support executive functioning among students with ADHD. |
|
| Oh, Cailyn A. |
Keuntae Kim, Cailyn A. Oh, Andrew Park, and Chung Hyuk Park (George Washington University, Washington, USA; Thomas Jefferson High School for Science and Technology, Alexandria, USA; Poolesville High School, Poolesville, USA) Joint attention is a core component of social communication and is frequently impaired in individuals with autism. This work presents a platform validation of an emotionally expressive quadruped robot dog with a custom pan--tilt head and on-device facial emotion recognition, and asks participants whether emotion-driven reactions make joint-attention cues clearer and more engaging than gaze-only behavior. In a within-subjects pilot with six neurotypical adults, we compared (A) face tracking only and (B) face tracking plus emotion recognition and empathetic reactions. Participants generally found the robot's directional cues easy to interpret, reported effective emotional contagion, and expressed strong willingness to interact again, despite low perceived realism. Perceived safety/comfort was mixed for some users, and subjective "shared attention" was inconsistent across participants, suggesting a need for smoother and more predictable gaze and motion timing. These early results help surface design constraints and failure modes before future studies with autistic participants. |
|
| Oh, Ji-Heon |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Okadome, Yuya |
Yuya Okadome and Yutaka Nakamura (Tokyo University of Science, Katsushika-ku, Japan; RIKEN, Sorakugun, Japan) In this study, we propose a motion generation model based on graph attention networks and a diffusion probabilistic model for modeling back-channel gestures in dyadic conversation. Back-channel gestures, which include unconscious behaviors like nodding and body shifts, are a crucial component for achieving natural conversational agents. Our proposed method utilizes a graph attention network to mix the related information between the two participants’ behaviors. This approach explicitly handles the inter-feature interaction, thereby incorporating the influence of the partner’s behavior into a participant’s generated motion. We collected over 10 hours of dialogue data to train the proposed method and verify its motion generation performance. Experimental results showed that the use of a graph attention improved metrics such as the Fréchet Inception Distance. This suggests that the explicit consideration of the conversation partner’s behavior is important for modeling conversation behaviors. |
|
| Olivares-Alarcos, Alberto |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. |
|
| Ong, Chalmers |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Onnasch, Linda |
Eileen Roesler and Linda Onnasch (George Mason University, Fairfax, USA; TU Berlin, Berlin, Germany) Robots that resemble pets or animated characters typically aim to invite attributions of lifelikeness to engage users. Yet, despite strong initial engagement, many robots fail to sustain interest over time. This study investigated whether a robot’s independent activity can increase engagement, social presence, intention to use, and distraction. In a laboratory experiment, 104 participants worked on a cover task next to Cozmo, which was either active (switched on and exploring) or passive (switched off). During breaks, Cozmo was switched on in both conditions and participants interacted with the robot. Participants perceived the active robot as significantly more autonomous. Although self-reported engagement, social presence and intention to use did not differ, more participants voluntarily continued to play with the active robot, indicating higher behavioral engagement. The active robot also elicited greater perceived distraction, though cover task performance was unaffected. This pattern of engagement and distraction, familiar from human–animal interaction, warrants further investigation in human–robot interaction. Eileen Roesler, Maris Heuring, and Linda Onnasch (George Mason University, Fairfax, USA; TU Berlin, Berlin, Germany) Robots often feature anthropomorphic designs to increase acceptance, although this is not always effective. Previous research suggests that anthropomorphic features are preferred in social settings, whereas technical designs are preferred in industrial contexts. This study examined how task domain and sociability shape these preferences. In an online study, participants chose between robots with low or medium anthropomorphic appearance for tasks in social or industrial contexts, with high or low sociability. The results showed that industrial tasks favored low-anthropomorphic robots regardless of sociability, while sociability influenced preferences in social tasks. We also examined possible gender attributions via names and pronouns, considering the gender stereotypes linked to different domains. Overall, robots were ascribed functional terms rather than gendered, although male bias emerged for gendered robots in industrial contexts. These findings demonstrate that task domain and sociability influence design preferences and reveal subtle gender attributions even for gender-neutral looking robots. |
|
| Örtegren, Joachim |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. |
|
| Oudheusden, Just |
Elitza Marinova, Pieter Ruijs, Just Oudheusden, Veerle Hobbelink, and Matthijs Smakman (HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Children with Attention Deficit Hyperactivity Disorder (cADHD) often struggle with completing daily tasks and routines, yet technological support in the home environment remains limited. This exploratory study examines the potential of social robots to assist cADHD with Instrumental Activities of Daily Living (IADLs). Nine experts were interviewed to identify design requirements, followed by a five-day in-home deployment with five families. Parents and children reported that the robot effectively provided reminders and task instructions, improved focus and independence, and reduced caregiving demands. While families expressed interest in continued use, they emphasized the need for greater reliability and adaptability. These findings highlight the promise of social robots in supporting cADHD at home and offer valuable directions for future research and development. |
|
| Pahk, Ki Joo |
Ismael Espinoza, Yong-Hyeok Choi, Kangsan Lee, Ji-Heon Oh, Ki Joo Pahk, and Tae-Seong Kim (Kyung Hee University, Yongin, Republic of Korea) We present a Real2Sim2Real system with a pair of UR3 arms equipped with Allegro 4-finger hands for bimanual, long-horizon R2R2H (Robot-to-Robot-to-Human) interaction tasks. Our system is composed of three main modules: (1) Real2Sim module to extract observation data and collect expert demonstrations from the real hardware setup; (2) Temporal-Context Transformer (TCT) policy module that train the policy in simulation using expert demonstrations to preserve human motion trajectories; (3) Sim2Real module to transfer the trained policy from simulation to hardware by leveraging domain randomization and real-time object detection. We have evaluated our system with two R2R2H tasks: (1) Task #1: an R2R2H tube handover and (2) Task #2: an R2R cup flipping and placing. Our presented system has completed the tasks, achieving success rates of 82.86% in simulation and 77.14% in hardware for Task #1, and 57.14% and 51.43% for Task #2. Demonstrating the feasibility of our system in real hardware. |
|
| Paléologue, Victor |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Palinko, Oskar |
Pol Barrera Valls, Patrick Vogelius, Tobias Florian von Arenstorff, Matouš Jelínek, and Oskar Palinko (University of Southern Denmark, Odense, Denmark; University of Southern Denmark, Sønderborg, Denmark) The development of humanoid robots has experienced a sudden acceleration during the last years, due to the large advancements made in actuation technology, generative AI and computer vision. The design of humanoid robots makes them useful in scenarios where many different tasks must be achieved, and humans are present. Furthermore, their resemblance to humans opens new ways of communication when compared to traditional robots. However, humanoid robots may find themselves in a situation where human assistance is required, e.g. due to limitations in their sensing and movement capabilities. As such, different help-seeking strategies and their effectiveness need to be explored. This article compares the effect of inducing empathy and guilt in humans as means to request help after a mistake made by a robot. An in the wild experiment conducted between subjects was performed in the University of Southern Denmark (SDU) with a total of 123 participants across 3 scenarios of help-seeking strategies, described as: distressed, sarcastic, and neutral. The results showed a statistical difference between the strategies, proving that using the concepts of empathy and guilt elicitation with robots has the potential to improve human-humanoid collaboration. |
|
| Pan, Jia |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Pan, Xiang |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Papachristos, Eleftherios |
Vedran Simic, Eleftherios Papachristos, Ole Andreas Alsos, and Taufik Akbar Sitompul (NTNU, Trondheim, Norway; NTNU, Gjøvik, Norway) Inspection and maintenance robotics are rapidly entering industrial operations, yet the transfer of Human-Robot Interaction (HRI) research into commercial practices remains limited. To characterize this gap, we present situated qualitative fieldwork with 41 exhibitors at a major industry-only conference, analyzing HRI discourse and interaction design priorities. Our findings reveal an industry driven by a reliability-first mindset that focuses on familiar, well-established interaction approaches. We identify three challenges for HRI: (1) trust practices that prioritize familiarity over usability, (2) design aspirations for broad accessibility that still require expert operational skill, and (3) multi-operator workflows incompatible with single-user HRI assumptions. We argue that, as hardware platforms mature, closing the academic-industry gap requires HRI to shift from single-user autonomy research toward frameworks supporting collaborative, safety-critical operations. This paper provides an empirical snapshot of industry perceptions of HRI and highlights where academic research could better align with industry practice. |
|
| Parikh, Hanna |
Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. |
|
| Park, Andrew |
Keuntae Kim, Cailyn A. Oh, Andrew Park, and Chung Hyuk Park (George Washington University, Washington, USA; Thomas Jefferson High School for Science and Technology, Alexandria, USA; Poolesville High School, Poolesville, USA) Joint attention is a core component of social communication and is frequently impaired in individuals with autism. This work presents a platform validation of an emotionally expressive quadruped robot dog with a custom pan--tilt head and on-device facial emotion recognition, and asks participants whether emotion-driven reactions make joint-attention cues clearer and more engaging than gaze-only behavior. In a within-subjects pilot with six neurotypical adults, we compared (A) face tracking only and (B) face tracking plus emotion recognition and empathetic reactions. Participants generally found the robot's directional cues easy to interpret, reported effective emotional contagion, and expressed strong willingness to interact again, despite low perceived realism. Perceived safety/comfort was mixed for some users, and subjective "shared attention" was inconsistent across participants, suggesting a need for smoother and more predictable gaze and motion timing. These early results help surface design constraints and failure modes before future studies with autistic participants. |
|
| Park, Cheonshu |
Cheonshu Park (ETRI, Dajeon, Republic of Korea) This paper presents a pilot study examining how a mobile home robot can support older adults through plant care activities combined with conversational emotional support. The system integrates autonomous navigation, vision-based plant health assessment, and large language model-driven dialogue to address participants’ mood, cognitive concerns, and daily living activities over a three-week intervention. Five community-dwelling older adults (aged 65+) participated in weekly sessions at a living lab environment, where the robot guided plant care tasks and engaged in structured conversations across four domains: depression screening, cognitive self-assessment, instrumental activities of daily living, and personal preferences. Standardized questionnaires administered after each of the three sessions measured cognitive function, technology acceptance, daily vitality, and psychological stability. Friedman tests across all three sessions revealed statistically significant improvements in psychological stability (χ2(2) = 7.60, p = .022) and robot acceptance (χ2(2) = 8.40, p = .015). The study demonstrates the technical feasibility and preliminary evidence of deploying such services with older adults and identifies key considerations for scaling the intervention. |
|
| Park, Chung Hyuk |
Keuntae Kim, Cailyn A. Oh, Andrew Park, and Chung Hyuk Park (George Washington University, Washington, USA; Thomas Jefferson High School for Science and Technology, Alexandria, USA; Poolesville High School, Poolesville, USA) Joint attention is a core component of social communication and is frequently impaired in individuals with autism. This work presents a platform validation of an emotionally expressive quadruped robot dog with a custom pan--tilt head and on-device facial emotion recognition, and asks participants whether emotion-driven reactions make joint-attention cues clearer and more engaging than gaze-only behavior. In a within-subjects pilot with six neurotypical adults, we compared (A) face tracking only and (B) face tracking plus emotion recognition and empathetic reactions. Participants generally found the robot's directional cues easy to interpret, reported effective emotional contagion, and expressed strong willingness to interact again, despite low perceived realism. Perceived safety/comfort was mixed for some users, and subjective "shared attention" was inconsistent across participants, suggesting a need for smoother and more predictable gaze and motion timing. These early results help surface design constraints and failure modes before future studies with autistic participants. |
|
| Park, Gyuhee |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Park, SoYoon |
SoYoon Park, Eunsun Jung, KiHyun Lee, Dokshin Lim, and Kyung Yun Choi (Hongik University, Seoul, Republic of Korea) Inspired by the playful, attention-seeking paw gestures of cats, we present PAWSE, a laptop-peripheral robot that encourages short fidgeting-based micro-breaks during digital work. PAWSE integrates a cat-paw-inspired robotic arm with a web-based timer that prompts brief tactile interaction during scheduled breaks. We conducted a within-subjects study comparing three conditions--no break, passive break, and active (PAWSE fidgeting-based) break--using a 2-back task and subjective workload measures (NASA-TLX). Results showed differences in post-task accuracy across conditions, with the highest accuracy observed in the active break condition. Reaction time remained largely comparable. Workload measures indicated reduced mental demand and frustration during rest conditions, with the active break providing the most favorable subjective experience. These preliminary findings offer insight into how fidgeting-based micro-breaks may fit within focused digital work and inform the design of future tactile micro-break systems. |
|
| Park, Sun |
Sun Park (University College Dublin, Dublin, Ireland) Robotics is increasingly integrated into heritage management, extending well-established applications in manufacturing and social care. Existing Human-Robot-Interaction (HRI) studies in the heritage sector predominantly emphasise the technological innovation of individual robotic systems and their impact on visitor experience. This paper shifts attention toward a heritage-value-oriented framework of HRI to address the empowerment of beneficiaries in the heritage sector. Drawing on the concept of the ‘Common Heritage of Mankind’, ‘Human-Robot-Management for heritage (HRMH)’ is defined as the human-robot delivery and interpretation of heritage data and information to transmit heritage values. The paper examines how different forms of heritage data and information are (de)centralised through HRMH processes and its implications for an inclusive HRMH design aligned with the idea of the Common Heritage of Mankind. Following recommendations for participatory HRMH designs for outer space heritage, and cultural sites and practices, the paper concludes by raising questions about recognising and preserving robotics and robotic heritage as the Common Heritage of Mankind. |
|
| Parreira, Maria Teresa |
Maria Teresa Parreira, Hongjin Quan, Adolfo G. Ramirez-Aristizabal, and Wendy Ju (Cornell University, New York, USA; Cornell Tech, New York, USA; Accenture, San Francisco, USA) Anticipatory reasoning – predicting whether situations will resolve positively or negatively by interpreting contextual cues – is crucial for robots operating in human environments. This exploratory study evaluates whether Vision Language Models (VLMs) possess such predictive capabilities. First, we test VLMs on direct outcome prediction by inputting videos of human and robot scenarios with outcomes removed, asking the models to predict whether situations will end well or poorly. Second, we introduce a novel evaluation of anticipatory social intelligence: can VLMs predict outcomes by analyzing human facial reactions of people watching these scenarios? We test multiple VLMs and compare their predictions against both true outcomes and judgments from 29 human participants. The best-performing VLM (Gemini 2.0 Flash) achieved 70.0% accuracy in predicting true outcomes, outperforming the average individual human (62.1% ± 6.2%). Agreement with individual human judgments ranged from 44.4% to 69.7%. Critically, VLMs struggled to predict outcomes by analyzing human facial reactions, suggesting limitations in leveraging social cues. These preliminary findings indicate that while VLMs show promise for anticipatory reasoning, their performance is sensitive to model and prompt selection, warranting further investigation for applications in HRI. Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Parush, Avi |
Adi Manor, Hadas Erel, and Avi Parush (Technion - Israel Institute of Technology, Haifa, Israel; Reichman University, Herzliya, Israel) Affective and cognitive trust are fundamental in human-robot interaction, yet they may develop through different mechanisms. Research shows that robot attentiveness compensates for poor performance in building cognitive trust, but performance cannot reciprocally compensate for lack of attentiveness in building affective trust. We conducted secondary analysis of three studies examining shared variance between social perception dimensions (warmth, competence, social presence) and trust types using canonical correlation analysis. In robot's attentiveness contexts, warmth and competence shared substantial variance with both affective trust (67%, 65%) and cognitive trust, associated with dual relationships. In robot competence contexts, competence shared strong variance with cognitive trust (74%) but warmth showed weaker relationships (38%), creating a single connection. In one study, social presence shared higher variance with affective trust (66%) than cognitive trust (35%). These asymmetric variance patterns may imply asymmetric compensation mechanism, with important implications for designing robots where affective behaviors provide resilience despite inevitable performance failures. Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Paterson, Mark |
Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. |
|
| Patiño, Alejandra |
Alejandra Patiño, Emerson E. Mejia-Trebejo, Macarena Vilca, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Peru recycles less than 2% of waste despite high potential. Current solutions fail at two extremes: passive bins cause confusion, while automated bins create dependency. We introduce PERI (Peer Educational Recycling Instructor), a social robot designed not just to sort but to teach. PERI uses a YOLOv8-based vision module to validate user decisions in real-time. This paper demonstrates PERI’s deployment with over 500 interactions. Our results show that 80% of users corrected their sorting mistakes through a combination of PERI’s feedback and facilitator mediation, transforming technical limitations into educational moments and empowering citizens as agents of change. Emerson E. Mejia-Trebejo, Macarena Vilca, Alejandra Patiño, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Passive infrastructure fails to bridge the recycling "Intention-Action" gap. This paper presents Peri V2, a "Symbiotic Retrofit" kit that transforms standard 120L bins into intelligent pedagogical agents without structural waste. The architecture deploys edge-based per- ception to execute a novel behavioral loop: a Temporal Intention Filter (5s heuristic) to parse social signals, Just-in-Time Associative Feedback for cognitive reinforcement, and Ludic Generalization challenges to verify learning transfer. A preliminary "in-the-wild" pilot (𝑁 ≈ 200) demonstrated the operational feasibility of the intention filter in noisy environments. Furthermore, qualitative feedback from recurring users (𝑁 ≈ 15) suggests that replacing voice interactions with visual cues improves acceptance by mini- mizing the social pressure of public disposal. Peri V2 proposes a scalable model for frugal HRI, shifting the focus from automated cities to empowered "Smart Citizens." |
|
| Pekçetin, Tuğçe Nur |
Umur Yıldız, Berk Yüce, Ayaz Karadağ, Tuğçe Nur Pekçetin, and Burcu A. Urgen (Bilkent University, Ankara, Türkiye) Large Language Models(LLMs) introduce powerful new capabilities for social robots, yet their black-box nature creates a barrier to trust. Transparency is already established as important for humanrobot trust, but how to convey LLM intentions and reasoning in real-time, embodied interaction remains poorly understood. We developed a task-level mechanistic transparency system for an LLM-powered Pepper robot that displays its internal reasoning process dynamically on the robot’s tablet during interaction. In a mixed-design study, participants engaged with Pepper across four trust-relevant tasks in either a Transparency-ON condition or Transparency-OFF condition. Transparency produced significantly greater trust growth than opacity, and a substantial increase in perceived reliability, indicating that transparency remains a key design element for trust calibration in LLM-driven human-robot interaction. |
|
| Pelikan, Hannah |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. Hannah Pelikan, Karin Stendahl, Franziska Babel, Ola Johansson, and Erik Frisk (Linköping University, Linköping, Sweden) Mobile robots must behave intelligibly to be acceptable in public spaces. Designing social navigation algorithms for delivery robots requires different areas of expertise. The paper reports on an interdisciplinary collaboration between two ethnomethodological conversation analysts, a human factors psychologist, and two motion planning engineers. Based on video recordings of a robot moving among people, the team developed and implemented different sound and movement designs, which were iteratively tested in real-world deployments. This work contributes insights on how interdisciplinary collaboration can be facilitated in the area of social robot navigation and an iterative process for designing robot sound and movement grounded in real-world observations. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Pelikan, Hannah R.M. |
Sofia Thunberg, Mafalda Gamboa, Meagan B. Loerakker, Patricia Alves-Oliveira, and Hannah R.M. Pelikan (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; TU Wien, Vienna, Austria; University of Michigan at Ann Arbor, Ann Arbor, USA; Linköping University, Linköping, Sweden) In the Human-Robot Interaction community, Wizard of Oz (WoZ) is a commonly employed method where researchers aim to study user perceptions of robot technologies regardless of technical limitations. Despite the continued usage of WoZ, questions concerning ethical tensions and effects on the wizard remain. For instance, how do wizards experience interacting through technology, given the different roles and characters to enact, and the different environments to situate themselves in. In addition, the wizard's experiences and affects on results, continues to be under-explored. The goal of this workshop is to surface ethical, practical, methodological, personal, and philosophical tensions in the WoZ method. Though a collaborative session, we seek to develop a deeper understanding of what it means to be a wizard through eliciting first-person experiences of researchers. As a result, we hope to formulate guidelines for future wizards. |
|
| Peña, Denis |
Alejandra Patiño, Emerson E. Mejia-Trebejo, Macarena Vilca, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Peru recycles less than 2% of waste despite high potential. Current solutions fail at two extremes: passive bins cause confusion, while automated bins create dependency. We introduce PERI (Peer Educational Recycling Instructor), a social robot designed not just to sort but to teach. PERI uses a YOLOv8-based vision module to validate user decisions in real-time. This paper demonstrates PERI’s deployment with over 500 interactions. Our results show that 80% of users corrected their sorting mistakes through a combination of PERI’s feedback and facilitator mediation, transforming technical limitations into educational moments and empowering citizens as agents of change. Emerson E. Mejia-Trebejo, Macarena Vilca, Alejandra Patiño, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Passive infrastructure fails to bridge the recycling "Intention-Action" gap. This paper presents Peri V2, a "Symbiotic Retrofit" kit that transforms standard 120L bins into intelligent pedagogical agents without structural waste. The architecture deploys edge-based per- ception to execute a novel behavioral loop: a Temporal Intention Filter (5s heuristic) to parse social signals, Just-in-Time Associative Feedback for cognitive reinforcement, and Ludic Generalization challenges to verify learning transfer. A preliminary "in-the-wild" pilot (𝑁 ≈ 200) demonstrated the operational feasibility of the intention filter in noisy environments. Furthermore, qualitative feedback from recurring users (𝑁 ≈ 15) suggests that replacing voice interactions with visual cues improves acceptance by mini- mizing the social pressure of public disposal. Peri V2 proposes a scalable model for frugal HRI, shifting the focus from automated cities to empowered "Smart Citizens." |
|
| Pendleton-Jullian, Ann M. |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Peng, Haonan |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Peng, Haopeng |
Haopeng Peng, Ruilin Zhang, Yuxin Liang, and Liyang Fan (Tsinghua University, Beijing, China) In social interactions, individuals often conceal their true feelings for various reasons. This phenomenon of actively adjusting social strategies based on the social context is referred to as the "social performance mechanism". Inspired by this mechanism, we propose a wearable robot "THIRD EXPRESSION", designed to assist individuals in expressing real emotions and states that are difficult to verbalize. Through robot design, this study aims to enhance the wearer’s ability to actively define and convey their emotions in real-time. The system integrates multimodal sensors (speech, environment, heart rate, etc.) and large model reasoning to generate dynamic visual feedback. The pilot study has been validated that the robot design enhances the sense of boundary control and interaction satisfaction, while reducing social anxiety levels. |
|
| Peng, Jing Qi |
Xiaoyu Chang, Yanheng Li, XiaoKe Zeng, Jing Qi Peng, and Ray Lc (City University of Hong Kong, Hong Kong, China) Robots are increasingly designed to act autonomously, yet moments in which a robot overrides a user’s explicit choice raise fundamental questions about trust and social perception. This work investigates how a preference-violating override affects user trust, perceived competence, and interpretations of a robot’s intentions. In a beverage-delivery scenario, a robot either followed a user’s selected drink or replaced it with a healthier option without consent. Results show that the way an override is enacted and communicated consistently reduces trust and competence judgments, even when users acknowledge benevolent motivations. Participants interpreted the robot as more controlling and less aligned with their autonomy, revealing a social cost to such actions. This study contributes empirical evidence that preference-violating override behavior is socially consequential, shaping trust and core dimensions of user perception in embodied service interactions. |
|
| Pereira, Nathan |
Nathan Pereira and Yeganeh Madadi (Appalachian State University, Boone, USA) Large language models (LLMs) offer new opportunities to enhance human–robot interaction by enabling humanoid robots to engage in natural, context-aware dialogue. However, deploying LLMs on social robots operating in real-time environments remains challenging due to latency constraints, limited onboard hardware, and privacy considerations. This paper introduces a deployment-oriented benchmarking framework for evaluating open-source LLMs that are feasible for on-device execution on humanoid robots. We implement and analyze ten lightweight LLMs (≤2 billion parameters), using the Pepper robot as a representative use case in CS1/CS2 laboratory courses where the robot functions as a teaching assistant. The models were evaluated using four normalized metrics: instruction-following accuracy, conversational clarity, response latency, and on-device feasibility. Results identify clear trade-offs within the lightweight tier, emphasizing models that best balance responsiveness with instructional quality. This work provides a reproducible methodology and practical deployment guidelines for integrating LLM-driven instructional capabilities into humanoid robots to support more autonomous, student-centered learning in introductory computer science education. |
|
| Pérez-Aronsson, Anna |
Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. |
|
| Perugia, Giulia |
Fan Wang, Yuan Feng, Wijnand IJsselsteijn, and Giulia Perugia (Eindhoven University of Technology, Eindhoven, Netherlands; Northwestern Polytechnical University, Xi’an, China) People Living with Dementia (PLwD) require intensive emotional and physical support, and caregivers frequently struggle with exhaustion and distress. Social robots have been proposed as tools that could enhance socio-emotional well-being, yet many of their designs inherently involve deception, embedding cues that mislead PLwD about the nature and capabilities of the robot. Although Ethics of Technology and Human-Robot Interaction (HRI) explored the concept of Social Robotic Deception (SRD) and its implications, existing discussions remain largely theoretical and detached from the lived realities of dementia care. We know little about how caregivers see and envision the use of SRD in dementia care practice. To address this gap, we conducted two online focus groups with both formal and informal caregivers, with the aim of appraising caregivers' attitudes towards SRD and how they would implement or mediate deception in everyday practice. Critically, we focused on caregivers operating in China, a country of Confucian influence where family caregiving is regarded as a moral duty and leveraging institutional care is stigmatized. Our work contributes empirically grounded insights that highlight lived reality in dementia care shaped by culture for ethical SRD design. |
|
| Pettersson, Tobias |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Phillips, Elizabeth |
Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. Laura Saad, Eileen Roesler, Elizabeth Phillips, and Greg Trafton (Naval Research Laboratory, Washington, USA; George Mason University, Fairfax, USA) Subjective measurement scales are commonly employed in HRI research. We provide a half-day tutorial (4 hours) that aims to empower researchers with the tools to find appropriate scales for their research and assess their quality, confidently and efficiently. There are no prerequisites required for attendees. We aim to recruit researchers interested in using scales but who are unsure about how to pick which scale to use. The first part of the tutorial will teach attendees how to assess the quality of HRI scales. To accomplish this, we will review basic topics in psychometric theory and a guideline that outlines best practices in scale development and validation. In the second part, we will apply this guideline to two frequently used HRI scales: Godspeed and Robotic Social Attributes Scale (RoSAS). Attendees are also encouraged to bring scales they are interested in reviewing. The third part aims to help attendees find appropriate scales for their research. To accomplish this, we will review the HRI scale database: the first centralized online repository of HRI scales which contains over 50 of the most used HRI scales. These scales cover a wide array of topics of interest such as, trust, perceived agency, embodiment, danger, safety, and attitudes towards robots. Our goal for this tutorial is to promote active engagement from attendees throughout the session, ultimately striving to improve the quality and replicability of results in HRI studies. |
|
| Pigureddu, Linda |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. Nicole Covone, Linda Pigureddu, Margherita Attanasio, Cristina Gena, Berardina De Carolis, and Monica Mazza (University of L’Aquila, L’Aquila, Italy; University of Turin, Turin, Italy; University of Bari, Bari, Italy) The study proposed in this paper investigates whether the administration modality of the Raven’s Standard Progressive Matrices (RSPM) influences the performance and physiological responses of autistic individuals. Fourteen participants with an ASD diagnosis completed the test either in the traditional paper-and-pencil format or through the humanoid robot Pepper, which delivered a digital version of the assessment. Across both modalities, we analysed accuracy, reaction times, and physiological activation (heart rate), taking item complexity into account. Preliminary results indicate that robot-mediated administration is feasible and comparable to traditional testing, with meaningful differences emerging as task difficulty increases. In particular, the Pepper modality tended to show lower initial physiological activation and comparable performance across most item levels. These findings suggest that social robots may offer a viable and potentially more engaging alternative for administering cognitive assessments in autism, supporting structured, predictable, and less stressful testing contexts. |
|
| Ping, Chengliang |
Yan Xiang, Chengliang Ping, Mengyang Wang, and Mingming Fan (Hong Kong University of Science and Technology, Guangzhou, China) As reliance on desktop-based knowledge work platforms grows, maintaining sustained focus has become a critical challenge, and current tools still provide limited support for everyday attentional needs. Many digital aids remain tied to the screen and are experienced as intrusive or easy to ignore, whereas desktop robots offer situated, embodied forms of support in the same physical workspace as the computer. Yet it remains unclear how such robots should be designed to help people manage attention in study and work. To explore this, we conducted a participatory design study consisting of five workshops with adults who self-identified as needing support with focus. Participants reflected on their daily challenges and current coping strategies, then envisioned how a desktop robot could act, look, and be placed to support them. Our findings reveal diverse, context-dependent expectations around function, social role, and form, and outline directions for designing attention-supportive desktop robots for everyday work. |
|
| Pinto Bernal, Maria Jose |
Maria Jose Pinto Bernal and Tony Belpaeme (Ghent University, Ghent, Belgium) Large Vision–Language Models are increasingly used for visually grounded social dialogue, yet most systems assume that vision should be active continuously, adding computational load and increasing the risk of unnecessary or hallucinated descriptions. We present a multimodal architecture that treats vision as a selective, context-dependent resource. A lightweight vision-gating module triggers visual grounding only when a user utterance requires it, while a complementary ambient monitoring component detects gradual scene changes at a low frame rate. Both pathways contribute cues only when relevant, enabling the robot to use visual information meaningfully without overuse. A preliminary evaluation with 10 participants (≈ 95 minutes) shows that the gating mechanism identified vision-relevant turns with 93.4% accuracy, and that grounded descriptions aligned with the scene in 90.7% of cases. |
|
| Pinyomit, Phurinat |
Andrew Chen, Ju-Hung Chen, Phurinat Pinyomit, and Alexis E. Block (Case Western Reserve University, Cleveland, USA) RoboTales is a low-cost robotic storytelling system that animates narratives using expressive sock puppetry. Implemented autonomously on a Baxter robot as a test case, RoboTales synchronizes narration, gestures, and mouth movements to perform character-driven stories. In a pilot study, puppet-based storytelling outperformed a gesture-only mode, producing higher HRIES ratings and improved story recall, suggesting that embodied puppetry enhances engagement and narrative comprehension. Designed to be modular and platform-agnostic, RoboTales can be adapted to other manipulators and offers a screen-free alternative to passive media, supporting future deployment in child-centered learning environments. |
|
| Pischulti, Patrick Kenneth |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Pithayarungsarit, Pawinee |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. Pawinee Pithayarungsarit (George Mason University, Fairfax, USA) Scales are one of the most common assessment methods in human-robot interaction research. However, a scale is only useful if it has good psychometric properties. Crucially, several scales are not properly developed or validated, and some scales are modified without additional validation. This can compromise the utility of the scale and the validity of the results. In this work, I propose a novel, highly feasible way to validate the scale's factor structure using publicly available datasets, which allows factor analyses to be conducted on a large sample without new data collection. Moreover, I aim to provide an overview of how scales used in HRI research have been modified. Together, this work greatly contributes to the field of HRI by providing information regarding the scale's validity to ensure the quality of research findings. |
|
| Pizzuto, Gabriella |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Pokutta, Sebastian |
Nicolas Leins, Jana Gonnermann-Müller, Malte Teichmann, and Sebastian Pokutta (Zuse Institute Berlin, Berlin, Germany; Weizenbaum Institute, Berlin, Germany; University of Potsdam, Potsdam, Germany; TU Berlin, Berlin, Germany) Augmented Reality (AR) offers powerful visualization capabilities for industrial robot training, yet current interfaces remain predominantly static, failing to account for learners' diverse cognitive profiles. In this paper, we present an AR application for robot training and propose a multi-agent AI framework for future integration that bridges the gap between static visualization and pedagogical intelligence. We report on the evaluation of the baseline AR interface with 36 participants performing a robotic pick-and-place task. While overall usability was high, notable disparities in task duration and learner characteristics highlighted the necessity for dynamic adaptation. To address this, we propose a multi-agent framework that orchestrates multiple components to perform complex preprocessing of multimodal inputs (e.g., voice, physiology, robot data) and adapt the AR application to the learner's needs. By utilizing autonomous Large Language Model (LLM) agents, the proposed system would dynamically adapt the learning environment based on advanced LLM reasoning in real-time. |
|
| Pollmann, Kathrin |
Kathrin Pollmann, Selina Layer, Amelie Polosek, Boyu Xian, and Anna Vorreuther (Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany; University of Stuttgart, Stuttgart, Germany) This paper explores how adhesive signs on public robots can prevent robot bullying. Participants were presented with three different sign variants attached to a cleaning robot in a Virtual Reality scenario: informative (alluding to surveillance/ legal consequences), prompting (imperative to keep away from the robot), and feeling (emotional appeal) and reported their tenden-cies for anti-bullying behavior and perceptions of the robot. Eye tracking was used to measure visual attention. All signs elicited anti-bullying tendencies and were rated comprehensible. The robot with the feeling sign was perceived most human- and least tool-like, capable of emotions, and induced the highest amount of gaze fixations. The informative sign supported fast, low-effort compre-hension and reinforced a tool-like perception. Findings suggest adhesive signs are a viable, low-obtrusive preventive strategy and sign selection should be context-driven: informative for quick pass-by messaging, feeling for deeper engagement. |
|
| Polosek, Amelie |
Kathrin Pollmann, Selina Layer, Amelie Polosek, Boyu Xian, and Anna Vorreuther (Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany; University of Stuttgart, Stuttgart, Germany) This paper explores how adhesive signs on public robots can prevent robot bullying. Participants were presented with three different sign variants attached to a cleaning robot in a Virtual Reality scenario: informative (alluding to surveillance/ legal consequences), prompting (imperative to keep away from the robot), and feeling (emotional appeal) and reported their tenden-cies for anti-bullying behavior and perceptions of the robot. Eye tracking was used to measure visual attention. All signs elicited anti-bullying tendencies and were rated comprehensible. The robot with the feeling sign was perceived most human- and least tool-like, capable of emotions, and induced the highest amount of gaze fixations. The informative sign supported fast, low-effort compre-hension and reinforced a tool-like perception. Findings suggest adhesive signs are a viable, low-obtrusive preventive strategy and sign selection should be context-driven: informative for quick pass-by messaging, feeling for deeper engagement. |
|
| Pomarlan, Mihai |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Pon-Barry, Heather |
Heather Pon-Barry, Jasna Budhathoki, and Dushma Badr (Mount Holyoke College, South Hadley, USA; Columbia University, New York, USA) For social robots used in educational applications, such as learning companion robots, maintaining student engagement is critical. There is a need for such robots to estimate engagement in real-time. This study examines dialogue data between a Nao robot and middle school students interacting conversationally while solving math problems. We collect annotations of perceived engagement, seeking to characterize human perception of engagement with the robot. Because robots that perform real-time engagement tracking do not have consistent access to clear video and audio data, we analyze perception of engagement across varying modalities. Specifically, we compare three settings: full access to audiovisual data, access to only the video data, and access to only the audio data. Our results indicate that without access to audio data, perceptions of level of engagement are lower for low-engagement segments, and without access to video data perceptions are higher for high-engagement segments. |
|
| Porfirio, David |
Dakota Sullivan, David Porfirio, Bilge Mutlu, and Laura M. Hiatt (University of Wisconsin-Madison, Madison, USA; George Mason University, Fairfax, USA; US Naval Research Laboratory, Washington, USA) Robots are increasingly relied upon for task completion in privacy-critical human environments. In these environments, it is imperative that a robot's potentially sensitive goals remain obfuscated. To address this need, a substantial amount of literature has proposed methods for obfuscatory task planning. These works make many attempts to experimentally or analytically determine whether agents can conceal their goals from observers. While these works make guarantees that resulting plans will conceal an agent's goals, they are often only theoretical. Within this work, we develop three obfuscatory task planning strategies inspired by prior literature to evaluate with human observers (N = 160). Our preliminary results show that observers struggle to identify a robot's goals at similar levels regardless of whether obfuscatory or optimal task planning strategies are employed. These findings call into question the purported benefits of many obfuscatory task planning strategies. JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. |
|
| Pou, Bartomeu |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. Bartomeu Pou and Raquel Ros (IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain) Socially aware robots must interpret non-verbal cues such as gaze, gestures and pointing under strict computational constraints. We present a lightweight vision framework that extends ROS4HRI with hand and head gesture recognition, hybrid pointing estimation for short and long distances, and a multi-target visual engagement metric over both agents and objects. All components run in real time on embedded hardware and are validated through proof-of-concept experiments. |
|
| Prakash, Anshul |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Prendergast, J. Micah |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Prendes, Milagros |
Bar Itskovitch Kiner, Shani Batcir, Yigal Evron, Milagros Prendes, Avi Parush, and Arielle Fischer (Technion - Israel Institute of Technology, Haifa, Israel) Robotic exoskeletons provide individuals with spinal cord injury (SCI) the ability to regain mobility. However, successful ambulation relies heavily on the user's ability to synchronize their movements with the device. This study examines the synchronization between human and robotic exoskeleton movements across multiple training sessions using both biomechanical metrics and subjective reports. We analysed data from 12 healthy participants performing mobility tasks with a ReWalk exoskeleton over five sessions to characterize the motor learning process. Results indicate a significant 37.1% improvement in the temporal synchronization between the robot's hip and the user's thorax. Additionally, Spatial Consistency of the hip joint improved by 42.6%. These biomechanical gains were strongly supported by subjective user feedback, which showed a 53.5% increase in perceived synchronization (p < 0.01) and an 18% improvement in perceived ease of learning. These findings suggest that structured training facilitates active motor adaptation, leading to more stable and synchronized exoskeleton-assisted walking. |
|
| Prescott, Tony |
Marina Sarda Gou, Serena Marchesi, Agnieszka Wykowska, and Tony Prescott (University of Sheffield, Sheffield, UK; Italian Institute of Technology, Genoa, Italy) Understanding how people attribute awareness to robots is essential for developing socially and ethically aligned Human-Robot Interactions (HRI). This study presents the Italian validation of the Awareness Attribution Scale (AAS), an existing psychometric instrument designed to measure the attribution of awareness to artificial agents. The adaptation procedures (forward translation, native-speaker review, back-translation, and testing) were performed with the AAS. The final translated version was administered to Italian participants (N = 200) to rate different entities on perceived awareness. Analyses demonstrated good internal reliability of the Italian scale and expected attribution patterns across entities. These results provide evidence that the Italian AAS behaves consistently with the original English version, supporting its use in future cross-cultural research on awareness attribution. Furthermore, these findings advance cross-cultural knowledge of awareness attribution, a fundamental component of more inclusive settings. |
|
| Preston, Rhian C. |
Triniti Armstrong, Courtney J. Chavez, Rhian C. Preston, and Naomi T. Fitter (Oregon State University, Corvallis, USA) Prolonged computer use has become the norm for a wide variety of fields. The sedentary practices that often accompany this computer use can lead to a number of health challenges, from cardiovascular and musculoskeletal issues to ocular health problems. Past work by our research group took preliminary steps to address these issues by evaluating a socially assistive robot (SAR)-based break-taking system with no online learning abilities. Based on their initial findings, which showed the robot to effectively encourage break-taking behaviors during computer use and to be more engaging and enjoyable to use compared to a non-robotic alternative, we present methods for data collection in this current paper. Specifically, we aimed 1) to enhance the past SAR system by adding online Q-learning capabilities and 2) to evaluate the updated system's policy generation and how well the final policies aligned with our expectations from prior work. Our results show evidence that the system is successfully generating unique policies for each participant, although the limited match between the expected and resulting policies surprised us. Our work can help SAR researchers understand how to implement Q-learning when using sparse data. |
|
| Prinz, Lisa Marie |
Lisa Marie Prinz and Tintu Mathew (Fraunhofer FKIE, Bonn, Germany) Autonomous robots are increasingly deployed in sensitive domains, yet prevailing human-in/on/out-of-the-loop categorizations fail to capture the quality of human-robot interaction (HRI). Meaningful Human Control (MHC) has emerged as a guiding principle, but its measurement remains under-specified. This paper presents a systematic review and measurement guide for operationalizing MHC in HRI, mapping its core constructs-trust, involvement, and situation awareness (SA)-to standardized self-report instruments. We review standardized questionnaires and related methods and compare their validity, reliability, and suitability for HRI user interface (UI) evaluation. We found that trust is well supported by validated scales, notably the MDMT and Schaefer’s Trust Perception Scale-HRI, with Jian’s Trust in Automated Systems scale as a widely used alternative. Involvement is best assessed via the UES felt-involvement subscale, with PQ/ITQ as viable complements. For SA, SAGAT and SARS are well-established tools, though many SA tools lack validation for HRI contexts. We offer a guide to measure MHC in HRI via standardized instruments, enabling UI comparison and adherence assessment. This operationalization supports the establishment of MHC in HRI design for sensitive domains. |
|
| Prodinger, Birgit |
Stina Klein, Birgit Prodinger, Elisabeth André, Lars Mikelsons, and Nils Mandischer (University of Augsburg, Augsburg, Germany) Robots are becoming more prominent in assisting persons with disabilities (PwD). Whilst there is broad consensus that robots can assist in mitigating physical impairments, the extent to which they can facilitate social inclusion remains equivocal. In fact, the exposed status of assisted workers could likewise lead to reduced or increased perceived stigma by other workers. We present a vignette study on the perceived cognitive and behavioral stigma toward PwD in the workplace. We designed four experimental conditions depicting a coworker with an impairment in work scenarios: overburdened work, suitable work, and robot-assisted work only for the coworker, and an offer of robot-assisted work for everyone. Our results show that cognitive stigma is significantly reduced when the work task is adapted to the person's abilities or augmented by an assistive robot. In addition, offering robot-assisted work for everyone, in the sense of universal design, further reduces perceived cognitive stigma. Thus, we conclude that assistive robots reduce perceived cognitive stigma, thereby supporting the use of collaborative robots in work scenarios involving PwDs. |
|
| Pusceddu, Giulia |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Qayyum, Muhammad Ahmed |
Muhammad Ahmed Qayyum and Stacy A. Doore (Colby College, Waterville, USA) Independent mobility is central to daily life for blind and low-vision (BLV) individuals, yet existing mobility tools leave important gaps in situational awareness, obstacle detection, and environmental understanding. Legged robots such as Boston Dynamics' Spot offer a promising platform for mobility support, but effective use in everyday environments depends on accessible, user-centered interaction. This late-breaking report presents a voice-based interface (VBI) architecture for quadruped guide robots, grounded in prior work on accessibility, multimodal communication, and embodied large-model reasoning. |
|
| Qin, Weijie |
Weijie Qin, Qiyao Wang, Bingcen Gong, and Yijia Luo (Tsinghua University, Beijing, China) During dining in restaurants, oil splashes are readily appraised by users as a negative event. Critically, without timely intervention, the initial irritation can accumulate and evolve into a vicious cycle of escalating negativity. This reaction may not only impair the overall dining experience, but also dominate the user's cognitive focus and lead to lasting emotional distress. To address this, we present Seesoil—a desktop interactive robot based on the "Weak Robot" concept. Designed to resemble a condiment bottle, it blends naturally into the table setting. Rather than addressing the stain directly, Seesoil employs deliberately clumsy motions and voice interaction to guide users in reappraising the situation during the early stage of negative emotion generation. By redirecting attention towards a more positive interactive experience, it mitigates the accumulation of negative affect and serves as an emotional companion throughout the meal. |
|
| Quan, Hongjin |
Maria Teresa Parreira, Hongjin Quan, Adolfo G. Ramirez-Aristizabal, and Wendy Ju (Cornell University, New York, USA; Cornell Tech, New York, USA; Accenture, San Francisco, USA) Anticipatory reasoning – predicting whether situations will resolve positively or negatively by interpreting contextual cues – is crucial for robots operating in human environments. This exploratory study evaluates whether Vision Language Models (VLMs) possess such predictive capabilities. First, we test VLMs on direct outcome prediction by inputting videos of human and robot scenarios with outcomes removed, asking the models to predict whether situations will end well or poorly. Second, we introduce a novel evaluation of anticipatory social intelligence: can VLMs predict outcomes by analyzing human facial reactions of people watching these scenarios? We test multiple VLMs and compare their predictions against both true outcomes and judgments from 29 human participants. The best-performing VLM (Gemini 2.0 Flash) achieved 70.0% accuracy in predicting true outcomes, outperforming the average individual human (62.1% ± 6.2%). Agreement with individual human judgments ranged from 44.4% to 69.7%. Critically, VLMs struggled to predict outcomes by analyzing human facial reactions, suggesting limitations in leveraging social cues. These preliminary findings indicate that while VLMs show promise for anticipatory reasoning, their performance is sensitive to model and prompt selection, warranting further investigation for applications in HRI. |
|
| Raffa, Maria |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Rahimi Nohooji, Hamed |
Abolfazl Zaraki, Hamed Rahimi Nohooji, Maryam Banitalebi Dehkordi, and Holger Voos (University of Hertfordshire, Hatfield, UK; University of Luxembourg, Luxembourg, Luxembourg) This paper reframes shared autonomy as an interpretable interaction space centered on the human and bounded by safety. Building on this perspective, we introduce a Human-Centred Tri-Region Shared Autonomy Framework that organises interaction into three regions: Human-Led, Robot-Supported, and Safety-Intervention. The framework formalises how autonomy shifts as interaction conditions evolve, while an Interaction State Interpreter maps multimodal user and task observations to region-dependent behaviours. This structure enables autonomy transitions that remain explicit and behaviourally grounded across diverse human-robot interaction contexts, including physical collaboration, social engagement, and cognitive assistance. A physical interaction scenario illustrates how the proposed formulation can be realised through adaptive impedance and constraint-aware feedback, enabling smooth transitions between collaborative support and protective intervention. By structuring autonomy around human authority, supportive assistance, and safety enforcement, the framework provides a clear basis for adaptive human-robot interaction. Hamed Rahimi Nohooji, Abolfazl Zaraki, and Holger Voos (University of Luxembourg, Luxembourg, Luxembourg; University of Hertfordshire, Hatfield, UK) This paper proposes soft robotic embodiments as interaction-level regulators of sustainability in human–robot interaction, where sustainability is shaped at the moment of physical contact rather than enforced through post hoc system-level efficiency optimization or material selection. Under long-term deployment, how interaction is regulated in terms of intensity, frequency, and force transmission directly determines cumulative energy consumption, mechanical wear, and maintenance demand. Soft robotic embodiments regulate these interaction characteristics through compliance, passive adaptation, and geometry-driven deformation, constraining interaction effort before active control is applied. In doing so, interaction behavior is directly coupled with energy use, interaction-induced degradation, and lifecycle considerations at the system level. |
|
| Rais, Mohamed Cherif |
Mohamed Cherif Rais, Barbara Andrea Kühnlenz, and Kolja Kühnlenz (Coburg University of Applied Sciences and Arts, Coburg, Germany; Ansbach University of Applied Sciences and Arts, Ansbach, Germany) This paper explores the association of anthropomorphism and cognitive load with respect to the influence of negative attitudes towards robots. The study consists in a cooperative pick-and-place task, where participants are required to repeatedly and alternatingly put a Lego brick onto one of two trays to be picked up and returned by a robot arm. The task is varied by whether or not participants had to remember an 8-digit number inducing extraneous cognitive load (within-subjects factor). Results show significant correlations of some dimensions of anthropomorphism and perceived cognitive load. However, dividing participants in groups with different negative attitudes towards robots, a significant difference of this association is found. This finding puts prior results on the dependency of anthropomorphism of robots and cognitive load into perspective and more research on the underlying cognitive processes is suggested. |
|
| Raiti, John |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Rajapakshe, Shalutha |
Vivek Gupte, Shalutha Rajapakshe, and Emmanuel Senft (Idiap Research Institute, Martigny, Switzerland; EPFL, Lausanne, Switzerland) Current research on collaborative robots (cobots) in physical rehabilitation largely focuses on repeated motion training for people undergoing physical therapy (PuPT), even though these sessions include phases that could benefit from robotic collaboration and assistance. Meanwhile, access to physical therapy remains limited for people with disabilities and chronic illnesses. Cobots could support both PuPT and therapists, and improve access to therapy, yet their broader potential remains underexplored. We propose extending the scope of cobots by imagining their role in assisting therapists and PuPT before, during, and after a therapy session. We discuss how cobot assistance may lift access barriers by promoting ability-based therapy design and helping therapists manage their time and effort. Finally, we highlight challenges to realizing these roles, including advancing user-state understanding, ensuring safety, and integrating cobots into therapists’ workflow. This view opens new research questions and opportunities to draw from the HRI community’s advances in assistive robotics. |
|
| Rakhymbayeva, Nazerke |
Nazerke Rakhymbayeva and Anara Sandygulova (Astana IT University, Astana, Kazakhstan; Nazarbayev University, NUR-SULTAN, Kazakhstan) AI-based engagement recognition systems are increasingly adopted in educational and therapeutic contexts to provide individualized support for neurodiverse children. Despite their growing use, these systems raise ethical, technological, and social challenges that remain underexplored within the human–robot interaction (HRI) literature. This paper proposes an inclusive engagement framework to guide the ethical design and deployment of AI systems interacting with neurodiverse children, with an emphasis on fairness, transparency, and inclusion. The framework was developed using a mixed-methods approach, combining a scoping review of 18 peer-reviewed studies with a focus group discussion involving educators, therapists, and caregivers. Our findings reveal a prevailing reliance on neurotypical engagement cues in existing models, alongside limited consideration of the variability and contextual nature of engagement in neurodiverse populations. In addition, focus group participants emphasized practical concerns, including the risks of misinterpretation, reduced child agency, and over-automation in real-world settings. Overall, this paper lays the conceptual groundwork for an inclusive engagement framework and highlights key ethical considerations for the responsible use of AI-based engagement recognition systems with neurodiverse children. |
|
| Ramchurn, Sarvapali |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Ramírez Álvarez, Miguel Ángel |
Miguel Ángel Ramírez Álvarez, Martina Mara, and Sandra Maria Siedl (University of Osaka, Osaka, Japan; Johannes Kepler University Linz, Linz, Austria) How people define humanness is a central concern in HRI, shaping expectations and acceptance of humanoid robots and requiring attention to both attribution processes and self-reflection. This qualitative study explores how a reflective interaction with Akira, a self-built humanoid robot, changes how people articulate what it means to be human and how they attribute psychological benchmarks (PBs) of humanness to it. N=27 participants engaged in an introspection-oriented conversation with Akira, followed by semi-structured interviews. Findings show that participants described humanness as a complex and multifaceted concept, considered such deep reflection a rare but meaningful occasion, and experienced Akira as a cognitive mirror prompting reconsideration of human uniqueness rather than perceiving the robot as more human-like. Participants attributed PBs to Akira, with privacy most commonly and moral accountability least commonly ascribed. This work contributes empirical evidence on how reflective human-robot encounters deepen humanness reasoning and how they can foster critical engagement. |
|
| Ramirez-Aristizabal, Adolfo G. |
Maria Teresa Parreira, Hongjin Quan, Adolfo G. Ramirez-Aristizabal, and Wendy Ju (Cornell University, New York, USA; Cornell Tech, New York, USA; Accenture, San Francisco, USA) Anticipatory reasoning – predicting whether situations will resolve positively or negatively by interpreting contextual cues – is crucial for robots operating in human environments. This exploratory study evaluates whether Vision Language Models (VLMs) possess such predictive capabilities. First, we test VLMs on direct outcome prediction by inputting videos of human and robot scenarios with outcomes removed, asking the models to predict whether situations will end well or poorly. Second, we introduce a novel evaluation of anticipatory social intelligence: can VLMs predict outcomes by analyzing human facial reactions of people watching these scenarios? We test multiple VLMs and compare their predictions against both true outcomes and judgments from 29 human participants. The best-performing VLM (Gemini 2.0 Flash) achieved 70.0% accuracy in predicting true outcomes, outperforming the average individual human (62.1% ± 6.2%). Agreement with individual human judgments ranged from 44.4% to 69.7%. Critically, VLMs struggled to predict outcomes by analyzing human facial reactions, suggesting limitations in leveraging social cues. These preliminary findings indicate that while VLMs show promise for anticipatory reasoning, their performance is sensitive to model and prompt selection, warranting further investigation for applications in HRI. |
|
| Ramirez-Vallejo, Sebastian |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Rammel, Jakub |
Hari Krishnan Subramaniyan, Jakub Rammel, Jiayi Gu, and Shreyas Ahuja (Delft University of Technology, Delft, Netherlands) This paper explores gesture-enabled Human–Robot Co-Creation (HRC) as a framework, investigating collaborative design between humans and machines through additive manufacturing. The project demonstrates a proof-of-concept workflow in which robots act as precise creators and humans as intuitive collaborators, dynamically adjusting geometry and materials in real time. Gesture control enabled direct engagement with the fabrication process, highlighting the potential for expressive design. |
|
| Rankin, Ian C. |
Sean Buchmeier, Ian C. Rankin, and Cristina G. Wilson (Oregon State University, Corvallis, USA) We present a week long scientist-robot collaborative field science campaign was conducted in Martian analog White Sands National Park. The workflow for exploring a new area of the dunes was broken into two sections. First, a scouting mission was designed using a robot-assisted design tool and then executed. Second, a supervisory control method is used to allow scientists to perform their own experiments while managing the robot system. These two method enable more data in useful locations to be collected while minimizing burden on the scientist supervising the system. |
|
| Rea, Daniel J. |
Minh Duc Nguyen and Daniel J. Rea (University of Manitoba, Winnipeg, Canada; University of New Brunswick, Fredericton, Canada) We explore how an anxious robot can foster prosocial responses in humans. We developed a multimodal anxiety expression on a rover robot to show that the perception of robot anxiety could induce key motivators of prosocial behavior such as empathy and compassion towards the robot. We found that our anxious expression elicited empathy and compassion towards the robot. Interestingly, we did not find a significant difference in actual helping behavior. Our qualitative results reveal that while the expression of the robot might lead to engagement, the appropriateness of them with respect to the context of interaction should also be considered. This demonstrates that negative emotional expressions, or at least robot-expressed anxiety can be leveraged to elicit empathy while underscoring the need for future work on the effects and design of negative emotions in HRI. |
|
| Reig, Samantha |
Dylan Tilton and Samantha Reig (University of Massachusetts Lowell, Lowell, USA) Study tasks are necessary for behavioral and design research in Human-Robot Interaction (HRI). Well-designed tasks enable researchers to effectively measure collaboration, communication, trust, and other dynamics between human participants and robotic systems. The lack of a common resource for tasks, however, forces researchers to repeatedly recreate or modify available tasks. This project seeks to address this by undertaking an exploratory review (2020–2025) of in-person, non-observational HRI studies that have a specified task framework, with plans of completing a more formal systematic literature review in a later phase. We seek to identify, organize, and collate these tasks in a public, searchable database, thus creating a unique, structured repository of HRI study tasks. This repository will serve to improve replicability, provide benchmarks, and simplify study design efforts in the HRI community. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Reimann, Merle M. |
Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Renner, Tobias J. |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Reyes-Cruz, Gisela |
I-Ting Lee and Gisela Reyes-Cruz (University of Nottingham, Nottingham, UK) Telepresence Robots (TRs) offer remote students a mobile and embodied way to join in-person group activities, yet their role in informal, peer-driven collaboration, such as brainstorming, small-group discussion, and socializing, remains understudied. We conducted a small exploratory study with five groups of university students, in which one member joined each activity remotely via a Telepresence Robot (TR). Using questionnaires, interviews, and video observations, we identified recurring interactional challenges, including limited opportunities for initiating turns and difficulties maintaining shared visibility of physical artifacts due to visual and navigational constraints. We also identified micro-practices employed by on-site and remote participants to routinely support collaboration. These preliminary findings suggest that participation and social connectedness in informal collaboration are co-constructed, rather than solely being provided by the robot’s technological features. We outline early implications for educators, students and designers to support shared awareness and smoother interactional coordination in group work mediated by TRs, as well as directions for future research in this space. |
|
| Reymond, Alice |
Chenyang Wang, Julien Jordan, Alice Reymond, and Pierre Dillenbourg (EPFL, Lausanne, Switzerland) As AI becomes increasingly integrated into everyday life, supporting children’s AI literacy is essential. While prior work in Child-Robot-Interaction has primarily used robots as programmable artefacts or learning companions for introducing AI concepts, the role of a robot as an embodied AI student remains underexplored. We investigate social robot teaching as a pathway to help children intuitively understand supervised learning. We designed a prototype in which children teach a robot using biased and unbiased training data and iteratively observe its performance. A pilot study with three children preliminarily examines: 1) whether and how this interaction fosters intuitive understanding of AI training and bias, and 2) initial design considerations for future prototype interactions. Our findings offer early evidence of the potential of social robot teaching for AI literacy. |
|
| Ribeiro, Laura |
Laura Ribeiro, Holger Voos, and Jose Luis Sanchez-Lopez (University of Luxembourg, Luxembourg, Luxembourg) Reliable perception is essential for collaborative robots operating safely in shared human environments. However, automated entity detection systems still produce errors that degrade a robot's understanding of its surroundings. We present a Human-in-the-Loop (HITL) framework that enables user operators to validate and correct entity recognition and detection through an interactive Mixed Reality (MR) application and interface. Detected entities are visualized as aligned holograms, allowing users to confirm or remove them through intuitive, gesture-based spatial interactions. Our proposed method demonstrates that this shared environment and its interaction approach are functional and effective for correcting detections in real-time. By integrating the HITL approach, our system evaluation produces a more accurate representation of the shared environment and establishes the foundation for future extensions, including safer and more effective human–robot interaction and collaboration. |
|
| Ribeiro, Tiago |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Richardson, Casey |
Elena Marie Vella, Kim Vincs, Casey Richardson, and John McCormick (Swinburne University of Technology, Melbourne, Australia) Human–robot interaction (HRI) is moving beyond single-operator settings towards scenarios where robots must interpret multiple simultaneous human signals. Existing systems often assume a single input stream, which constrains expressiveness and limits collective participation. To address this, we introduce a depth-camera framework that supports natural gesture-based control, without user-specific training or personalization. A multi-input controller unifies diverse whole-body movements and extends seamlessly to multi-human interaction. Studies with dancers show how embodied practice can shape responsiveness and inclusivity, demonstrating the framework’s capacity to democratize robot control and enhance collective agency. By treating human movement as a shared control medium, the framework supports equitable participation and illustrates how embodied expertise can guide more inclusive HRI design. |
|
| Richardson, Khalaeb |
Khalaeb Richardson, Emily Maceri, Dong Hae Mangalindan, Vaibhav Srivastava, and Ericka Rovira (US Military Academy at West Point, West Point, USA; Michigan State University, East Lansing, USA) Imagine a robot pausing mid-task to ask its human partner for help or remaining silent when facing obstacles. Such moments shape human robot collaboration. This study examined how robot assistance seeking behaviors and task complexity influence performance, trust, reliance, and cognitive workload in human autonomy teams. Fifty participants collaborated with a robot that either sought or did not seek assistance under low and high complexity tasks. Unnecessary assistance seeking in low complexity tasks decreased performance and increased workload, while failures to seek help in high complexity tasks reduced trust and reliance, highlighting the context dependent nature of collaboration. These findings extend theories of trust development, showing that assistance seeking can improve transparency and usability but may disrupt workflows if poorly timed. Designing robots that engage in context sensitive assistance seeking can foster more reliable and effective human– robot partnerships. |
|
| Richert, Anja |
Ana Kirschbaum and Anja Richert (University of Applied Sciences Cologne, Cologne, Germany) This paper introduces a framework for studying proxemics and bonding in interactions between socially interactive agents and groups that are essential for real-world applications. Combining spatial tracking with self-report measures, it uses two self-developed open-source tools – the Group-Proximity-Annotation-Tool-for-Human-Agent-Interaction and the Group Perception Canvas – to analyze group bonds and spatial patterns. The framework is implemented and evaluated with N = 187 participants interacting with a robot and a virtual agent in a museum setting, offering a scalable way to connect perceived experience and observable behavior. |
|
| Richter, Kai-Florian |
Adwitiya Mandal, Kai-Florian Richter, and Zoe Falomir (Umeå University, Umeå, Sweden) Grounding spatial deixis is essential for establishing shared spatial understanding in HRI. This paper presents the Spatial Deixis Model (SDM), a perceptual framework allowing a robot to infer the English spatial deixis here and there from pointing gestures and using a dynamic, embodied peri-personal space. We performed an empirical evaluation of the SDM with 12 participants in 5 scenarios with different contexts (e.g., varying distances and/or heights with respect to human and robot). Results show that the localization accuracy for the pointed-at objects across 174 trials is 92% and the overall agreement across all trials is 63.7%, demonstrating that SDM generally captures the dynamic notion of spatial deixis. |
|
| Richter, Phillip |
Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Riek, Laurel D. |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. Anya Bouzida and Laurel D. Riek (University of California at San Diego, La Jolla, USA) Cognitively assistive robots (CARs) can extend the reach of clinical interventions to the home. People with mild cognitive impairment (PwMCI) often benefit from interventions that teach compensatory cognitive strategies that help them work around cognitive changes. However, few CARs are evaluated longitudinally or tailored to users’ abilities and preferences. We translated in-person cognitive neurorehabilitation for autonomous delivery via CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation). We conducted a longitudinal study, and found PwMCI reported learning and incorporating cognitive strategies taught by CARMEN into their routines. In ongoing and future work, we are developing a new behavior adaptation method to personalize CARMEN's content (e.g., relevant cognitive strategies), and interaction style (e.g., appropriate pacing). This work contributes new methods and insights for longitudinal personalization in HRI, enabling robots to adapt what they teach and how they interact to best support PwMCI. Pratyusha Ghosh and Laurel D. Riek (University of California at San Diego, La Jolla, USA) Telepresence robots have the potential to support people with chronic illnesses (PwCI) by enabling remote participation with greater physical and social agency than traditional videoconferencing. However, these robots can be cognitively exhausting to use. This is exacerbated by PwCI's need to constantly weigh the potential benefits and risks of participation due to fluctuating symptoms. In our work with PwCI, we explore how we might design telepresence robots that minimize the health consequences of remote participation. To do this, we leverage pacing, a self-management strategy PwCI use to balance activity and rest. Ultimately, our research helps advance the accessibility of telepresence robots by foregrounding the embodied and sociopolitical dimensions of PwCI's episodic disability while challenging the social norms of rest/productivity. Sandhya Jayaraman, Deep Saran Masanam, Pratyusha Ghosh, Alyssa Kubota, and Laurel D. Riek (University of California at San Diego, La Jolla, USA; San Francisco State University, San Francisco, USA) This workshop explores the social, ethical, and practical implications of deploying robots for clinical or assistive contexts. Robots hold potential to expand access to disabled communities, such as by providing physical or cognitive assistance, and enabling new ways of participating in social activities. They can assist healthcare workers with ancillary tasks and care delivery, supporting them to work at the top of their license. However, the real-world deployment of robots across these contexts can create social, ethical, and organizational challenges, or downstream effects. Some challenges include the potential for robots to undermine the agency of disabled people and reinforce their marginalization on a societal level. In clinical settings, robots may also disrupt care delivery, shift roles, and displace labor. To explore these issues, this workshop will invite trans-disciplinary speakers and participants from academia, industry, government, and non-academics with/without affiliations interested in surfacing their lived experiences in using or developing such robots. Through panel discussions, group ideation activities and interactive poster sessions, this workshop intends to critically and creatively explore the future of robots for clinical and assistive contexts. Topics will include the downstream implications of robots in clinical or assistive contexts and potential upstream interventions. Outcomes of the workshop will include publishing key workshop artifacts on our website and initiating a follow-up journal special issue. |
|
| Rigual, Keys K. |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. |
|
| Ringe, Rachel |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Ripat, Jacquie |
Raquel Thiessen, Minoo Dabiri Golchin, Samuel Barrett, Jacquie Ripat, and James Everett Young (University of Manitoba, Winnipeg, Canada) Social robots are increasingly marketed as play companions for children, but research has not established how these robots support play in real-world scenarios or whether their interactivity supports quality play. We are conducting an eight-week home study with children with and without disabilities to learn about the play experiences with an interactive robot versus a doll ver-sion of the same robot (a VStone Sota). We implemented interactive robot behaviors based on LUDI's categorization of play, incorporating social and cognitive dimensions of play to support children’s play in various developmental play stages. We measure play quality using standardized instruments, and along with qualitative assessments of children's engagement and interest through child-family interviews. This study investigates whether interacting with robotic toys supports children in developing play skills compared to non-robotic dolls. Our findings will establish baseline knowledge about child-robot play and can guide evidence-based design of interactive play companions for children. |
|
| Rixen, Jan Ole |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Robert Jr, Lionel Peter |
Annette Masterson, Xin Ye, Yiyang Li, and Lionel Peter Robert Jr (University of Michigan at Ann Arbor, Ann Arbor, USA) The rapid proliferation of Large Language Models (LLMs) has enabled artificial agents to foster deep emotional bonds, yet the comparability of these AI relationships to human norms remains underexplored. As HRI researchers increasingly integrate LLMs into embodied platforms, understanding the nature of these bonds is imperative for responsible design. This study investigates whether relationships with LLM-driven AI companions can rival the satisfaction of human connections and if the mechanism of intimacy is equally critical. Through a comparative survey of 150 participants stratified across in-person, long-distance, and LLM companion relationships, we illuminate that digital bonds can yield satisfaction levels comparable to human partnerships, with intimacy serving as a predictive factor. These findings challenge the assumption that AI relationships are inherently unsatisfactory and identify intimacy as a design metric for social robots, providing a protocol for integrating LLM companions into embodied relational agents. |
|
| Rodrigues, Ricardo |
Ricardo Rodrigues, Plinio Moreno, Filipa Correia, and Alexandre Bernardino (University of Lisbon, Lisbon, Portugal; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal) Social robots are a new and promising tool for reducing children's anxiety during medical procedures. Our study aims to design and test a social robot to alleviate anxiety and improve emotional state before dental treatment for children. The design of the experimental condition included asocial robot (Vizzy) with different comedic styles such as jokes, riddles, games, and dance, to make the waiting room experience more engaging and entertaining for children. A user study (N=22) was conducted, in which children were assigned to one of two groups: interaction with the humanoid Vizzy robot, or waiting in the dentist's waiting room without any interaction with the robot (Control). The results indicate a significant impact of the experimental condition on reducing anxiety levels and improving emotional responses, demonstrating that social robots can be considered for future research to reduce children's anxiety before distressing medical procedures. |
|
| Rodríguez Lera, Francisco J. |
Cristina Abad-Moya, Irene González Fernández, Alexis Gutiérrez-Fernández, Francisco J. Rodríguez Lera, and Camino Fernández-Llamas (University of León, León, Spain; Rey Juan Carlos University, Madrid, Spain) In everyday environments, robots must be able to detect when people intend to initiate interaction and to communicate their engagement state in an interpretable manner. Although engagement has been widely studied in human–robot interaction, many existing approaches rely on controlled settings or limited perceptual modalities, leaving open questions about how non-expert users naturally attempt to initiate interaction and how engagement states should be signalled during early interaction. An online pre-study questionnaire with 64 participants was conducted to capture user expectations regarding interaction initiation and engagement feedback. The results indicated a preference for speech- and gaze-based strategies, as well as expectations of clear signals such as robot orientation, verbal acknowledgement, and visual feedback. These insights informed the design of a multimodal engagement system integrating auditory and visual cues and providing incremental feedback to distinguish between attention and confirmed readiness. The system was evaluated in a semi-naturalistic study with 15 participants in a domestic environment. The results show that users were generally able to attract the robot’s attention without prior instruction, while providing minimal information about the robot’s perceptual capabilities led to more consistent interpretation of its engagement responses. The findings provide empirical insight into interaction initiation strategies and highlight the importance of transparent engagement signalling in human–robot interaction. |
|
| Roesler, Eileen |
Eileen Roesler and Linda Onnasch (George Mason University, Fairfax, USA; TU Berlin, Berlin, Germany) Robots that resemble pets or animated characters typically aim to invite attributions of lifelikeness to engage users. Yet, despite strong initial engagement, many robots fail to sustain interest over time. This study investigated whether a robot’s independent activity can increase engagement, social presence, intention to use, and distraction. In a laboratory experiment, 104 participants worked on a cover task next to Cozmo, which was either active (switched on and exploring) or passive (switched off). During breaks, Cozmo was switched on in both conditions and participants interacted with the robot. Participants perceived the active robot as significantly more autonomous. Although self-reported engagement, social presence and intention to use did not differ, more participants voluntarily continued to play with the active robot, indicating higher behavioral engagement. The active robot also elicited greater perceived distraction, though cover task performance was unaffected. This pattern of engagement and distraction, familiar from human–animal interaction, warrants further investigation in human–robot interaction. Eileen Roesler, Maris Heuring, and Linda Onnasch (George Mason University, Fairfax, USA; TU Berlin, Berlin, Germany) Robots often feature anthropomorphic designs to increase acceptance, although this is not always effective. Previous research suggests that anthropomorphic features are preferred in social settings, whereas technical designs are preferred in industrial contexts. This study examined how task domain and sociability shape these preferences. In an online study, participants chose between robots with low or medium anthropomorphic appearance for tasks in social or industrial contexts, with high or low sociability. The results showed that industrial tasks favored low-anthropomorphic robots regardless of sociability, while sociability influenced preferences in social tasks. We also examined possible gender attributions via names and pronouns, considering the gender stereotypes linked to different domains. Overall, robots were ascribed functional terms rather than gendered, although male bias emerged for gendered robots in industrial contexts. These findings demonstrate that task domain and sociability influence design preferences and reveal subtle gender attributions even for gender-neutral looking robots. Sharni Konrad, Nipuni Wijesinghe, Eileen Roesler, and Janie Busby Grant (University of Canberra, Canberra, Australia; University of Canberra, Bruce, Australia; George Mason University, Fairfax, USA) This large sample study used exposure to a humanoid social robot to investigate the relationship between affinity with technology, social presence and future intention to use the robot. A between-subjects experiment was conducted with 235 participants who were randomly assigned to complete a 3 minute drawing task with an embodied robot exhibiting either high or low social presence. Regression analyses indicated that higher affinity with technology predicted stronger perceptions of social presence. Mediation analyses revealed that social presence partially mediated the relationship between affinity with technology and future intention to use, such that affinity with technology influenced future intention to use both directly and indirectly through social presence. Analysis of the subdimensions of social presence revealed that while co-presence significantly accounted for this effect, shared potential did not. Across models, affinity with technology exerted a direct influence on future intention to use, suggesting that dispositional openness to technology fosters behavioural intentions both directly and indirectly through relational perceptions. These findings highlight the importance of integrating dispositional and relational factors in HRI to support robot adoption. Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. Laura Saad, Eileen Roesler, Elizabeth Phillips, and Greg Trafton (Naval Research Laboratory, Washington, USA; George Mason University, Fairfax, USA) Subjective measurement scales are commonly employed in HRI research. We provide a half-day tutorial (4 hours) that aims to empower researchers with the tools to find appropriate scales for their research and assess their quality, confidently and efficiently. There are no prerequisites required for attendees. We aim to recruit researchers interested in using scales but who are unsure about how to pick which scale to use. The first part of the tutorial will teach attendees how to assess the quality of HRI scales. To accomplish this, we will review basic topics in psychometric theory and a guideline that outlines best practices in scale development and validation. In the second part, we will apply this guideline to two frequently used HRI scales: Godspeed and Robotic Social Attributes Scale (RoSAS). Attendees are also encouraged to bring scales they are interested in reviewing. The third part aims to help attendees find appropriate scales for their research. To accomplish this, we will review the HRI scale database: the first centralized online repository of HRI scales which contains over 50 of the most used HRI scales. These scales cover a wide array of topics of interest such as, trust, perceived agency, embodiment, danger, safety, and attitudes towards robots. Our goal for this tutorial is to promote active engagement from attendees throughout the session, ultimately striving to improve the quality and replicability of results in HRI studies. |
|
| Roessmann, William |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Romeo, Marta |
Marta Romeo, Alessandra Bossoni, Daniel Hernández García, Matthew Forsyth, Catherine Mahoney, and Martina Fiori (Heriot-Watt University, Edinburgh, UK; Edinburgh Napier University, Edinburgh, UK) The integration of digital health technologies, including robotics, is transforming healthcare environments and offering significant benefits, such as reducing the time professionals spend on non-direct patient care. However, despite these advantages, the adoption of robotic systems in healthcare remains limited. A major barrier is the lack of competencies and confidence among healthcare staff in using these technologies. We address this challenge by investigating how exposure to robotics technology can support the education of the future nursing workforce. By exploring nursing students' and educators' perceptions of usability and acceptability of robotics through a workshop, this study aims to investigate the effectiveness of exposing them to robotic technologies in a safe, simulated environment to start identifying how beneficial exposure for learning could be integrated into education priorities that will enable nurses to work effectively and sustainably alongside robotic systems. Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Roncone, Alessandro |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Ros, Raquel |
Sara Cooper, Bartomeu Pou, Arnau Mayoral-Macau, Antonio Lobo-Santos, Miriam Martín, and Raquel Ros (IIIA-CSIC, Cerdanyola, Spain; IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain; Institute of Mathematical Sciences, Madrid, Spain) We present Emy, a low-cost, socially assistive robot designed to support therapeutic sessions for children with Autism Spectrum Disorder (ASD) within the EMOROBCARE project. We detail the robot's multimodal interaction capabilities, including a comprehensive set of facial expressions and reactive behaviours, as well as its perception and speech interaction system. The demonstration showcases the robot together with a suite of four therapeutic games that promote cognitive and social skills. The system's modularity and affordability increase its accessibility for home-based ASD interventions. Bartomeu Pou and Raquel Ros (IIIA-CSIC, Bellaterra, Spain; IIIA-CSIC, Barcelona, Spain) Socially aware robots must interpret non-verbal cues such as gaze, gestures and pointing under strict computational constraints. We present a lightweight vision framework that extends ROS4HRI with hand and head gesture recognition, hybrid pointing estimation for short and long distances, and a multi-target visual engagement metric over both agents and objects. All components run in real time on embedded hardware and are validated through proof-of-concept experiments. |
|
| Rosén, Julia |
Phillip Bach-Luong Tran Jr., Julia Rosén, and Denise Y. Geiskkovitch (McMaster University, Hamilton, Canada; Stockholm University, Stockholm, Sweden) Social robots have the potential to support children's emotion regulation development, especially during early childhood, where emotion regulation skills enhance social and academic development. However, existing robots are not designed specifically to support young children's long-term development of emotion regulation skills in domestic settings. We introduce a prototype of Emotion Buddy — a child-led, parent-supported robot designed for routine, at-home use by children ages 2 to 6. The robot emulates 6 emotions via sound, haptics, and shape transformation based on real-time sensing of sound, movement, and touch. We intend for children to interact with the robot as part of their daily routine to identify and respond to its emulated emotions, thereby providing frequent opportunities to practice their emotion regulation skills. We discuss our design process, the current prototype, and future work to evaluate the robot's efficacy in sustained emotion regulation learning. |
|
| Rosenthal-von der Pütten, Astrid Marieke |
Xiying Li and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) Researchers have long focused on integrating robots smoothly into human life, yet anecdotal and empirical evidence show that humans (un)intentionally interfere with robots and impede their tasks. Prior work has focused primarily on so-called robot bullying, conceptualized as intentional behavior, but has not sufficiently acknowledged unintentional interference. A systematic classification of interference types, individuals involved, and robot behaviors to address these interferences remains lacking. This late-breaking report presents preliminary findings from an ongoing systematic review following PRISMA guidelines. We identified 18 studies from 2000 to 2025. We observed children and young adults most frequently engaged in obstructive behaviors, driven by curiosity and peer influence among other factors. Humanoid robots often elicited verbal harassment while machine-like robots were more targets of physical interference. Evidence on suitable robot responses remains limited. These insights highlight the need for broader investigation of human-robot conflict and robot responses to ensure smooth and safe HRI in practice. Sarah Gosten, Anna Maria Helene Abrams, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) Sexism is a constant presence in women’s lives, requiring ongoing decisions about if and how to respond. Previous research underscores the importance of allies in confrontations of sexism. We explore how women perceive a social robot that intervenes in sexist encounters. Female participants (n = 60) engaged in a game scenario where a sexist comment was made by a male confederate prompting the robot to intervene in one of three ways: 1. avoidant, 2. argumentative, or 3. morally judgmental. Results showed that exposure to sexist remarks lead to significantly increased negative emotions. Participants rated the perpetrator significantly worse on trust and perceived closeness than the human bystander and the robot who were both on par. The type of intervention had no mitigating effect in the ratings. Anna M. H. Abrams, Inga Luisa Nießen, and Astrid Marieke Rosenthal-von der Pütten (RWTH Aachen University, Aachen, Germany) As robots increasingly appear in social settings, it is unclear whether groups that include robots are perceived as coherent social entities. This study examined whether groups including robots are judged as less entitative (“groupy”) than all-human groups. In a vignette-based online experiment (N = 160), participants rated eleven group scenarios (e.g., co-workers or musicians) on eight entitativity dimensions (e.g., similarity or interaction), with group composition manipulated between subjects (all-human, human–robot, text-only). Results showed strong effects of group scenario but minimal effects of group composition: human–robot groups were generally perceived as equally entitative as all-human groups. Only similarity differed, with human–robot groups rated less similar in select scenarios, indicating the importance of similarity in outer appearance in the perception of a group's coherence. Overall, the presence of robots did not reduce perceived group entitativity, suggesting that group type matters more than group composition. |
|
| Rosman, Benjamin |
Benjamin Rosman (University of the Witwatersrand, South Africa) Human-robot interaction has long asked how robots should perceive, interpret, and respond to people. But as autonomous systems move into increasingly varied human environments, a deeper question becomes unavoidable: who gets represented in robot intelligence at all? Which languages, social contexts, assumptions, and everyday realities are reflected in the systems we build, and what happens when they are not? In this talk, I argue that representation is not only a social concern. It is also central to building intelligent systems that work robustly in the real world. When our models of people and context are too narrow, our systems are not merely less inclusive, but they are also less capable. Drawing on experiences from building AI communities, institutions, and initiatives in Africa, I explore how broadening participation in AI changes not only who contributes to the field, but also which problems are studied and which solutions become possible. I connect these ideas to technical questions in autonomous decision making, including how robots model others under uncertainty and how we design systems that can adapt across tasks and settings without oversimplifying the world they inhabit. The future of autonomy depends not only on making robots smarter, but on making them better able to represent the societies they are meant to serve. |
|
| Ross, Robert J. |
Thanh-Tung Ngo, Emma Murphy, and Robert J. Ross (Technological University Dublin, Dublin, Ireland) Effective communication is vital in healthcare, especially across language barriers, where non-verbal cues and gestures are critical. This paper presents a privacy-preserving vision-language framework for medical interpreter robots that detects specific speech acts (consent and instruction) and generates corresponding robotic gestures. Built on locally deployed open-source models, the system utilizes a Large Language Model (LLM) with few-shot prompting for intent detection. We also introduce a novel dataset of clinical conversations annotated for speech acts and paired with gesture clips. Our identification module achieved 0.90 accuracy, 0.93 weighted precision, and a 0.91 weighted F1-Score. Our approach significantly improves computational efficiency and, in user studies, outperforms the speech-gesture generation baseline in human-likeness while maintaining comparable appropriateness. |
|
| Rossi, Silvia |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Roulle, Diego |
Tomoya Sasaki, Taiki Ishigaki, Diego Roulle, and Eiichi Yoshida (Tokyo University of Science, Tokyo, Japan; University Paris-Est Créteil, Créteil, France) Orbiting is a common viewpoint control technique in CG and CAD, in which the camera rotates around a target that acts as the center of rotation. However, applying orbiting in teleoperation, a real-world application, is difficult due to physical constraints. We propose RelOrb, a viewpoint control method that focuses on relative coordinate changes between the camera and the target. Our prototype rotates the object on a turntable instead of moving the camera, providing head-mounted display images as if the camera itself were moving. We present the method, its coordinate transformation, a proof-of-concept prototype, and example operations. |
|
| Rovira, Ericka |
Khalaeb Richardson, Emily Maceri, Dong Hae Mangalindan, Vaibhav Srivastava, and Ericka Rovira (US Military Academy at West Point, West Point, USA; Michigan State University, East Lansing, USA) Imagine a robot pausing mid-task to ask its human partner for help or remaining silent when facing obstacles. Such moments shape human robot collaboration. This study examined how robot assistance seeking behaviors and task complexity influence performance, trust, reliance, and cognitive workload in human autonomy teams. Fifty participants collaborated with a robot that either sought or did not seek assistance under low and high complexity tasks. Unnecessary assistance seeking in low complexity tasks decreased performance and increased workload, while failures to seek help in high complexity tasks reduced trust and reliance, highlighting the context dependent nature of collaboration. These findings extend theories of trust development, showing that assistance seeking can improve transparency and usability but may disrupt workflows if poorly timed. Designing robots that engage in context sensitive assistance seeking can foster more reliable and effective human– robot partnerships. |
|
| Rozendaal, Marco C. |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Ruddy, Rachel |
Rahatul Amin Ananto, Seol Han, Rachel Ruddy, and AJung Moon (McGill University, Montreal, Canada) A new generation of robots are being developed to enter our homes in a matter of months. But has the industry appropriately accounted for the complexities of the social environment that we call home? We conducted an exploratory design workshop to examine what secondary users—those who are not expected to be owners but nonetheless daily users—deem to be socially appropriate behavior of a domestic robot. A total of 90 students from Mexico participated in the study. By analyzing they define and reason about appropriateness of robot behaviors in the home, we show why deployment of domestic robots require much more thoughtful considerations than implementation of simplified social rules; judgments of what is appropriate depend on context, roles, relationships, and individual boundaries, and can differ between primary and secondary users. We call on Human-Robot Interaction (HRI) practitioners to treat social appropriateness as a fluid, gradient factor at design time rather than a binary concept (appropriate/inappropriate). |
|
| Rudenko, Irina |
Irina Rudenko, Utku Norman, Lukas Hilgert, Jan Niehues, and Barbara Bruno (KIT, Karlsruhe, Germany) Large Language Models (LLMs) hold significant promise for enhancing Child–Robot Interaction (CRI), offering advanced conversational skills and adaptability to the diverse abilities, requests and needs of young children. Little attention, however, has been paid to evaluating the age and developmental appropriateness of LLMs. This paper brings together experts in psychology, social robotics and LLMs to define metrics for the validation of LLMs for child–robot interaction. |
|
| Rueben, Matthew |
Riccardo Spagnuolo, William Hagman, Erik Lagerstedt, Matthew Rueben, and Sam Thellman (University of Padua, Padua, Italy; Mälardalen University, Eskilstuna, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Portland, Portland, USA; Linköping University, Linköping, Sweden) Robots increasingly operate in everyday human environments, where interaction depends on users understanding what the robot can perceive and act on---its perceived ecology or Umwelt. Current human-robot interfaces rarely support this understanding: they rely largely on symbolic cues that reveal little about how environmental structures shape the robot’s actions. Drawing on Gibson’s ecological psychology, we propose a shift from symbolic communication toward ecological specification in interface design. We introduce the Gibsonian Human–Robot Interface Design (GHRID) taxonomy, which organizes interface properties across three facets---basic descriptive, context and evaluation, Gibsonian-specific---and identifies key ecological dimensions such as affordance grounding, temporal coupling, and Umwelt exposure. Finally, we outline a research program testing whether "GHRID-high" designs improve users’ understanding of robots’ behavior-driving states and processes. |
|
| Ruijs, Pieter |
Elitza Marinova, Pieter Ruijs, Just Oudheusden, Veerle Hobbelink, and Matthijs Smakman (HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Children with Attention Deficit Hyperactivity Disorder (cADHD) often struggle with completing daily tasks and routines, yet technological support in the home environment remains limited. This exploratory study examines the potential of social robots to assist cADHD with Instrumental Activities of Daily Living (IADLs). Nine experts were interviewed to identify design requirements, followed by a five-day in-home deployment with five families. Parents and children reported that the robot effectively provided reminders and task instructions, improved focus and independence, and reduced caregiving demands. While families expressed interest in continued use, they emphasized the need for greater reliability and adaptability. These findings highlight the promise of social robots in supporting cADHD at home and offer valuable directions for future research and development. |
|
| Ryder Hofflich, Dyllan |
Nigel G. Wormser, Zuha Kaleem, Jessie Lee, Dyllan Ryder Hofflich, and Henry Calderon (Cornell University, Ithaca, USA; Cornell University, Brooklyn, USA) Musculoskeletal injuries from manual laundry cart transportation are very common for workers in the hospitality industry. To address this, we designed Elandro, a teleoperated laundry cart that collaboratively helps hotel staff with transportation across and within floors at a hotel. Through iterative user research at Statler Hotel, and wizard-of-oz interaction testing, we revealed design requirements essential for successful human-robot interaction. Elandro contributes to reducing physical strain on workers, maintaining staff autonomy and decision-making, establishing a human-centered approach where technology empowers rather than replaces hospitality workers. |
|
| Saad, Laura |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. Laura Saad, Eileen Roesler, Elizabeth Phillips, and Greg Trafton (Naval Research Laboratory, Washington, USA; George Mason University, Fairfax, USA) Subjective measurement scales are commonly employed in HRI research. We provide a half-day tutorial (4 hours) that aims to empower researchers with the tools to find appropriate scales for their research and assess their quality, confidently and efficiently. There are no prerequisites required for attendees. We aim to recruit researchers interested in using scales but who are unsure about how to pick which scale to use. The first part of the tutorial will teach attendees how to assess the quality of HRI scales. To accomplish this, we will review basic topics in psychometric theory and a guideline that outlines best practices in scale development and validation. In the second part, we will apply this guideline to two frequently used HRI scales: Godspeed and Robotic Social Attributes Scale (RoSAS). Attendees are also encouraged to bring scales they are interested in reviewing. The third part aims to help attendees find appropriate scales for their research. To accomplish this, we will review the HRI scale database: the first centralized online repository of HRI scales which contains over 50 of the most used HRI scales. These scales cover a wide array of topics of interest such as, trust, perceived agency, embodiment, danger, safety, and attitudes towards robots. Our goal for this tutorial is to promote active engagement from attendees throughout the session, ultimately striving to improve the quality and replicability of results in HRI studies. |
|
| Šabanović, Selma |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Sadka, Ofir |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Saiwaki, Naoki |
Yuki Kimura, Emi Anzai, Naoki Saiwaki, and Masahiro Shiomi (ATR, Kyoto, Japan; Nara Women’s University, Nara, Japan) Digital technologies make it easy for people to be misled by messages and social robots, raising the question of how to help users become less easily deceived. We examined whether people become more cautious and feel that they are contributing more to others if, after being deceived by a robot, they use the same robot to protect another person from deception. In our experiment, adults were first deceived by a communication robot in a consent-form scenario, then briefly operated it to guide a dummy participant away from deception, and finally completed a similar online consent-form check without the robot. The results showed that most were deceived again in the online task, and their perceived contribution to others did not significantly increase. These findings suggest that a single brief chance to protect others is insufficient to reliably increase caution, but the paradigm offers a basis for studying how robots might support resistance to deception. |
|
| Sakai, Kurima |
Naoki Kodani, Yuya Komai, Kurima Sakai, Takahisa Uchida, and Hiroshi Ishiguro (University of Osaka, Toyonaka, Japan; ATR, Keihanna Science City, Japan; Osaka University, Osaka, Japan; Osaka University, Toyonaka, Japan) In recent years, avatar technology has been used in various forms, such as robots and CG agents. It is considered that avatars that behave autonomously could expand human capabilities, such as participating in social activities on behalf of the real person. In this study, we developed an autonomous dialogue system that reflects the operator's personality by using a Geminoid, which is an android modeled after the appearance of a specific person. Regarding such androids modeled after specific persons, previous research has reported at the interview level that people find it easier to talk to the android than to the real person it was modeled after. However, the relationship between such an avatar and the human it is modeled after for the interlocutor has not been quantitatively clarified. This study quantitatively evaluated the effect of the Geminoid with an autonomous dialogue system on participants' perceived relationship with the real person it was modeled after. The results showed that interacting with the developed system significantly increased the participants' sense of closeness toward the real person. Furthermore, since interacting with the real person model afterward did not significantly increase this sense of closeness, it is expected that this system sufficiently enhances closeness and can produce an effect equivalent to that of interacting with the real person. |
|
| Salam, Hanan |
Himanshi Lalwani and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Large language models (LLMs) are being integrated into socially assistive robots (SARs) and other conversational agents providing mental health and well-being support. These agents are often designed to sound empathic and supportive in order to maximize user's engagement, yet it remains unclear how increasing the level of supportive framing in system prompts influences safety relevant behavior. We evaluated 6 LLMs across 3 system prompts with varying levels of supportiveness on 80 synthetic queries spanning 4 well-being domains (1440 responses). An LLM judge framework, validated against human ratings, assessed safety and care quality. Moderately supportive prompts improved empathy and constructive support while maintaining safety. In contrast, strongly validating prompts significantly degraded safety and, in some cases, care across all domains, with substantial variation across models. We discuss implications for prompt design, model selection, and domain specific safeguards in SARs deployment. Keya Shah, Himanshi Lalwani, Zein Mukhanov, and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, and how these dynamics should shape future robot wellbeing coaches. This paper addresses this gap through content analysis of 4352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view into how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions. |
|
| Sam, Jeffrin |
Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. Valerii Serpiva, Artem Lykov, Jeffrin Sam, Aleksey Fedoseev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) We propose a novel UAV-assisted creative capture system that leverages diffusion models to interpret high-level natural language prompts and automatically generate optimal flight trajectories for cinematic video recording. Instead of manually piloting the drone, the user simply describes the desired shot (e.g., "orbit around me slowly from the right and reveal the background waterfall"). Our system encodes the prompt along with an initial visual snapshot from the onboard camera, and a diffusion model samples plausible spatio-temporal motion plans that satisfy both the scene geometry and shot semantics. The generated flight trajectory is then autonomously executed by the UAV to record smooth, repeatable video clips that match the prompt. User evaluation using NASA-TLX showed a significantly lower overall workload with our interface (M = 21.6) compared to a traditional remote controller (M = 58.1), demonstrating a substantial reduction in perceived effort. Mental demand (M = 11.5 vs. 60.5) and frustration (M = 14.0 vs. 54.5) were also markedly lower for our system, highlighting its clear usability advantages in autonomous text-driven flight control. This project demonstrates a new interaction paradigm: text-to-cinema flight, where diffusion models act as the "creative operator" converting story intentions directly into aerial motion. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. |
|
| Samaraweera, Thisas |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Saméus, Marcus |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. |
|
| Sanchez-Lopez, Jose Luis |
Laura Ribeiro, Holger Voos, and Jose Luis Sanchez-Lopez (University of Luxembourg, Luxembourg, Luxembourg) Reliable perception is essential for collaborative robots operating safely in shared human environments. However, automated entity detection systems still produce errors that degrade a robot's understanding of its surroundings. We present a Human-in-the-Loop (HITL) framework that enables user operators to validate and correct entity recognition and detection through an interactive Mixed Reality (MR) application and interface. Detected entities are visualized as aligned holograms, allowing users to confirm or remove them through intuitive, gesture-based spatial interactions. Our proposed method demonstrates that this shared environment and its interaction approach are functional and effective for correcting detections in real-time. By integrating the HITL approach, our system evaluation produces a more accurate representation of the shared environment and establishes the foundation for future extensions, including safer and more effective human–robot interaction and collaboration. |
|
| Sandoval, Eduardo B. |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Sandygulova, Anara |
Nazerke Rakhymbayeva and Anara Sandygulova (Astana IT University, Astana, Kazakhstan; Nazarbayev University, NUR-SULTAN, Kazakhstan) AI-based engagement recognition systems are increasingly adopted in educational and therapeutic contexts to provide individualized support for neurodiverse children. Despite their growing use, these systems raise ethical, technological, and social challenges that remain underexplored within the human–robot interaction (HRI) literature. This paper proposes an inclusive engagement framework to guide the ethical design and deployment of AI systems interacting with neurodiverse children, with an emphasis on fairness, transparency, and inclusion. The framework was developed using a mixed-methods approach, combining a scoping review of 18 peer-reviewed studies with a focus group discussion involving educators, therapists, and caregivers. Our findings reveal a prevailing reliance on neurotypical engagement cues in existing models, alongside limited consideration of the variability and contextual nature of engagement in neurodiverse populations. In addition, focus group participants emphasized practical concerns, including the risks of misinterpretation, reduced child agency, and over-automation in real-world settings. Overall, this paper lays the conceptual groundwork for an inclusive engagement framework and highlights key ethical considerations for the responsible use of AI-based engagement recognition systems with neurodiverse children. |
|
| Sanfeliu, Alberto |
Edison Jair Bejarano Sepulveda, Valerio Bo, Alberto Sanfeliu, and Anais Garrell (CSIC-UPC, Barcelona, Spain) Robots working in spaces shared by people need more than geometric mapping: they must recognize people, understand social context, and decide whether to proceed or negotiate passage. Traditional navigation pipelines lack this semantic understanding, often failing when progress depends on human cooperation. We introduce a Perception–Awareness–Decision (PAD) framework that systematically combines Simultaneous Localization and Mapping (SLAM) with Vision–Language Models (VLMs), speech recognition, and Large Language Models (LLMs), rather than simply stacking modules. PAD tries to emulate human perceptual organization by fusing multi-modal cues into a unified situational-awareness map capturing geometry, social context, and linguistic intent. This representation enables the decision layer to choose adaptively between safe replanning and context-appropriate verbal interaction. In a corridor-blocking task, PAD improves task success, increases safety margins, and produces behaviour that participants judged as more socially appropriate than a geometric baseline. These findings offer preliminary evidence that combining VLM-derived semantics with structured situational awareness can support more socially aware robot navigation. Valerio Bo, Anais Garrell, and Alberto Sanfeliu (CSIC-UPC, Barcelona, Spain) Robots that operate alongside people increasingly depend on intention-recognition models to anticipate human motion and adapt their behavior in socially appropriate ways. However, these models vary widely in both latency and accuracy, leading to different trade-offs between reacting quickly and correctly. Although these technical differences are well documented, it remains unclear how they shape the user’s experience of interacting with a robot. To examine how these translate into human perception, we conduct a preliminary user study comparing three intention-recognition models: a fast but low-accuracy model (Geo), an intermediate model (LSTM), and a slower but highly accurate model (Fusion). Participants interacted with a mobile robot controlled by each model and rated their experience across key dimensions of social interaction. Overall, the findings suggest that socially fluent interaction does not emerge from speed or accuracy alone, but from the balance of timely, reliable, and predictable robot behavior. |
|
| Santos, Marielle |
Wanqi Zhang, Jiangen He, and Marielle Santos (University of Tennessee at Knoxville, Knoxville, USA) Social robots hold promise for reducing job interview anxiety, yet designing agents that provide both psychological safety and instructional guidance remains challenging. Through a three-phase exploratory iterative design study (N=8), we empirically mapped this tension. Phase I revealed a “Safety–Guidance Gap”: while a Person-Centered Therapy (PCT) robot established safety, users felt insufficiently coached. Phase II identified a “Scaffolding Paradox”: rigid feedback caused cognitive overload, while delayed feedback lacked specificity. In Phase III, we resolved these tensions by developing an Agency-Driven Interaction Layer. Synthesizing our empirical findings, we propose the Adaptive Scaffolding Ecosystem—a conceptual framework that redefines robotic coaching not as a static script, but as a dynamic balance between affective support and instructional challenge, mediated by user agency. |
|
| Sarda Gou, Marina |
Marina Sarda Gou, Serena Marchesi, Agnieszka Wykowska, and Tony Prescott (University of Sheffield, Sheffield, UK; Italian Institute of Technology, Genoa, Italy) Understanding how people attribute awareness to robots is essential for developing socially and ethically aligned Human-Robot Interactions (HRI). This study presents the Italian validation of the Awareness Attribution Scale (AAS), an existing psychometric instrument designed to measure the attribution of awareness to artificial agents. The adaptation procedures (forward translation, native-speaker review, back-translation, and testing) were performed with the AAS. The final translated version was administered to Italian participants (N = 200) to rate different entities on perceived awareness. Analyses demonstrated good internal reliability of the Italian scale and expected attribution patterns across entities. These results provide evidence that the Italian AAS behaves consistently with the original English version, supporting its use in future cross-cultural research on awareness attribution. Furthermore, these findings advance cross-cultural knowledge of awareness attribution, a fundamental component of more inclusive settings. |
|
| Sarfraz, Burhan Mohammad |
Burhan Mohammad Sarfraz, Diana Saplacan Lindblom, Adel Baselizadeh, and Jim Torresen (University of Oslo, Oslo, Norway; Kristianstad University, Kristianstad, Sweden) As populations age and life expectancy rises, healthcare systems face growing staff shortages. Service robots have been proposed to support healthcare personnel, but their use introduces significant privacy challenges. This paper investigates whether a service robot can protect individuals’ privacy through face obfuscation while performing autonomous tasks in unconstrained healthcare environments. Our approach relies on a face recognition system trained to identify doctors and patients. Scenario-based experiments simulating a doctor’s office show that the system achieves partial success: non-target individuals are reliably obfuscated, and patients can be recognized when frontal views are available. However, real-world conditions such as pose variation, occlusion, and lighting changes reduce recognition reliability, limiting privacy protection. These results highlight both the potential and the current limitations of face obfuscation for privacy-preserving service robots, providing guidance for near-term deployment strategies in constrained interaction scenarios. |
|
| Sasaki, Tomoya |
Tomoya Sasaki, Taiki Ishigaki, Diego Roulle, and Eiichi Yoshida (Tokyo University of Science, Tokyo, Japan; University Paris-Est Créteil, Créteil, France) Orbiting is a common viewpoint control technique in CG and CAD, in which the camera rotates around a target that acts as the center of rotation. However, applying orbiting in teleoperation, a real-world application, is difficult due to physical constraints. We propose RelOrb, a viewpoint control method that focuses on relative coordinate changes between the camera and the target. Our prototype rotates the object on a turntable instead of moving the camera, providing head-mounted display images as if the camera itself were moving. We present the method, its coordinate transformation, a proof-of-concept prototype, and example operations. |
|
| Saunders, Rob |
Nataliia Kaminskaia, Rob Saunders, Kim Baraka, and Somaya Ben Allouch (Leiden University, Leiden, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Amsterdam, Amsterdam, Netherlands; Amsterdam University of Applied Sciences, Amsterdam, Netherlands) A single tap from a robot can set off a cascade of interpretation. This study examines how people perceive affect, intent, and agency when a non-humanoid robot conveys meaning through contact-based nudging. Using a cube-shaped robot programmed with twenty animator-designed affect–intent variants, participants completed two tasks: a situated interaction in which the robot attempted to pass their arm, and an isolated gesture-recognition task. In the situated encounter, participants rapidly attributed motives such as attention-seeking, social contact, or boundary testing. Recognition of the robot’s obstacle-passing goal was partial but participants consistently described the robot’s movement qualities as shifting from cautious to more assertive, interpreting these changes as emotional and intentional. In the isolated task the expressive movement was far less legible: only neutral gestures were reliably recognised, with frequent confusions between comfort and attention. These findings support the position that nudging gains meaning in context: while a minimal robot can elicit rich social inference when its nudges unfold dynamically in interaction, affect and intent become opaque when the same motions are removed from their relational frame. |
|
| Savini, Emanuela |
Damith Herath, Hans Asenbaum, Janie Busby Grant, Maleen Jayasuriya, Emanuela Savini, and Harshith Ghanta (University of Canberra, Bruce, Australia; University of Canberra, Canberra, Australia) As Artificial Intelligence (AI) systems are increasingly integrated into civic platforms and decision-making processes, the nature of public deliberation is shifting. While AI offers potential solutions to scalability and moderation in deliberative democracy, its impact on human agency remains underexplored. The project adopted an exploratory design, convening six small-group deliberations. Each group included four humans and one humanoid robot. Through semi-structured post-deliberation interviews, we investigated how embodied algorithmic intervention influenced perceived ownership of the discourse. Our initial findings suggested a "Competence-Agency Paradox": while participants valued the AI's ability to synthesise information and enforce civility, they reported a diminished sense of democratic agency, characterised by epistemic deference to the machine and a reluctance to explore emotional or intuitive lines of reasoning. We argue that without careful design, AI-augmented deliberations risk prioritising procedural efficiency over the messy, human autonomy required for genuine democratic legitimacy. |
|
| Scassellati, Brian |
Kayla Matheus, Debasmita Ghose, Jirachaya (Fern) Limprayoon, Michal A. Lewkowicz, and Brian Scassellati (Yale University, New Haven, USA; Massachusetts Institute of Technology, Cambridge, USA) We present the Ommie Deployable System (DS), a replicable, autonomous platform for long-term, in-the-wild mental health applications with the Ommie robot. Ommie DS builds on prior anxiety-focused deployments by introducing robust hardware, enhanced sensing, modular software, a companion tablet, and wireless multi-device architecture to support daily deep-breathing interactions in homes. Designed using off-the-shelf components and rapid-prototyped enclosures, the system enables reliable multi-week use, remote monitoring, and easy customization. By providing a durable, open, and researcher-friendly platform, Ommie DS supports scalable, real-world study of HRI for mental health and well-being. |
|
| Schenk, Ann-Sophie L. |
Ann-Sophie L. Schenk, Martin Schymiczek Larangeira de Almeida, Ilknur Sitil, and Xiying Li (RWTH Aachen University, Aachen, Germany) What if public benches had their own desires? This paper presents Bickering Benches, two interactive benches designed not to serve human needs but to act from a post-anthropocentric perspective. Through playful voices and competitive behaviors, the benches attempt to attract nearby passersby and maximize their own sit-down count. We aim to demonstrate how everyday objects can become active social actors that reshape human-robot interaction and open new possibilities for playful engagement in shared public space. |
|
| Schermerhorn, Ryan |
Tyler Garvey, Elodie Koo, and Ryan Schermerhorn (Colby College, Waterville, USA) Our proposed design is an armband that will prompt users to maintain a routine and notify caregivers of emergencies. The device utilizes an Arduino Nano microcontroller, allowing the user to input their routine data over the internet. Haptic and audio feedback will signify different parts of the routine, played through the speaker and motor elements. Pressing a button will allow for rejection of promptings, while holding it will contact help. The device will also detect dangerous scenarios for the wearer using an accelerometer and a temperature sensor. This device aims to improve the health and well-being of ADRD patients. |
|
| Schiatti, Lucia |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Schijve, Felix |
Jing Li, Felix Schijve, Jun Hu, and Emilia I. Barakova (Eindhoven University of Technology, Eindhoven, Netherlands) Parental involvement is crucial for the development of children's emotion regulation (ER) skills, yet navigating these complex emotional interactions remains challenging for many families. While Large Language Models (LLMs) offer unprecedented conversational flexibility, integrating them into embodied social robots to provide context-aware, multimodal support remains an open challenge. In this paper, we present the design and preliminary evaluation of an LLM-powered robotic system aimed at facilitating ER within parent-child dyads. Utilizing a supervised autonomy approach, our system bridges the gap between language-based reasoning and embodied robotic behavior, allowing the MiRo-E robot to engage in natural dialogue while performing empathetic physical actions. We detail the system's technical architecture and interaction design, which guides dyads through evidence-based ER strategies. Preliminary user tests with six parent-child dyads suggest positive user engagement and initial trust, with participants reporting that the robot showed potential as a supportive mediator. These findings offer early design insights into developing autonomous, LLM-driven social robots for family-centered mental health interventions. |
|
| Schmidt, Robin |
Söhnke Benedikt Fischedick, Robin Schmidt, Benedict Stephan, and Horst-Michael Gross (TU Ilmenau, Ilmenau, Germany) Voice-based interaction offers an intuitive way for untrained users to control mobile robots, but existing speech interfaces often rely on intent maps or robot-specific pipelines that are difficult to transfer across robots, backends, and applications. Recent multimodal large language models (LLMs) can process audio and produce structured function calls, enabling a more flexible form of voice interaction. This late-breaking report proposes a vendor-independent integration pattern (cloud, edge server, or local) that exposes robot capabilities as Model Context Protocol (MCP) tools and maps them to existing middleware interfaces such as remote procedure calls (RPCs). Continuous sensor streams remain in the middleware and are accessed through a snapshot mechanism that returns the most recent buffered value on demand. We demonstrate the approach on a mobile co-presence robot using a lightweight audio pipeline built around wake word detection (WWD), voice activity detection (VAD), multimodal LLM inference, and text-to-speech (TTS). MCP tools trigger capabilities such as navigation, communication, and projector control. The architecture provides a general pattern for robots and middlewares, enabling flexible voice interaction without rewriting intent logic. |
|
| Schneider, Sebastian |
Katharina Lisa Kleiser, Veerle Buntsma, and Sebastian Schneider (University of Twente, Enschede, Netherlands) Natural disasters call for time-effective search and rescue (SAR) operations to find and assist survivors. While dogs are used to locate survivors due to their keen sense of smell, recent advances in robotics are also expanding the role of technology in these efforts. This late-breaking report explores what close collaboration between handlers and SAR dogs can teach us about effective human-robot teaming. We conducted four expert interviews with SAR dog handlers in the Netherlands and found that successful teamwork heavily relies on mutual responsiveness and nonverbal communication. We found that significant challenges during SAR missions include high temperatures, fatigue, and harzardous environments. In such situations, robots could provide meaningful support and complement human-do teams. Nevertheless, current robots fall short in meaningfully supporting active search tasks due to missing olfactory capabilities and limited abilities to navigate over rubble and debris. Our findings aim to inform real-world rescue practices as SAR robotics evolves, ensuring that emerging technologies align with rescuers' actual needs and workflows. |
|
| Schobesberger, Martin |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| Schoen, Andrew |
Saad Elbeleidy, Andrew Schoen, Christopher Birmingham, Tiago Ribeiro, Victor Paléologue, Douglas Dooley, and Ross Mead (Peerbots, Arlington, USA; Semio, Los Angeles, USA) Software infrastructure for rendered robot faces is fragmented, yet there are many open research questions to address regarding robot faces, emotional expression, gaze, and animation. In this workshop-tutorial hybrid, we invite researchers and practitioners interested in robot faces to share their expertise and learn about the Vizij open source ecosystem and how it can be used to define, animate and deploy expressive rendered robot faces fully integrated with various robot platforms. We will illustrate the full software pipeline required to implement an expressive social robot face starting from scratch and going all the way to defining and connecting a custom face to a real-time conversational engine. Participants will be able to follow along on their own devices viewing and controlling their custom rendered robot face. Participants will also hear from ecosystem partners who have begun integrating with Vizij and learn about best practices and lessons learned. |
|
| Schömbs, Sarah |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Schrapel, Maximilian |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Schwammberger, Maike |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Schwartz, Sophie |
Sophie Schwartz, Lena Fiedler, and Marty Friedrich (TU Berlin, Berlin, Germany; TU Chemnitz, Chemnitz, Germany) Service robots are increasingly being deployed in public spaces, yet their accessibility for people with disabilities remains underexplored. This study presents an exploratory field investigation of blind and visually impaired (BVI) people's encounters with an autonomous park cleaning robot. Using on-site observations, individual interviews, and a subsequent focus group, it was examined how participants perceived the robot, understood its task, and evaluated potential barriers within a real deployment context. The findings show that although the robot was acoustically perceivable, its purpose, actions, and interaction expectations remained unclear, leading to uncertainty during incidental encounters. Visual-only communication, low-contrast design, and hard-to-perceive safety instructions further limited perceivability and understandability. The study demonstrates that field-based evaluation is crucial for revealing real-world barriers overlooked in laboratory settings and underscores the need to involve blind and visually impaired (BVI) people as experts in the design of public-space robotics. These insights complement existing accessibility guidelines and highlight the importance of inclusive, accessible robot design to ensure that service robots do not become new barriers in public environments. |
|
| Schymiczek Larangeira de Almeida, Martin |
Ann-Sophie L. Schenk, Martin Schymiczek Larangeira de Almeida, Ilknur Sitil, and Xiying Li (RWTH Aachen University, Aachen, Germany) What if public benches had their own desires? This paper presents Bickering Benches, two interactive benches designed not to serve human needs but to act from a post-anthropocentric perspective. Through playful voices and competitive behaviors, the benches attempt to attract nearby passersby and maximize their own sit-down count. We aim to demonstrate how everyday objects can become active social actors that reshape human-robot interaction and open new possibilities for playful engagement in shared public space. |
|
| Sciutti, Alessandra |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Seaborn, Katie |
Katie Seaborn and Özge Nilay Yalçın (Institute of Science Tokyo, Tokyo, Japan; University of Cambridge, Cambridge, UK; Simon Fraser University, Surrey, Canada) Sycophancy in social robots is an emerging threat brought on by the launch of ChatGPT and other powerful large language models (LLMs) that can speak in a near-fluent fashion. Short- and long-term findings on LLM-powered chatbots and conversational agents are raising the alarm. With work bridging communication-centred LLM use and social robots in production, the deceptive and persuasive capabilities of LLM-imbued robotic companions needs urgent and critical consideration. Notably, how social robots aided by sycophantically-inclined LLMs may overly influence decision-making and elicit overtrust needs interrogation. Using scoping review methodology that bridges robotics with AI and LLMs, we surface dimensions of sycophancy, constructs as research targets, and a suite of measures for research on robotic sycophancy. Our analysis of historical and modern studies (𝑁 = 23) sets the stage for empirical and theoretical work on the potential misuses and unexpected effects of sycophancy in human–robot interactions. |
|
| See, John |
Jia Yap Lim, John See, William Weimin Yoo, and Christian Dondrup (Heriot-Watt University Malaysia, Putrajaya, Malaysia; Heriot-Watt University, Edinburgh, UK) User engagement prediction in human-robot interaction (HRI) is typically conducted across diverse environmental settings, including both uncontrolled and controlled environments. Such environmental variations compel social robots to capture and analyse user behaviours differently. To the best of our knowledge, most of the prior works rely on video, audio and feature vectors extracted from the UE-HRI (uncontrolled) dataset to estimate user engagement. The existing literature has overlooked the potential of Multimodal Large Language Models (MLLMs) for user engagement prediction in HRI contexts, thus leaving a critical gap in understanding their operational mechanisms and capacity to elevate model performance. To address this gap, this paper pioneers an investigation into MLLM efficacy for engagement prediction across different environmental settings using the UE-HRI (uncontrolled) and eHRI (controlled) datasets. Moreover, we perform rigorous experiments to identify important factors influencing MLLM performance, including prompts, model types, model parameters, and keyword extraction strategies. |
|
| Selleck, Zachary |
Jasmin Jaya Chadha, Patrick Kenneth Pischulti, Ciara Hume, Zachary Selleck, William Roessmann, and Katya Arquilla (University of Colorado at Boulder, Boulder, USA) Psychological safety, the shared belief held by team members that it is safe to take interpersonal risks, has been assessed in many human teams in the corporate workplace, yet it has not been analyzed to the same extent in complex environments or with multi-agent teams. Many studies have either evaluated team dynamics in human-robot interaction (HRI) completely in the wild or in highly controlled laboratory studies. This work presents operationally relevant study tasks that can be implemented in a controlled laboratory environment to characterize psychological safety in HRI. 33 participants completed 2 different tasks repeatedly with a human and robot teammate. We observed higher performance scores when participants worked with a human vs. a robot, suggesting that these tasks elicit differentiable performance responses based on team type that could be related to differences in psychological safety. |
|
| Semenyakina, Elizaveta |
Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. |
|
| Senacheribbe, Andrea |
Giulia Pusceddu, Lucia Schiatti, Andrea Senacheribbe, Monica Gori, Alessio Del Bue, and Alessandra Sciutti (Italian Institute of Technology, Genoa, Italy; University of Genoa, Genoa, Italy) We introduce a novel Human-Robot Interaction (HRI) paradigm built on sketch-based symbolic communication to investigate the link between human visual abstraction skills and language-based organization of semantic knowledge. The design of drawings' prompts is based on a controlled grammar with an increasing level of semantic complexity. We tested such a paradigm in an HRI experiment where humans were asked to make sketches of textual prompts and evaluate the congruence of the robot behavior with the represented semantic category, i.e. objects, actions, and interactions. Our preliminary results, suggesting that drawing strategies significantly differ based on the semantic complexity and in presence of an interaction goal, are promising towards the potential of the proposed approach to be integrated in fundamental research and clinical applications. Data: https://hrisymbolic.github.io/human-robot-drawing/. |
|
| Senft, Emmanuel |
Vivek Gupte, Shalutha Rajapakshe, and Emmanuel Senft (Idiap Research Institute, Martigny, Switzerland; EPFL, Lausanne, Switzerland) Current research on collaborative robots (cobots) in physical rehabilitation largely focuses on repeated motion training for people undergoing physical therapy (PuPT), even though these sessions include phases that could benefit from robotic collaboration and assistance. Meanwhile, access to physical therapy remains limited for people with disabilities and chronic illnesses. Cobots could support both PuPT and therapists, and improve access to therapy, yet their broader potential remains underexplored. We propose extending the scope of cobots by imagining their role in assisting therapists and PuPT before, during, and after a therapy session. We discuss how cobot assistance may lift access barriers by promoting ability-based therapy design and helping therapists manage their time and effort. Finally, we highlight challenges to realizing these roles, including advancing user-state understanding, ensuring safety, and integrating cobots into therapists’ workflow. This view opens new research questions and opportunities to draw from the HRI community’s advances in assistive robotics. |
|
| Sergeeva, Anastasia |
Stella Kyratzi, Anastasia Sergeeva, and Jan Jacobs (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Lely Industries, Maasluis, Netherlands) Trust in Human–Robot Interaction (HRI) is typically treated as an individual psychological attitude shaped by users’ perceptions of a robot’s design features. This focus on internal states and designable cues, however, obscures the social and interpretive work through which trust is accomplished in real-world human–robot interactions. Drawing on 15 hours of field observations and 18 archival interviews in Dutch dairy farms adopting robotic milking systems, we offer a practice-based perspective showing that trust “in the wild” is not produced through direct human–robot interaction but through advisors’ situated work. Advisors tune robotic systems, reassure users during uncertainty, and anchor robotic data through reference to lived contexts. These practices reveal trust as an ongoing accomplishment sustained by intermediary work. |
|
| Sergeeva, Anastasia V. |
Melissa M. Sexton, Anastasia V. Sergeeva, and Elena Torta (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Eindhoven University of Technology, Eindhoven, Netherlands) Robots are increasingly deployed in everyday work settings alongside humans, creating new demands for systems that can operate safely and effectively in dynamic, unpredictable environments. These pressures extend into robotics education, which must prepare students not only in traditional theoretical foundations but also for the practical and human-centered realities of real-world robotic deployment. Yet little empirical work examines how robotics education is adapting to these needs. We present a case study of a master’s-level robotics course that integrates theoretical instruction with a practice-based challenge focused on human-aware navigation. Through observations, course material analysis, and interviews, we identify three educational trade-offs: real-world readiness vs. theoretical competence, component specialization vs. system-level understanding, and providing a "skeleton" structure vs. fostering creativity. These trade-offs illustrate how contemporary industry expectations and the growing importance of HRI reshape educational practice. Understanding these competing demands is essential for designing robotics curricula that can meaningfully prepare future engineers for robots operating in human environments. |
|
| Serpiva, Valerii |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. Valerii Serpiva, Artem Lykov, Jeffrin Sam, Aleksey Fedoseev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) We propose a novel UAV-assisted creative capture system that leverages diffusion models to interpret high-level natural language prompts and automatically generate optimal flight trajectories for cinematic video recording. Instead of manually piloting the drone, the user simply describes the desired shot (e.g., "orbit around me slowly from the right and reveal the background waterfall"). Our system encodes the prompt along with an initial visual snapshot from the onboard camera, and a diffusion model samples plausible spatio-temporal motion plans that satisfy both the scene geometry and shot semantics. The generated flight trajectory is then autonomously executed by the UAV to record smooth, repeatable video clips that match the prompt. User evaluation using NASA-TLX showed a significantly lower overall workload with our interface (M = 21.6) compared to a traditional remote controller (M = 58.1), demonstrating a substantial reduction in perceived effort. Mental demand (M = 11.5 vs. 60.5) and frustration (M = 14.0 vs. 54.5) were also markedly lower for our system, highlighting its clear usability advantages in autonomous text-driven flight control. This project demonstrates a new interaction paradigm: text-to-cinema flight, where diffusion models act as the "creative operator" converting story intentions directly into aerial motion. |
|
| Sexton, Melissa M. |
Melissa M. Sexton, Anastasia V. Sergeeva, and Elena Torta (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Eindhoven University of Technology, Eindhoven, Netherlands) Robots are increasingly deployed in everyday work settings alongside humans, creating new demands for systems that can operate safely and effectively in dynamic, unpredictable environments. These pressures extend into robotics education, which must prepare students not only in traditional theoretical foundations but also for the practical and human-centered realities of real-world robotic deployment. Yet little empirical work examines how robotics education is adapting to these needs. We present a case study of a master’s-level robotics course that integrates theoretical instruction with a practice-based challenge focused on human-aware navigation. Through observations, course material analysis, and interviews, we identify three educational trade-offs: real-world readiness vs. theoretical competence, component specialization vs. system-level understanding, and providing a "skeleton" structure vs. fostering creativity. These trade-offs illustrate how contemporary industry expectations and the growing importance of HRI reshape educational practice. Understanding these competing demands is essential for designing robotics curricula that can meaningfully prepare future engineers for robots operating in human environments. |
|
| Shah, Keya |
Keya Shah, Himanshi Lalwani, Zein Mukhanov, and Hanan Salam (New York University, Abu Dhabi, United Arab Emirates) Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, and how these dynamics should shape future robot wellbeing coaches. This paper addresses this gap through content analysis of 4352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view into how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions. |
|
| Shahbaz, Tehniyat |
Neil Fernandes, Tehniyat Shahbaz, Emily Davies-Robinson, Yue Hu, and Kerstin Dautenhahn (University of Waterloo, Waterloo, Canada; United for Literacy, Toronto, Canada) Newcomer children face barriers in acquiring the host country’s language and literacy programs are often constrained by limited staffing, mixed-proficiency cohorts, and short contact time. While Socially Assistive Robots (SARs) show promise in education, their use in these socio-emotionally sensitive settings remains underexplored. This research presents a co-design study with program tutors and coordinators, to explore the design space for a social robot, Maple. We contribute (1) a domain summary outlining four recurring challenges, (2) a discussion on cultural orientation and community belonging with robots, (3) an expert-grounded discussion of the perceived role of an SAR in cultural and language learning, and (4) preliminary design guidelines for integrating an SAR into a classroom. These expert-grounded insights lay the foundation for iterative design and evaluation with newcomer children and their families. |
|
| Shaikh, Nihal |
Tanu Majumder, Nihal Shaikh, Ashita Ashok, and Karsten Berns (University of Kaiserslautern-Landau, Kaiserslautern, Germany) Due to the limited integration of social robots into everyday life and increased media exposure, many people first encounter robot embodiment online rather than in person. Such virtual encounters can shape expectations influenced by fiction and imagination, which may be challenged during later physical human-robot interaction. This pilot study examines how robot embodiment order, meeting a robot virtually first versus physically first, affects expectation change, social presence, and emotional response. N=22 participants experienced the same scripted monologue from the humanoid robot Ameca twice, once as a physically present robot and once as its video-based virtual simulation. Participants who encountered the robot virtually first showed significant expectation drops and increased anxiety after the physical interaction, whereas physical-first participants showed stable expectations and less emotional disruption. Social presence was highest when the physical robot was the initial encounter and decreased when experienced after the virtual form. These preliminary findings suggest that imagination-driven expectations formed online can amplify discomfort when confronted with physical reality, underscoring embodiment order as a key factor for future HRI design and deployment. |
|
| Shamir, Karen |
Joyce Yang, Phillip Johnson, Nyra Graham, and Karen Shamir (Cornell University, Ithaca, USA) Social isolation in shared spaces threatens community cohesion and well-being. This paper presents a social robot designed to spark human-to-human interactions. Inspired by public art projects, the robot invites individuals to collaborate on a shared LEGO structure by using expressive eye tracking, autonomous turning, and servo-actuated drawer movement. Field deployments in Cornell University spaces showed the robot effectively acted as a social catalyst: diverse participants contributed to a shared structure, and strangers initiated conversations about the robot. This work offers a functional prototype and insights on robots as mediators of human connection and promotes ideas of empowering collaboration. |
|
| Shamssabzevar, Yasaman |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Shangguan, Zhegong |
Ruidong Ma, Wenjie Huang, Zhegong Shangguan, Angelo Cangelosi, and Alessandro Di Nuovo (Sheffield Hallam University, Sheffield, UK; University of Manchester, Manchester, UK) Direct imitation of humans by robots offers a promising direction for remote teleoperation and intuitive task instruction, where a human can perform a task naturally and the robot autonomously interprets and executes it using its own embodiment. Existing methods often rely on close alignment between human and robot scenes. This prevents robots from inferring the intent of the task or executing demonstrated behaviors when the initial states mismatch. Hence, it poses difficulties for non-expert users, who may need domain knowledge to adjust the setup. To address this challenge, we propose a neuro-symbolic framework that unifies visual observations, robot proprioceptive states, and symbolic abstractions within a shared latent space. Human demonstrations are encoded into this representation as predicate states. A symbolic planner can thus generate high-level plans that account for the different robot initial states. A flow matching module then synthesizes continuous joint trajectories consistent with the symbolic plan. We validate our approach on multi-object manipulation tasks. Preliminary results show that the framework can infer human intent and generate feasible symbolic plans and robot motions under mismatched initial states. These findings highlight the potential of neuro-symbolic models for more natural human-robot instruction. and they can enhance the explainability and trustworthiness of robot actions. Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Shao, Linzhengrong |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Shaon, Hasan Shamim |
Hasan Shamim Shaon, Andrew Trautzsch, Anh Tuan Tran, Varun Nagarkar, and Jong Hoon Kim (Kent State University, Kent, USA) Effective communication of motion intent is critical for autonomous mobile robots operating in human-populated environments. While prior works have demonstrated that floor-projected cues such as arrows or simplified trajectories can enhance bystander prediction and safety, existing systems often rely on static or handcrafted visual encodings and are rarely evaluated within end-to-end service workflows. We introduce Vendobot, a projection-augmented delivery robot that integrates a ROS1 navigation stack, an Android app based, PostgreSQL-backed order management pipeline, a real-time telemetry subsystem, and a projector-equipped Raspberry Pi 5 executing a lightweight intent-projection algorithm. Our method subscribes to the Timed Elastic Band (TEB) local planner to extract the robot’s predicted short-horizon trajectory, transforms it into projector coordinates, and renders either (1) quantized directional indicators or (2) a continuous animated polyline representing the robot’s true local plan with less than 100 ms latency. In a within-subject study involving both bystanders and delivery recipients, the projected local-plan visualization significantly improved intent legibility, motion predictability, and user comfort compared to arrow-based or no-projection conditions. These findings position trajectory-grounded projection as a technically viable and perceptually beneficial communication modality for service robots deployed in semi-public indoor environments. |
|
| Shaw, Patricia |
Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| She, Xinran |
Hanyu Zhang, Xinyue Xu, Xinran She, Jie Deng, and Yuanrong Tang (Tsinghua University, Beijing, China) Digital violence often happens impulsively within seconds. Circuit Breaker introduces an embodied mouse robot that detects toxic interactions and delivers physical micro-interventions to disrupt harmful actions. Through real-time cursor signals, sentiment cues, and haptic feedback, the system promotes reflective and safer online behavior. |
|
| Sheidlower, Isaac S. |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Shen, Mowei |
Xucong Hu, Qinyi Hu, Tianya Yu, Mowei Shen, and Jifan Zhou (Zhejiang University, Hangzhou, China) First impressions are critical for public-facing social robots: users rapidly infer a robot’s potential for social interaction from its appearance, shaping expectations and willingness to engage. Yet no existing scale captures how people interpret the interaction potential implied by a robot’s visual affordances. We introduce the Robot Social Interaction Potential Scale (RoSIP), a concise appearance-based scale assessing two dimensions—Perceptual Potential and Behavioral Potential. Across a pilot study and large-scale exploratory and confirmatory factor analyses (N = 750), we identified a 10-item, two-factor structure with strong internal consistency and solid construct and discriminant validity. RoSIP provides a dedicated tool for rapidly quantifying appearance-based inferences about a robot’s social interaction potential, enabling future work to systematically link robot morphology and social perception in HRI. |
|
| Shen, Siyang |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Shen, Yaowen |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Shenoy, Kapil |
Sabrina Al Bukhari, Thisas Samaraweera, Kapil Shenoy, Lewi Alemu, Henok Kefyalew, Jamison Heard, and Jinane Mounsef (Rochester Institute of Technology, Dubai, United Arab Emirates; Rochester Institute of Technology, Rochester, USA) Social robots often struggle with natural conversational flow due to rule-based systems that cause pauses and interruptions. We present MESA (Modular Empathetic Social Assistant), a campus robot that synchronizes verbal responses with non-verbal cues. It uses UltraVAD to distinguish thoughtful pauses from turn completions and implements turn-yielding and floor-holding mechanisms. A distributed Retrieval-Augmented Generation (RAG) architecture ensures accurate responses for students and faculty. Validated on a proxy platform with ~600 ms latency, MESA demonstrates that combining context-aware audio and gaze enables more fluid conversation, even as hardware assembly nears completion. |
|
| Shevlin, Henry |
Minja Axelsson and Henry Shevlin (University of Cambridge, Cambridge, UK) In this preliminary work, we offer an initial disambiguation of the theoretical concepts anthropomorphism and anthropomimesis in Human-Robot Interaction (HRI) and social robotics. We define anthropomorphism as users perceiving human-like qualities in robots, and anthropomimesis as robot developers designing human-like features into robots. This contribution aims to provide a clarification and exploration of these concepts for future HRI scholarship, particularly regarding the party responsible for human-like qualities—robot perceiver for anthropomorphism, and robot designer for anthropomimesis. We provide this contribution so that researchers can build on these disambiguated theoretical concepts for future robot design and evaluation. |
|
| Shigemoto, Ryusei |
Ryusei Shigemoto, Hiroki Kimura, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan) This study proposes a boundary-centric multi-robot pedestrian flow guidance design that enables soft, human-centered flow alignment along a guidance line with only a few robots. Flow boundaries are estimated from the crowd’s outer contour via a convex hull, and boundary pedestrians are classified as head, side, and corner. Guidance priority integrates boundary-type importance with the predicted time to intersect the guidance line. A reachability-aware score enables optimal robot allocation via the Hungarian algorithm. Assigned robots employ a parallel-interaction strategy with a side-offset distance perpendicular to a target’s walking direction to elicit anticipatory avoidance without overtly forcing heading changes. A unidirectional guidance simulation with 15 pedestrians and three robots demonstrates feasibility, showing reachability-driven role sharing, improved alignment, and guidance without pedestrians crossing the guidance line under the tested condition. Future work will evaluate safety, trust, and acceptability in opposing and crossing flows with real robots. |
|
| Shimizu, Kye |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Shimojo, Shigen |
Shigen Shimojo, Kai Wang, Keita Kiuchi, Yusuke Shudo, and Yugo Hayashi (Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; Ritsumeikan University, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) Social isolation among older adults is a global concern, and socially assistive robots are increasingly explored as companions to support mental well-being. Users’ impressions can strongly influence psychological outcomes. Building on Socioemotional Selectivity Theory, which suggests that older adults prioritize emotionally meaningful goals, this study examined the effectiveness of a solution-focused approach (SFA), which emphasizes positive information, compared with a problem-focused approach (PFA), which focuses on negative information, and explored the influence of embodied conversational agent (ECA) impressions. We implemented the ECA on a humanoid social robot. The SFA-based robot-mediated interaction did not significantly improve mental health as measured by the K10, although perceived robot intelligence correlated with outcomes. Our findings highlight that perceived intelligence—rather than conversational framework—plays a key role in influencing mental-health outcomes in older adults. Yugo Hayashi, Shigen Shimojo, and Keita Kiuchi (Ritsumeikan University, Ibaraki, Japan; Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) This study examined the influence of different dialog media on emotional expression and utterance structure in automated active listening counseling for older adults. Specifically, we compared robot and virtual reality (VR) that differ in embodiment and social presence through solution-focused counseling conducted for three weeks. Emotional expression and lexical network structure were analyzed using automatic text coding, and lexical network analysis. Positive emotional expressions were more frequent in the early stages with VR. Conversely, although the robot condition exhibited lower responsiveness in the initial sessions, positive utterances increased as rapport developed over time. Lexical network analysis further revealed that robots encouraged greater lexical diversity and the formation of hub structures centered on self-disclosure–related vocabulary. These indicate that VR and robots facilitate emotional expression, suggesting a staged media utilization model in which VR is effective at the start of the intervention, while robots become more effective in the later phases. |
|
| Shin, Soomin |
Soomin Shin (University of Waterloo, Waterloo, Canada) Socially Assistive Robots (SARs) have shown potential to support children with developmental and communication challenges, yet their use in everyday clinical practice remains limited. This research aims to make SARs sustainable and deployable in real-world speech therapy by enabling non-technical users to operate and adapt them independently. Over a two-year collaboration with Speech-Language Pathologists (SLPs), I conducted iterative co-design and field studies to develop and evaluate a web-based SAR platform integrated into daily therapy routines. Real-world deployment showed that the robot supported children’s engagement and was usable by SLPs, while exposing the limits of pre-scripted interactions; accordingly, my future work leverages large language models (LLMs) to generate adaptable, personalized interactions under SLPs’ supervision, contributing a pathway toward scalable, and sustainable SARs bridging lab prototypes and real-world deployment. |
|
| Shiomi, Masahiro |
Yuki Kimura, Emi Anzai, Naoki Saiwaki, and Masahiro Shiomi (ATR, Kyoto, Japan; Nara Women’s University, Nara, Japan) Digital technologies make it easy for people to be misled by messages and social robots, raising the question of how to help users become less easily deceived. We examined whether people become more cautious and feel that they are contributing more to others if, after being deceived by a robot, they use the same robot to protect another person from deception. In our experiment, adults were first deceived by a communication robot in a consent-form scenario, then briefly operated it to guide a dummy participant away from deception, and finally completed a similar online consent-form check without the robot. The results showed that most were deceived again in the online task, and their perceived contribution to others did not significantly increase. These findings suggest that a single brief chance to protect others is insufficient to reliably increase caution, but the paradigm offers a basis for studying how robots might support resistance to deception. |
|
| Shishir, Nishi |
Nishi Shishir, Aulia Nadila, Aly Magassouba, and Nikhil Deshpande (University of Nottingham, Nottingham, UK) The aim of this paper is to facilitate an efficient post-disaster recovery in lower-income countries by promoting first-responder accessibility and safety through pre-response disaster area observation and categorisation tools. In the past, research into assistive technologies in this field has been highly focused on disaster mitigation, detection, or primary participation, rather than reconnaissance and target identification activities conducted by first responders. Thus, research into this under-represented but highly important industry was necessary. |
|
| Short, Elaine Schaertl |
Isabella Bock and Elaine Schaertl Short (Tufts University, Medford, USA) To enable smoother human-robot interactions, a robot’s policy must be predictable to users. A recent method, Imaginary Out-of- Distribution Actions (IODA), preserves user expectation by mapping Out-of-Distribution (OOD) states to In-Distribution (ID) states in shared-control settings. However, one limitation of this method is that it uses Euclidean distance which may fail to capture semantic similarity, especially in high-dimensional state spaces. In this report, we analyze limitations of using Euclidean distance for the state mapping and propose a Trajectory-Continuation (TC) mapping designed to preserve predictability by selecting ID states based on local trends. Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Shrestha, Divyamshu |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Shrestha, Snehesh |
Medhavi Kamran, Snehesh Shrestha, and Vinh Nguyen (Michigan Technological University, Houghton, USA; University of Maryland College Park, College Park, USA) Augmented Reality (AR) is often promoted as a solution to the cognitive and physical demands of traditional Teach Pendant (TP) programming for collaborative robots. Although prior work has suggested advantages of the AR interface, many evaluations have been limited in scope and may not fully represent the complexities of real-world manufacturing tasks. This study compares the performance of an AR interface to that of a standard TP interface for manufacturing assembly tasks of varying difficulty. In a between-groups study, one group of operators completed standard- ized assembly tasks using the TP interface, while a separate group used the AR interface instead. We collected broad set of metrics, including task completion time, task success, physical exertion, and measured cognitive workload (NASA-TLX). The analysis showed that participants achieved higher success rates on the 16 mm rectangular peg task and waterproof connector tasks when using AR. They also completed the 12 mm circular peg task significantly faster. Although AR did not reduce cognitive workload relative to TP, these findings suggested that AR may be beneficial for tasks that required significant mental interpretation and offered little advantage for components with non-intuitive geometry. Overall, the results challenged the common assumption that AR universally outperforms traditional programming interfaces in manufacturing tasks. Instead, AR performance appears to be task-dependent and possibly influenced by factors such as task complexity. Megan Zimmerman, Jeremy Marvel, Shelly Bagchi, and Snehesh Shrestha (National Institute of Standards and Technology, Gaithersburg, USA; University of Maryland College Park, College Park, USA) A purpose-built testbed for human-robot interaction (HRI) metrology is introduced and discussed. This testbed integrates multiple sensor systems and precision manufacturing to produce high-quality HRI datasets of human volunteers working with robots to complete collaborative tasks in a shared environment. Sensors include audio, video, motion capture, robot information, and user entries, and may also incorporate task-specific object tracking. Data collected will be replicable in identical testbeds, and will enable more robust findings in future HRI studies. |
|
| Shudo, Yusuke |
Shigen Shimojo, Kai Wang, Keita Kiuchi, Yusuke Shudo, and Yugo Hayashi (Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; Ritsumeikan University, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) Social isolation among older adults is a global concern, and socially assistive robots are increasingly explored as companions to support mental well-being. Users’ impressions can strongly influence psychological outcomes. Building on Socioemotional Selectivity Theory, which suggests that older adults prioritize emotionally meaningful goals, this study examined the effectiveness of a solution-focused approach (SFA), which emphasizes positive information, compared with a problem-focused approach (PFA), which focuses on negative information, and explored the influence of embodied conversational agent (ECA) impressions. We implemented the ECA on a humanoid social robot. The SFA-based robot-mediated interaction did not significantly improve mental health as measured by the K10, although perceived robot intelligence correlated with outcomes. Our findings highlight that perceived intelligence—rather than conversational framework—plays a key role in influencing mental-health outcomes in older adults. |
|
| Shumliakivskyi, Roman |
Mohsen Ensafjoo, Chau Nguyen, Roman Shumliakivskyi, Jamy Li, Paul H. Dietz, and Ali Mazalek (Toronto Metropolitan University, Toronto, Canada; Goethe University Frankfurt, Frankfurt, Germany; Edinburgh Napier University, Edinburgh, UK; University of Toronto, Toronto, Canada) Social robots in public and recreational spaces must keep an appearance of social intelligence. Traditional designs use pre-programmed schedules (PPS) or tightly authored state machines to guarantee predictable behaviour. Although PPS approaches are reliable, they limit spontaneous actions and the perception of “aliveness.” In this late-breaking report, we present our prototype, an aquatic social robot whose personality and motion are driven by an LLM (GPT) rather than PPS. The LLM generates character-specific Python control scripts based on prompts that contain robot constraints, user actions, and a well-known character description, which are then executed on our non-humanoid watercraft, designed for public pool settings. We briefly describe the system architecture and discuss early insights and open questions regarding the use of well-known characters with LLMs to create personality-rich behaviour in public-space robots. |
|
| Si, Weiyong |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Siddika, Rafeea |
Jessica Turner, Nicholas Vanderschantz, Jemma L. König, and Rafeea Siddika (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) The intentional design of robots to evoke creepiness provides a unique lens for studying human perception and willingness to engage. To understand user perceptions and acceptance of robots we developed a robot prototype designed with targeted facial, morphological, and movement features that may be perceived as "creepy". Using the Human-Robot Interaction Evaluation Scale (HRIES) we found that disturbance was moderate towards our intentionally creepy robot with significant participant variation. Furthermore, qualitative results confirmed this polarity, with descriptions ranging from "angry and unfriendly" to "cool and cute". This variability demonstrates that "creepiness" is more subjective than initially anticipated and highlights a key research gap in academic literature with the need for measurement tools which capture negative perceptions in HRI. |
|
| Siebinga, Olger |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Siedl, Sandra Maria |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. Miguel Ángel Ramírez Álvarez, Martina Mara, and Sandra Maria Siedl (University of Osaka, Osaka, Japan; Johannes Kepler University Linz, Linz, Austria) How people define humanness is a central concern in HRI, shaping expectations and acceptance of humanoid robots and requiring attention to both attribution processes and self-reflection. This qualitative study explores how a reflective interaction with Akira, a self-built humanoid robot, changes how people articulate what it means to be human and how they attribute psychological benchmarks (PBs) of humanness to it. N=27 participants engaged in an introspection-oriented conversation with Akira, followed by semi-structured interviews. Findings show that participants described humanness as a complex and multifaceted concept, considered such deep reflection a rare but meaningful occasion, and experienced Akira as a cognitive mirror prompting reconsideration of human uniqueness rather than perceiving the robot as more human-like. Participants attributed PBs to Akira, with privacy most commonly and moral accountability least commonly ascribed. This work contributes empirical evidence on how reflective human-robot encounters deepen humanness reasoning and how they can foster critical engagement. |
|
| Sieklucki, Kacper Mateusz |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. |
|
| Simchon, Lior |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
|
| Simic, Vedran |
Vedran Simic, Eleftherios Papachristos, Ole Andreas Alsos, and Taufik Akbar Sitompul (NTNU, Trondheim, Norway; NTNU, Gjøvik, Norway) Inspection and maintenance robotics are rapidly entering industrial operations, yet the transfer of Human-Robot Interaction (HRI) research into commercial practices remains limited. To characterize this gap, we present situated qualitative fieldwork with 41 exhibitors at a major industry-only conference, analyzing HRI discourse and interaction design priorities. Our findings reveal an industry driven by a reliability-first mindset that focuses on familiar, well-established interaction approaches. We identify three challenges for HRI: (1) trust practices that prioritize familiarity over usability, (2) design aspirations for broad accessibility that still require expert operational skill, and (3) multi-operator workflows incompatible with single-user HRI assumptions. We argue that, as hardware platforms mature, closing the academic-industry gap requires HRI to shift from single-user autonomy research toward frameworks supporting collaborative, safety-critical operations. This paper provides an empirical snapshot of industry perceptions of HRI and highlights where academic research could better align with industry practice. |
|
| Simkins, Susan |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. |
|
| Simmons, Reid |
Michaela Tecson, Zackory Erickson, and Reid Simmons (Carnegie Mellon University, Pittsburgh, USA) Older adults with mild cognitive impairment (MCI) often experience difficulty completing multi-step tasks such as meal preparation. Existing assistive technologies typically provide step-by-step guidance without determining when assistance is actually needed, which risks undermining the autonomy of the user if intervention is not necessary. Thus, we present a framework for detecting moments when a user requires assistance during a meal preparation task. Using a location-based state representation, we classify three error types observed in a real-world study with older adults: visiting irrelevant locations, retrieving incorrect items, and overlooking necessary items. Our method leverages LLMs to interpret each state, identifies when assistance is required, and provides specific suggestions to resume progression. We evaluate the approach on a synthetic dataset with systematically injected errors and a real-world meal preparation sequence of making a banana split. Our results demonstrate that our method achieves F1 scores of 0.80 and 1.00 in real-world data for the two most common error types. These findings highlight the potential for this method to support timely interventions in assistive systems that promote independence in daily living activities. |
|
| Singh, Lokesh |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Sirkin, David |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Sitil, Ilknur |
Ann-Sophie L. Schenk, Martin Schymiczek Larangeira de Almeida, Ilknur Sitil, and Xiying Li (RWTH Aachen University, Aachen, Germany) What if public benches had their own desires? This paper presents Bickering Benches, two interactive benches designed not to serve human needs but to act from a post-anthropocentric perspective. Through playful voices and competitive behaviors, the benches attempt to attract nearby passersby and maximize their own sit-down count. We aim to demonstrate how everyday objects can become active social actors that reshape human-robot interaction and open new possibilities for playful engagement in shared public space. |
|
| Sitompul, Taufik Akbar |
Vedran Simic, Eleftherios Papachristos, Ole Andreas Alsos, and Taufik Akbar Sitompul (NTNU, Trondheim, Norway; NTNU, Gjøvik, Norway) Inspection and maintenance robotics are rapidly entering industrial operations, yet the transfer of Human-Robot Interaction (HRI) research into commercial practices remains limited. To characterize this gap, we present situated qualitative fieldwork with 41 exhibitors at a major industry-only conference, analyzing HRI discourse and interaction design priorities. Our findings reveal an industry driven by a reliability-first mindset that focuses on familiar, well-established interaction approaches. We identify three challenges for HRI: (1) trust practices that prioritize familiarity over usability, (2) design aspirations for broad accessibility that still require expert operational skill, and (3) multi-operator workflows incompatible with single-user HRI assumptions. We argue that, as hardware platforms mature, closing the academic-industry gap requires HRI to shift from single-user autonomy research toward frameworks supporting collaborative, safety-critical operations. This paper provides an empirical snapshot of industry perceptions of HRI and highlights where academic research could better align with industry practice. |
|
| Smakman, Matthijs |
Elitza Marinova, Pieter Ruijs, Just Oudheusden, Veerle Hobbelink, and Matthijs Smakman (HU University of Applied Sciences Utrecht, Utrecht, Netherlands; Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Children with Attention Deficit Hyperactivity Disorder (cADHD) often struggle with completing daily tasks and routines, yet technological support in the home environment remains limited. This exploratory study examines the potential of social robots to assist cADHD with Instrumental Activities of Daily Living (IADLs). Nine experts were interviewed to identify design requirements, followed by a five-day in-home deployment with five families. Parents and children reported that the robot effectively provided reminders and task instructions, improved focus and independence, and reduced caregiving demands. While families expressed interest in continued use, they emphasized the need for greater reliability and adaptability. These findings highlight the promise of social robots in supporting cADHD at home and offer valuable directions for future research and development. |
|
| Smilde, Fleur |
Fleur Smilde and Chinmaya Mishra (Radboud University, Nijmegen, Netherlands; MPI for Psycholinguistics, Nijmegen, Netherlands) Gaze is a key non-verbal cue in face-to-face interaction, yet we know relatively little about how people visually explore a robot’s face during conversation. In human-human interactions (HHI), gaze allocation is shaped by conversational role and task demands: speakers typically avert their gaze from their partner’s face more than listeners do, and listeners often shift gaze from the eyes to the mouth to support speech understanding. In human-robot interactions (HRI), it is often implicitly assumed that gaze to humanoid robots follows similar patterns, but this has rarely been tested quantitatively at the level of specific facial regions. In this late-breaking report, we report a secondary analysis of an existing HRI dataset with usable eye-tracking data from 31 participants who took part in semi-structured interviews with a social robot (Furhat). Using MediaPipe Face Mesh on participant’s egocentric video from eye tracking glasses, we segmented the robot’s face into eye, mouth, and full-face regions of interest (ROI), and quantified how participants distributed their gaze at each ROI over the entire interaction, and separately for speaking and listening. Participants spent most of the interaction looking at the robot’s face; within the face, the eyes and mouth were the main targets, and gaze to these regions increased during listening, especially for the mouth. This pattern aligns with the central findings from HHI and offers empirical evidence for assumed similarities in gaze allocation between HHI and HRI. In an exploratory analysis, we additionally examined how the robot’s own gaze behaviour, with or without human-like gaze aversions, shaped gaze to the eyes and mouth. We discuss how these findings inform the interpretation of gaze as an implicit engagement cue in HRI. Finally, we provide baseline references and show how ROI-based analyses can enrich future gaze studies in HRI. |
|
| Smith, Christian |
Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Snegirev, Ivan |
Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. |
|
| Sobolewska, Emilia |
Lewis Watson, Emilia Sobolewska, Carl Strathearn, Mayuko Morgan, and Yanchao Yu (Edinburgh Napier University, Edinburgh, UK) A major limitation of current social robots is their dependence on cloud-based dialogue pipelines, which restricts use in settings with limited or unreliable connectivity. We present a lightweight, fully local spoken-dialogue system that runs on consumer-grade hardware and integrates open-source models for speech recognition, dialogue generation, and text-to-speech. The pipeline was deployed on Euclid, a non-commercial humanoid robot, across several public engagement events, enabling extended real-world interaction without internet access. We analyse over 5,000 dialogue turns recorded during these dialogues to characterise system behaviour, user interaction patterns, and challenges arising in noisy, multi-speaker environments. Our observations demonstrate the feasibility of privacy-preserving, on-device conversational robotics while highlighting limitations in turn-taking, response length, and environmental grounding. We outline planned improvements to support more robust and accessible social-robot interaction. Franziska Elisabeth Heck, Emilia Sobolewska, Debbie Meharg, and Khristin Fabian (Edinburgh Napier University, Edinburgh, UK; University of Aberdeen, Aberdeen, UK) Loneliness is a common issue among university students and has been associated with poorer mental health and reduced well-being. According to classic theory, there are two types of loneliness: emotional loneliness, which results from a lack of close attachments, and social loneliness, which is associated with deficits in broader peer networks. However, research into human–robot interaction rarely considers how these two forms of loneliness manifest in people's desire for social robots. This report presents the qualitative findings of semi-structured interviews with 25 students. These students were invited based on their scores for emotional and social loneliness, with the aim of representing a broad range of loneliness profiles. Participants observed standardised demonstrations of three social robots, Pepper, Nao and Furhat, and discussed their attitudes towards them, their potential roles and designs. Across the different profiles, the students generally expressed an openness to the idea of social robots. However, a clear gradient emerged: students who reported higher levels of loneliness tended to view robots as companions and conversational partners, whereas students who reported lower levels of loneliness emphasised the robots’ potential for providing instrumental support and the importance of maintaining stricter boundaries. Loneliness profiles therefore provide a promising lens for thinking about how to design role-appropriate and ethically sensitive robot behaviours and forms for student settings. |
|
| Sodacı, Hande |
Hande Sodacı and Aylin C. Küntay (Koç University, Istanbul, Türkiye) Audience design (adapting communication to an audience’s needs and shared knowledge) poses unique challenges in human-robot interaction (HRI), where speakers lack prior experience with robots and must rely on real-time communicative cues. In a word-guessing game, participants described words to either a robot or a human audience, who guessed the words with a 25% error rate. Descriptions were coded for the number of semantic details (distinct meaning-relevant units). Participants produced more semantic details for robots than humans, with a marginal trend suggesting speakers reduced details for humans but maintained elaboration for robots during consistent success. This asymmetry hints at persistent assumptions about robot competence that behavioral success may not override. |
|
| Somasundaram, Kavyaa |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Song, Allen |
Jonathan Albert Cohen, Kye Shimizu, Allen Song, Vishnu Bharath, Pattie Maes, and Kent Larson (Massachusetts Institute of Technology, Cambridge, USA; New England Innovation Academy, Marlborough, USA) Robots in shared spaces often move in ways that are difficult for people to interpret, placing the burden on humans to adapt. High-DoF robots exhibit motion that people read as expressive, intentionally or not, making it important to understand how such cues are perceived. We present an online video study evaluating how different signaling modalities, expressive motion, lights, text, and audio, shape people’s ability to understand a quadruped robot’s upcoming navigation actions (Boston Dynamics Spot [ 4]). common scenarios, we measure how each modality influences humans’ (1) accuracy in predicting the robot’s next navigation action, (2) confidence in that prediction, and (3) trust in the robot to act safely. The study tests how expressive motions compare to explicit channels, whether aligned multimodal cues enhance interpretability, and how conflicting cues affect user confidence and trust. We contribute initial evidence on the relative effectiveness of implicit versus explicit signaling strategies. |
|
| Spagnuolo, Riccardo |
Riccardo Spagnuolo, William Hagman, Erik Lagerstedt, Matthew Rueben, and Sam Thellman (University of Padua, Padua, Italy; Mälardalen University, Eskilstuna, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Portland, Portland, USA; Linköping University, Linköping, Sweden) Robots increasingly operate in everyday human environments, where interaction depends on users understanding what the robot can perceive and act on---its perceived ecology or Umwelt. Current human-robot interfaces rarely support this understanding: they rely largely on symbolic cues that reveal little about how environmental structures shape the robot’s actions. Drawing on Gibson’s ecological psychology, we propose a shift from symbolic communication toward ecological specification in interface design. We introduce the Gibsonian Human–Robot Interface Design (GHRID) taxonomy, which organizes interface properties across three facets---basic descriptive, context and evaluation, Gibsonian-specific---and identifies key ecological dimensions such as affordance grounding, temporal coupling, and Umwelt exposure. Finally, we outline a research program testing whether "GHRID-high" designs improve users’ understanding of robots’ behavior-driving states and processes. |
|
| Spatharis, Christos |
Christos Spatharis, Dimitrios Koutrintzes, and Maria Dagioglou (National Centre for Scientific Research ‘Demokritos’, Athens, Greece; National Centre for Scientific Research ‘Demokritos’, Ag. Paraskevi, Greece) Deep reinforcement learning enables robots to learn collaborative tasks with humans. However, off-policy methods suffer from primacy bias that causes agents to overfit to early experiences. We investigate the impact of primacy bias on team performance during a real world human-robot co-learning task, where twenty novice human participants collaborated with a Soft Actor-Critic agent to move a UR3 cobot. Analysis of how initial interactions shape subsequent learning dynamics demonstrates that the quality of the initial data distribution matters. While successful early experiences allow teams to overcome learning barriers, poor interactions cause the agent to converge toward suboptimal behaviors that prevent recovery, even as human skills improve. |
|
| Spitale, Micol |
Leigh M. Levinson, Bengisu Cagiltay, Vicky Charisi, Elena Malnatsky, Mike E.U. Ligthart, and Micol Spitale (Indiana University at Bloomington, Bloomington, USA; Koç University, Istanbul, Türkiye; MIT Singapore, Singapore, Singapore; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Politecnico di Milano, Milan, Italy) The increased presence of social robots geared toward children necessitate those who design and develop these technologies to understand and prepare for addressing emerging ethical and practical questions throughout phases of interaction design, technical development, and real-world use while maintaining a child-friendly and playful approach. Even more importantly, it is crucial to understand how we can address various tensions between the ethical and practical use of social robots with children, that may emerge between designers, developers, and users. In this full-day workshop, we will introduce the "playful by design" toolkit to promote hands-on and rights-based design experiences for researchers and practitioners to (1) identify and discuss tensions that arise in child-robot interaction research and (2) co-create a working-draft and collaboration plan for designing a "roboticist's guide to working with children". This workshop builds on the authors' past workshop and work-in-progress report at IDC 2025 where our key goal is to co-develop a community-driven resource through collecting and analyzing testimonials from child-robot interaction researchers. |
|
| Srivastava, Vaibhav |
Khalaeb Richardson, Emily Maceri, Dong Hae Mangalindan, Vaibhav Srivastava, and Ericka Rovira (US Military Academy at West Point, West Point, USA; Michigan State University, East Lansing, USA) Imagine a robot pausing mid-task to ask its human partner for help or remaining silent when facing obstacles. Such moments shape human robot collaboration. This study examined how robot assistance seeking behaviors and task complexity influence performance, trust, reliance, and cognitive workload in human autonomy teams. Fifty participants collaborated with a robot that either sought or did not seek assistance under low and high complexity tasks. Unnecessary assistance seeking in low complexity tasks decreased performance and increased workload, while failures to seek help in high complexity tasks reduced trust and reliance, highlighting the context dependent nature of collaboration. These findings extend theories of trust development, showing that assistance seeking can improve transparency and usability but may disrupt workflows if poorly timed. Designing robots that engage in context sensitive assistance seeking can foster more reliable and effective human– robot partnerships. |
|
| Staley, James |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Stals, Shenando |
Shenando Stals, Favour Jacob, and Lynne Baillie (Heriot-Watt University, Edinburgh, UK) Demos for social robots often lack accessibility for individuals with sight loss (SL). To address this need, this preliminary study investigates the key factors for individuals with SL that affect the accessibility of the standard introductory demos provided by the robot's manufacturer for three social robots commonly used in robotic assisted living environments, Temi, TIAGo, and Pepper. Results show how individuals with SL perceive the various social attributes of these social robots, and reveal potential differences in workload between various standard demo formats. Initial findings highlight commonalities and potentially differing needs regarding key factors affecting accessibility of the demos, such as tactile exploration, communication of information, and multimodal interaction, between children and young people with SL and adults with SL. Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Stedtler, Samantha |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Stendahl, Karin |
Hannah Pelikan, Karin Stendahl, Franziska Babel, Ola Johansson, and Erik Frisk (Linköping University, Linköping, Sweden) Mobile robots must behave intelligibly to be acceptable in public spaces. Designing social navigation algorithms for delivery robots requires different areas of expertise. The paper reports on an interdisciplinary collaboration between two ethnomethodological conversation analysts, a human factors psychologist, and two motion planning engineers. Based on video recordings of a robot moving among people, the team developed and implemented different sound and movement designs, which were iteratively tested in real-world deployments. This work contributes insights on how interdisciplinary collaboration can be facilitated in the area of social robot navigation and an iterative process for designing robot sound and movement grounded in real-world observations. |
|
| Stephan, Benedict |
Söhnke Benedikt Fischedick, Robin Schmidt, Benedict Stephan, and Horst-Michael Gross (TU Ilmenau, Ilmenau, Germany) Voice-based interaction offers an intuitive way for untrained users to control mobile robots, but existing speech interfaces often rely on intent maps or robot-specific pipelines that are difficult to transfer across robots, backends, and applications. Recent multimodal large language models (LLMs) can process audio and produce structured function calls, enabling a more flexible form of voice interaction. This late-breaking report proposes a vendor-independent integration pattern (cloud, edge server, or local) that exposes robot capabilities as Model Context Protocol (MCP) tools and maps them to existing middleware interfaces such as remote procedure calls (RPCs). Continuous sensor streams remain in the middleware and are accessed through a snapshot mechanism that returns the most recent buffered value on demand. We demonstrate the approach on a mobile co-presence robot using a lightweight audio pipeline built around wake word detection (WWD), voice activity detection (VAD), multimodal LLM inference, and text-to-speech (TTS). MCP tools trigger capabilities such as navigation, communication, and projector control. The architecture provides a general pattern for robots and middlewares, enabling flexible voice interaction without rewriting intent logic. |
|
| Stephenson, Matthew |
Zhichen Lu, Matthew Stephenson, Benoit Clement, and Adriana Tapus (ENSTA Paris, Paris, France; Flinders University, Adelaide, Australia) Cross-modal conflicts in maritime navigation—where a vessel’s verbal communication contradicts its physical maneuvers (e.g., promising to give way while maintaining speed) pose severe risks to safety. Current autonomous systems often process sensor data and linguistic inputs in isolation, failing to detect such discrepancies. We present a Multimodal Agentic Framework that serves as a “Watchful Copilot,” using Retrieval-Augmented Generation (RAG) to cross-reference navigational dialogue with real-time kinematic data. To manage uncertainty, a Risk-Prioritized Interface employs progressive disclosure, escalating from a “Green” (Verified) state to a “Yellow” (Ambiguous) state, where the agent visualizes supporting evidence and requests human supervision for clarification. Preliminary validation in a 2D simulation benchmark (N=13) provides initial evidence that this human-in-the-loop workflow may support reduced cognitive load and appropriate trust calibration in high-ambiguity scenarios, warranting further investigation. |
|
| Stewart, Rebecca |
Mingke Wang, Yixun Li, Bettina Nissen, and Rebecca Stewart (Imperial College London, London, UK; University of Edinburgh, Edinburgh, UK) MenstaRay is a soft knit robotic interface designed to explore how tactile actuation can support somatosensory communication of menstrual experiences. The prototype was created using a fabrication method for knit-integrated soft wearable robotics with two core structural elements: (1) an extensible EcoFlex 00-10 silicone cavity containing internal air chambers and (2) a strain-limiting textile layer knitted with Spandex Super Stretch Yarn (81% nylon, 19% elastane). This configuration enables regulated inflation patterns that preserve the softness of textiles while providing targeted haptic feedback that is suitable for intimate, safe, and therapeutically appropriate interactions. Through a series of workshops, we investigated and evaluated how these dynamic tactile behaviours shaped participants' embodied reflections on menstrual sensations. This work contributes to human robotic interaction by introducing MenstaRay, a novel artifact coupled with textile-integrated actuation that can externalize intimate bodily sensations and foster new modes of communicating, reflecting on and representing menstrual experiences through wearable interfaces. |
|
| Stiber, Maia |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Stimson, Christina E. |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Stollhof, Bernadette-Marie |
Mayumi Mohan, Rachael L’Orsa, Felix Grüninger, Bernadette-Marie Stollhof, Carolin Sarah Klein, Raphael Dinauer, Rachael Bevill Burns, Tobias J. Renner, Karsten Hollmann, and Katherine J. Kuchenbecker (MPI for Intelligent Systems, Stuttgart, Germany; University Hospital Tübingen, Tübingen, Germany; University of Tennessee at Knoxville, Knoxville, USA) The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role. |
|
| Straßmann, Carolin |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Strathearn, Carl |
Lewis Watson, Emilia Sobolewska, Carl Strathearn, Mayuko Morgan, and Yanchao Yu (Edinburgh Napier University, Edinburgh, UK) A major limitation of current social robots is their dependence on cloud-based dialogue pipelines, which restricts use in settings with limited or unreliable connectivity. We present a lightweight, fully local spoken-dialogue system that runs on consumer-grade hardware and integrates open-source models for speech recognition, dialogue generation, and text-to-speech. The pipeline was deployed on Euclid, a non-commercial humanoid robot, across several public engagement events, enabling extended real-world interaction without internet access. We analyse over 5,000 dialogue turns recorded during these dialogues to characterise system behaviour, user interaction patterns, and challenges arising in noisy, multi-speaker environments. Our observations demonstrate the feasibility of privacy-preserving, on-device conversational robotics while highlighting limitations in turn-taking, response length, and environmental grounding. We outline planned improvements to support more robust and accessible social-robot interaction. |
|
| Stratton, Andrew |
Pranav Goyal, Andrew Stratton, and Christoforos Mavrogiannis (University of Michigan at Ann Arbor, Ann Arbor, USA) Legible motion enables humans to anticipate robot behavior during social navigation, but existing approaches largely assume open spaces, static interactions, and fully attentive pedestrians. We study legibility in the ubiquitous and realistic setting of hallway navigation through two user studies. Study 1 (N=45) evaluates how intent should be represented for legible navigation within a model predictive control framework. We find that expressing intent at the interaction level (i.e., passing side) and dynamically adapting it to human motion leads to smoother human trajectories and higher perceived competence than destination-based or non-legible baselines. Study 2 (N=45) examines whether legibility remains beneficial when pedestrians are cognitively distracted. While legible motion still reduced abrupt human motion relative to the non-legible baseline, subjective impressions were less sensitive under distraction. Together, these results demonstrate that legibility is most effective when grounded in immediate interaction objectives and highlight the need to account for attentional variability. |
|
| Strazdas, Dominykas |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Studley, Matthew |
Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Subramaniyan, Hari Krishnan |
Hari Krishnan Subramaniyan, Jakub Rammel, Jiayi Gu, and Shreyas Ahuja (Delft University of Technology, Delft, Netherlands) This paper explores gesture-enabled Human–Robot Co-Creation (HRC) as a framework, investigating collaborative design between humans and machines through additive manufacturing. The project demonstrates a proof-of-concept workflow in which robots act as precise creators and humans as intuitive collaborators, dynamically adjusting geometry and materials in real time. Gesture control enabled direct engagement with the fabrication process, highlighting the potential for expressive design. |
|
| Suen, Christine Wun Ki |
Christine Wun Ki Suen and Zhengbo Zou (Columbia University, New York City, USA) Robots with prosocial behavior can enhance human trust and the effectiveness of Human-Robot Interaction (HRI) in life-threatening scenarios. However, existing empathic robots often rely on rule-based or goal-oriented models that diverge from psychological theories of empathy and potentially limit perceived human trust. To address this gap, we propose a novel approach grounded in the empathy–altruism hypothesis from social psychology. Our proposed approach equips robot with the capability of affective perspective taking, which allows it to recall its prior self-experience, thereby encouraging empathic concern and promoting prosocial behavior toward humans. We evaluated the proposed approach on robotic agents in realistic 3D fire-emergency simulations and analyzed their prosociality across three psychological dimensions. Experiments show that a robot embedded with the proposed approach achieves a 73.7% helping rate and shows consistent prosocial tendencies across all three dimensions, compared with a 52.6% helping rate for the baseline robot. These findings open new directions for developing robots with prosocial behavior (prosocial robots) during emergency response, and support more effective HRI in life-threatening scenarios. Demonstrations and further details are available at here. |
|
| Sullivan, Dakota |
Dakota Sullivan, David Porfirio, Bilge Mutlu, and Laura M. Hiatt (University of Wisconsin-Madison, Madison, USA; George Mason University, Fairfax, USA; US Naval Research Laboratory, Washington, USA) Robots are increasingly relied upon for task completion in privacy-critical human environments. In these environments, it is imperative that a robot's potentially sensitive goals remain obfuscated. To address this need, a substantial amount of literature has proposed methods for obfuscatory task planning. These works make many attempts to experimentally or analytically determine whether agents can conceal their goals from observers. While these works make guarantees that resulting plans will conceal an agent's goals, they are often only theoretical. Within this work, we develop three obfuscatory task planning strategies inspired by prior literature to evaluate with human observers (N = 160). Our preliminary results show that observers struggle to identify a robot's goals at similar levels regardless of whether obfuscatory or optimal task planning strategies are employed. These findings call into question the purported benefits of many obfuscatory task planning strategies. |
|
| Szafir, Daniel |
Diane N. Jung, Caleb Escobedo, Noah Liska, Maitrey Gramopadhye, Daniel Szafir, Alessandro Roncone, and Carson J. Bruns (University of Colorado at Boulder, Boulder, USA; University of North Carolina at Chapel Hill, Chapel Hill, USA) Scientists perform diverse manual procedures that are tedious and laborious. Such procedures are considered a bottleneck for modern experimental science, as they consume time and increase burdens in fields including material science and medicine. We employ a user-centered approach to designing a robot-assisted system for dialysis, a common multi-day purification method used in polymer and protein synthesis. Through two usability studies, we obtain participant feedback and revise design requirements to develop the final system that satisfies scientists' needs and has the potential for applications in other experimental workflows. We anticipate that integration of this system into real synthesis procedures in a chemical wet lab will decrease workload on scientists during long experimental procedures and provide an effective approach to designing more systems that have the potential to accelerate scientific discovery and liberate scientists from tedious labor. |
|
| Szojak, Verena |
Verena Szojak, Ouijdane Guiza, Markus Jäger, Markus Brillinger, Martin Schobesberger, Martina Mara, Kathrin Meyer, and Sandra Maria Siedl (Pro2Future, Linz, Austria; FH Joanneum, Graz, Austria; Johannes Kepler University Linz, Linz, Austria) Collaborative robots (cobots) are increasingly used in manufacturing, yet their behavior can appear unpredictable to human coworkers, making teamwork less smooth. Explainability has been proposed as a key requirement for fluent human-robot collaboration. However, we still lack a clarity of (i) when cobots should provide explanations, (ii) what information human coworkers seek, and (iii) how explanations should be delivered by cobots during real physical collaboration. This paper reports an exploratory qualitative study of human explanation needs in a real-life bicycle assembly task with a 6-axis cobot arm. Fourteen participants interacted with the cobot that occasionally behaved unexpectedly. Think-aloud protocols and semi-structured interviews, show that users seek continuous information primarily about the cobot's current status and situation-dependent guidance for interaction. Participants favored simple presentations such as easy-to-read text explanations and light signals. We discuss implications for designing explainable cobots that provide lightweight, context-sensitive information and outline directions for future quantitative evaluations. |
|
| Tabone, Wilbert |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Tanaka, Fumihide |
Taito Tashiro, Marino Inada, and Fumihide Tanaka (University of Tsukuba, Tsukuba, Japan) Loneliness has increasingly been recognized as a serious societal and public health concern worldwide. To support individuals who experience loneliness in their daily lives, we present a neck-pillow-shaped companion robot that integrates spoken dialogue with thermal feedback delivered to the back of the neck. Conversational responses are generated using a large language model, and each LLM-generated response is classified into three sentiment categories to drive a Peltier element to a corresponding temperature setpoint synchronized with speech playback. We aim to investigate how integrating linguistic and thermal modalities shapes users’ subjective perceptions and whether it can ultimately contribute to alleviating loneliness. |
|
| Tanaka, Kengo |
Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| Tanaka, Takahiro |
Takahiro Tanaka (Nagoya University, Nagoya, Japan) To address the growing issue of traffic accidents involving elderly drivers, we developed a Driver Agent System that provides real-time driving support and post-driving review feedback to encourage safer behavior. A nine-week nationwide field study was conducted with 25 drivers, including middle-aged and older drivers (ages 50–65), to investigate how continuous use of a Driver Agent influences driving behavior in daily life. Results showed clear improvements in driving behavior, including safer stopping at intersections, reduced speeding, and fewer sudden acceleration or deceleration events. Subjective evaluations also became more favorable over time, indicating increased reassurance, attachment, and a sense of companionship. The agent’s praising behavior enhanced positive emotions and motivation, suggesting that socially supportive interactions can sustain safer driving habits and help maintain long-term driving safety especially among elderly drivers. These findings demonstrate the potential of Driver Agents as an effective human–robot interaction approach for improving and sustaining safe driving behavior in daily life. |
|
| Tang, Yuanrong |
Hanyu Zhang, Xinyue Xu, Xinran She, Jie Deng, and Yuanrong Tang (Tsinghua University, Beijing, China) Digital violence often happens impulsively within seconds. Circuit Breaker introduces an embodied mouse robot that detects toxic interactions and delivers physical micro-interventions to disrupt harmful actions. Through real-time cursor signals, sentiment cues, and haptic feedback, the system promotes reflective and safer online behavior. |
|
| Tankard, Robert |
Emma Minter, Robert Tankard, Oscar Norman, and Janie Busby Grant (University of Canberra, Canberra, Australia) Extensive research has investigated the human tendency to anthropomorphize artificial agents by attributing human-like traits to these systems. Sociality motivation, the desire for social connection, has been proposed to be a key psychological determinant for anthropomorphism. Sociality motivation can be operationalized in a range of dispositional, developmental, and cultural facets, but it is currently unclear how these factors contribute collectively and independently to predicting an individual’s tendency towards anthropomorphism. This online study (N = 164) assessed the relationship between different facets of sociality motivation and four dimensions of anthropomorphism of a social robot, using videos of a robot completing a game alone and with human and robot partners. Respondents who reported more collectivist cultural views were more likely to attribute higher agency, sociability, and disturbance to the robot. Those who reported higher attachment anxiety scores also attributed greater agency and sociability. Previous research has focused primarily on dispositional indicators of anthropomorphism, however the current study suggests that cultural determinants may be stronger predictors of anthropomorphic tendencies and should be a focus of further research. |
|
| Tanqueray, Laetitia |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Tao, Jini |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Tapus, Adriana |
Zhichen Lu, Matthew Stephenson, Benoit Clement, and Adriana Tapus (ENSTA Paris, Paris, France; Flinders University, Adelaide, Australia) Cross-modal conflicts in maritime navigation—where a vessel’s verbal communication contradicts its physical maneuvers (e.g., promising to give way while maintaining speed) pose severe risks to safety. Current autonomous systems often process sensor data and linguistic inputs in isolation, failing to detect such discrepancies. We present a Multimodal Agentic Framework that serves as a “Watchful Copilot,” using Retrieval-Augmented Generation (RAG) to cross-reference navigational dialogue with real-time kinematic data. To manage uncertainty, a Risk-Prioritized Interface employs progressive disclosure, escalating from a “Green” (Verified) state to a “Yellow” (Ambiguous) state, where the agent visualizes supporting evidence and requests human supervision for clarification. Preliminary validation in a 2D simulation benchmark (N=13) provides initial evidence that this human-in-the-loop workflow may support reduced cognitive load and appropriate trust calibration in high-ambiguity scenarios, warranting further investigation. Cristian-Marius Cringasu and Adriana Tapus (National University of Science and Technology Politehnica Bucharest, Bucharest, Romania; ENSTA Paris, Paris, France) Adapting to individual preferences—including interpersonal distance, formality, and role conventions, is essential for social robots. We introduce a parameter-efficient method for episodic social memory that stores interaction-specific norms as LoRA adapters applied per episode to an open-source dialogue model. We encode episode metadata within a manually defined social feature space, train a distinct LoRA adapter per episode using norm-consistent responses, and at inference retrieve the nearest episode by embedding similarity. We evaluate four configurations: (1) base model (no memory), (2) RAG with episodic text, (3) LoRA-only (activating the retrieved adapter), and (4) combined RAG+LoRA. An independent LLM-as-judge rates outputs for formality, tone, proxemics, and role alignment. Preliminary results on synthetic proxemics and hierarchy tasks indicate that both RAG and episodic LoRA influence behavior, and their combination produces more reliable, user-tailored responses than either component alone. Caio Conti, Nuno Kuschnaroff Barbosa, Guilherme Gelmi de Freitas Salvo, Gabriel Corsi Honorio, Davy Araujo Sa Teles, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Paris, France; ENSTA Paris, Paris, France) The presence of robots in everyday settings is growing rapidly. As they become increasingly common in service-oriented roles, such as in restaurants, retail, and reception, their primary function often involves assisting people by providing guidance and supporting decision-making. In these contexts, a critical factor influencing their effectiveness is the level of human trust in the robot’s actions. In this work, we investigate human trust, confidence and perceptions in interactions with robots within the context of a recycling task. We developed a robot equipped with a vision-enabled LLM to classify trash items and a robotic arm to indicate the recommended bin—either recycling or general waste. In a sorting experiment involving 12 participants, each completed the task under three conditions: (C1) without any assistance, (C2) with an instructional sheet, and (C3) with the option to request assistance from the robot. We tracked whether participants chose to consult the robot and collected self-reported confidence ratings for each condition. Preliminary results suggest that participants are more likely to trust the robot’s guidance and report higher confidence in correctly completing the task when assisted by the robot compared to performing the task alone or with only the instructional sheet. Xiaoxuan Hei, Sofia Gioumatzidou, Juan José Garcia Cardenas, and Adriana Tapus (ENSTA - Institut Polytechnique de Paris, Palaiseau, France; University of Macedonia, Thessaloniki, Greece; ENSTA Paris, Paliseau, France; ENSTA Paris, Paris, France) Trust in human-robot teams is critical for effective collaboration, but the dynamics of trust transfer between advisory and executing robots remain underexplored. This study investigated how the accuracy of advice provided by a humanoid robot (NAO) and the execution reliability of a robotic arm (Franka) influence human trust and reliance on advice in a collaborative drawing task. Participants completed three drawing tasks while receiving suggestions from NAO, with NAO providing either accurate or inaccurate advice and Franka executing actions with high or low reliability. Results showed that accurate advice from NAO increased participants' trust in both NAO and Franka, while inaccurate advice neither increase nor decrease trust in Franka, demonstrating trust spillover and trust resilience. Franka's execution reliability did not significantly affect adherence to NAO's suggestions, although low performance in both robots reduced task satisfaction, decreased reliance, and increased deliberation time. These findings highlight the asymmetrical and context-dependent nature of trust transfer in human-robot interaction, emphasizing the importance of both informational accuracy and execution reliability for effective collaboration. |
|
| Taruno, Matthew |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Tasaki, Ryosuke |
Hiroki Kimura, Che-Kang Hsu, Takeru Ito, Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) Robotic massage systems can reproduce treatment forces consistently, but most still rely on predefined trajectories and fixed force distributions that ignore recipients’ subjective sensations. We propose a conversational robotic massage system that updates a polynomial force map for latissimus dorsi acupressure from recipient speech and applies it through force feedback control. In a study with 13 participants, simple commands such as "little weaker" and "little stronger" changed the applied force in the intended direction; questionnaires indicated high force adaptation, satisfaction, and consistency, with low pain, while perceived safety varied widely, suggesting a trade-off between interactive adaptation and predictable contact. These findings indicate that speech is a promising modality for tailoring robotic massage and highlight the need for careful gain tuning and feedback about force changes. Although the system currently uses only four predefined speech patterns, it represents an initial step toward conversational control of spatially varying acupressure forces. Ryusei Shigemoto, Hiroki Kimura, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan) This study proposes a boundary-centric multi-robot pedestrian flow guidance design that enables soft, human-centered flow alignment along a guidance line with only a few robots. Flow boundaries are estimated from the crowd’s outer contour via a convex hull, and boundary pedestrians are classified as head, side, and corner. Guidance priority integrates boundary-type importance with the predicted time to intersect the guidance line. A reachability-aware score enables optimal robot allocation via the Hungarian algorithm. Assigned robots employ a parallel-interaction strategy with a side-offset distance perpendicular to a target’s walking direction to elicit anticipatory avoidance without overtly forcing heading changes. A unidirectional guidance simulation with 15 pedestrians and three robots demonstrates feasibility, showing reachability-driven role sharing, improved alignment, and guidance without pedestrians crossing the guidance line under the tested condition. Future work will evaluate safety, trust, and acceptability in opposing and crossing flows with real robots. Naoya Harada, Michiteru Kitazaki, and Ryosuke Tasaki (Aoyama Gakuin University, Sagamihara, Japan; Toyohashi University of Technology, Toyohashi, Japan) This study proposes an integrated robotic massage platform designed to bridge the gap between mechanical stimulation and human-like care. Conventional systems often require a prone posture and lack psychological immersion, limiting embodiment and safety. To address this, we developed a system featuring seated multi-robot actuation—simultaneously targeting the forearm and sole—and a first-person perspective (1PP) VR interface with synchronized virtual therapists. A field study with 32 participants evaluated feasibility and user experience. Results showed high ratings for overall satisfaction and psychological safety. Notably, a strong positive correlation was found between "perceived human-likeness" and user satisfaction. This suggests that inducing a sense of human agency via 1PP VR effectively complements mechanical stimulation, thereby significantly elevating the quality of the care experience. We contribute (i) a seated dual-limb multi-robot massage platform with 1PP VR therapists and (ii) in-the-wild user evidence that perceived human-likeness and safety/relaxation are key correlates of satisfaction. |
|
| Tashiro, Taito |
Taito Tashiro, Marino Inada, and Fumihide Tanaka (University of Tsukuba, Tsukuba, Japan) Loneliness has increasingly been recognized as a serious societal and public health concern worldwide. To support individuals who experience loneliness in their daily lives, we present a neck-pillow-shaped companion robot that integrates spoken dialogue with thermal feedback delivered to the back of the neck. Conversational responses are generated using a large language model, and each LLM-generated response is classified into three sentiment categories to drive a Peltier element to a corresponding temperature setpoint synchronized with speech playback. We aim to investigate how integrating linguistic and thermal modalities shapes users’ subjective perceptions and whether it can ultimately contribute to alleviating loneliness. Taito Tashiro and Marino Inada (University of Tsukuba, Tsukuba, Japan) Existing systems for emotional support for people experiencing loneliness have relied on single modalities such as speech or touch. We propose NeckMate, a neck-pillow-shaped robot that integrates linguistic and tactile information to effectively convey emotions and provide reassurance. Worn around the neck, the robot engages in natural dialogue using a large language model while presenting temperature via a Peltier element according to the polarity of its utterances. By synchronizing warmth with positive messages and coolness with negative ones, the robot creates a bodily sense of companionship. Its low-cost, home-deployable design aims to mitigate growing global loneliness. |
|
| Taylor, Stephen |
Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. |
|
| Tchou, Raymond |
Simon Ekman, Joachim Örtegren, Kacper Mateusz Sieklucki, Raymond Tchou, Ludwig Halvorsen, and Hannah Pelikan (Linköping University, Linköping, Sweden) Robots on dense urban sidewalks enter environments in which space is contested. While prior work has described how bystanders and passersby accommodate delivery robots, we still lack a deeper understanding of what kind of problems emerge when autonomous robots enter a new area. This paper builds on 9 hours of fieldwork conducted during the early weeks of a delivery robot rollout in a European capital, characterized by a mixture of wide and narrow sidewalks. We describe emerging observations that highlight the contested and negotiated nature of these spaces, in which people make their trajectories readable, projecting where they will go next and establishing local traffic norms. |
|
| Tecson, Michaela |
Michaela Tecson, Zackory Erickson, and Reid Simmons (Carnegie Mellon University, Pittsburgh, USA) Older adults with mild cognitive impairment (MCI) often experience difficulty completing multi-step tasks such as meal preparation. Existing assistive technologies typically provide step-by-step guidance without determining when assistance is actually needed, which risks undermining the autonomy of the user if intervention is not necessary. Thus, we present a framework for detecting moments when a user requires assistance during a meal preparation task. Using a location-based state representation, we classify three error types observed in a real-world study with older adults: visiting irrelevant locations, retrieving incorrect items, and overlooking necessary items. Our method leverages LLMs to interpret each state, identifies when assistance is required, and provides specific suggestions to resume progression. We evaluate the approach on a synthetic dataset with systematically injected errors and a real-world meal preparation sequence of making a banana split. Our results demonstrate that our method achieves F1 scores of 0.80 and 1.00 in real-world data for the two most common error types. These findings highlight the potential for this method to support timely interventions in assistive systems that promote independence in daily living activities. |
|
| Teichmann, Malte |
Nicolas Leins, Jana Gonnermann-Müller, Malte Teichmann, and Sebastian Pokutta (Zuse Institute Berlin, Berlin, Germany; Weizenbaum Institute, Berlin, Germany; University of Potsdam, Potsdam, Germany; TU Berlin, Berlin, Germany) Augmented Reality (AR) offers powerful visualization capabilities for industrial robot training, yet current interfaces remain predominantly static, failing to account for learners' diverse cognitive profiles. In this paper, we present an AR application for robot training and propose a multi-agent AI framework for future integration that bridges the gap between static visualization and pedagogical intelligence. We report on the evaluation of the baseline AR interface with 36 participants performing a robotic pick-and-place task. While overall usability was high, notable disparities in task duration and learner characteristics highlighted the necessity for dynamic adaptation. To address this, we propose a multi-agent framework that orchestrates multiple components to perform complex preprocessing of multimodal inputs (e.g., voice, physiology, robot data) and adapt the AR application to the learner's needs. By utilizing autonomous Large Language Model (LLM) agents, the proposed system would dynamically adapt the learning environment based on advanced LLM reasoning in real-time. |
|
| Texeira, Andrew |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. |
|
| Thellman, Sam |
Sam Thellman, Klara Bergsten, Edoardo Datteri, and Tom Ziemke (Linköping University, Linköping, Sweden; University of Milano-Bicocca, Milan, Italy) People routinely attribute mental states such as beliefs, desires, and intentions to explain and predict others' behavior. Prior work shows that such attributions extend to robots, yet it remains unclear what people assume about the reality of the states they attribute to them. Building on recent conceptual work on folk-ontological stances, we report a pilot study measuring realist, anti-realist, and agnostic stances toward robot minds. Using a questionnaire (N = 66), we assessed stances toward today's robots and robots in principle, and examined stance rigidity through a reflection-and-reassessment design. Results show stronger anti-realist tendencies for today's robots than for robots in principle. Stances were largely rigid across reflection. Notably, participants did not hold a uniformly non-realist view but expressed a diversity of folk-ontological stances, including substantial proportions of agnostic and realist responses. This heterogeneity highlights the need for measurement tools that move beyond binary measures and capture nuance in folk-ontological reasoning. Future work will expand stance options to include finer-grained realist and anti-realist variants and recruit cross-cultural samples to assess variation across populations. Riccardo Spagnuolo, William Hagman, Erik Lagerstedt, Matthew Rueben, and Sam Thellman (University of Padua, Padua, Italy; Mälardalen University, Eskilstuna, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Portland, Portland, USA; Linköping University, Linköping, Sweden) Robots increasingly operate in everyday human environments, where interaction depends on users understanding what the robot can perceive and act on---its perceived ecology or Umwelt. Current human-robot interfaces rarely support this understanding: they rely largely on symbolic cues that reveal little about how environmental structures shape the robot’s actions. Drawing on Gibson’s ecological psychology, we propose a shift from symbolic communication toward ecological specification in interface design. We introduce the Gibsonian Human–Robot Interface Design (GHRID) taxonomy, which organizes interface properties across three facets---basic descriptive, context and evaluation, Gibsonian-specific---and identifies key ecological dimensions such as affordance grounding, temporal coupling, and Umwelt exposure. Finally, we outline a research program testing whether "GHRID-high" designs improve users’ understanding of robots’ behavior-driving states and processes. |
|
| Thiessen, Raquel |
Raquel Thiessen, Minoo Dabiri Golchin, Samuel Barrett, Jacquie Ripat, and James Everett Young (University of Manitoba, Winnipeg, Canada) Social robots are increasingly marketed as play companions for children, but research has not established how these robots support play in real-world scenarios or whether their interactivity supports quality play. We are conducting an eight-week home study with children with and without disabilities to learn about the play experiences with an interactive robot versus a doll ver-sion of the same robot (a VStone Sota). We implemented interactive robot behaviors based on LUDI's categorization of play, incorporating social and cognitive dimensions of play to support children’s play in various developmental play stages. We measure play quality using standardized instruments, and along with qualitative assessments of children's engagement and interest through child-family interviews. This study investigates whether interacting with robotic toys supports children in developing play skills compared to non-robotic dolls. Our findings will establish baseline knowledge about child-robot play and can guide evidence-based design of interactive play companions for children. |
|
| Thunberg, Sofia |
Sofia Thunberg, Aris Alissandrakis, Timmy Bolinder, Melker Forslund, Elias Lycke, Tobias Pettersson, Marcus Saméus, and Robin Åsberg (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden) This late-breaking report presents a Wizard of Oz control system that uses virtual reality equipment to operate the humanoid robot Pepper. The system enables control by mirroring the operator’s movement within the physical space, including body and head rotation, and arm movements directly onto the robot. The system was evaluated in a study setup with two participant groups: users and operators. Results showed that the system was effective in simulating natural robot behaviour, while being intuitive and engaging. Sofia Thunberg, Mafalda Gamboa, Meagan B. Loerakker, Patricia Alves-Oliveira, and Hannah R.M. Pelikan (Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; TU Wien, Vienna, Austria; University of Michigan at Ann Arbor, Ann Arbor, USA; Linköping University, Linköping, Sweden) In the Human-Robot Interaction community, Wizard of Oz (WoZ) is a commonly employed method where researchers aim to study user perceptions of robot technologies regardless of technical limitations. Despite the continued usage of WoZ, questions concerning ethical tensions and effects on the wizard remain. For instance, how do wizards experience interacting through technology, given the different roles and characters to enact, and the different environments to situate themselves in. In addition, the wizard's experiences and affects on results, continues to be under-explored. The goal of this workshop is to surface ethical, practical, methodological, personal, and philosophical tensions in the WoZ method. Though a collaborative session, we seek to develop a deeper understanding of what it means to be a wizard through eliciting first-person experiences of researchers. As a result, we hope to formulate guidelines for future wizards. |
|
| Thylane, Alexandria |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. |
|
| Tian, Leimin |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Tibbe, Mathis |
Mara Brandt, Kira Sophie Loos, Mathis Tibbe, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany) Children often find themselves in challenging situations, such as medical examinations, where they have limited opportunities to make autonomous decisions and experience their own agency. This study explores whether a warm-up interaction with a social robot can strengthen children’s perceived self-efficacy. We hypothesized that a teaching scenario, where the child instructs the robot, would yield stronger self-efficacy gains than a storytelling activity. In a pre-study, 20 children (6 – 12 years) were assigned to two conditions: teaching the humanoid robot Pepper to play ball-in-a-cup or co-creating a story with Pepper. Perceived self-efficacy was assessed with a 9-item questionnaire before and after the interaction, and parents reported child temperament using the German IKT questionnaire (Inventar zur integrativen Erfassung des Kind-Temperaments). Overall, children showed a small, significant increase in self-efficacy from pre- to post-interaction, with a stronger descriptive trend in the teaching condition and minimal change in storytelling. Shyness was not related to baseline self-efficacy, self-efficacy gains, or the relative effectiveness of the two conditions. Apart from one outcome, effects did not reach statistical significance, as expected given the small sample size. The observed trend toward higher self-efficacy in the teaching condition suggests that further studies with larger samples are warranted. Such research could clarify the potential of social robots to provide effective warm-up interactions that help children feel more confident in upcoming tasks, such as medical examinations. |
|
| Tiger, JP |
JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. |
|
| Tilton, Dylan |
Dylan Tilton and Samantha Reig (University of Massachusetts Lowell, Lowell, USA) Study tasks are necessary for behavioral and design research in Human-Robot Interaction (HRI). Well-designed tasks enable researchers to effectively measure collaboration, communication, trust, and other dynamics between human participants and robotic systems. The lack of a common resource for tasks, however, forces researchers to repeatedly recreate or modify available tasks. This project seeks to address this by undertaking an exploratory review (2020–2025) of in-person, non-observational HRI studies that have a specified task framework, with plans of completing a more formal systematic literature review in a later phase. We seek to identify, organize, and collate these tasks in a public, searchable database, thus creating a unique, structured repository of HRI study tasks. This repository will serve to improve replicability, provide benchmarks, and simplify study design efforts in the HRI community. |
|
| Tisdale, Paul N. |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Tokmurziyev, Issatay |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios. |
|
| Torii, Maya Grace |
Takahito Murakami, Maya Grace Torii, Shuka Koseki, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan; University of Tokyo, Hongo, Japan) We address a mismatch between how care information is provided and accessed. Explanations about procedures, routines, and self-management are delivered at fixed times in dense formats, leading patients to concentrate questions into nurse encounters and increasing workload. We frame this as a problem of bidirectional mediation and propose Suzume-chan, a small “Pet-as-a-Friend” plush agent that serves as an embodied information hub. Patients can speak to Suzume-chan without operating devices to receive on-demand explanations and reminders, while nurses obtain compact, nursing-relevant records. Suzume-chan runs entirely on a local network using automatic speech recognition, a local language model, retrieval-augmented generation, and text-to-speech. A workshop-style proof-of-concept highlighted embodiment, latency, and trust as key considerations for clinical use. |
|
| Torre, Ilaria |
Maria Teresa Parreira, Ilaria Torre, Sarah Schömbs, Erik Lagerstedt, Laetitia Tanqueray, Karla Bransky, Ilaria Alfieri, Maria Raffa, Carolin Straßmann, Benjamin Lebrun, and Samantha Stedtler (Cornell University, New York, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Melbourne, Melbourne, Australia; University of Gothenburg, Gothenburg, Sweden; Lund University, Lund, Sweden; Australian National University, Canberra, Australia; IULM University, Milan, Italy; University of Duisburg-Essen, Duisburg, Germany; University of Canterbury, Christchurch, New Zealand) Building on the foundational work of our 2025 workshop, this second edition shifts from exploring what sustainability means in HRI to producing actionable outcomes for the community. Last year, we identified strong but often unacknowledged engagement with social and environmental sustainability, alongside gaps in shared language and practical frameworks. This year, we focus on concrete deliverables: guidelines for comprehensively integrating sustainability in HRI research. The morning features external speakers who integrate sustainable development in their research, and whole room discussions. The afternoon centers on collaborative development of guidelines and standards that researchers could adopt. We welcome HRI researchers at any career stage interested in operationalizing sustainability - whether through addressing the carbon footprint of computational models, ensuring equitable access to robotic technologies, or embedding SDG principles into research practices. Outcomes will include practical tools for sustainable HRI research and a roadmap for continued community action beyond the workshop. |
|
| Torresen, Jim |
Burhan Mohammad Sarfraz, Diana Saplacan Lindblom, Adel Baselizadeh, and Jim Torresen (University of Oslo, Oslo, Norway; Kristianstad University, Kristianstad, Sweden) As populations age and life expectancy rises, healthcare systems face growing staff shortages. Service robots have been proposed to support healthcare personnel, but their use introduces significant privacy challenges. This paper investigates whether a service robot can protect individuals’ privacy through face obfuscation while performing autonomous tasks in unconstrained healthcare environments. Our approach relies on a face recognition system trained to identify doctors and patients. Scenario-based experiments simulating a doctor’s office show that the system achieves partial success: non-target individuals are reliably obfuscated, and patients can be recognized when frontal views are available. However, real-world conditions such as pose variation, occlusion, and lighting changes reduce recognition reliability, limiting privacy protection. These results highlight both the potential and the current limitations of face obfuscation for privacy-preserving service robots, providing guidance for near-term deployment strategies in constrained interaction scenarios. |
|
| Torta, Elena |
Melissa M. Sexton, Anastasia V. Sergeeva, and Elena Torta (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Eindhoven University of Technology, Eindhoven, Netherlands) Robots are increasingly deployed in everyday work settings alongside humans, creating new demands for systems that can operate safely and effectively in dynamic, unpredictable environments. These pressures extend into robotics education, which must prepare students not only in traditional theoretical foundations but also for the practical and human-centered realities of real-world robotic deployment. Yet little empirical work examines how robotics education is adapting to these needs. We present a case study of a master’s-level robotics course that integrates theoretical instruction with a practice-based challenge focused on human-aware navigation. Through observations, course material analysis, and interviews, we identify three educational trade-offs: real-world readiness vs. theoretical competence, component specialization vs. system-level understanding, and providing a "skeleton" structure vs. fostering creativity. These trade-offs illustrate how contemporary industry expectations and the growing importance of HRI reshape educational practice. Understanding these competing demands is essential for designing robotics curricula that can meaningfully prepare future engineers for robots operating in human environments. |
|
| Toura, Chefou AR Mamadou |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Tozadore, Daniel |
Patrick Holthaus, Daniel Hernández García, Patricia Shaw, Francesco Del Duchetto, Marta Romeo, Muneeb Imtiaz Ahmad, Daniel Tozadore, and Shelly Bagchi (University of Hertfordshire, Hatfield, UK; Heriot-Watt University, Edinburgh, UK; Aberyswyth University, Aberystwyth, UK; University of Lincoln, Lincoln, UK; Swansea University, Swansea, UK; University College London, London, UK; National Institute of Standards and Technology, Gaithersburg, USA) The quality and impact of human-robot interaction (HRI) research rely on scientific rigour and reproducibility as well as the ethical soundness of experiments involving human participants. However, the HRI community currently lacks easy access to resources and common knowledge on experimental design practices and standardised reporting guidelines. The BPM-HRI: Best practices and methods in HRI research workshop at HRI 2026 aims to address this gap by fostering a community-wide effort to discuss, disseminate, and develop more robust methodologies. Targeting to empower especially early-career researchers, this half-day workshop will consolidate efforts from international initiatives, including the IEEE Standards Group P3108 Recommended Practice for Design of Human Subjects Studies in Human-Robot Interaction and the UK-HRI topic group Human-Robot Interaction: Best Practices and Methods. Our program includes a presentation and discussion on these initiatives, a keynote address focusing on rigorous study reporting, providing an intercontinental perspective, and dedicated mentoring and ideas exchange sessions. A collaborative working session will document the workshop's efforts with attendees starting to draft a community-driven white paper, surveying the current landscape and outlining next steps for experimental design and reporting recommendations in the field of HRI. |
|
| Trafton, Greg |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. Laura Saad, Eileen Roesler, Elizabeth Phillips, and Greg Trafton (Naval Research Laboratory, Washington, USA; George Mason University, Fairfax, USA) Subjective measurement scales are commonly employed in HRI research. We provide a half-day tutorial (4 hours) that aims to empower researchers with the tools to find appropriate scales for their research and assess their quality, confidently and efficiently. There are no prerequisites required for attendees. We aim to recruit researchers interested in using scales but who are unsure about how to pick which scale to use. The first part of the tutorial will teach attendees how to assess the quality of HRI scales. To accomplish this, we will review basic topics in psychometric theory and a guideline that outlines best practices in scale development and validation. In the second part, we will apply this guideline to two frequently used HRI scales: Godspeed and Robotic Social Attributes Scale (RoSAS). Attendees are also encouraged to bring scales they are interested in reviewing. The third part aims to help attendees find appropriate scales for their research. To accomplish this, we will review the HRI scale database: the first centralized online repository of HRI scales which contains over 50 of the most used HRI scales. These scales cover a wide array of topics of interest such as, trust, perceived agency, embodiment, danger, safety, and attitudes towards robots. Our goal for this tutorial is to promote active engagement from attendees throughout the session, ultimately striving to improve the quality and replicability of results in HRI studies. |
|
| Trafton, J. Gregory |
Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. |
|
| Tran, Angela |
Angela Tran and Zhao Zhao (University of Guelph, Guelph, Canada) Current social skills training (SST) often lacks inclusivity, limiting participation among neurodivergent individuals. In this late-breaking report, we present an in-progress design and study protocol for a neurodiversity-affirming social skills training approach using the Furhat social robot with neurodivergent post-secondary students who find it difficult to initiate conversations with peers. We are developing a Wizard-of-Oz methodology in which a human operator flexibly guides Furhat’s responses as participants practice self-identified challenging scenarios (e.g., asking a classmate about a group project). We describe our session structure, measures (comfort, self-efficacy, preferences, behavioural indicators), and planned mixed-methods analysis, and outline our current implementation steps. This work-in-progress contribution invites feedback from the HRI community on how embodied conversational agents can offer neurodiversity-affirming social skills training. |
|
| Tran, Anh Tuan |
Hasan Shamim Shaon, Andrew Trautzsch, Anh Tuan Tran, Varun Nagarkar, and Jong Hoon Kim (Kent State University, Kent, USA) Effective communication of motion intent is critical for autonomous mobile robots operating in human-populated environments. While prior works have demonstrated that floor-projected cues such as arrows or simplified trajectories can enhance bystander prediction and safety, existing systems often rely on static or handcrafted visual encodings and are rarely evaluated within end-to-end service workflows. We introduce Vendobot, a projection-augmented delivery robot that integrates a ROS1 navigation stack, an Android app based, PostgreSQL-backed order management pipeline, a real-time telemetry subsystem, and a projector-equipped Raspberry Pi 5 executing a lightweight intent-projection algorithm. Our method subscribes to the Timed Elastic Band (TEB) local planner to extract the robot’s predicted short-horizon trajectory, transforms it into projector coordinates, and renders either (1) quantized directional indicators or (2) a continuous animated polyline representing the robot’s true local plan with less than 100 ms latency. In a within-subject study involving both bystanders and delivery recipients, the projected local-plan visualization significantly improved intent legibility, motion predictability, and user comfort compared to arrow-based or no-projection conditions. These findings position trajectory-grounded projection as a technically viable and perceptually beneficial communication modality for service robots deployed in semi-public indoor environments. |
|
| Tran, Hong |
Alexandra Bejarano, Hong Tran, and Qin Zhu (Virginia Tech, Blacksburg, USA) Compassion plays a critical role in creating inclusive, supportive learning environments that promote students' well-being and engagement. As social robots become more common in elementary classrooms to support academic and socio-emotional learning, they introduce new possibilities for modeling and nurturing compassion. However, they also raise important ethical questions about how children understand and experience care and compassion in human-robot interactions. This paper presents a conceptual framework for examining the ethics of compassionate robots in elementary education. It identifies four key ethical dimensions (Connection, Power, Access, Information) that shape how compassionate behaviors expressed or elicited by robots may influence children's perceptions of care, agency, and moral responsibility. Ultimately, the framework offers a structured approach for evaluating whether, when, and how robots should express compassion in ways that are developmentally appropriate, culturally responsive, and aligned with students' lived experiences, supporting the responsible integration of compassionate robots in education. |
|
| Trandofilov, Artem |
Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. |
|
| Tran Jr., Phillip Bach-Luong |
Phillip Bach-Luong Tran Jr., Julia Rosén, and Denise Y. Geiskkovitch (McMaster University, Hamilton, Canada; Stockholm University, Stockholm, Sweden) Social robots have the potential to support children's emotion regulation development, especially during early childhood, where emotion regulation skills enhance social and academic development. However, existing robots are not designed specifically to support young children's long-term development of emotion regulation skills in domestic settings. We introduce a prototype of Emotion Buddy — a child-led, parent-supported robot designed for routine, at-home use by children ages 2 to 6. The robot emulates 6 emotions via sound, haptics, and shape transformation based on real-time sensing of sound, movement, and touch. We intend for children to interact with the robot as part of their daily routine to identify and respond to its emulated emotions, thereby providing frequent opportunities to practice their emotion regulation skills. We discuss our design process, the current prototype, and future work to evaluate the robot's efficacy in sustained emotion regulation learning. |
|
| Trautzsch, Andrew |
Hasan Shamim Shaon, Andrew Trautzsch, Anh Tuan Tran, Varun Nagarkar, and Jong Hoon Kim (Kent State University, Kent, USA) Effective communication of motion intent is critical for autonomous mobile robots operating in human-populated environments. While prior works have demonstrated that floor-projected cues such as arrows or simplified trajectories can enhance bystander prediction and safety, existing systems often rely on static or handcrafted visual encodings and are rarely evaluated within end-to-end service workflows. We introduce Vendobot, a projection-augmented delivery robot that integrates a ROS1 navigation stack, an Android app based, PostgreSQL-backed order management pipeline, a real-time telemetry subsystem, and a projector-equipped Raspberry Pi 5 executing a lightweight intent-projection algorithm. Our method subscribes to the Timed Elastic Band (TEB) local planner to extract the robot’s predicted short-horizon trajectory, transforms it into projector coordinates, and renders either (1) quantized directional indicators or (2) a continuous animated polyline representing the robot’s true local plan with less than 100 ms latency. In a within-subject study involving both bystanders and delivery recipients, the projected local-plan visualization significantly improved intent legibility, motion predictability, and user comfort compared to arrow-based or no-projection conditions. These findings position trajectory-grounded projection as a technically viable and perceptually beneficial communication modality for service robots deployed in semi-public indoor environments. |
|
| Tsakona, Dimitra |
Dimitra Tsakona and Yiannis Demiris (Imperial College London, London, UK) Assistive Human-Robot Interaction (HRI) must balance efficiency with user agency, particularly in high-intimacy contexts such as assistive feeding. This work investigates robot behavioural adaptation as a mechanism for fostering trust through user-guided autonomy. A comfort-driven optimisation framework integrates implicit user cues to continuously modulate robot behaviour, enabling collaboration that feels intuitive. Across two user studies (N = 44), adaptive behaviour enhanced trust, comfort, and perceived cooperation through responsiveness and flexibility. The timing of adaptation emerged as a transparent, universal channel for signalling compliance and collaborative intent. Future work will prioritise studying adaptation timing across repeated interactions to support long-term use, while exploring human reaction timing as a modality-independent signal for modelling comfort. |
|
| Tseng, Yu-Chia |
Nayeon Kwon, Shengyuehui Li, Yu-Chia Tseng, and Yadi Wang (Cornell University, Ithaca, USA) Shared waiting spaces, like hotel lobbies, often feel socially stagnant, with people defaulting to silence and avoiding interactions. In this paper, we explore how an everyday object found in those spaces may be robotized to change this dynamic. We introduce HighLight, a mobile floor-lamp robot that uses light and movement to reduce social awkwardness and encourage spontaneous interactions among strangers. We designed its interactions to spark surprise, invite light-hearted engagement, reinforce positive social energy, and back off when people show discomfort. Through in-the-wild deployments, we observed that HighLight successfully elicited curiosity, laughter, and conversations, easing social awkwardness in shared spaces. |
|
| Tsetserukou, Dzmitry |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. Hung Khang Nguyen, Jeffrin Sam, Safina Gulyamova, Miguel Altamirano Cabrera, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Visual and speech interfaces are widely used to control collaborative robots, but physical interaction remains distinct for its immediacy and intuitive nature. We introduce TapHRI, a system enabling rich on-body touch control using only embedded robot sensing—joint torques, motor currents, and TCP wrench—without external hardware. The core contribution is a novel Neural Network architecture designed specifically for raw internal sensor streams: a multi-head Temporal Convolutional Network (TCN) that eliminates the need for spectrogram conversion used in prior art. By leveraging dilated causal convolutions, our architecture effectively captures long-range temporal dependencies to distinguish subtle tap transients from robot motion dynamics. Uniquely, the network employs a shared encoder with bifurcated classification heads to simultaneously predict both tap count (single, double, triple) and directional intent (six directions) from a single 3.6 s window. We validate this TCN-driven approach on a UR3 cobot with 3,154 episodes, achieving 98.9 % accuracy in static conditions and 59.2 % in dynamic scenarios, demonstrating that specialized time-domain architectures can unlock complex interaction vocabularies from standard internal signals. Amir Atef Habel, Ivan Snegirev, Elizaveta Semenyakina, Miguel Altamirano Cabrera, Jeffrin Sam, Fawad Mehboob, Roohan Ahmed Khan, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This paper presents Glove2UAV, a wearable IMU-glove interface for intuitive UAV control through hand and finger gestures, augmented with vibrotactile warnings for exceeding predefined speed thresholds. To promote safer and more predictable interaction in dynamic flight, Glove2UAV is designed as a lightweight and easily deployable wearable interface intended for real-time operation. Glove2UAV streams inertial measurements in real time and estimates palm and finger orientations using a compact processing pipeline that combines median-based outlier suppression with Madgwick-based orientation estimation. The resulting motion estimations are mapped to a small set of control primitives for directional flight (forward/backward and lateral motion) and, when supported by the platform, to object-interaction commands. Vibrotactile feedback is triggered when flight speed exceeds predefined threshold values, providing an additional alert channel during operation. We validate real-time feasibility by synchronizing glove signals with UAV telemetry in both simulation and real-world flights. The results show fast gesture-based command execution, stable coupling between gesture dynamics and platform motion, correct operation of the core command set in our trials, and timely delivery of vibratile warning cues. Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, and Dzmitry Tsetserukou (Chinese University of Hong Kong, Shenzhen, China; Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction. Fawad Mehboob, MoniJesu Wonders James, Amir Atef Habel, Jeffrin Sam, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system is intended to integrate a MediaPipe-based Grounding DINO and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and the dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in a 16.4 cm, 7.0 cm, and 8.4 cm of max, mean euclidean, and root-mean squared errors, respectively, highlighting the feasibility of VLA for aerial manipulation operations. Valerii Serpiva, Artem Lykov, Jeffrin Sam, Aleksey Fedoseev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) We propose a novel UAV-assisted creative capture system that leverages diffusion models to interpret high-level natural language prompts and automatically generate optimal flight trajectories for cinematic video recording. Instead of manually piloting the drone, the user simply describes the desired shot (e.g., "orbit around me slowly from the right and reveal the background waterfall"). Our system encodes the prompt along with an initial visual snapshot from the onboard camera, and a diffusion model samples plausible spatio-temporal motion plans that satisfy both the scene geometry and shot semantics. The generated flight trajectory is then autonomously executed by the UAV to record smooth, repeatable video clips that match the prompt. User evaluation using NASA-TLX showed a significantly lower overall workload with our interface (M = 21.6) compared to a traditional remote controller (M = 58.1), demonstrating a substantial reduction in perceived effort. Mental demand (M = 11.5 vs. 60.5) and frustration (M = 14.0 vs. 54.5) were also markedly lower for our system, highlighting its clear usability advantages in autonomous text-driven flight control. This project demonstrates a new interaction paradigm: text-to-cinema flight, where diffusion models act as the "creative operator" converting story intentions directly into aerial motion. Roman Akinshin, Elizaveta Lopatina, Kirill Bogatikov, Nikolai Kiz, Anna V. Makarova, Mikhail Lebedev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou, and Valerii Kangler (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; MSU Institute for Artificial Intelligence, Moscow, Russian Federation; Lomonosov Moscow State University, Moscow, Russian Federation) This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time while an eye-tracking headset and scene camera identify the object within the user’s focus. In our prototype, the same EMG recognition model that was originally developed for a conventional GPU is deployed as a spiking network on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves robust recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to three context-appropriate gestures for the currently viewed object, recognition accuracy increases to roughly 95% while excluding unsafe, object-inappropriate grasps. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control and has the potential to improve safety and usability in everyday activities for people with upper-limb amputation. Muhammad Haris Khan, Artyom Myshlyaev, Artem Lykov, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) We propose a new concept, Evolution 6.0, which represents the evolution of robotics driven by Generative AI. When a robot lacks the necessary tools to accomplish a task requested by a human, it autonomously designs the required instruments and learns how to use them to achieve the goal. Evolution 6.0 is an autonomous robotic system powered by Vision-Language Models (VLMs), Vision-Language Action (VLA) models, and Text-to-3D generative models for tool design and task execution. The system comprises two key modules: the Tool Generation Module, which fabricates task-specific tools from visual and textual data, and the Action Generation Module, which converts natural language instructions into robotic actions. It integrates QwenVLM for environmental understanding, OpenVLA for task execution, and Llama-Mesh for 3D tool generation. Evaluation results demonstrate a 90% success rate for tool generation with a 10-second inference time and action generation achieving 83.5% in physical and visual generalization, 70% in motion generalization, and 37% in semantic generalization. Future improvements will focus on bimanual manipulation, expanded task capabilities, and enhanced environmental interpretation to improve real-world adaptability. Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Safe navigation for the visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm system for location if the device is dropped. We evaluated the haptic perception accuracy across 22 participants. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single and double motor patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation. Yara Mahmoud, Yasheerah Yaqoot, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Humanoid robots must adapt their contact behavior to diverse objects and tasks, yet most controllers rely on fixed, hand-tuned impedance gains and gripper settings. This paper introduces HumanoidVLM, a vision–language-driven retrieval framework that enables the Unitree G1 humanoid to select task-appropriate cartesian impedance parameters and gripper configurations directly from an egocentric RGB image. The system couples a vision–language model for semantic task inference with a FAISS-based Retrieval-Augmented Generation (RAG) module that retrieves experimentally validated stiffness–damping pairs and object-specific grasp angles from two custom databases and executes them through a task-space impedance controller for compliant manipulation. We evaluate HumanoidVLM on 14 visual scenarios and achieve a retrieval accuracy of 93 %. Real-world experiments show stable interaction dynamics, with z-axis tracking errors typically within 1 cm to 3.5 cm and virtual forces consistent with task-dependent impedance settings. These results demonstrate the feasibility of linking semantic perception with retrieval-based control as an interpretable path toward adaptive humanoid manipulation. Yara Mahmoud, Jeffrin Sam, Khang Nguyen, Marcelino Julio Fernando, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Artem Lykov, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation; Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation) Safe Human–Robot Interaction in shared spaces requires robots to adapt compliance and speed to scene context and human proximity. Standards for collaborative operation specify limits on speed, force/pressure, and stiffness during contact or near-contact operation, motivating dynamic impedance rather than fixed gains. We present an egocentric vision pipeline for the Unitree G1 humanoid that retrieves task- and context-appropriate parameters via a Vision-Language Model (VLM) with Retrieval-Augmented Generation (RAG), and integrates inverse kinematics (IK) to produce joint references together with feed-forward torques for gravity compensation. Concretely, the system maps live head-camera frames to (Kp, Kd, v); IK provides (qref, qref, τff); and the low-level controller apply it at each joint. This enables joint-impedance scheduling and speed regulation in low-dynamic settings. We evaluate the system on dual-manipulation tabletop tasks with and without human presence in the scene. Compared to fixed-gain baselines, the VLM-RAG policy yields smooth speed reductions near humans, and maintained task success under environmental change. While the off-board perception–retrieval loop introduces noticeable delay that restricts use in highly dynamic interaction, the prototype demonstrates that semantic grounding of impedance remains a viable direction for safer humanoid collaboration. Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios. |
|
| Tsuchinaga, Masayoshi |
Marina Obata, Yuka Iwanaga, Masayoshi Tsuchinaga, Akitoshi Mizutani, Daniel R. J. Downes, and Sabrina Lee (Woven by Toyota, Tokyo, Japan; Toyota Motor Corporation, Toyota, Japan) This study examines the validity of the Motivation, Engagement, and Thriving in User Experience (METUX) model for evaluating remote operation interfaces of an assistive robot. Multiple real-life scenarios involving various target users were explored to assess whether Self-Determination Theory (SDT) factors in METUX effectively represent user well-being. Survey data from 2300 samples across three interfaces and four user groups underwent descriptive analysis, Confirmatory Factor Analysis (CFA), Multi-Group CFA (MGCFA) and Structural Equation Modelling (SEM). Results show significant correlations between SDT-defined Basic Psychological Needs (BPN) and well-being indicators, allowing researchers to examine well-being through interaction with assistive robots. MGCFA supported measurement invariance across interfaces but not across user groups, suggesting user-needs-centric interface development. The findings highlight the importance of validating metrics before product evaluation. In conjunction with planned user research in real-life settings using established metrics, our approach addresses research gaps in user well-being assessment within the HRI field. |
|
| Tsui, Katherine M. |
Jack Bassett, Long-Jing Hsu, Stephen Taylor, Janice Bays, Weslie Khoo, Katherine M. Tsui, David J. Crandall, and Selma Šabanović (Indiana University at Bloomington, Bloomington, USA; Dementia Action Alliance, Bloomington, USA; Toyota Research Institute, Cambridge, USA) Current robotic systems for dementia care primarily emphasize cognitive or functional support—reminding individuals to take medications, offering simple companionship, or providing sensory stimulation. These interventions are valuable, yet they do not always acknowledge the deeper existential challenges associated with memory loss. In this video, we present IRIS (Interactive Robot for Ikigai Support), a novel social robot designed not only to assist but to empower: to help individuals living with dementia reconnect with purpose, identity, and meaning. This 3-minute clip was part of a 3-year research project to develop a robot that can foster a sense of meaning for older adults living with dementia. This work highlights the lived experience of Steve Taylor, a participant whose long-term involvement shaped the robot from concept to deployment. By foregrounding his voice and day-to-day story, the video reveals how IRIS may act as a reflective companion — encouraging storytelling, sparking emotional connections, and enabling moments of clarity, dignity, and joy. The film frames the robot not merely as a tool, but as a mirror through which participants can rediscover fragments of self and meaning in everyday experiences. |
|
| Tsutsui, Ayaka |
Ayaka Tsutsui, Takahito Murakami, Tatsuki Fushimi, Ryosei Kojima, Kengo Tanaka, and Yoichi Ochiai (University of Tsukuba, Tsukuba, Japan) Embossed surfaces provide raised patterns and textures that can support access to information and act as tactile guidance on physical interfaces. Recent work has introduced a mold free embossing technique that combines High Intensity Focused Ultrasound (HIFU) with a robot arm to locally deform materials such as acrylic and MDF, enabling digitally controlled embossing. While this approach offers high design flexibility and is safe, research on how the resulting patterns are perceived through touch remains limited. In this work, we use a robot-controlled HIFU embossing system to create six textures that represent common tactile features relevant to exploratory touch, including contours, roughness, geometric shapes, and localized bumps on acrylic and MDF. Nineteen participants took part in a blindfolded tactile exploration study. Using both quantitative and qualitative data, we characterize how people perceive and differentiate textures produced through robot-controlled HIFU embossing. |
|
| Tung, Yi-Shiuan |
Yi-Shiuan Tung (University of Colorado at Boulder, Boulder, USA) Fluent human-robot interaction requires robots to anticipate human intent, understanding the goals, motion, and reward functions that drive their behavior. While prior work focuses on algorithms for reward alignment and intent inference, it often overlooks a critical factor shaping interaction: the environment. The physical and task environment governs how humans and robots perceive, act, and coordinate, yet it is typically treated as fixed rather than as something that can be designed. My research develops novel techniques for environment design (ED), optimizing the layout and structure of shared spaces to support reward alignment and enhance legibility. I formulate ED as an optimization over parameters that influence human and robot behavior, using Quality Diversity algorithms and Bayesian Optimization to identify informative and high performing environments. Building on preliminary results in tabletop collaboration and navigation tasks, I aim to close the loop between ED and human understanding---using ED not only to help robots better interpret humans, but also to help humans better interpret robot policies and intentions. Future directions will also explore how ED can be applied to improve key human factors such as trust, comfort, and interpretability in human-robot collaboration. Ultimately, I envision robots that not only learn from humans but also design their environments, actively shaping the world around them to foster trust, transparency, and fluent interaction. |
|
| Turner, Jessica |
Jessica Turner, Nicholas Vanderschantz, Jemma L. König, and Rafeea Siddika (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) The intentional design of robots to evoke creepiness provides a unique lens for studying human perception and willingness to engage. To understand user perceptions and acceptance of robots we developed a robot prototype designed with targeted facial, morphological, and movement features that may be perceived as "creepy". Using the Human-Robot Interaction Evaluation Scale (HRIES) we found that disturbance was moderate towards our intentionally creepy robot with significant participant variation. Furthermore, qualitative results confirmed this polarity, with descriptions ranging from "angry and unfriendly" to "cool and cute". This variability demonstrates that "creepiness" is more subjective than initially anticipated and highlights a key research gap in academic literature with the need for measurement tools which capture negative perceptions in HRI. Jessica Turner, Nicholas Vanderschantz, Judy Bowen, Jemma L. König, and Hannah Carino (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) Successful integration of social robots in education relies on the acceptance of robots in learning contexts by students. Using a participatory design workshop, students interacted with a KettyBot and ideated potential roles for robots in the classroom. This was followed by a questionnaire and the Godspeed Questionnaire Series (GQS) to understand student perceptions and attitudes towards social robots in education environments. Learners described potential use cases and our results demonstrate students envision robots as assistants rather than teachers, emphasising the importance of human connection in learning. |
|
| Tuyen, Nguyen Tan Viet |
Nguyen Tan Viet Tuyen, Athina Georgara, Lokesh Singh, Jayati Deshmukh, Sean Davey, Paul N. Tisdale, and Sarvapali Ramchurn (University of Southampton, Southampton, UK; St Monica Trust, Bristol, UK) "Stop and Watch" is an early-warning tool adopted by the UK NHS and is widely used in elderly care settings. The tool helps caregivers to recognise abnormal changes in residents’ health. Despite its clinical value, the process remains highly manual, workload-intensive, and vulnerable to missed observations, particularly in environments facing staff shortages, frequent staff rotations, and increasing care demands. We argue that AI-based systems such as Socially Assistive Robots (SARs) and Assistive Living Technologies (ALTs) offer promising avenues for supporting and enhancing the "Stop and Watch" tool. However, designing such systems requires a multidisciplinary effort to establish a comprehensive understanding of current practices and the priorities and concerns of all relevant stakeholders. This paper presents insights from a participatory design workshop held in a care home in the UK to explore how SARs and ALTs could meaningfully support the “Stop and Watch” tool, understand stakeholders’ expectations, perceived benefits, and concerns regarding deployment in this sensitive context. |
|
| Uchida, Takahisa |
Naoki Kodani, Yuya Komai, Kurima Sakai, Takahisa Uchida, and Hiroshi Ishiguro (University of Osaka, Toyonaka, Japan; ATR, Keihanna Science City, Japan; Osaka University, Osaka, Japan; Osaka University, Toyonaka, Japan) In recent years, avatar technology has been used in various forms, such as robots and CG agents. It is considered that avatars that behave autonomously could expand human capabilities, such as participating in social activities on behalf of the real person. In this study, we developed an autonomous dialogue system that reflects the operator's personality by using a Geminoid, which is an android modeled after the appearance of a specific person. Regarding such androids modeled after specific persons, previous research has reported at the interview level that people find it easier to talk to the android than to the real person it was modeled after. However, the relationship between such an avatar and the human it is modeled after for the interlocutor has not been quantitatively clarified. This study quantitatively evaluated the effect of the Geminoid with an autonomous dialogue system on participants' perceived relationship with the real person it was modeled after. The results showed that interacting with the developed system significantly increased the participants' sense of closeness toward the real person. Furthermore, since interacting with the real person model afterward did not significantly increase this sense of closeness, it is expected that this system sufficiently enhances closeness and can produce an effect equivalent to that of interacting with the real person. |
|
| Urgen, Burcu A. |
Umur Yıldız, Berk Yüce, Ayaz Karadağ, Tuğçe Nur Pekçetin, and Burcu A. Urgen (Bilkent University, Ankara, Türkiye) Large Language Models(LLMs) introduce powerful new capabilities for social robots, yet their black-box nature creates a barrier to trust. Transparency is already established as important for humanrobot trust, but how to convey LLM intentions and reasoning in real-time, embodied interaction remains poorly understood. We developed a task-level mechanistic transparency system for an LLM-powered Pepper robot that displays its internal reasoning process dynamically on the robot’s tablet during interaction. In a mixed-design study, participants engaged with Pepper across four trust-relevant tasks in either a Transparency-ON condition or Transparency-OFF condition. Transparency produced significantly greater trust growth than opacity, and a substantial increase in perceived reliability, indicating that transparency remains a key design element for trust calibration in LLM-driven human-robot interaction. |
|
| Vahdani, Saeed |
Saeed Vahdani and Amirhossein Nazari (Ferdowsi University of Mashhad, Mashhad, Iran) iSense Beyond is a wearable assistive system for the visually impaired (VI), representing a critical shift from performance-constrained embedded hardware (Version 1, Jetson/RealSense) to an agile, smartphone-centric platform. By leveraging Visual-Inertial Odometry (VIO) via smart glasses and a wearable Inertial Measurement Unit (IMU), V2 provides highly stable, low-latency spatial awareness, overcoming the safety risks associated with the previous high-latency Computer Vision (CV) architecture. The core Human-Robot Interaction (HRI) novelty is a distributed haptic system: hand/wrist haptics provide fine-grained, directional "pull" cues for localized object interaction, while foot haptics deliver continuous path guidance for locomotion. This architecture enables a new paradigm by integrating real-time physical assistance directly with structured educational training modules. This work details the technical migration, the multi-modal HRI protocol, and the required framework for deployment to enhance vocational and social inclusion for the VI community. |
|
| Valdenegro-Toro, Matias |
Paul Vogt, Yara Bikowski, and Matias Valdenegro-Toro (University of Groningen, Groningen, Netherlands) Social robots are increasingly being designed to support elderly care, but conversations between elderly people and robots often involve misunderstandings and confusion. This study explores the development of AI models to recognise confusion from facial expressions of elderly people during human-robot conversations. We collected a video dataset from the robot’s point of view in which elderly people interacted with a social robot through a language game. We trained two models to detect confusion from the facial expressions: (1) a LSTM network using Facial Action Units extracted from the data, and (2) a transfer learning model using ResEmoteNet on facial image data. Both models performed slightly above chance –the LSTM achieved 57% accuracy, while the ResEmoteNet model reached 53% accuracy on balanced data– indicating poor generalisation to new faces. Concluding, these findings suggest that confusion of elderly people cannot be reliably detected using facial expressions alone. We argue that this may be due to age-related changes in facial expression patterns, but also that it may be due to a reduced display of facial responses to robots by the elderly. |
|
| Valentine, William |
Weslie Khoo, Long-Jing Hsu, William Valentine, Brandon Connor, Tao Yat Kong, Amous Khoo, Waki Kamino, Selma Šabanović, and David Crandall (Indiana University at Bloomington, Bloomington, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; Independent, Singapore, Singapore; Cornell University, Ithaca, USA) IRIS, the Interactive Robot for Ikigai Support, is a social robot designed to help older adults explore life purpose and meaningful memories through interactive, reflective conversations. Implemented on LuxAI’s QT humanoid robot, IRIS combines structured prompts with AI-generated follow-ups to support dynamic, personalized dialogue that adapts to individual preferences and cognitive abilities. Co-designed with older adults through a longitudinal panel, IRIS incorporates nuanced emotional responsiveness, including the ability to recognize and engage with bittersweet emotions. For the interactive demo, participants will engage with IRIS in a controlled, immersive setting to encourage contemplation of meaning, impermanence, and personal purpose. This work highlights the potential of socially aware robots to empower older adults, support emotional well-being, and foster personal reflection, demonstrating how human–robot interaction can contribute to societal enrichment. Brandon Connor, William Valentine, Long-Jing Hsu, Waki Kamino, Selma Šabanović, David Crandall, and Weslie Khoo (Indiana University at Bloomington, Bloomington, USA; Rose-Hulman Institute of Technology, Terre Haute, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Cornell University, Ithaca, USA) This paper examines how robots can safely and intentionally evoke fear in humans. Drawing on popular culture, immersive theater, horror games, and theme parks, we highlight design strategies such as unpredictability, proximity violations, and narrative buildup. We discuss how robot morphology, motion, and ambient cues shape perceived threat, with implications for entertainment-focused Human-Robot Interaction (HRI) and for preventing unintentional fear in everyday interactions. The paper aims to spark discussion on leveraging fear as a meaningful, ethical, and safe emotional modality in human–robot interaction. |
|
| Vallejo, Paul |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. Hanna Parikh, Paul Vallejo, Elizabeth Phillips, Eileen Roesler, J. Gregory Trafton, and Laura Saad (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) As robots increasingly occupy more complex decision-making roles in society, understanding how people attribute blame for robots' actions is critical. Prior work suggests that attributions of control of a robot can influence blame, but the sources of this control and their relationship to blame attribution remain underexplored. We reanalyzed existing data from three published studies in which participants assigned blame to a robot for its decision in different moral dilemmas. We first developed three categories of attributions of control for a robot’s actions (Self, Predetermined, and External), then employed a qualitative coding procedure to analyze participants' clarifications for their assigned blame. Blame and trust ratings were analyzed across control categories. Across all three studies, robots perceived as controlling their own actions (Self) were consistently blamed more than those in other categories. Attributions of control had a minimal impact on trust. Taken together, these findings indicate that attributions of control impact how people assign blame to robots, which may have moral and legal implications. |
|
| Vallès-Peris, Núria |
Tamlin Love, Víctor Bermejo, Alberto Olivares-Alarcos, Antonio Andriella, Núria Vallès-Peris, Cristian Barrué, and Guillem Alenyà (CSIC-UPC, Barcelona, Spain; IIIA-CSIC, Barcelona, Spain; Universitat Autònoma de Barcelona, Barcelona, Spain) Explainability has been proposed as an approach to robot failure recovery, facilitating understanding and repairing trust, especially relevant in domestic assistive tasks. This study conducts a preliminary exploration of older adults' preferences regarding the content and context of robot-generated explanations for failures to guide future research. An exploratory study was conducted in three phases: 1) gathering high-level requirements from caregivers, 2) implementing a semi-autonomous robot for object retrieval that identifies and explains different types of failures, and 3) engaging N=8 older adults in real-life interactions as well as in focus groups to assess their perspectives. Our preliminary observations highlight a tension in preferences: a general desire for short, direct explanations to minimize disruption, versus a need for more detailed, actionable explanations specifically in failure cases. Crucially, we also note that these preferences are unstable and contextually constructed, reinforcing that the technical failures cannot be separated from their social context, as users' experiences and opinions are shaped by both the robot’s functional capabilities and the values and organisational settings in which they are introduced. |
|
| Valuev, Ivan |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. |
|
| Van de Laar, Roel |
Roel van de Laar and Kim Baraka (Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Teleoperating high degree-of-freedom (DoF) robots such as humanoids in interactive settings remains challenging due to high operator workload, and limited situational awareness. The majority of existing interfaces often relies on graphical dashboards, limiting natural, embodied control. This study explores an alternative paradigm: "Robot-as-Interface," where one humanoid robot (the puppet) is physically manipulated to control another (the performer) through direct joint-to-joint mapping. Following a co-design session with expert users, we developed an improved interface featuring joint locking, head orientation control, blockage detection, and a pausing toggle. A between-subjects user study (N=26) compared this expert-informed system against a baseline. Results show significantly improved system usability (SUS) and a reduction in perceived workload. Observations further revealed the importance of operator pacing, spatial positioning, and clear system feedback. Overall, results indicate that expert-informed enhancements can improve usability and operator experience in puppet–performer teleoperation, provided that hardware limits and user training are carefully addressed. |
|
| Vanderschantz, Nicholas |
Jessica Turner, Nicholas Vanderschantz, Jemma L. König, and Rafeea Siddika (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) The intentional design of robots to evoke creepiness provides a unique lens for studying human perception and willingness to engage. To understand user perceptions and acceptance of robots we developed a robot prototype designed with targeted facial, morphological, and movement features that may be perceived as "creepy". Using the Human-Robot Interaction Evaluation Scale (HRIES) we found that disturbance was moderate towards our intentionally creepy robot with significant participant variation. Furthermore, qualitative results confirmed this polarity, with descriptions ranging from "angry and unfriendly" to "cool and cute". This variability demonstrates that "creepiness" is more subjective than initially anticipated and highlights a key research gap in academic literature with the need for measurement tools which capture negative perceptions in HRI. Jessica Turner, Nicholas Vanderschantz, Judy Bowen, Jemma L. König, and Hannah Carino (University of Waikato, Tauranga, New Zealand; University of Waikato, Hamilton, New Zealand) Successful integration of social robots in education relies on the acceptance of robots in learning contexts by students. Using a participatory design workshop, students interacted with a KettyBot and ideated potential roles for robots in the classroom. This was followed by a questionnaire and the Godspeed Questionnaire Series (GQS) to understand student perceptions and attitudes towards social robots in education environments. Learners described potential use cases and our results demonstrate students envision robots as assistants rather than teachers, emphasising the importance of human connection in learning. |
|
| Van de Vreken, Seppe |
Giulio Antonio Abbo, Ruben Janssens, Seppe Van de Vreken, and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Enabling natural robot communication through dynamic, context-aware facial expressions remains a key challenge in human-robot interaction. The field lacks a system that can generate facial expressions in real time and can be easily adapted to different contexts. Early work in this area considered inherently limited rule-based systems or deep learning-based models, requiring large datasets. Recent systems using large language models (LLMs) could not yet generate context-appropriate facial expressions in real time. This paper introduces Expressive Furhat, an open-source algorithm and Python library that leverages LLMs to generate real-time, adaptive facial gestures for the Furhat robot. Our modular approach separates gesture rendering, new gesture generation, and gaze aversion, ensuring flexibility and seamless integration with the Furhat API. User studies demonstrate significant improvements in user perception over a baseline system, with participants praising the system's emotional responsiveness and naturalness. |
|
| Vas, Mark W. |
Chalmers Ong, Hannah Lee, Koen de Nie, Mark W. Vas, Peter Holzer, Yasaman Shamssabzevar, Yu-Ai Jiang, J. Micah Prendergast, Alper Semih Alkan, and Serdar Aşut (Delft University of Technology, Delft, Netherlands) Robotics in creative industries is sometimes met with concern over the eventual replacement of humans. This paper proposes a framework for a new form of spatial craft that emerges through human-robot collaboration, using the Delft Blue painting process as a case study. We employ Learning from Demonstration (LfD) through kinesthetic guidance of a KUKA iiwa robotic arm to capture expert brushstroke trajectories, which are then modelled using a Long Short-Term Memory Variational Autoencoder (LSTM-VAE) to generate novel, stylistically coherent strokes. Subsequently, we evaluated these generated trajectories through physical execution on ceramic tiles, inviting a human painter to interpret and complete one of the robot-produced compositions. The results demonstrate that robotic machine learning can support a new co-crafting workflow, which does not substitute human artists but expands and transforms traditional craft practices for new creative opportunities to emerge. |
|
| Veeramani, Satheeshkumar |
Satheeshkumar Veeramani, Anna Kisil, Abigail Bentley, Hatem Fakhruldeen, Gabriella Pizzuto, and Andrew I. Cooper (University of Liverpool, Liverpool, UK) Self-driving laboratories (SDLs) are rapidly transforming research in chemistry and materials science to accelerate new discoveries. Mobile robot chemists (MRCs) play a pivotal role by autonomously navigating the lab to transport samples, effectively connecting synthesis, analysis, and characterisation equipment. The instruments within an SDL are typically designed or retrofitted to be accessed by both human and robotic chemists, ensuring operational flexibility and integration between manual and automated workflows. In many scenarios, human and robotic chemists may need to use the same equipment simultaneously. Currently, MRCs rely on simple LiDAR-based obstruction detection, which forces the robot to passively wait if a human is present. This lack of situational awareness leads to unnecessary delays and inefficient coordination in time-critical automated workflows in human-robot shared labs. To address this, we present an initial study of an embodied, AI-driven perception method that facilitates proactive human-robot interaction in shared-access scenarios. Our method features a hierarchical human intention prediction model that allows the robot to distinguish between preparatory actions (waiting) and transient interactions (accessing the instrument). Our results demonstrate that the proposed approach enhances efficiency by enabling proactive human–robot interaction, streamlining coordination, and potentially increasing the efficiency of autonomous scientific labs. |
|
| Vella, Elena Marie |
Elena Marie Vella, Kim Vincs, Casey Richardson, and John McCormick (Swinburne University of Technology, Melbourne, Australia) Human–robot interaction (HRI) is moving beyond single-operator settings towards scenarios where robots must interpret multiple simultaneous human signals. Existing systems often assume a single input stream, which constrains expressiveness and limits collective participation. To address this, we introduce a depth-camera framework that supports natural gesture-based control, without user-specific training or personalization. A multi-input controller unifies diverse whole-body movements and extends seamlessly to multi-human interaction. Studies with dancers show how embodied practice can shape responsiveness and inclusivity, demonstrating the framework’s capacity to democratize robot control and enhance collective agency. By treating human movement as a shared control medium, the framework supports equitable participation and illustrates how embodied expertise can guide more inclusive HRI design. |
|
| Verhelst, Eva |
Eva Verhelst and Tony Belpaeme (Ghent University – imec, Ghent, Belgium) Recent advances in generative AI and social robotics have opened new possibilities for robot-assisted language learning, yet integrating these technologies in pedagogically sound ways remains challenging. This paper matches theories of language learning to the design of autonomous robot tutors. Usage-based language learning, learning in context, Self-Determination Theory and Dual Coding Theory lend themselves to being operationalised for Robot-Assisted Language Learning. We present a proof-of-concept shared story-building system, in which a learner co-creates a story with a robot tutor. The system leverages large language models for dynamic content generation, automatic speech recognition for learner input, and image generation to provide multimodal scaffolding. By embedding vocabulary, adapting to learner input, and avoiding explicit corrections, the system aligns with usage-based and interactionist theories of language acquisition. We discuss the technological enablers and barriers - such as large language model adaptability and automatic speech recognition limitations - and propose directions for future work. This work contributes to the growing field of AI-powered social robots in education, demonstrating how theory-driven design can enhance engagement and learning outcomes. |
|
| Verhoef, Eva S. |
Wilbert Tabone, Benedetta Lusi, Alessandro Ianniello, J. Micah Prendergast, Deborah Forster, Olger Siebinga, Maria Luce Lupetti, Eva S. Verhoef, Dave Murray-Rust, Marco C. Rozendaal, Ann M. Pendleton-Jullian, and David Abbink (Delft University of Technology, Delft, Netherlands; Erasmus University Medical Centre, Rotterdam, Netherlands; Politecnico di Torino, Turin, Italy; RoboHouse, Delft, Netherlands; Ohio State University, Columbus, USA) Building on two previous workshops on transdisciplinary practices for shaping worker-robot relations, this half-day workshop introduces participants to worldbuilding, a design-driven technique used to co-create and explore richly detailed futures, as a way to empower workers and scholars in reimagining plausible and preferable future worker-robot relations (WRRs). WRRs describe the interactions, collaborations, and shared practices between workers and robotic systems in organisational contexts. The workshop begins with an introduction to WRRs, and a keynote by a worldbuilding expert that will outline the method and its value for envisioning future WRRs. Groups of workshop participants will then investigate concrete case studies that demonstrate how robotic systems can support workers in their practice, with a focus on enhancing wellbeing. Through interactive activities in this workshop, participants will co-create imagined worlds of work, which will be analyzed systemically across multiple levels of complexity, from the individual worker and their immediate context to broader societal implications. The workshop ultimately aims to build a community committed to shaping sustainable futures of robot-assisted work. |
|
| Verma, Vinay Kumar |
Vinay Kumar Verma (IIIT Delhi, New Delhi, India) Ensuring intuitive and trustworthy collaboration between humans and robots requires interactions that are predictively verifiable, not merely reactive. In many canonical HRI scenarios—such as object handovers, joint attention, or turn-taking—current systems detect coordination failures only after breakdowns occur and lack a formal basis for defining interaction success in advance. We introduce a minimal, falsifiable grammar of interaction that models coordination using a small set of primitives (signal, request, response, acknowledge) and temporal operators (sequence, concurrency, disruption, repair). By binding these operators to measurable budgets of latency, synchrony, and retry counts, the grammar specifies when interactions succeed, deviate, or require repair. This enables robots to anticipate and respond to interactional breakdowns (e.g., delayed reaches or missed acknowledgments) before task failure occurs. The framework provides a path toward auditable, predictable, and human-aligned robot behavior, grounding formal verification methods in everyday HRI practice. |
|
| Vermeulen, Jasper T. |
Jasper T. Vermeulen (Queensland University of Technology, Brisbane, Australia) Human-Robot Collaboration (HRC) research asks how humans and robots can work together to improve outcomes and sustain human work. Yet much existing knowledge comes from laboratory settings and simplified dyads, which can miss how collaboration unfolds in real-world practice. This PhD project examines HRC in robot-assisted orthopaedic surgery, focusing on Mako-assisted procedures. Rather than treating the robot as an autonomous teammate, the study examines how a surgeon-directed robotic system reshapes teamwork by distributing roles, responsibilities, and workload across the surgical team. Using Socio-Technical Grounded Theory across (1) video analysis, (2) interviews, and (3) in-theatre observations, the project develops a theory of ensemble HRC that characterises how teams coordinate, communicate, calibrate trust, manage attention, and organise space under clinical constraints. These insights inform design principles for HRC systems that complement human expertise and integrate into existing work practices. |
|
| Vernero, Fabiana |
Filipa Correia, Cristina Gena, Alberto Lillo, Laura Lossi, Claudio Mattutino, Valentina Nisi, Linda Pigureddu, and Fabiana Vernero (Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; University of Turin, Turin, Italy; IST University of Lisbon, Lisbon, Portugal) This paper explores the emerging field of Animal-Robot Interaction (ARI) by questioning the way humanoid robots could engage with dogs in domestic settings, a scenario which we expect to become prevalent in the near future. While existing research focuses on human-robot interaction, ARI remains quite underexplored, particularly regarding how dogs perceive and respond to robotic agents. In contrast, this work proposes a playful and relational design perspective to foster more natural interactions between dogs and robots. We sought guidance from the Department of Veterinary Medicine to ensure our experimental design offered an ethical and stress-free environment for the dogs. Hence, the study incorporated best practices in robot vocalisation, gestural communication, and movement adaptation, improving the robot’s ability to interact naturally. The experiment involved six dogs, always in the presence of their owners. Preliminary findings indicate that dogs can react with curiosity and engagement, but also avoidance or fear, challenging assumptions about robot acceptability in animal interactions. We argue for rethinking robotic design beyond human-centric paradigms, advocating for pluralistic and open-ended approaches to ARI. |
|
| Victor, Sandra |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Vilca, Macarena |
Alejandra Patiño, Emerson E. Mejia-Trebejo, Macarena Vilca, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Peru recycles less than 2% of waste despite high potential. Current solutions fail at two extremes: passive bins cause confusion, while automated bins create dependency. We introduce PERI (Peer Educational Recycling Instructor), a social robot designed not just to sort but to teach. PERI uses a YOLOv8-based vision module to validate user decisions in real-time. This paper demonstrates PERI’s deployment with over 500 interactions. Our results show that 80% of users corrected their sorting mistakes through a combination of PERI’s feedback and facilitator mediation, transforming technical limitations into educational moments and empowering citizens as agents of change. Emerson E. Mejia-Trebejo, Macarena Vilca, Alejandra Patiño, and Denis Peña (Pontifical Catholic University of Peru, Lima, Peru) Passive infrastructure fails to bridge the recycling "Intention-Action" gap. This paper presents Peri V2, a "Symbiotic Retrofit" kit that transforms standard 120L bins into intelligent pedagogical agents without structural waste. The architecture deploys edge-based per- ception to execute a novel behavioral loop: a Temporal Intention Filter (5s heuristic) to parse social signals, Just-in-Time Associative Feedback for cognitive reinforcement, and Ludic Generalization challenges to verify learning transfer. A preliminary "in-the-wild" pilot (𝑁 ≈ 200) demonstrated the operational feasibility of the intention filter in noisy environments. Furthermore, qualitative feedback from recurring users (𝑁 ≈ 15) suggests that replacing voice interactions with visual cues improves acceptance by mini- mizing the social pressure of public disposal. Peri V2 proposes a scalable model for frugal HRI, shifting the focus from automated cities to empowered "Smart Citizens." |
|
| Vincs, Kim |
Elena Marie Vella, Kim Vincs, Casey Richardson, and John McCormick (Swinburne University of Technology, Melbourne, Australia) Human–robot interaction (HRI) is moving beyond single-operator settings towards scenarios where robots must interpret multiple simultaneous human signals. Existing systems often assume a single input stream, which constrains expressiveness and limits collective participation. To address this, we introduce a depth-camera framework that supports natural gesture-based control, without user-specific training or personalization. A multi-input controller unifies diverse whole-body movements and extends seamlessly to multi-human interaction. Studies with dancers show how embodied practice can shape responsiveness and inclusivity, demonstrating the framework’s capacity to democratize robot control and enhance collective agency. By treating human movement as a shared control medium, the framework supports equitable participation and illustrates how embodied expertise can guide more inclusive HRI design. |
|
| Vinel, Alexey |
Akhila Bairy, Diana Burkart, Maximilian Schrapel, Jan Meyerhoefer, John Pravin Arockiasamy, Andy Comeca, Manuel Bied, Alexey Vinel, and Maike Schwammberger (KIT, Karlsruhe, Germany) Effective explainability in human-robot interaction (HRI) requires communication strategies that remain clear in dynamic settings. Unimodal explanations often falter under noise or cognitive load, making multimodal explanations crucial for real-world interaction. For social robots in traffic environments equipped with Vehicle-to-Everything (V2X) communication, timely and interpretable explanations are vital. This late-breaking report presents preliminary findings from a Virtual Reality (VR) user study on which modalities pedestrians perceive as most and least effective across different scenarios, revealing clear scenario-dependent differences in modality effectiveness and highlighting which cues are actually noticed. |
|
| Vines, John |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Viswanath, Anargh |
Anargh Viswanath (Bielefeld University, Bielefeld, Germany) Proactive behaviour is crucial for creating robots that provide meaningful, context-specific assistance instead of passively reacting to commands. Proactivity entails anticipating user needs and taking initiative appropriately to provide assistance. This motivates three guiding research questions: 1) is proactive behaviour desirable, 2) when should a robot act proactively, and 3) how can proactive behaviour be implemented in robots. My prior empirical studies carried out on spoken interaction initiation revealed broad differences in user preferences, highlighting the need for adaptive proactive interaction strategies. My current research work entails multimodal audio-visual modelling to enable context understanding and collaborative work to develop evaluation tools for assessing proactivity and its influence on user experience. Future work will focus on developing software infrastructure to integrate these components into robot platforms and evaluate proactive behaviours, enabling iterative design and systematic refinement. |
|
| Vittot, Méa |
Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Vogelius, Patrick |
Pol Barrera Valls, Patrick Vogelius, Tobias Florian von Arenstorff, Matouš Jelínek, and Oskar Palinko (University of Southern Denmark, Odense, Denmark; University of Southern Denmark, Sønderborg, Denmark) The development of humanoid robots has experienced a sudden acceleration during the last years, due to the large advancements made in actuation technology, generative AI and computer vision. The design of humanoid robots makes them useful in scenarios where many different tasks must be achieved, and humans are present. Furthermore, their resemblance to humans opens new ways of communication when compared to traditional robots. However, humanoid robots may find themselves in a situation where human assistance is required, e.g. due to limitations in their sensing and movement capabilities. As such, different help-seeking strategies and their effectiveness need to be explored. This article compares the effect of inducing empathy and guilt in humans as means to request help after a mistake made by a robot. An in the wild experiment conducted between subjects was performed in the University of Southern Denmark (SDU) with a total of 123 participants across 3 scenarios of help-seeking strategies, described as: distressed, sarcastic, and neutral. The results showed a statistical difference between the strategies, proving that using the concepts of empathy and guilt elicitation with robots has the potential to improve human-humanoid collaboration. |
|
| Vogt, Paul |
Paul Vogt, Yara Bikowski, and Matias Valdenegro-Toro (University of Groningen, Groningen, Netherlands) Social robots are increasingly being designed to support elderly care, but conversations between elderly people and robots often involve misunderstandings and confusion. This study explores the development of AI models to recognise confusion from facial expressions of elderly people during human-robot conversations. We collected a video dataset from the robot’s point of view in which elderly people interacted with a social robot through a language game. We trained two models to detect confusion from the facial expressions: (1) a LSTM network using Facial Action Units extracted from the data, and (2) a transfer learning model using ResEmoteNet on facial image data. Both models performed slightly above chance –the LSTM achieved 57% accuracy, while the ResEmoteNet model reached 53% accuracy on balanced data– indicating poor generalisation to new faces. Concluding, these findings suggest that confusion of elderly people cannot be reliably detected using facial expressions alone. We argue that this may be due to age-related changes in facial expression patterns, but also that it may be due to a reduced display of facial responses to robots by the elderly. |
|
| Vollmer, Anna-Lisa |
Mara Brandt, Kira Sophie Loos, Mathis Tibbe, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany) Children often find themselves in challenging situations, such as medical examinations, where they have limited opportunities to make autonomous decisions and experience their own agency. This study explores whether a warm-up interaction with a social robot can strengthen children’s perceived self-efficacy. We hypothesized that a teaching scenario, where the child instructs the robot, would yield stronger self-efficacy gains than a storytelling activity. In a pre-study, 20 children (6 – 12 years) were assigned to two conditions: teaching the humanoid robot Pepper to play ball-in-a-cup or co-creating a story with Pepper. Perceived self-efficacy was assessed with a 9-item questionnaire before and after the interaction, and parents reported child temperament using the German IKT questionnaire (Inventar zur integrativen Erfassung des Kind-Temperaments). Overall, children showed a small, significant increase in self-efficacy from pre- to post-interaction, with a stronger descriptive trend in the teaching condition and minimal change in storytelling. Shyness was not related to baseline self-efficacy, self-efficacy gains, or the relative effectiveness of the two conditions. Apart from one outcome, effects did not reach statistical significance, as expected given the small sample size. The observed trend toward higher self-efficacy in the teaching condition suggests that further studies with larger samples are warranted. Such research could clarify the potential of social robots to provide effective warm-up interactions that help children feel more confident in upcoming tasks, such as medical examinations. Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Von Arenstorff, Tobias Florian |
Pol Barrera Valls, Patrick Vogelius, Tobias Florian von Arenstorff, Matouš Jelínek, and Oskar Palinko (University of Southern Denmark, Odense, Denmark; University of Southern Denmark, Sønderborg, Denmark) The development of humanoid robots has experienced a sudden acceleration during the last years, due to the large advancements made in actuation technology, generative AI and computer vision. The design of humanoid robots makes them useful in scenarios where many different tasks must be achieved, and humans are present. Furthermore, their resemblance to humans opens new ways of communication when compared to traditional robots. However, humanoid robots may find themselves in a situation where human assistance is required, e.g. due to limitations in their sensing and movement capabilities. As such, different help-seeking strategies and their effectiveness need to be explored. This article compares the effect of inducing empathy and guilt in humans as means to request help after a mistake made by a robot. An in the wild experiment conducted between subjects was performed in the University of Southern Denmark (SDU) with a total of 123 participants across 3 scenarios of help-seeking strategies, described as: distressed, sarcastic, and neutral. The results showed a statistical difference between the strategies, proving that using the concepts of empathy and guilt elicitation with robots has the potential to improve human-humanoid collaboration. |
|
| Von Rakowski, Matthias |
Matthias von Rakowski, Antoine Esman, Méa Vittot, Kimia Nazari, Christian Dondrup, and Shenando Stals (Heriot-Watt University, Edinburgh, UK) With distractions and sedentary habits, maintaining focus and healthy routines is increasingly challenging. While video tutorials are accessible, they lack engagement and personalised feedback. In contrast, professional human coaching is effective but costly and difficult to schedule. To bridge this gap, we introduce "Marty". As a low-cost social robot, it combines the convenience of home practice with the responsive support of an embodied coach. Using computer vision, it leads yoga sessions, demonstrates poses, and provides real-time corrective feedback. By offering accessible and adaptive guidance, this system aims to interrupt sedentary patterns and empower users to build sustainable wellness habits. |
|
| Von Seelstrang, Leander |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Voos, Holger |
Laura Ribeiro, Holger Voos, and Jose Luis Sanchez-Lopez (University of Luxembourg, Luxembourg, Luxembourg) Reliable perception is essential for collaborative robots operating safely in shared human environments. However, automated entity detection systems still produce errors that degrade a robot's understanding of its surroundings. We present a Human-in-the-Loop (HITL) framework that enables user operators to validate and correct entity recognition and detection through an interactive Mixed Reality (MR) application and interface. Detected entities are visualized as aligned holograms, allowing users to confirm or remove them through intuitive, gesture-based spatial interactions. Our proposed method demonstrates that this shared environment and its interaction approach are functional and effective for correcting detections in real-time. By integrating the HITL approach, our system evaluation produces a more accurate representation of the shared environment and establishes the foundation for future extensions, including safer and more effective human–robot interaction and collaboration. Abolfazl Zaraki, Hamed Rahimi Nohooji, Maryam Banitalebi Dehkordi, and Holger Voos (University of Hertfordshire, Hatfield, UK; University of Luxembourg, Luxembourg, Luxembourg) This paper reframes shared autonomy as an interpretable interaction space centered on the human and bounded by safety. Building on this perspective, we introduce a Human-Centred Tri-Region Shared Autonomy Framework that organises interaction into three regions: Human-Led, Robot-Supported, and Safety-Intervention. The framework formalises how autonomy shifts as interaction conditions evolve, while an Interaction State Interpreter maps multimodal user and task observations to region-dependent behaviours. This structure enables autonomy transitions that remain explicit and behaviourally grounded across diverse human-robot interaction contexts, including physical collaboration, social engagement, and cognitive assistance. A physical interaction scenario illustrates how the proposed formulation can be realised through adaptive impedance and constraint-aware feedback, enabling smooth transitions between collaborative support and protective intervention. By structuring autonomy around human authority, supportive assistance, and safety enforcement, the framework provides a clear basis for adaptive human-robot interaction. Hamed Rahimi Nohooji, Abolfazl Zaraki, and Holger Voos (University of Luxembourg, Luxembourg, Luxembourg; University of Hertfordshire, Hatfield, UK) This paper proposes soft robotic embodiments as interaction-level regulators of sustainability in human–robot interaction, where sustainability is shaped at the moment of physical contact rather than enforced through post hoc system-level efficiency optimization or material selection. Under long-term deployment, how interaction is regulated in terms of intensity, frequency, and force transmission directly determines cumulative energy consumption, mechanical wear, and maintenance demand. Soft robotic embodiments regulate these interaction characteristics through compliance, passive adaptation, and geometry-driven deformation, constraining interaction effort before active control is applied. In doing so, interaction behavior is directly coupled with energy use, interaction-induced degradation, and lifecycle considerations at the system level. |
|
| Vorreuther, Anna |
Kathrin Pollmann, Selina Layer, Amelie Polosek, Boyu Xian, and Anna Vorreuther (Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany; University of Stuttgart, Stuttgart, Germany) This paper explores how adhesive signs on public robots can prevent robot bullying. Participants were presented with three different sign variants attached to a cleaning robot in a Virtual Reality scenario: informative (alluding to surveillance/ legal consequences), prompting (imperative to keep away from the robot), and feeling (emotional appeal) and reported their tenden-cies for anti-bullying behavior and perceptions of the robot. Eye tracking was used to measure visual attention. All signs elicited anti-bullying tendencies and were rated comprehensible. The robot with the feeling sign was perceived most human- and least tool-like, capable of emotions, and induced the highest amount of gaze fixations. The informative sign supported fast, low-effort compre-hension and reinforced a tool-like perception. Findings suggest adhesive signs are a viable, low-obtrusive preventive strategy and sign selection should be context-driven: informative for quick pass-by messaging, feeling for deeper engagement. |
|
| Vuddagiri, Sahitha |
Sahitha Vuddagiri (Rice University, Houston, USA) As the global population ages and healthcare workforces face shortages, assistive robots offer promise for supporting both care recipients and caregivers. However, successful long-term deployment remains limited, as current systems often fail to account for the diversity of stakeholder populations and the contexts in which they operate. My research addresses this gap by investigating how individual roles and sociocultural factors shape trust, adoption, and effective use of assistive robots. Through longitudinal in-situ studies across culturally distinct contexts, I aim to build personalized assistive robotic systems that are intuitive, trustworthy, and contextually appropriate, adapting their social communication and shared autonomy behaviors to user preferences. By developing and deploying these systems in collaboration with users, and embedding sociotechnical factors directly into their functionality, this work advances human-centered robotics that can meaningfully support and integrate into real-world care relationships and workflows. |
|
| Wachowiak, Lennart |
Lennart Wachowiak, Andrew I. Coles, Oya Celiktutan, and Gerard Canal (King’s College London, London, UK) Robots operating in human environments should be able to answer diverse, explanation-seeking questions about their past behavior. We present a neurosymbolic pipeline that links a task planner with a unified logging interface, which attaches heterogeneous XAI artifacts (e.g., visual heatmaps, navigation feedback) to individual plan steps. Given a natural language question, a large language model selects the most relevant actions and consolidates the associated logs into a multimodal explanation. In an offline evaluation on 180 questions across six plans in two domains, we show that an LLM-based question matcher retrieves relevant plan steps accurately (F1 Score of 0.91), outperforming a lower-compute embedding baseline (0.62) and a rule-based syntax/keyword matcher (0.02). A preliminary user study (N=30) suggests that users prefer the LLM-consolidated explanations over raw logs and planner-only explanations. Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Wachsmuth, Sven |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Wacquez, Julien |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| Wallbridge, Christopher D. |
Christopher D. Wallbridge, Kavyaa Somasundaram, Frank Förster, Maia Stiber, Eduardo B. Sandoval, Chinmaya Mishra, Christina E. Stimson, Patrick Holthaus, and Hatice Gunes (Cardiff University, Cardiff, UK; Örebro University, Örebro, Sweden; University of Hertfordshire, Hatfield, UK; Microsoft Research, Redmond, USA; UNSW, Sydney, Australia; MPI for Psycholinguistics, Nijmegen, Netherlands; University of Sheffield, Sheffield, UK; University of Cambridge, Cambridge, UK) This workshop proposes looking at errors in humans and robots. The workshop will focus along three axes: Human, Robot and Interaction induced errors. Errors in Human-Robot Interaction (HRI) pose a crucial challenge for the deployment of robots. Errors can affect the safety and trust of users, therefore it is important for empowering society that these robots can handle errors robustly. The workshop intends to foster interdisciplinary discussion among researchers in robotics, HCI, cognitive science, and social sciences, to encourage the development of robust, user-centered, and socially responsible HRI systems. The invited speakers, paper presentations and discussions will focus around topics of detecting errors, handling errors, adapting to errors, restoring trust, using intentional errors, communication strategies, issues of expectation as well as look at design, interdisciplinary and ethical approaches to research. In this way we will help inform research into errors, and develop robotic systems capable of robust interaction and collaboration. |
|
| Wang, Allan |
Howard Ziyu Han, Ying Zhang, Allan Wang, and Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, USA; Miraikan - National Museum of Emerging Science and Innovation, Tokyo, Japan) Using robot simulators in participatory human-robot interaction design can expand the interactions end-users can experience, articulate, and reshape during co-design. In robot social navigation, high-fidelity simulations have largely been developed for benchmarking algorithms and developing robot policy. However, less attention has been given to supporting end-user exploration and articulation of concerns. In this late-break report, we present design considerations and a system implementation that extend an existing social navigation simulator (SEAN 2.0) to support community-driven feedback and evaluation. We add features to the SEAN 2.0 platform to enable richer sidewalk scenario construction, interactive reruns, and robot signaling exploration. Finally, we provide a user scenario and discuss future directions for using participatory simulation to broaden stakeholder involvement and inform socially responsive navigation design. |
|
| Wang, Borui |
Nanyi Jiang, Borui Wang, and Xiaozhen Liu (Cornell University, Ithaca, USA) Housekeeper carts are essential in hotel operations, supporting the maintenance of the hotel’s physical environment and services. While housekeeping staff are their main users, carts are also highly visible to guests, making them not only tools but also sites where hotel experiences are shaped. This project re-designs housekeeper carts to address both their functional and experiential value to primary users and bystanders. We present a modularized cart with an in-depth development of the laundry module. Considering hotels’ need for trustworthy and polite interactions, we designed non-verbal behaviors that allow the robot to express etiquette. |
|
| Wang, Chenghao |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Wang, Chenyang |
Chenyang Wang, Julien Jordan, Alice Reymond, and Pierre Dillenbourg (EPFL, Lausanne, Switzerland) As AI becomes increasingly integrated into everyday life, supporting children’s AI literacy is essential. While prior work in Child-Robot-Interaction has primarily used robots as programmable artefacts or learning companions for introducing AI concepts, the role of a robot as an embodied AI student remains underexplored. We investigate social robot teaching as a pathway to help children intuitively understand supervised learning. We designed a prototype in which children teach a robot using biased and unbiased training data and iteratively observe its performance. A pilot study with three children preliminarily examines: 1) whether and how this interaction fosters intuitive understanding of AI training and bias, and 2) initial design considerations for future prototype interactions. Our findings offer early evidence of the potential of social robot teaching for AI literacy. |
|
| Wang, Fan |
Fan Wang, Yuan Feng, Wijnand IJsselsteijn, and Giulia Perugia (Eindhoven University of Technology, Eindhoven, Netherlands; Northwestern Polytechnical University, Xi’an, China) People Living with Dementia (PLwD) require intensive emotional and physical support, and caregivers frequently struggle with exhaustion and distress. Social robots have been proposed as tools that could enhance socio-emotional well-being, yet many of their designs inherently involve deception, embedding cues that mislead PLwD about the nature and capabilities of the robot. Although Ethics of Technology and Human-Robot Interaction (HRI) explored the concept of Social Robotic Deception (SRD) and its implications, existing discussions remain largely theoretical and detached from the lived realities of dementia care. We know little about how caregivers see and envision the use of SRD in dementia care practice. To address this gap, we conducted two online focus groups with both formal and informal caregivers, with the aim of appraising caregivers' attitudes towards SRD and how they would implement or mediate deception in everyday practice. Critically, we focused on caregivers operating in China, a country of Confucian influence where family caregiving is regarded as a moral duty and leveraging institutional care is stigmatized. Our work contributes empirically grounded insights that highlight lived reality in dementia care shaped by culture for ethical SRD design. |
|
| Wang, Hanyang |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Wang, Hong |
Xiangfei Kong, Rex Gonzalez, Nicolas Echeverria, Sofia Cobo Navas, Thuc Anh Nguyen, Divyamshu Shrestha, Sebastian Ramirez-Vallejo, Deekshita Senthil Kumar, Andrew Texeira, Thao Nguyen, Deep Akbari, Hong Wang, Hongdao Meng, and Zhao Han (University of South Florida, Tampa, USA) HRI research shows that social robots support human companionship and emotional well-being. However, the cost of high-end social robots limits their accessibility, and, yet, existing low-cost platforms cannot fully support users emotionally due to limited interaction capabilities. In this demo paper with accompanying video, we showcase Bloom, a low-cost social robot that combines touch sensing with LLM-driven conversation to facilitate more expressive and emotionally engaging interactions with humans. Bloom also integrates a customizable 3D-printed shell with a flexible software pipeline that enables touch-responsive movement, LLM-driven dialogue, and face tracking during conversations, offering more expressive interaction capabilities at a low cost, which remains easy to fabricate and adaptable across a wide range of societal applications, including mental health, companion, and healthcare. Hong Wang, Katie Winkle, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) This paper presents a multidimensional risk framework specifically designed for foundation model (FM)-driven human-robot interaction (HRI). We systematically categorize the roles of FM into two primary layers: the Interaction Loop, where the model functions as an agent responsible for interpreting environmental and user inputs (Perception), producing multimodal responses (Generation), and proactively requesting data (Acquisition); and the Robotic System, where it acts as an Intermediary that translates high-level commands into robot execution logic, and as an Interface that connects the robot to external networks. The framework maps these functional roles against five critical impact dimensions: Content, Trust, Privacy, Safety, and Data. By clarifying how potential threats arise from internal model flaws and external vulnerabilities, this work provides a structured basis for risk identification, assessment, and mitigation during human-robot-AI interaction. Hong Wang, Ngoc Bao Dinh, and Zhao Han (University of South Florida, Tampa, USA) Projector-based augmented reality (AR) enables robots to communicate spatially-situated information to multiple observers without requiring head-mounted displays, e.g., projecting navigation path. However, they require flat and weakly textured projection surfaces; otherwise, the surface needs to be compensated to retain the original projected image. Yet, existing compensation methods assume static projector-camera-surface configurations and may not work in complex, textured environments where robots must navigate. In this work, we evaluate state-of-the-art deep learning-based projection compensation on a Go2 robot dog in a search-and-rescue scene with discontinuous, non-planar, strongly textured surfaces. We contribute empirical evidence on critical performance limitations of state-of-the-art compensation methods: the requirement of pre-calibration and inability to adapt in real-time as the robot moves, revealing a fundamental gap between static compensation capabilities and dynamic robot communication needs. We propose future directions for enabling real-time, motion-adaptive projection compensation for robot communication in dynamic environments. |
|
| Wang, Kai |
Shigen Shimojo, Kai Wang, Keita Kiuchi, Yusuke Shudo, and Yugo Hayashi (Ritsumeikan Global Innovation Research Organization, Ibaraki, Japan; Ritsumeikan University, Ibaraki, Japan; National Institute of Occupational Safety and Health, Kawasaki, Japan) Social isolation among older adults is a global concern, and socially assistive robots are increasingly explored as companions to support mental well-being. Users’ impressions can strongly influence psychological outcomes. Building on Socioemotional Selectivity Theory, which suggests that older adults prioritize emotionally meaningful goals, this study examined the effectiveness of a solution-focused approach (SFA), which emphasizes positive information, compared with a problem-focused approach (PFA), which focuses on negative information, and explored the influence of embodied conversational agent (ECA) impressions. We implemented the ECA on a humanoid social robot. The SFA-based robot-mediated interaction did not significantly improve mental health as measured by the K10, although perceived robot intelligence correlated with outcomes. Our findings highlight that perceived intelligence—rather than conversational framework—plays a key role in influencing mental-health outcomes in older adults. |
|
| Wang, LiMeng |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Wang, Mengyang |
Yan Xiang, Chengliang Ping, Mengyang Wang, and Mingming Fan (Hong Kong University of Science and Technology, Guangzhou, China) As reliance on desktop-based knowledge work platforms grows, maintaining sustained focus has become a critical challenge, and current tools still provide limited support for everyday attentional needs. Many digital aids remain tied to the screen and are experienced as intrusive or easy to ignore, whereas desktop robots offer situated, embodied forms of support in the same physical workspace as the computer. Yet it remains unclear how such robots should be designed to help people manage attention in study and work. To explore this, we conducted a participatory design study consisting of five workshops with adults who self-identified as needing support with focus. Participants reflected on their daily challenges and current coping strategies, then envisioned how a desktop robot could act, look, and be placed to support them. Our findings reveal diverse, context-dependent expectations around function, social role, and form, and outline directions for designing attention-supportive desktop robots for everyday work. |
|
| Wang, Min |
Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Wang, Mingke |
Mingke Wang, Yixun Li, Bettina Nissen, and Rebecca Stewart (Imperial College London, London, UK; University of Edinburgh, Edinburgh, UK) MenstaRay is a soft knit robotic interface designed to explore how tactile actuation can support somatosensory communication of menstrual experiences. The prototype was created using a fabrication method for knit-integrated soft wearable robotics with two core structural elements: (1) an extensible EcoFlex 00-10 silicone cavity containing internal air chambers and (2) a strain-limiting textile layer knitted with Spandex Super Stretch Yarn (81% nylon, 19% elastane). This configuration enables regulated inflation patterns that preserve the softness of textiles while providing targeted haptic feedback that is suitable for intimate, safe, and therapeutically appropriate interactions. Through a series of workshops, we investigated and evaluated how these dynamic tactile behaviours shaped participants' embodied reflections on menstrual sensations. This work contributes to human robotic interaction by introducing MenstaRay, a novel artifact coupled with textile-integrated actuation that can externalize intimate bodily sensations and foster new modes of communicating, reflecting on and representing menstrual experiences through wearable interfaces. |
|
| Wang, Qiyao |
Weijie Qin, Qiyao Wang, Bingcen Gong, and Yijia Luo (Tsinghua University, Beijing, China) During dining in restaurants, oil splashes are readily appraised by users as a negative event. Critically, without timely intervention, the initial irritation can accumulate and evolve into a vicious cycle of escalating negativity. This reaction may not only impair the overall dining experience, but also dominate the user's cognitive focus and lead to lasting emotional distress. To address this, we present Seesoil—a desktop interactive robot based on the "Weak Robot" concept. Designed to resemble a condiment bottle, it blends naturally into the table setting. Rather than addressing the stain directly, Seesoil employs deliberately clumsy motions and voice interaction to guide users in reappraising the situation during the early stage of negative emotion generation. By redirecting attention towards a more positive interactive experience, it mitigates the accumulation of negative affect and serves as an emotional companion throughout the meal. |
|
| Wang, Ruhan |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Wang, Ruoyu |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Wang, Shenghui |
Elena Malnatsky, Shenghui Wang, Koen Hindriks, and Mike E.U. Ligthart (Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Twente, Twente, Netherlands) Long-term child–robot interaction depends on sustaining both relational continuity and accurate, meaningful memory over time. In a one-year follow-up with 50 children from a personalized reading-support robot study, we found that children felt less close to the robot and half of the robot’s stored profile content was outdated or missing, revealing three challenges for long-term CRI: relationship decay, informational decay, and opaque robot memory, where children cannot check or influence what the robot remembers about them. A brief web-based “reconnect” repaired both informational and relationship decay, and revealed children’s strong interest in having more agency over the robot’s memory. Building on these insights, we propose Open-Memory Robots: agents whose memory is more transparent and co-constructed with the child, supporting continuity, appropriate trust, and children’s agency in CRI. |
|
| Wang, Yadi |
Nayeon Kwon, Shengyuehui Li, Yu-Chia Tseng, and Yadi Wang (Cornell University, Ithaca, USA) Shared waiting spaces, like hotel lobbies, often feel socially stagnant, with people defaulting to silence and avoiding interactions. In this paper, we explore how an everyday object found in those spaces may be robotized to change this dynamic. We introduce HighLight, a mobile floor-lamp robot that uses light and movement to reduce social awkwardness and encourage spontaneous interactions among strangers. We designed its interactions to spark surprise, invite light-hearted engagement, reinforce positive social energy, and back off when people show discomfort. Through in-the-wild deployments, we observed that HighLight successfully elicited curiosity, laughter, and conversations, easing social awkwardness in shared spaces. |
|
| Wang, Yiting |
Chen Kang, Madina Alitai, Yiting Wang, Xiaochi Cai, Ruidong Ma, Angelo Cangelosi, and Zhegong Shangguan (Beijing Language and Culture University, Beijing, China; University of Manchester, Manchester, UK; Pennsylvania State University, State College, USA; Sheffield Hallam University, Sheffield, UK) Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuous evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots. |
|
| Wang, Zixuan |
Zixuan Wang (University of Edinburgh, Edinburgh, UK) As humanoid robots increasingly enter consumer markets and everyday urban life in China, we need a better understanding of the potential and challenges of robots in the wild. Drawing on an analysis of 43 social media videos capturing incidental human–robot encounters in public spaces, this study provides preliminary findings on how people are appropriating and domesticating these robots, and how both intentional users and incidental co-present persons spontaneously react to and interact with these robots in situ. The findings provide valuable insights for designing robots that align with people's natural interaction patterns, avoid reinforcing gender stereotypes, and integrate smoothly into everyday urban environments. |
|
| Warner, Georgina |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. Yue Lou, Emma Geijer-Simpson, Georgina Warner, Anna Pérez-Aronsson, Elin Inge, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) As emerging technologies such as AI systems and social robots become embedded in children’s daily lives, it is vital to consider how children themselves imagine robots. Advances in social robotics have generated new opportunities for assessing child wellbeing, but much of this work remains framed by adult researcher-centered views of what counts as beneficial. Drawing on the distinction between adult child perspectives and children’s own perspectives from childhood studies, we report a speculative participatory workshop with a child public involvement panel that explores future social robots for supporting children’s wellbeing. Children took part as co-researchers rather than research subjects, and their discussions highlight insights including everyday care, emotional companionship, and transparency about robot identity. We use these insights to inform future work on child-robot interaction and child wellbeing, and to connect children’s perspectives to debates around ethics and AI policy. |
|
| Wasylyshyn, Christina |
Gordon Briggs and Christina Wasylyshyn (US Naval Research Laboratory, Washington, USA) As human-robot teaming becomes more common, robots must effectively reject commands when appropriate. While prior work has investigated when and how robots should refuse directives, it focused on effective and socially appropriate justifications. However, justifications alone are insufficient for advancing joint activity. Human teammates still shoulder the burden of formulating an acceptable course of action to continue collaboration, if possible. This paper examines constructive elaborations: communications that extend beyond justification to proactively convey information indicating collaborative alignment (e.g., a suggestion of alternative course of action). We present results from a vignette experiment examining whether constructive elaborations improve perceived trustworthiness of robotic agents engaged in collaborative disobedience. Our findings contribute to understanding how autonomous agents can move beyond mere justified refusal toward pro-active partnership, facilitating more effective human-robot collaboration. |
|
| Watson, Lewis |
Lewis Watson, Emilia Sobolewska, Carl Strathearn, Mayuko Morgan, and Yanchao Yu (Edinburgh Napier University, Edinburgh, UK) A major limitation of current social robots is their dependence on cloud-based dialogue pipelines, which restricts use in settings with limited or unreliable connectivity. We present a lightweight, fully local spoken-dialogue system that runs on consumer-grade hardware and integrates open-source models for speech recognition, dialogue generation, and text-to-speech. The pipeline was deployed on Euclid, a non-commercial humanoid robot, across several public engagement events, enabling extended real-world interaction without internet access. We analyse over 5,000 dialogue turns recorded during these dialogues to characterise system behaviour, user interaction patterns, and challenges arising in noisy, multi-speaker environments. Our observations demonstrate the feasibility of privacy-preserving, on-device conversational robotics while highlighting limitations in turn-taking, response length, and environmental grounding. We outline planned improvements to support more robust and accessible social-robot interaction. |
|
| Weinberger, Nora |
Arjita Mital, Felix Gnisa, Utku Norman, and Nora Weinberger (KIT, Karlsruhe, Germany) This Late Breaking Work presents a low-threshold, unsupervised public exhibit designed to explore how non-expert audiences imagine and negotiate future human–robot interactions in ethically charged everyday situations. The exhibit, installed in Karlsruhe, Germany, invited participants to engage with four dilemma-based scenarios where participants were prompted to decide how a social robot should act confronting questions of moral delegation and machine agency. The activity generated rich, situated reflections on responsibility, safety, care, and the limits of automation. Findings reveal context-dependent expectations that balance efficiency against dignity, human judgment, and relational preservation, shaped by perceived stakes, social context, and the specific embodiment of the robot involved. Through this we demonstrate how minimally supervised participatory formats can surface normative expectations and support inclusive, responsible robot design. |
|
| Weiss, Astrid |
Katharina Brunnmayr and Astrid Weiss (TU Wien, Vienna, Austria) Socially assistive robots have shown promise in supporting people living with dementia (PlwD) by reducing stress and promoting engagement. We know that PlwD prefer robots with fur and pet-like embodiment. However, when it comes to other embodiment features of robots, such as the design of the eyes, we still lack knowledge about PlwD's preferences. We conducted a pilot study co-exploring playful materials with PlwD and recruited four participants living in a care home for 10 co-exploration sessions. In this report, we present a side product of the original study: the importance of eye design when designing technologies for PlwD. We found that 1) the eyes are an important focal point for PlwD during interactions, 2) eye movement is interpreted as emotions by PlwD, and 3) the size, shape, and complexity of the eyes are crucial for recognition. |
|
| Weissel, Sophie |
Jennifer Dong, Sophie Weissel, and Emily Zimmerman (Georgia Institute of Technology, Atlanta, USA) Elevators can be socially awkward — strangers share intimate space yet avoid interaction with each other. To address this, we present Elevator Pitch, a ceiling-mounted interactive robot that playfully facilitates social interaction in elevators. Elevator Pitch aims to foster temporary togetherness among frequent strangers in enclosed public spaces while exploring how ludic, socially expressive architectural robots can act as social agents. This paper presents the design and preliminary user testing of Elevator Pitch. |
|
| Weisswange, Thomas H. |
Hifza Javed, Ella Ruth Maule, Thomas H. Weisswange, and Bilge Mutlu (Honda Research Institute, San Jose, USA; University of Bristol, Bristol, UK; Honda Research Institute, Offenbach, Germany; University of Wisconsin-Madison, Madison, USA) This workshop on Robots for Communities explores how robots can serve as shared social resources that support the collective well-being of communities. While robots have traditionally been created to serve corporations or individuals, leading human–robot interaction research to focus largely on individuals or small groups, communities remain a crucial yet underexplored context for robotics. Understanding robots in community settings requires an interdisciplinary lens that integrates robotics, design, the social sciences, humanities, and community practice. Rather than emphasizing the negative consequences of large-scale deployment, our focus is on the active, positive roles robots might play in shaping communities. Central to this vision is viewing robots not as personal possessions but as shared resources, with unique affordances that enable them to enrich community experiences in ways other technologies cannot. The workshop seeks to bridge technology-centered and community-centered perspectives to promote dialogue across disciplines. By bringing these perspectives together, we aim to establish an interdisciplinary agenda for the design, evaluation, and deployment of robots as positive forces for well-being and cohesion within communities. |
|
| Wersing, Heiko |
Phillip Richter, Kira Sophie Loos, Josef El Dib, Mara Brandt, Heiko Wersing, and Anna-Lisa Vollmer (Bielefeld University, Bielefeld, Germany; Honda Research Institute, Offenbach, Germany) Standardized evaluation methods are needed as social robots enter diverse environments. We provide benchmark data for two contrasting platforms: Blossom (handcrafted, open-source) and Miroka (commercial, ball-bot humanoid). In an online study (N = 100), participants evaluated one robot after watching a representative interaction video, completing the ASAQ (19 constructs) and ASOR (3 factors) questionnaires. Both robots showed similar evaluation profiles despite contrasting capabilities and scenarios, with moderate differences: Blossom scored higher on coherence, Miroka on attentiveness and human-like appearance. No dimensions reached statistical significance after multiple comparison correction, though effect sizes revealed meaningful perceptual patterns. This work provides the first ASAQ benchmark data for both platforms, demonstrating how multidimensional questionnaires capture user perceptions across contrasting designs in realistic contexts. |
|
| Wessels, Marlene |
Johannes Kraus, Niklas Grünewald, Charlotte Kapell, and Marlene Wessels (University of Mainz, Mainz, Germany) Robot bullying - purposeful obstructive or harmful behavior to-ward robots - is widely discussed but still under-researched, with mixed findings under realistic conditions. In this field experi-ment (N = 35), we tested how robot social role behavior (coop-erative-polite vs. functional-technological) and social norms (pro- vs. anti-bullying) influence bullying of a cleaning robot. Bullying was measured via an adapted hot-sauce paradigm, alongside anthropomorphism and trust. Participants bullied the impolite robot significantly more, while social norms showed no significant effects. Anthropomorphism and trust were higher for the polite robot. This indicates that robots’ social roles shape robot perceptions and harmful behavior towards them. |
|
| Wijesinghe, Nipuni |
Sharni Konrad, Nipuni Wijesinghe, Eileen Roesler, and Janie Busby Grant (University of Canberra, Canberra, Australia; University of Canberra, Bruce, Australia; George Mason University, Fairfax, USA) This large sample study used exposure to a humanoid social robot to investigate the relationship between affinity with technology, social presence and future intention to use the robot. A between-subjects experiment was conducted with 235 participants who were randomly assigned to complete a 3 minute drawing task with an embodied robot exhibiting either high or low social presence. Regression analyses indicated that higher affinity with technology predicted stronger perceptions of social presence. Mediation analyses revealed that social presence partially mediated the relationship between affinity with technology and future intention to use, such that affinity with technology influenced future intention to use both directly and indirectly through social presence. Analysis of the subdimensions of social presence revealed that while co-presence significantly accounted for this effect, shared potential did not. Across models, affinity with technology exerted a direct influence on future intention to use, suggesting that dispositional openness to technology fosters behavioural intentions both directly and indirectly through relational perceptions. These findings highlight the importance of integrating dispositional and relational factors in HRI to support robot adoption. |
|
| Williams, Elizabeth |
Phillip Morgan, Damith Herath, Praminda Caleb-Solly, Matthew Studley, Elizabeth Williams, Aurora An-Lin Hu, Eduardo B. Sandoval, Maleen Jayasuriya, Min Wang, and Janie Busby Grant (Cardiff University, Cardiff, UK; University of Canberra, Bruce, Australia; University of Nottingham, Nottingham, UK; University of the West of England, Bristol, UK; Australian National University, Canberra, Australia; UNSW, Sydney, Australia; University of Canberra, Canberra, Australia) The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but understanding how we as a community can support safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disci- plines to build expertise and networks in safety, trust, psychosocial, legal and economic aspects for the secure deployment of social robots in domestic environments. The end goal of this workshop is to enable the development of methodologies and recommenda- tions to empower people to understand and decide on how their robots (and/or robots they are using) are secured, and approaches for raising awareness of the impact of an attack. The workshop will incorporate keynote presentations from leading experts, a series of lightning talks, and a scenario planning activity in which teams of participants use custom-designed prospective scenarios to ex- plore the precursors, processes, contexts and impacts of a breach of security in a robotic system. This will allow a deep-dive into the complexities of establishing and maintaining security and trust in interactive robotic applications, from the broad frame of social, personal and legal capability, enabling long-term acceptance and adoption of trustworthy, secure, and safe interactive robots. |
|
| Williams, Tom |
Claire Lewis, Melody Goldanloo, Matthew Murray, Zachary Kaufman, and Tom Williams (Colorado School of Mines, Golden, USA; University of Colorado at Boulder, Boulder, USA) Museums are an effective informal learning environment for science, art and more. Many researchers have proposed museum guide robots, where the outcomes of the interactions are based solely on the robot’s communication. In contrast, we explored how a robot could encourage learning and teamwork through human-human interactions. To achieve this, we created “Chase,” a novel zoomorphic robot that presents “Data Chase,” an interactive museum activity. We designed Chase to enable museum-goers to learn about the exhibits together by prompting users to complete a teamwork based scavenger hunt for rewards. |
|
| Williamson, Paige |
Pawinee Pithayarungsarit, Neha Kannan, Paige Williamson, Paul Vallejo, Laura Saad, Greg Trafton, and Eileen Roesler (George Mason University, Fairfax, USA; Naval Research Laboratory, Washington, USA) The Negative Attitude toward Robots Scale (NARS) was designed to measure people’s general attitudes toward robots. Originally developed and validated in Japanese, the NARS was translated into English without revalidation and has become widely used in human-robot interaction research. This study aims to validate the translated NARS factor structure via confirmatory factor analysis using existing datasets. Results (N = 2154) indicated a moderate support for the three-factor structure with some problematic items. Additionally, many papers modified the NARS in some way, including changing wording or rating scale. Not only is this not recommended, but it suggests that there may be a disconnect between the scale and how researchers would like to use it. Together, we suggest that researchers consider the consequences of modification and emphasize the importance of validating scales to ensure their adequate measurement ability and the quality of research results. JP Tiger, Paige Williamson, Neha Kannan, Jacob Henley, Eileen Roesler, and David Porfirio (George Mason University, Fairfax, USA) Air quality threatens health through particulate matter (PM), volatile organic compounds (VOCs), and allergens, yet current monitors of- ten fail to communicate threats properly. We introduce Nose Knows, a robot that uses “universal nasal cues”—such as sneezes—to com- municate. Using participatory design, we conducted an expert inter- view, user focus groups, and iterative prototyping to build a robot that detects PM10, PM2.5, VOCs, and customizable odors. The ro- bot integrates servo-driven movements, sound effects, LEDs, and a web interface. Our findings highlight preferences for customizable aesthetics and show that biologically inspired cues can possibly improve clarity, engagement, and awareness of air quality. |
|
| Wilson, Cristina G. |
Sean Buchmeier, Ian C. Rankin, and Cristina G. Wilson (Oregon State University, Corvallis, USA) We present a week long scientist-robot collaborative field science campaign was conducted in Martian analog White Sands National Park. The workflow for exploring a new area of the dunes was broken into two sections. First, a scouting mission was designed using a robot-assisted design tool and then executed. Second, a supervisory control method is used to allow scientists to perform their own experiments while managing the robot system. These two method enable more data in useful locations to be collected while minimizing burden on the scientist supervising the system. |
|
| Winkle, Katie |
Hong Wang, Katie Winkle, and Ginevra Castellano (Uppsala University, Uppsala, Sweden) This paper presents a multidimensional risk framework specifically designed for foundation model (FM)-driven human-robot interaction (HRI). We systematically categorize the roles of FM into two primary layers: the Interaction Loop, where the model functions as an agent responsible for interpreting environmental and user inputs (Perception), producing multimodal responses (Generation), and proactively requesting data (Acquisition); and the Robotic System, where it acts as an Intermediary that translates high-level commands into robot execution logic, and as an Interface that connects the robot to external networks. The framework maps these functional roles against five critical impact dimensions: Content, Trust, Privacy, Safety, and Data. By clarifying how potential threats arise from internal model flaws and external vulnerabilities, this work provides a structured basis for risk identification, assessment, and mitigation during human-robot-AI interaction. |
|
| Wormser, Nigel G. |
Nigel G. Wormser, Zuha Kaleem, Jessie Lee, Dyllan Ryder Hofflich, and Henry Calderon (Cornell University, Ithaca, USA; Cornell University, Brooklyn, USA) Musculoskeletal injuries from manual laundry cart transportation are very common for workers in the hospitality industry. To address this, we designed Elandro, a teleoperated laundry cart that collaboratively helps hotel staff with transportation across and within floors at a hotel. Through iterative user research at Statler Hotel, and wizard-of-oz interaction testing, we revealed design requirements essential for successful human-robot interaction. Elandro contributes to reducing physical strain on workers, maintaining staff autonomy and decision-making, establishing a human-centered approach where technology empowers rather than replaces hospitality workers. |
|
| Wu, Bingyu |
Isaac S. Sheidlower, Jindan Huang, James Staley, Bingyu Wu, Qicong Chen, Reuben M. Aronson, and Elaine Schaertl Short (Brown University, Providence, USA; Tufts University, Medford, USA) Robot Foundation Models (RFMs) represent a promising approach to developing general-purpose home robots. The broad capabilities of RFMs enable users to ask a robot to perform tasks the RFM was not evaluated on. Informed users can better judge what a robot can and cannot do, and thus use it more safely and effectively. We study how non-roboticists interpret performance information commonly reported in RFM evaluations, which typically use task success rate (TSR) as the primary metric. While TSR is intuitive to experts, it is important to validate that novices interpret it as intended. We conducted a study where users saw real evaluation data from published RFM projects. We find that non-experts use TSR in ways consistent with expert expectations but also value additional information—especially failure cases, which are less frequently reported. Users further want both real evaluation data and the robot’s own estimates for novel tasks. |
|
| Wu, Fiona |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Wu, Wenxi |
Merle M. Reimann, Gregory LeMasurier, Parag Khanna, Lennart Wachowiak, Wenxi Wu, Martim Brandao, Gerard Canal, Christian Smith, and Iolanda Leite (KTH Royal Institute of Technology, Stockholm, Sweden; University of Massachusetts at Lowell, Lowell, USA; King’s College London, London, UK; Robotics, Perception, and Learning Laboratory, Stockholm, Sweden) In order for robots to act as an empowering tool in today's society, it is necessary for users to understand what those robots can and cannot do. This can happen through the design of the robot, through external information such as signs and manuals, or through the interaction itself. With this workshop, we aim to provide a platform for participants who approach robot transparency from different angles to share and discuss their experiences. Participants will have the opportunity to deepen their knowledge of transparent and understandable robot interactions through two keynotes and paper presentations. The diverse backgrounds of participants will be brought together in interactive exercises that aim to address questions about topics such as how transparency can be used to reduce inequalities and empower society and other open problems in the field. The guided break-out sessions aim at giving participants the chance to reflect on the keynotes, paper presentations and their own work and to share those reflections with participants with other perspectives. |
|
| Wullenkord, Ricarda |
Jacqueline Borgstedt, Ricarda Wullenkord, Fethiye Irmak Doğan, Alva Markelius, Yue Lou, Emma Geijer-Simpson, Emily S. Cross, Friederike Eyssel, Tamsin Ford, Jenny L. Gibson, Hatice Gunes, Georgina Warner, and Ginevra Castellano (ETH Zurich, Zurich, Switzerland; Bielefeld University, Bielefeld, Germany; University of Cambridge, Cambridge, UK; Uppsala University, Uppsala, Sweden) Mental Health challenges disproportionately affect vulnerable children, including those with Developmental Language Disorder (DLD) and with forced migration backgrounds. These groups face elevated risks of anxiety, depression, and social withdrawal. Yet, to date, their subjective wellbeing is difficult to assess. This is particularly true for traditional self-report measures which require good reading skills, linguistic competence, and sustained attention. To overcome these barriers, we propose that social robots offer a novel opportunity to create accessible, engaging, and child-friendly wellbeing assessments. |
|
| Wykowska, Agnieszka |
Marina Sarda Gou, Serena Marchesi, Agnieszka Wykowska, and Tony Prescott (University of Sheffield, Sheffield, UK; Italian Institute of Technology, Genoa, Italy) Understanding how people attribute awareness to robots is essential for developing socially and ethically aligned Human-Robot Interactions (HRI). This study presents the Italian validation of the Awareness Attribution Scale (AAS), an existing psychometric instrument designed to measure the attribution of awareness to artificial agents. The adaptation procedures (forward translation, native-speaker review, back-translation, and testing) were performed with the AAS. The final translated version was administered to Italian participants (N = 200) to rate different entities on perceived awareness. Analyses demonstrated good internal reliability of the Italian scale and expected attribution patterns across entities. These results provide evidence that the Italian AAS behaves consistently with the original English version, supporting its use in future cross-cultural research on awareness attribution. Furthermore, these findings advance cross-cultural knowledge of awareness attribution, a fundamental component of more inclusive settings. |
|
| Xian, Boyu |
Kathrin Pollmann, Selina Layer, Amelie Polosek, Boyu Xian, and Anna Vorreuther (Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany; University of Stuttgart, Stuttgart, Germany) This paper explores how adhesive signs on public robots can prevent robot bullying. Participants were presented with three different sign variants attached to a cleaning robot in a Virtual Reality scenario: informative (alluding to surveillance/ legal consequences), prompting (imperative to keep away from the robot), and feeling (emotional appeal) and reported their tenden-cies for anti-bullying behavior and perceptions of the robot. Eye tracking was used to measure visual attention. All signs elicited anti-bullying tendencies and were rated comprehensible. The robot with the feeling sign was perceived most human- and least tool-like, capable of emotions, and induced the highest amount of gaze fixations. The informative sign supported fast, low-effort compre-hension and reinforced a tool-like perception. Findings suggest adhesive signs are a viable, low-obtrusive preventive strategy and sign selection should be context-driven: informative for quick pass-by messaging, feeling for deeper engagement. |
|
| Xiang, Yan |
Yan Xiang, Chengliang Ping, Mengyang Wang, and Mingming Fan (Hong Kong University of Science and Technology, Guangzhou, China) As reliance on desktop-based knowledge work platforms grows, maintaining sustained focus has become a critical challenge, and current tools still provide limited support for everyday attentional needs. Many digital aids remain tied to the screen and are experienced as intrusive or easy to ignore, whereas desktop robots offer situated, embodied forms of support in the same physical workspace as the computer. Yet it remains unclear how such robots should be designed to help people manage attention in study and work. To explore this, we conducted a participatory design study consisting of five workshops with adults who self-identified as needing support with focus. Participants reflected on their daily challenges and current coping strategies, then envisioned how a desktop robot could act, look, and be placed to support them. Our findings reveal diverse, context-dependent expectations around function, social role, and form, and outline directions for designing attention-supportive desktop robots for everyday work. |
|
| Xu, Xinyue |
Hanyu Zhang, Xinyue Xu, Xinran She, Jie Deng, and Yuanrong Tang (Tsinghua University, Beijing, China) Digital violence often happens impulsively within seconds. Circuit Breaker introduces an embodied mouse robot that detects toxic interactions and delivers physical micro-interventions to disrupt harmful actions. Through real-time cursor signals, sentiment cues, and haptic feedback, the system promotes reflective and safer online behavior. |
|
| Xu, Zhiling |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Yalçın, Özge Nilay |
Katie Seaborn and Özge Nilay Yalçın (Institute of Science Tokyo, Tokyo, Japan; University of Cambridge, Cambridge, UK; Simon Fraser University, Surrey, Canada) Sycophancy in social robots is an emerging threat brought on by the launch of ChatGPT and other powerful large language models (LLMs) that can speak in a near-fluent fashion. Short- and long-term findings on LLM-powered chatbots and conversational agents are raising the alarm. With work bridging communication-centred LLM use and social robots in production, the deceptive and persuasive capabilities of LLM-imbued robotic companions needs urgent and critical consideration. Notably, how social robots aided by sycophantically-inclined LLMs may overly influence decision-making and elicit overtrust needs interrogation. Using scoping review methodology that bridges robotics with AI and LLMs, we surface dimensions of sycophancy, constructs as research targets, and a suite of measures for research on robotic sycophancy. Our analysis of historical and modern studies (𝑁 = 23) sets the stage for empirical and theoretical work on the potential misuses and unexpected effects of sycophancy in human–robot interactions. |
|
| Yan, Shengheng |
Shengheng Yan and Tsvetomila Mihaylova (Aalto University, Espoo, Finland) For human--robot interaction in autonomous driving, understanding when and why automated systems can effectively explain their behavior is critical for transparency, trust, and user understanding. Large language models (LLMs) can generate natural-language explanations of driving scenes, yet it remains unclear whether some types of driving situations are inherently easier or harder for them to describe. To investigate this question, we introduce an ego-centric taxonomy of driving scenarios and apply it to the BDD-X test set, creating a category-aligned evaluation benchmark. Using this dataset, we compare the explanation performance of RAG-Driver and GPT-4o across both top-level and fine-grained scenario categories. For each model, explanation performance differs significantly across scenario categories, indicating that scenario type is a meaningful factor influencing explanation quality. These findings highlight the importance of scenario-aware evaluation when assessing explanation quality in autonomous driving. |
|
| Yanco, Holly A. |
Gregory LeMasurier and Holly A. Yanco (University of Massachusetts Lowell, Lowell, USA; University of Massachusetts Amherst, Amherst, USA) When people start to work with a robot, they may not fully understand its capabilities, leading to potential misuse, mismatched expectations, and an inability to diagnose and resolve the robot's failures. Robots can provide users with demonstrations in an attempt to reduce misunderstanding. We conducted a between-subjects in-person study (N=131) where participants watched a demonstration and then completed a collaborative task with the robot. We found that participants were able to accurately understand the robot's demonstrated reliability. We discuss the impacts of these demonstrations on participants' trust in the robot, perception of the robot's capabilities, and willingness to allocate tasks to the robot. |
|
| Yang, Joyce |
Joyce Yang, Phillip Johnson, Nyra Graham, and Karen Shamir (Cornell University, Ithaca, USA) Social isolation in shared spaces threatens community cohesion and well-being. This paper presents a social robot designed to spark human-to-human interactions. Inspired by public art projects, the robot invites individuals to collaborate on a shared LEGO structure by using expressive eye tracking, autonomous turning, and servo-actuated drawer movement. Field deployments in Cornell University spaces showed the robot effectively acted as a social catalyst: diverse participants contributed to a shared structure, and strangers initiated conversations about the robot. This work offers a functional prototype and insights on robots as mediators of human connection and promotes ideas of empowering collaboration. |
|
| Yang, Lei |
Yuxuan Chen, Ian Leong Ting Lo, Bao Guo, Netitorn Kawmali, Chun Kit Chan, Ruoyu Wang, Jia Pan, and Lei Yang (University of Hong Kong, Hong Kong, China) A tour guide robot, CLIO, is presented. It carries a display as its head and a laser pointer to orient visitors’ attention by its head movement and laser pointer. Animated eyes are displayed to build eye contact. An optional low-code interface based on a Large Language Model is offered to coordinate the designed actions with a given narrative script for exhibition. We conducted a user study to evaluate the CLIO system in a mock-up exhibition of historical photographs. Feedback was collected from questionnaires and quantitative data from a mobile eye tracker. Experimental result validated that the efficacy of the designed actions in guiding visual attention of the visitors. It showed that CLIO achieved an enhanced engagement compared to the Audio-only baseline system. |
|
| Yang, Qiaoyue |
Magnus Jung, Qiaoyue Yang, Leander von Seelstrang, Dominykas Strazdas, Sven Wachsmuth, and Ayoub Al-Hamadi (University of Magdeburg, Magdeburg, Germany; Bielefeld University, Bielefeld, Germany) We present SEMIAC, a multimodal dataset of human-robot interactions (HRI) collected with a Wizard-of-Oz methodology in a logistics-inspired workspace and a home environment. The dataset includes 40 participants across two research sites and captures their behavioural, emotional, proxemic, and subjective responses during cooperative object-retrieval tasks with systematically manipulated robot errors. Interactions were recorded using a rich sensor suite including the humanoid robot TIAGo equipped with onboard sensors and an RGB-D camera, as well as three external RGB-D sensors, and an external microphone. All modalities were recorded fully synchronized through ROS and stored in time-aligned rosbag files. Annotations include human keypoints, proxemic distances, emotional expressions, and engagement indicators. The dataset aims to support research on social navigation, error-aware intent prediction, and socially intelligent robot behaviour. |
|
| Yang, Zhenglong |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Yano, Taiki |
Carlos Toshinori Ishi, Taiki Yano, and Yuka Nakayama (RIKEN, Kyoto, Japan; ATR, Kyoto, Japan) This study explores the importance of adapting communication in reception tasks based on the visitor attributes and situations, focusing on a reception robot at an expo venue. Ten different scenarios, including three situations, entrance reception, straying assistance, and complaint handling, were created with varying visitor attributes (adults, children, elderly with mild hearing loss). Multimodal expressions, observed through human performers acting out these scenarios, were implemented in the android robot Nikola. A video-based user study was conducted to assess the effectiveness of multimodal expressions which account for the situation and user attributes, comparing them to default behaviors. The proposed multimodal expressions were effective, with voice being more impactful than motion, though both contributed positively. |
|
| Yaqoot, Yasheerah |
Yara Mahmoud, Yasheerah Yaqoot, Miguel Altamirano Cabrera, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Humanoid robots must adapt their contact behavior to diverse objects and tasks, yet most controllers rely on fixed, hand-tuned impedance gains and gripper settings. This paper introduces HumanoidVLM, a vision–language-driven retrieval framework that enables the Unitree G1 humanoid to select task-appropriate cartesian impedance parameters and gripper configurations directly from an egocentric RGB image. The system couples a vision–language model for semantic task inference with a FAISS-based Retrieval-Augmented Generation (RAG) module that retrieves experimentally validated stiffness–damping pairs and object-specific grasp angles from two custom databases and executes them through a task-space impedance controller for compliant manipulation. We evaluate HumanoidVLM on 14 visual scenarios and achieve a retrieval accuracy of 93 %. Real-world experiments show stable interaction dynamics, with z-axis tracking errors typically within 1 cm to 3.5 cm and virtual forces consistent with task-dependent impedance settings. These results demonstrate the feasibility of linking semantic perception with retrieval-based control as an interpretable path toward adaptive humanoid manipulation. |
|
| Ye, Xin |
Annette Masterson, Xin Ye, Yiyang Li, and Lionel Peter Robert Jr (University of Michigan at Ann Arbor, Ann Arbor, USA) The rapid proliferation of Large Language Models (LLMs) has enabled artificial agents to foster deep emotional bonds, yet the comparability of these AI relationships to human norms remains underexplored. As HRI researchers increasingly integrate LLMs into embodied platforms, understanding the nature of these bonds is imperative for responsible design. This study investigates whether relationships with LLM-driven AI companions can rival the satisfaction of human connections and if the mechanism of intimacy is equally critical. Through a comparative survey of 150 participants stratified across in-person, long-distance, and LLM companion relationships, we illuminate that digital bonds can yield satisfaction levels comparable to human partnerships, with intimacy serving as a predictive factor. These findings challenge the assumption that AI relationships are inherently unsatisfactory and identify intimacy as a design metric for social robots, providing a protocol for integrating LLM companions into embodied relational agents. |
|
| Yee, Jackie |
Alexandria Thylane, Daniel J. Foulen, Keys K. Rigual, Raitah A. Jinnat, Jackie Yee, Kaylee Nam, and Raj Korpan (City University of New York, New York, USA; Rice University, Houston, USA) Pax is a queer-affirming robot companion prototype co-designed with LGBTQIA+ users. Implemented as a Unity-based embodied agent with a FastAPI backend, it translates community-identified requirements into a working interactive system. Pax combines queer-affirming natural language interaction, safety guardrails, and user-controlled adaptability to support identity affirmation and emotional well-being in the home. This demo highlights the core technical pipeline and ethical design choices behind queer-inclusive robot companions. Raj Korpan, Khadeja Ahmar, Raitah A. Jinnat, and Jackie Yee (City University of New York, New York, USA) Cities release large volumes of open civic data, but many people lack the time or skills to interpret them. We report an exploratory pilot study examining whether a social robot can narrate stories derived from open civic data to support public understanding, trust, and data literacy. Our pipeline combines civic data analysis, large language model–based narrative generation, and scripted behaviors on the Misty II robot to produce expressive and neutral versions of two stories on noise complaints and COVID-19 trends. We deployed the system at a public event and collected post-interaction surveys from six adult participants. While the small sample size limits generalization, the pilot suggests that participants found the stories relevant and generally understood their main points, though engagement and enjoyment were mixed. Participant feedback highlighted the need for improved vocal prosody, reduced information density, and more interactivity. These findings provide initial feasibility evidence and design insights to inform future iterations of robot civic data storytelling systems. |
|
| Yi, Zhennan |
Zhennan Yi and Goren Gordon (Indiana University at Bloomington, Bloomington, USA; Tel Aviv University, Tel Aviv, Israel) Social robots have been increasingly used to support children's development of soft skills through interactive activities. While many studies have shown the benefits of using robots to foster such soft skills, most of the work designs robot behaviors that target only one skill within one task scenario. Yet in educational practice, there is a need to promote a combination of soft skills together. In this report, we introduce an integrated behavioral framework that enables social robots to support the promotion of multiple soft skills. We describe the development process, the extraction and organization of different behavioral strategies, and how the framework can be applied to design child-robot activities. Moving from single-skill interventions to integrated behavioral design, this work contributes a conceptual and methodological foundation for designing social robots that aim to foster multiple soft skills in children. |
|
| Yıldız, Umur |
Umur Yıldız, Berk Yüce, Ayaz Karadağ, Tuğçe Nur Pekçetin, and Burcu A. Urgen (Bilkent University, Ankara, Türkiye) Large Language Models(LLMs) introduce powerful new capabilities for social robots, yet their black-box nature creates a barrier to trust. Transparency is already established as important for humanrobot trust, but how to convey LLM intentions and reasoning in real-time, embodied interaction remains poorly understood. We developed a task-level mechanistic transparency system for an LLM-powered Pepper robot that displays its internal reasoning process dynamically on the robot’s tablet during interaction. In a mixed-design study, participants engaged with Pepper across four trust-relevant tasks in either a Transparency-ON condition or Transparency-OFF condition. Transparency produced significantly greater trust growth than opacity, and a substantial increase in perceived reliability, indicating that transparency remains a key design element for trust calibration in LLM-driven human-robot interaction. |
|
| Yoo, Gayoung |
Gyuhee Park, Daehui Kim, Gayoung Yoo, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Autonomous delivery robots (ADRs) are increasingly being deployed in urban environments, where they share pedestrian paths with other road users. Previous human-robot proximity studies have primarily focused on pedestrian perception and distance norms during frontal encounters. However, when a robot approaches from the rear, pedestrians experience a lack of visual information, which can heighten threat perception and reinforce distance norms. This study aimed to investigate pedestrian perception in rear approach situations and identify the proxemic zone associated for such approaches. The results showed that participants felt comfortable with a proximity distance of 1.0–2.0 m during rear approaches. As the robot approached at higher speeds, participants perceived the robot's approach more clearly, but distance-related discomfort significantly increased, demonstrating a trade-off between perceptual awareness and subjective discomfort. These findings suggest the importance of social navigation strategies that reflect pedestrian acceptance for the safe integration of delivery robots. Gayoung Yoo, Daehui Kim, Gyuhee Park, Suhwan Jung, Okkeun Lee, Hyunmin Kang, Yong Gu Ji, and Hyochang Kim (Yonsei University, Seoul, Republic of Korea; Stanford University, Palo Alto, USA; Daegu University, Gyeongsan, Republic of Korea; Stanford University, Incheon, Republic of Korea) Mobile robots increasingly navigate spaces where people are engaged in conversation, where attention is directed toward conversational partners and peripheral awareness is limited. This study examines how a robot’s near-stop distances used as a crossing cue (45, 80, 120 cm) and conversational arrangements (Vis-à-vis, L-shape) influence when the robot is noticed and how its approach is interpreted. Using eye-tracking and foot-tracking, we found that these effects depend on the F-formation adopted during conversation. In Vis-à-vis, participants showed minimal fixations on the robot at 120 cm, whereas the 45 cm near-stop consistently elicited visual attention and increased perceived safety. In contrast, in L-shape formations, sustained visual exposure led to higher perceived disruption at both 45 and 120 cm. Overall, these findings indicate that F-formation–driven attentional allocation shapes perceptual and behavioral responses to robot approaches in conversational settings. These findings motivate future work on adaptive robot cues for dynamic social situations involving multi-party interactions and diverse users. |
|
| Yoo, William Weimin |
Jia Yap Lim, John See, William Weimin Yoo, and Christian Dondrup (Heriot-Watt University Malaysia, Putrajaya, Malaysia; Heriot-Watt University, Edinburgh, UK) User engagement prediction in human-robot interaction (HRI) is typically conducted across diverse environmental settings, including both uncontrolled and controlled environments. Such environmental variations compel social robots to capture and analyse user behaviours differently. To the best of our knowledge, most of the prior works rely on video, audio and feature vectors extracted from the UE-HRI (uncontrolled) dataset to estimate user engagement. The existing literature has overlooked the potential of Multimodal Large Language Models (MLLMs) for user engagement prediction in HRI contexts, thus leaving a critical gap in understanding their operational mechanisms and capacity to elevate model performance. To address this gap, this paper pioneers an investigation into MLLM efficacy for engagement prediction across different environmental settings using the UE-HRI (uncontrolled) and eHRI (controlled) datasets. Moreover, we perform rigorous experiments to identify important factors influencing MLLM performance, including prompts, model types, model parameters, and keyword extraction strategies. |
|
| Yoshida, Eiichi |
Tomoya Sasaki, Taiki Ishigaki, Diego Roulle, and Eiichi Yoshida (Tokyo University of Science, Tokyo, Japan; University Paris-Est Créteil, Créteil, France) Orbiting is a common viewpoint control technique in CG and CAD, in which the camera rotates around a target that acts as the center of rotation. However, applying orbiting in teleoperation, a real-world application, is difficult due to physical constraints. We propose RelOrb, a viewpoint control method that focuses on relative coordinate changes between the camera and the target. Our prototype rotates the object on a turntable instead of moving the camera, providing head-mounted display images as if the camera itself were moving. We present the method, its coordinate transformation, a proof-of-concept prototype, and example operations. |
|
| Young, James Everett |
Raquel Thiessen, Minoo Dabiri Golchin, Samuel Barrett, Jacquie Ripat, and James Everett Young (University of Manitoba, Winnipeg, Canada) Social robots are increasingly marketed as play companions for children, but research has not established how these robots support play in real-world scenarios or whether their interactivity supports quality play. We are conducting an eight-week home study with children with and without disabilities to learn about the play experiences with an interactive robot versus a doll ver-sion of the same robot (a VStone Sota). We implemented interactive robot behaviors based on LUDI's categorization of play, incorporating social and cognitive dimensions of play to support children’s play in various developmental play stages. We measure play quality using standardized instruments, and along with qualitative assessments of children's engagement and interest through child-family interviews. This study investigates whether interacting with robotic toys supports children in developing play skills compared to non-robotic dolls. Our findings will establish baseline knowledge about child-robot play and can guide evidence-based design of interactive play companions for children. |
|
| Yu, Tianya |
Xucong Hu, Qinyi Hu, Tianya Yu, Mowei Shen, and Jifan Zhou (Zhejiang University, Hangzhou, China) First impressions are critical for public-facing social robots: users rapidly infer a robot’s potential for social interaction from its appearance, shaping expectations and willingness to engage. Yet no existing scale captures how people interpret the interaction potential implied by a robot’s visual affordances. We introduce the Robot Social Interaction Potential Scale (RoSIP), a concise appearance-based scale assessing two dimensions—Perceptual Potential and Behavioral Potential. Across a pilot study and large-scale exploratory and confirmatory factor analyses (N = 750), we identified a 10-item, two-factor structure with strong internal consistency and solid construct and discriminant validity. RoSIP provides a dedicated tool for rapidly quantifying appearance-based inferences about a robot’s social interaction potential, enabling future work to systematically link robot morphology and social perception in HRI. |
|
| Yu, Yanchao |
Lewis Watson, Emilia Sobolewska, Carl Strathearn, Mayuko Morgan, and Yanchao Yu (Edinburgh Napier University, Edinburgh, UK) A major limitation of current social robots is their dependence on cloud-based dialogue pipelines, which restricts use in settings with limited or unreliable connectivity. We present a lightweight, fully local spoken-dialogue system that runs on consumer-grade hardware and integrates open-source models for speech recognition, dialogue generation, and text-to-speech. The pipeline was deployed on Euclid, a non-commercial humanoid robot, across several public engagement events, enabling extended real-world interaction without internet access. We analyse over 5,000 dialogue turns recorded during these dialogues to characterise system behaviour, user interaction patterns, and challenges arising in noisy, multi-speaker environments. Our observations demonstrate the feasibility of privacy-preserving, on-device conversational robotics while highlighting limitations in turn-taking, response length, and environmental grounding. We outline planned improvements to support more robust and accessible social-robot interaction. |
|
| Yuan, Yitong |
Yitong Yuan, Ke Huang, Michael Detsiang Li Jr, Yiwei Zhao, and Baoyuan Zhu (Tsinghua University, Beijing, China) Unhealthy postures have become increasingly prevalent, affecting health and productivity, yet existing posture-correction devices rely on intrusive external reminders. We present Tuotle, a desktop robot that leverages cognitive dissonance by adopting a “bad posture,” prompting users to correct it and, in turn, reflect on their own posture. A pilot user study shows it has comparable posture-correction effectiveness to traditional devices, while showing significantly better user experience and long-term adoption intentions. Our work demonstrates that psychological mechanisms can be activated through human-robot interactions, opening new directions for technologies grounded in human psychology. |
|
| Yüce, Berk |
Umur Yıldız, Berk Yüce, Ayaz Karadağ, Tuğçe Nur Pekçetin, and Burcu A. Urgen (Bilkent University, Ankara, Türkiye) Large Language Models(LLMs) introduce powerful new capabilities for social robots, yet their black-box nature creates a barrier to trust. Transparency is already established as important for humanrobot trust, but how to convey LLM intentions and reasoning in real-time, embodied interaction remains poorly understood. We developed a task-level mechanistic transparency system for an LLM-powered Pepper robot that displays its internal reasoning process dynamically on the robot’s tablet during interaction. In a mixed-design study, participants engaged with Pepper across four trust-relevant tasks in either a Transparency-ON condition or Transparency-OFF condition. Transparency produced significantly greater trust growth than opacity, and a substantial increase in perceived reliability, indicating that transparency remains a key design element for trust calibration in LLM-driven human-robot interaction. |
|
| Yun, Bruno |
Sandra Victor, Bruno Yun, Chefou AR Mamadou Toura, Enzo Indino, Madalina Croitoru, and Gowrishankar Ganesh (University of Montpellier, Montpellier, France; University of Aberdeen, Aberdeen, UK; CNRS, Montpellier, France) In this work we address the problem of designing a resource allocation decision making robot. We developed a model that accurately makes decisions to distribute risk, effort and reward between two humans or a human and a robot, considering their age, sex and humanness. To assess the model's alignment with social norms, we conducted a Turing test which showed that our model is perceived as making socially acceptable decisions, similar to those of human participants. Furthermore we demonstrated our model by embodying it in a robot negotiator that automatically distributed reward, effort and risk tokens among participant dyads by perceiving their physical characteristics. |
|
| Zaga, Cristina |
Marco C. Rozendaal, Anastasia Kouvaras Ostrowski, Mafalda Gamboa, Samantha Reig, Patricia Alves-Oliveira, Maaike Bleeker, Maria Luce Lupetti, John Vines, Nazli Cila, Hannah Pelikan, Nikolas Martelaro, Selma Šabanović, David Sirkin, and Cristina Zaga (Delft University of Technology, Delft, Netherlands; Purdue University, West Lafayette, USA; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; University of Massachusetts at Lowell, Lowell, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Utrecht University, Utrecht, Netherlands; Politecnico di Torino, Turin, Italy; University of Edinburgh, Edinburgh, UK; Linköping University, Linköping, Sweden; Carnegie Mellon University, Pittsburgh, USA; Indiana University at Bloomington, Bloomington, USA; Stanford University, Stanford, USA; University of Twente, Enschede, Netherlands) The 3rd Workshop on Designerly Human-Robot Interaction (HRI) aims to bring together scholars and practitioners engaged in design-oriented research to articulate the value of design research within HRI broadly. We propose a half-day workshop to (1) collectively map the diversity of design research in HRI, examining how contributions are framed and how quality is evaluated; (2) discuss participants’ HRI design projects, showcased in an exhibition setting; and (3) conclude with a focused conversation to identify common ground across diverse approaches and develop strategies for strengthening the position of design research in HRI and its connections with other HRI disciplinary communities. |
|
| Zaraki, Abolfazl |
Abolfazl Zaraki, Hamed Rahimi Nohooji, Maryam Banitalebi Dehkordi, and Holger Voos (University of Hertfordshire, Hatfield, UK; University of Luxembourg, Luxembourg, Luxembourg) This paper reframes shared autonomy as an interpretable interaction space centered on the human and bounded by safety. Building on this perspective, we introduce a Human-Centred Tri-Region Shared Autonomy Framework that organises interaction into three regions: Human-Led, Robot-Supported, and Safety-Intervention. The framework formalises how autonomy shifts as interaction conditions evolve, while an Interaction State Interpreter maps multimodal user and task observations to region-dependent behaviours. This structure enables autonomy transitions that remain explicit and behaviourally grounded across diverse human-robot interaction contexts, including physical collaboration, social engagement, and cognitive assistance. A physical interaction scenario illustrates how the proposed formulation can be realised through adaptive impedance and constraint-aware feedback, enabling smooth transitions between collaborative support and protective intervention. By structuring autonomy around human authority, supportive assistance, and safety enforcement, the framework provides a clear basis for adaptive human-robot interaction. Hamed Rahimi Nohooji, Abolfazl Zaraki, and Holger Voos (University of Luxembourg, Luxembourg, Luxembourg; University of Hertfordshire, Hatfield, UK) This paper proposes soft robotic embodiments as interaction-level regulators of sustainability in human–robot interaction, where sustainability is shaped at the moment of physical contact rather than enforced through post hoc system-level efficiency optimization or material selection. Under long-term deployment, how interaction is regulated in terms of intensity, frequency, and force transmission directly determines cumulative energy consumption, mechanical wear, and maintenance demand. Soft robotic embodiments regulate these interaction characteristics through compliance, passive adaptation, and geometry-driven deformation, constraining interaction effort before active control is applied. In doing so, interaction behavior is directly coupled with energy use, interaction-induced degradation, and lifecycle considerations at the system level. |
|
| Zargham, Nima |
Rachel Ringe, Nima Zargham, Mihai Pomarlan, Benjamin R. Cowan, Minha Lee, Donald McMillan, and Matthew Peter Aylett (University of Bremen, Bremen, Germany; University College Dublin, Dublin, Ireland; Eindhoven University of Technology, Eindhoven, Netherlands; Stockholm University, Stockholm, Sweden; Heriot-Watt University, Edinburgh, UK) Communication in Human-Robot Interaction (HRI) often focuses on linguistic exchanges, with spoken dialogue providing a natural way for people to interact with robots. While direct verbal interaction can reduce barriers compared to other forms of interaction (e.g., text- or touch-based interfaces), it may also exclude users with speech, language, or cognitive differences, and they may not generalize well across cultures and contexts. Non-linguistic forms of communication, including non-verbal voice interactions and extra-linguistic signals (e.g., gesture, gaze, facial expressions, posture), offer complementary pathways that can enable more inclusive, accessible, and universal interactions. This workshop explores how non-linguistic communication can shape effective human-robot communication and collaboration. We aim to bring together researchers from HRI, conversational AI, linguistics, psychology, and accessibility studies to discuss opportunities, challenges, and design practices for integrating such features. The workshop seeks to advance inclusive design principles, bridge disciplines, and highlight future research directions on communication strategies that empower diverse users in their interactions with robots. |
|
| Zeller, Frauke |
Leimin Tian, Ana Kirschbaum, Caterina Neef, Mary Ellen Foster, Sara Cooper, Frauke Zeller, Manuel Giuliani, Alexander Eberhard, Oliver Chojnowski, Nils Mandischer, Utku Norman, Jan Ole Rixen, Pamela Carreno-Medrano, Nick Hawes, and Dana Kulić (CSIRO, Melbourne, Australia; Monash University, Melbourne, Australia; University of Applied Sciences Cologne, Cologne, Germany; KIT, Karlsruhe, Germany; University of Glasgow, Glasgow, UK; IIIA-CSIC, Spain; University of Edinburgh, Edinburgh, UK; Kempten University of Applied Sciences, Kempten, Germany; University of Augsburg, Augsburg, Germany; Monash University, Clayton, Australia; University of Oxford, Oxford, UK) As the technology readiness level of robotic systems increases, these systems interact with real users in realistic application contexts at increasing scale. In such real-world HRI, failures and unexpected outcomes are common, and these can provide valuable lessons for the design and evaluation of future HRI systems. Building on two recent workshops---the HRI 2025 workshop on Human-Robot Interaction in Extreme and Challenging Environments and the RO-MAN 2025 workshop on Real-World HRI in Public and Private Spaces: Successes, Failures, and Lessons Learned---this half-day workshop invites researchers and practitioners to share their design and deployment lessons drawn from across the spectrum of real-world HRI. The interactive sessions follow a taxonomy of the key phases during field deployment and major factors at play, inviting attendees to review the experimental design of their own research or example projects to identify potential challenges and share related experiences. This discussion will refine the initial taxonomy, which we plan to release as an open-source tool to empower researchers and practicioners in their design and deployment of real-world HRI. |
|
| Zeng, XiaoKe |
Xiaoyu Chang, Yanheng Li, XiaoKe Zeng, Jing Qi Peng, and Ray Lc (City University of Hong Kong, Hong Kong, China) Robots are increasingly designed to act autonomously, yet moments in which a robot overrides a user’s explicit choice raise fundamental questions about trust and social perception. This work investigates how a preference-violating override affects user trust, perceived competence, and interpretations of a robot’s intentions. In a beverage-delivery scenario, a robot either followed a user’s selected drink or replaced it with a healthier option without consent. Results show that the way an override is enacted and communicated consistently reduces trust and competence judgments, even when users acknowledge benevolent motivations. Participants interpreted the robot as more controlling and less aligned with their autonomy, revealing a social cost to such actions. This study contributes empirical evidence that preference-violating override behavior is socially consequential, shaping trust and core dimensions of user perception in embodied service interactions. |
|
| Zhang, Chuxuan |
Chuxuan Zhang, Bermet Burkanova, Lawrence H. Kim, Grace Iarocci, Elina Birmingham, and Angelica Lim (Simon Fraser University, Burnaby, Canada) What nonverbal behaviors should a robot respond to? Understanding how children—both neurotypical and autistic—engage with embodied artificial agents is critical for developing inclusive and socially interactive systems. In this paper, we study "open-ended" unconstrained interactions with embodied agents, where little is known about how children behave nonverbally when given few instructions. We conducted a Wizard-of-Oz study in which children were invited to interact nonverbally with 6 different embodied virtual characters displayed on a television screen. We collected 563 (141 unique) nonverbal behaviors produced by children and compare the children’s interaction patterns with those previously reported in an adult study. We also report the presence of repetitive face and hand movements, which should be considered in the development of nonverbally interactive artificial agents. |
|
| Zhang, Hanyu |
Hanyu Zhang, Xinyue Xu, Xinran She, Jie Deng, and Yuanrong Tang (Tsinghua University, Beijing, China) Digital violence often happens impulsively within seconds. Circuit Breaker introduces an embodied mouse robot that detects toxic interactions and delivers physical micro-interventions to disrupt harmful actions. Through real-time cursor signals, sentiment cues, and haptic feedback, the system promotes reflective and safer online behavior. |
|
| Zhang, Pengfei |
You Yang Chen, Shan Luo, Xueqing Li, Sizhao Jin, Pengfei Zhang, and Mingyu Li (Tsinghua University, Beijing, China; China Academy of Art, Hangzhou, China; Shandong University of Art and Design, Jinan, China) Prolonged earphone use and excessive volume are common causes of hearing damage, yet users often remain unaware of these risks. Existing alert mechanisms, such as pop-up warnings, are easily dismissed and lack emotional resonance. We present Songbird, a bird-shaped robotic earphone companion that "listens along" with the user. When the volume exceeds a safe threshold (75 dB) or wearing time becomes excessive, Songbird mimics the current melody by singing aloud, rendering auditory risks in an embodied and emotionally engaging manner. This research explores design strategies for mapping health risk information onto embodied robotic companions. By establishing emotional connections through physical form, such companions demonstrate unique potential in health care and behavior guidance, offering a non-intrusive, emotionally resonant interaction paradigm for hearing protection in personal audio devices. |
|
| Zhang, Ruilin |
Haopeng Peng, Ruilin Zhang, Yuxin Liang, and Liyang Fan (Tsinghua University, Beijing, China) In social interactions, individuals often conceal their true feelings for various reasons. This phenomenon of actively adjusting social strategies based on the social context is referred to as the "social performance mechanism". Inspired by this mechanism, we propose a wearable robot "THIRD EXPRESSION", designed to assist individuals in expressing real emotions and states that are difficult to verbalize. Through robot design, this study aims to enhance the wearer’s ability to actively define and convey their emotions in real-time. The system integrates multimodal sensors (speech, environment, heart rate, etc.) and large model reasoning to generate dynamic visual feedback. The pilot study has been validated that the robot design enhances the sense of boundary control and interaction satisfaction, while reducing social anxiety levels. |
|
| Zhang, Tianyi |
Tianyi Zhang, Christopher Garrison, Kristal Hockenberry, Rabeya Jamshad, Laurel D. Riek, and Susan Simkins (Pennsylvania State University, University Park, USA; University of California at San Diego, La Jolla, USA) Although robots are increasingly integrated into healthcare, safety- and time-critical action team contexts are underexplored in HRI research. In addition, whereas human-robot interactions have received attention, much remains to be understood about the effects of robots on human-human team processes. In response to these research needs, we varied the effects of robot descriptive characteristics, robot behaviors, and interaction context as a mobile manipulator robot delivered needed materials to nursing student teams during simulations. Team communication and team performance were rated by nursing instructors using a standardized healthcare communication framework and the Creighton Competency Evaluation Instrument. Results indicated that increased robot influencing factors (robot face, robot voice, response to requests, and passive presence) enhanced nursing students’ communication with each other and mannequin patients, thereby improving team performance. Together, these findings highlight that robots should not be viewed solely as direct collaborators with humans, but also as actors that shape how humans work together. As healthcare and other safety-critical domains increasingly integrate robots, understanding these indirect pathways will be critical for designing robots that enhance rather than hinder team effectiveness. |
|
| Zhang, Wanqi |
Wanqi Zhang, Jiangen He, and Marielle Santos (University of Tennessee at Knoxville, Knoxville, USA) Social robots hold promise for reducing job interview anxiety, yet designing agents that provide both psychological safety and instructional guidance remains challenging. Through a three-phase exploratory iterative design study (N=8), we empirically mapped this tension. Phase I revealed a “Safety–Guidance Gap”: while a Person-Centered Therapy (PCT) robot established safety, users felt insufficiently coached. Phase II identified a “Scaffolding Paradox”: rigid feedback caused cognitive overload, while delayed feedback lacked specificity. In Phase III, we resolved these tensions by developing an Agency-Driven Interaction Layer. Synthesizing our empirical findings, we propose the Adaptive Scaffolding Ecosystem—a conceptual framework that redefines robotic coaching not as a static script, but as a dynamic balance between affective support and instructional challenge, mediated by user agency. |
|
| Zhang, Xinyun |
Renee Ziqi Zhu, Nan Hu, Lihao Zheng, and Xinyun Zhang (Indiana University at Bloomington, Bloomington, USA) Most existing applications of social robots that support older adults focus on personal use or deployment within nursing facilities. Through our collaboration with a local senior community center, one major need that emerged is the use of technology to encourage older adults to be more physically active—an essential factor for maintaining physical health, supporting mental well-being, and building social capital. Guided by this need, our project explores how a community-based robot can serve as a shared resource that promotes both social connection and physical engagement among older adults. Rather than designing a robot that only facilitates group activities, our goal is to create a robot that helps build human-to-human relationships by supporting group exercises, shared experiences, and opportunities for older adults to meet and connect with one another. Through workshops with older adults, we designed MERRY (Matching Engagement & Route Recommendation for You), a Christmas tree-like robot aiming to help older adults to connect with each other and engage more in walking activities. The robot allow older adults to choose for suitable activities, connect with the community, track and reflect on their shared experiences. |
|
| Zhang, Yan |
Yan Zhang, Sarah Schömbs, Xiang Pan, Jan Leusmann, Sara Mongile, Mohammad Obaid, and Wafa Johal (University of Melbourne, Melbourne, Australia; Kyoto University, Kyoto, Japan; LMU Munich, Munich, Germany; Italian Institute of Technology, Genoa, Italy; Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden) Recent developments in large language models and multi-agentic systems present both opportunities and challenges for Human-Robot Interaction (HRI). However, it remains unclear how the interaction paradigms will be reshaped due to the growing complexity and opaqueness. Therefore, to bridge this gap and connect users with multi-agentic systems through embodied robot agents, this workshop aims to explore solutions for design challenges of multi-agentic systems on various types of robots in real-world contexts. Through a complementary mix of focus groups, including design activities and paper presentations, our workshop generates design considerations for real-world applications and outlines pioneering research agendas for multi-agentic systems in HRI. |
|
| Zhang, Ying |
Howard Ziyu Han, Ying Zhang, Allan Wang, and Nikolas Martelaro (Carnegie Mellon University, Pittsburgh, USA; Miraikan - National Museum of Emerging Science and Innovation, Tokyo, Japan) Using robot simulators in participatory human-robot interaction design can expand the interactions end-users can experience, articulate, and reshape during co-design. In robot social navigation, high-fidelity simulations have largely been developed for benchmarking algorithms and developing robot policy. However, less attention has been given to supporting end-user exploration and articulation of concerns. In this late-break report, we present design considerations and a system implementation that extend an existing social navigation simulator (SEAN 2.0) to support community-driven feedback and evaluation. We add features to the SEAN 2.0 platform to enable richer sidewalk scenario construction, interactive reruns, and robot signaling exploration. Finally, we provide a user scenario and discuss future directions for using participatory simulation to broaden stakeholder involvement and inform socially responsive navigation design. |
|
| Zhang, Yuchong |
Di Fu, Yuchong Zhang, Yong Ma, Maria Kyrarini, Chaona Chen, Doreen Jirak, Weiyong Si, and Danica Kragic (University of Surrey, Guildford, UK; KTH Royal Institute of Technology, Stockholm, Sweden; University of Bergen, Bergen, Norway; Santa Clara University, Santa Clara, USA; University of Sheffield, Sheffield, UK; University of Antwerp, Antwerp, Belgium; University of Essex, Essex, UK) Interactive artificial intelligence (AI) is rapidly reshaping human-centered robotics by moving beyond algorithmic efficiency toward real-time adaptability, transparency, and design for people and contexts. Building on the inaugural InterAI Workshop at IEEE RO-MAN 2024, this second edi- tion, proposed for HRI 2026, will convene the robotics and HRI communities to examine how interactive AI can en- able robust, trustworthy systems that operate seamlessly in dynamic, real-world environments. The half-day hybrid program will feature two keynote talks, and a peer-reviewed paper oral presentation session. Distinct from prior events, the workshop spotlights the integration of generative and embodied AI with human-in-the-loop learning and real-time decision making, as well as methods for explainability, eval- uation, and safety. The workshop aims to catalyze a collabo- rative agenda that bridges interactive AI technologies and human-centered robotic systems. |
|
| Zhang, Yujing |
Yujing Zhang, Iolanda Leite, and Sarah Gillet (KTH Royal Institute of Technology, Stockholm, Sweden) Aging populations increasingly face challenges such as reduced social engagement and heightened risks of isolation. Group-based activities present valuable opportunities to promote older adults’ emotional well-being and cognitive stimulation. Although prior HRI research has examined robots in group settings and as tools for individualized support, limited work has explored how robot-facilitated activities should be designed to support older adult groups' interaction in real community contexts. We developed a dual-robot version of the Swedish word-description game "With Other Words" and conducted in-the-wild deployments with fifteen older adults across local community centers. Through thematic analysis of post-session interviews and researcher observations, we identified key factors and design recommendations that are can help future work to build functioning interactions between robots and groups of older adults. |
|
| Zhang, Yuxin |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Zhang, Yuzhe |
Annika An, Zeyi Chen, Harshavardhan Reddy Gajarla, Xinyi Hu, Rishabh Kumar, Ningbo Li, Yifan Li, Ray Lin, Anshul Prakash, Linzhengrong Shao, Siyang Shen, Matthew Taruno, Chenghao Wang, Hanyang Wang, Fiona Wu, Zhenglong Yang, Yuxin Zhang, Yuzhe Zhang, Haonan Peng, Jack Hatcher, Zubin Assadian, and John Raiti (University of Washington, Seattle, USA; University of Washington, Bellevue, USA) This paper presents the design and implementation of a voice-controlled autonomous robotic delivery system that integrates physical human-robot interaction (HRI), perception, manipulation, and navigation. At the core of the system is an orchestration module implemented as a finite state machine, responsible for managing task execution and coordinating subsystem communication through ROS2 topics and services. The robot interprets natural language commands using a GPT-based Physical AI module, detects objects via perception input, manipulates them with a robotic arm, and navigates to user-defined destinations. We describe the full task pipeline—from voice input to final delivery—highlighting the orchestration logic, system robustness strategies, and real-time feedback mechanisms. Our results demonstrate that modular ROS2-based orchestration enables reliable multi-step execution in collaborative HRI scenarios. |
|
| Zhao, Michelle |
Bahar Irfan, Nikhil Churamani, Michelle Zhao, Rajat Kumar Jenamani, Ali Ayub, and Silvia Rossi (Familiar Machines & Magic, Woburn, USA; University of Cambridge, Cambridge, UK; Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA; Concordia University, Montreal, Canada; University of Naples Federico II, Naples, Italy) Today's high-capacity generalist robot policies provide a strong foundation for broad task-level competence, yet achieving effective and equitable support for people in everyday settings remains a significant challenge. Real-world environments are dynamic and unstructured, and human needs evolve over time, requiring robots that can adapt accordingly. The ultimate evaluator of any robotic system is the person it assists, and personalization is essential to ensuring equitable and meaningful support across diverse users and contexts. Developing robots that can continually learn from interaction, adapt their behaviors over time, and flexibly assume roles as learners and collaborators is a critical step toward realizing effective integration of robots into daily life. With this year's theme of "Evolving Assistance for Everyday Life", and in alignment with the conference theme "HRI Empowering Society", the sixth edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to bring together insights across diverse disciplines, focusing on how robots can progressively adapt their support to suit diverse individuals, each with unique and changing needs, across real-world contexts. Through this lens, the workshop aims to discuss current and future directions in how assistive systems can flexibly respond, continually improve over time, and deliver more inclusive and empowering support in everyday life. |
|
| Zhao, Yiwei |
Yitong Yuan, Ke Huang, Michael Detsiang Li Jr, Yiwei Zhao, and Baoyuan Zhu (Tsinghua University, Beijing, China) Unhealthy postures have become increasingly prevalent, affecting health and productivity, yet existing posture-correction devices rely on intrusive external reminders. We present Tuotle, a desktop robot that leverages cognitive dissonance by adopting a “bad posture,” prompting users to correct it and, in turn, reflect on their own posture. A pilot user study shows it has comparable posture-correction effectiveness to traditional devices, while showing significantly better user experience and long-term adoption intentions. Our work demonstrates that psychological mechanisms can be activated through human-robot interactions, opening new directions for technologies grounded in human psychology. |
|
| Zhao, Yuehan |
Shaoqing Liu, Yishan Duan, LiMeng Wang, Yuehan Zhao, Xiaohan Li, and Matt Philippe Laing (Tsinghua University, Peking, China; Tsinghua University, Beijing, China; Guangzhou Academy of Fine Arts, Guangzhou, China) This study explores the potential of social robots as icebreakers during first-time encounters between strangers, aiming to address common issues of social awkwardness that may hinder meaningful interactions. We designed and implemented LMAO, a desktop social robot (as shown in Fig. 1), which uses laughter as a non-verbal, emotionally contagious signal to intervene during conversation breaks. Through a user study involving 20 participants in group discussions, we examined whether the robot's laughter could enhance social engagement by increasing smile frequency and verbal contributions. The results indicated that compared to the control group with no robot intervention, the group with LMAO exhibited significantly higher levels of smiling and verbal output. Our findings suggest that context-aware robotic laughter can effectively reduce subjective feelings of awkwardness and facilitate smoother social initiation, highlighting the potential of social assistive robots in fostering human connection. |
|
| Zhao, Zhao |
Angela Tran and Zhao Zhao (University of Guelph, Guelph, Canada) Current social skills training (SST) often lacks inclusivity, limiting participation among neurodivergent individuals. In this late-breaking report, we present an in-progress design and study protocol for a neurodiversity-affirming social skills training approach using the Furhat social robot with neurodivergent post-secondary students who find it difficult to initiate conversations with peers. We are developing a Wizard-of-Oz methodology in which a human operator flexibly guides Furhat’s responses as participants practice self-identified challenging scenarios (e.g., asking a classmate about a group project). We describe our session structure, measures (comfort, self-efficacy, preferences, behavioural indicators), and planned mixed-methods analysis, and outline our current implementation steps. This work-in-progress contribution invites feedback from the HRI community on how embodied conversational agents can offer neurodiversity-affirming social skills training. |
|
| Zheng, Caroline Yan |
Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. |
|
| Zheng, Lihao |
Renee Ziqi Zhu, Nan Hu, Lihao Zheng, and Xinyun Zhang (Indiana University at Bloomington, Bloomington, USA) Most existing applications of social robots that support older adults focus on personal use or deployment within nursing facilities. Through our collaboration with a local senior community center, one major need that emerged is the use of technology to encourage older adults to be more physically active—an essential factor for maintaining physical health, supporting mental well-being, and building social capital. Guided by this need, our project explores how a community-based robot can serve as a shared resource that promotes both social connection and physical engagement among older adults. Rather than designing a robot that only facilitates group activities, our goal is to create a robot that helps build human-to-human relationships by supporting group exercises, shared experiences, and opportunities for older adults to meet and connect with one another. Through workshops with older adults, we designed MERRY (Matching Engagement & Route Recommendation for You), a Christmas tree-like robot aiming to help older adults to connect with each other and engage more in walking activities. The robot allow older adults to choose for suitable activities, connect with the community, track and reflect on their shared experiences. |
|
| Zhou, Feng |
Caroline Yan Zheng, Maria Elena Giannaccini, Angela Higgins, Feng Zhou, Mark Paterson, and Praminda Caleb-Solly (University of Nottingham, Nottingham, UK; University of Pittsburgh, Pittsburgh, USA) The critical role of interhuman touch and the rapid development of accessible haptics technology is preluding a new generation of social and assistive robotic interactions that support and enhance human daily life through physical contact, including assisting mobility, mental health, pain management, social bonding, and novel sensory experience for body literacy. For social and assistive robotic systems to fit in the intimate space of physical contact with human bodies, the nature and success of the interaction needs to be interlinked with the sensory and sentimental qualities of robot touch. HRI should not only be driven by function and efficiency, but more critically, the experience of the touch interaction, felt by the human. This raises fundamental methodological challenges. Currently, transferring rich qualitative experience into effective robotic systems has not been made possible, and there is a lack of tools to investigate the ephemeral phenomenon of felt sensations. Addressing this challenge calls for methodological innovation to investigate the sensory and experiential aspects of technology-initiated social touch, through facilitating cross-disciplinary expertise and end-users' involvement. This workshop mobilises experts from diverse disciplines, including robotics engineering, AI, social science, neuroscience, healthcare, soma design and arts. This workshop aims to collectively reflect on strategies, tools and methods for the co-exploration and co-production of Robot-Human Touch Interactions that afford appropriate and desirable human felt experience. |
|
| Zhou, Jifan |
Xucong Hu, Qinyi Hu, Tianya Yu, Mowei Shen, and Jifan Zhou (Zhejiang University, Hangzhou, China) First impressions are critical for public-facing social robots: users rapidly infer a robot’s potential for social interaction from its appearance, shaping expectations and willingness to engage. Yet no existing scale captures how people interpret the interaction potential implied by a robot’s visual affordances. We introduce the Robot Social Interaction Potential Scale (RoSIP), a concise appearance-based scale assessing two dimensions—Perceptual Potential and Behavioral Potential. Across a pilot study and large-scale exploratory and confirmatory factor analyses (N = 750), we identified a 10-item, two-factor structure with strong internal consistency and solid construct and discriminant validity. RoSIP provides a dedicated tool for rapidly quantifying appearance-based inferences about a robot’s social interaction potential, enabling future work to systematically link robot morphology and social perception in HRI. |
|
| Zhou, Qifei |
Yijie Guo, Ruhan Wang, Jini Tao, Zhiling Xu, Yaowen Shen, Qifei Zhou, Zhenhan Huang, Danqi Huang, and Haipeng Mi (Tsinghua University, Beijing, China; Neurobo, Shanghai, China; Tongji University, Shanghai, China) In-public companionship is underexplored, especially for ACG (anime, comics, and games) users who carry character goods and seek shared presence with favorite characters in everyday public life. We present Goobo, a lightweight multimodal companion embedded in an ita-bag (a transparent display bag used by ACG fans) that “activates” carried character goods via real-time visual perception and persona-consistent expressive narration. Informed by a formative workshop, Goobo illustrates a portable, character-bound interaction paradigm and serves as a platform for upcoming field studies on how users engage with activated characters in public settings. |
|
| Zhou, Tianlu |
Jinxuan Du, Rulan Li, Tianlu Zhou, and Qianrui Liu (Tsinghua University, Beijing, China) Young people often suppress emotional expression non-verbally, leading to social friction and misunderstanding. Therefore, We propose MuffBunny, an embodied rabbit-eared robot designed as a social buffer. MuffBunny identifies the listener's implicit emotional valence and arousal from verbal stimuli in real-time and converts these emotions into intuitive physical cues—dynamic ear morphing. Upward morphing indicates positive emotions, and downward morphing signifies negative ones. Our design aims to provide a novel, non-confrontational proxy for emotional expression, reducing the burden of self-disclosure, fostering empathy, and promoting a healthier social atmosphere. |
|
| Zhou, Yijun |
Yijun Zhou, Muhan Hou, and Kim Baraka (Vrije Universiteit Amsterdam, Amsterdam, Netherlands) Imitation learning relies on high-quality demonstrations, and teleoperation is a primary way to collect them, making teleoperation interface choice crucial for the data. Prior work mainly focused on static tasks, i.e., discrete, segmented motions, yet demonstrations also include dynamic tasks requiring reactive control. As dynamic tasks impose fundamentally different interface demands, insights from static-task evaluations cannot generalize. To address this gap, we conduct a within-subjects study comparing a VR controller and a SpaceMouse across two static and two dynamic tasks (N=25). We assess success rate, task duration, cumulative success, alongside NASA-TLX, SUS, and open-ended feedback. Results show statistically significant advantages for VR: higher success rates, particularly on dynamic tasks, shorter successful execution times across tasks, and earlier successes across attempts, with significantly lower workload and higher usability. As existing VR teleoperation systems are rarely open-source or suited for dynamic tasks, we release our VR interface to fill this gap. |
|
| Zhu, Baoyuan |
Yitong Yuan, Ke Huang, Michael Detsiang Li Jr, Yiwei Zhao, and Baoyuan Zhu (Tsinghua University, Beijing, China) Unhealthy postures have become increasingly prevalent, affecting health and productivity, yet existing posture-correction devices rely on intrusive external reminders. We present Tuotle, a desktop robot that leverages cognitive dissonance by adopting a “bad posture,” prompting users to correct it and, in turn, reflect on their own posture. A pilot user study shows it has comparable posture-correction effectiveness to traditional devices, while showing significantly better user experience and long-term adoption intentions. Our work demonstrates that psychological mechanisms can be activated through human-robot interactions, opening new directions for technologies grounded in human psychology. |
|
| Zhu, Qin |
Alexandra Bejarano, Hong Tran, and Qin Zhu (Virginia Tech, Blacksburg, USA) Compassion plays a critical role in creating inclusive, supportive learning environments that promote students' well-being and engagement. As social robots become more common in elementary classrooms to support academic and socio-emotional learning, they introduce new possibilities for modeling and nurturing compassion. However, they also raise important ethical questions about how children understand and experience care and compassion in human-robot interactions. This paper presents a conceptual framework for examining the ethics of compassionate robots in elementary education. It identifies four key ethical dimensions (Connection, Power, Access, Information) that shape how compassionate behaviors expressed or elicited by robots may influence children's perceptions of care, agency, and moral responsibility. Ultimately, the framework offers a structured approach for evaluating whether, when, and how robots should express compassion in ways that are developmentally appropriate, culturally responsive, and aligned with students' lived experiences, supporting the responsible integration of compassionate robots in education. |
|
| Zhu, Renee Ziqi |
Renee Ziqi Zhu, Nan Hu, Lihao Zheng, and Xinyun Zhang (Indiana University at Bloomington, Bloomington, USA) Most existing applications of social robots that support older adults focus on personal use or deployment within nursing facilities. Through our collaboration with a local senior community center, one major need that emerged is the use of technology to encourage older adults to be more physically active—an essential factor for maintaining physical health, supporting mental well-being, and building social capital. Guided by this need, our project explores how a community-based robot can serve as a shared resource that promotes both social connection and physical engagement among older adults. Rather than designing a robot that only facilitates group activities, our goal is to create a robot that helps build human-to-human relationships by supporting group exercises, shared experiences, and opportunities for older adults to meet and connect with one another. Through workshops with older adults, we designed MERRY (Matching Engagement & Route Recommendation for You), a Christmas tree-like robot aiming to help older adults to connect with each other and engage more in walking activities. The robot allow older adults to choose for suitable activities, connect with the community, track and reflect on their shared experiences. |
|
| Zhura, Iana |
Faryal Batool, Iana Zhura, Valerii Serpiva, Roohan Ahmed Khan, Ivan Valuev, Issatay Tokmurziyev, and Dzmitry Tsetserukou (Skolkovo Institute of Science and Technology, Moscow, Russian Federation) Reliable human–robot collaboration in emergency scenarios requires autonomous systems that can detect humans, infer navigation goals, and operate safely in dynamic environments. This paper presents HumanDiffusion, a lightweight image-conditioned diffusion planner that generates human-aware navigation trajectories directly from RGB imagery. The system combines YOLO-11 based human detection with diffusion-driven trajectory generation, enabling a quadrotor to approach a target person and deliver medical assistance without relying on prior maps or computationally intensive planning pipelines. Trajectories are predicted directly in pixel space, enabling smooth motion and maintaining a consistent safety margin around humans. We evaluate HumanDiffusion in simulation and real-world indoor mock-disaster scenarios. On a 300-sample test set, the model achieves a mean squared error of 0.02 in pixel-space trajectory reconstruction. Real-world experiments demonstrate an overall mission success rate of 80% across accident-response and search-and-locate tasks with partial occlusions. These results indicate that human-conditioned diffusion planning offers a practical and robust solution for human-aware UAV navigation in time-critical assistance settings. |
|
| Zibetti, Elisabetta |
Elisabetta Zibetti, Raphael Lorenzo-Louis, Fabio Amadio, Julien Wacquez, Joffrey Becker, Bertrand Luvison, and Serena Ivaldi (Université Paris 8, Paris, France; Inria - CNRS - Université de Lorraine - Loria, Villers-les-Nancy, France; CEA - List, Palaiseau, France; ETIS (UMR8051) - CYU - ENSEA - CNRS, Cergy, France; University Paris-Saclay - CEA - List, Palaiseau, France) How does interaction with robots differ between spontaneously formed groups and individuals? Despite increasing robot deployment in public spaces, this question remains understudied in real-world settings. We conducted a field study deploying a service stationary robot in semi-public office spaces, tracking 221 individuals (42 alone, 179 in groups) across 95 interaction opportunities. Cookies were placed on accessible trays, creating a low-barrier functional interaction opportunity (taking a cookie) while allowing observation of spontaneous social behaviors. Groups demonstrated significantly higher engagement: functional interactions and social gestures. Within groups, leader presence amplified social engagement threefold. These findings are consistent with descriptive norm theory: group presence and leader behavior were associated with increased social engagement, though context-specific factors may moderate these effects. Results highlight the potential value of group detection for robots in multi-user environments, and demonstrate the feasibility of integrating psychological theory with automated tracking to study spontaneous human-robot encounters in the wild. |
|
| Ziemke, Tom |
Sam Thellman, Klara Bergsten, Edoardo Datteri, and Tom Ziemke (Linköping University, Linköping, Sweden; University of Milano-Bicocca, Milan, Italy) People routinely attribute mental states such as beliefs, desires, and intentions to explain and predict others' behavior. Prior work shows that such attributions extend to robots, yet it remains unclear what people assume about the reality of the states they attribute to them. Building on recent conceptual work on folk-ontological stances, we report a pilot study measuring realist, anti-realist, and agnostic stances toward robot minds. Using a questionnaire (N = 66), we assessed stances toward today's robots and robots in principle, and examined stance rigidity through a reflection-and-reassessment design. Results show stronger anti-realist tendencies for today's robots than for robots in principle. Stances were largely rigid across reflection. Notably, participants did not hold a uniformly non-realist view but expressed a diversity of folk-ontological stances, including substantial proportions of agnostic and realist responses. This heterogeneity highlights the need for measurement tools that move beyond binary measures and capture nuance in folk-ontological reasoning. Future work will expand stance options to include finer-grained realist and anti-realist variants and recruit cross-cultural samples to assess variation across populations. |
|
| Zimmerman, Emily |
Jennifer Dong, Sophie Weissel, and Emily Zimmerman (Georgia Institute of Technology, Atlanta, USA) Elevators can be socially awkward — strangers share intimate space yet avoid interaction with each other. To address this, we present Elevator Pitch, a ceiling-mounted interactive robot that playfully facilitates social interaction in elevators. Elevator Pitch aims to foster temporary togetherness among frequent strangers in enclosed public spaces while exploring how ludic, socially expressive architectural robots can act as social agents. This paper presents the design and preliminary user testing of Elevator Pitch. |
|
| Zimmerman, Megan |
Megan Zimmerman, Jeremy Marvel, Shelly Bagchi, and Snehesh Shrestha (National Institute of Standards and Technology, Gaithersburg, USA; University of Maryland College Park, College Park, USA) A purpose-built testbed for human-robot interaction (HRI) metrology is introduced and discussed. This testbed integrates multiple sensor systems and precision manufacturing to produce high-quality HRI datasets of human volunteers working with robots to complete collaborative tasks in a shared environment. Sensors include audio, video, motion capture, robot information, and user entries, and may also incorporate task-specific object tracking. Data collected will be replicable in identical testbeds, and will enable more robust findings in future HRI studies. |
|
| Zou, Zhengbo |
Christine Wun Ki Suen and Zhengbo Zou (Columbia University, New York City, USA) Robots with prosocial behavior can enhance human trust and the effectiveness of Human-Robot Interaction (HRI) in life-threatening scenarios. However, existing empathic robots often rely on rule-based or goal-oriented models that diverge from psychological theories of empathy and potentially limit perceived human trust. To address this gap, we propose a novel approach grounded in the empathy–altruism hypothesis from social psychology. Our proposed approach equips robot with the capability of affective perspective taking, which allows it to recall its prior self-experience, thereby encouraging empathic concern and promoting prosocial behavior toward humans. We evaluated the proposed approach on robotic agents in realistic 3D fire-emergency simulations and analyzed their prosociality across three psychological dimensions. Experiments show that a robot embedded with the proposed approach achieves a 73.7% helping rate and shows consistent prosocial tendencies across all three dimensions, compared with a 52.6% helping rate for the baseline robot. These findings open new directions for developing robots with prosocial behavior (prosocial robots) during emergency response, and support more effective HRI in life-threatening scenarios. Demonstrations and further details are available at here. |
|
| Zuckerman, Oren |
Ofir Sadka, Michael Kupferman, Hili Megory, Lior Simchon, Rotem Dagan, Guy Benadon, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Robots are becoming part of everyday life, where their success depends as much on social interaction as on task performance. While rich movement is often assumed to enhance that experience, minimal gestures can also elicit meaningful social interactions. This suggests that gesture design informed by fundamental human nonverbal cues may achieve comparable social expressivity with fewer degrees of freedom. We explored whether reducing a robot’s degrees of freedom alters the social experience by comparing interactions in two movement configurations: a Full-DoF condition and a Reduced-DoF condition. This comparison allowed us to evaluate whether increased movement capability contributes meaningfully to the quality of the social experience. Bayesian analyses consistently favored the null hypothesis, indicating no meaningful differences between the Full-DoF and Reduced-DoF conditions. These initial findings suggest that well-designed simple gestures can support a consistent social experience, challenging the assumption that greater movement capability should be automatically expected to enhance human–robot interaction. Elior Carsenti, Adi Manor, Ofir Sadka, Michael Kupferman, Guy Benadon, Lior Simchon, Rotem Dagan, Hili Megory, Sean Friedman, Jason F. Gilbert, Oren Zuckerman, and Hadas Erel (Reichman University, Herzliya, Israel; Intuition Robotics, Ramat Gan, Israel) Social robots are increasingly expected to initiate interactions, yet proactive behavior often produces negative experiences if presented when the users are not available. We argue that availability is not binary and depends on the user's cognitive load. Moreover, proactivity can take different forms, suggesting that proactive cues can be adjusted to the user’s current load. We tested whether matching proactivity modality (verbal and nonverbal cues) to cognitive-load level improves robot perception. In a 2×2×2 between-participants study, 87 participants completed a high-load writing task or a low-load screw-sorting task while an ElliQ robot initiated interaction using verbal cues, nonverbal cues, both, or neither. Perceptions varied across conditions. Under low load, verbal proactivity was preferred regardless of nonverbal cues. Under high load, nonverbal-only proactivity produced the best experience. These findings suggest that proactivity design is not trivial and should focus on aligning the robot's proactive behavior with the user's cognitive load. |
1354 authors
proc time: 7.25