21st ACM/IEEE International Conference on Human-Robot Interaction (HRI 2026), March 16–19, 2026,
Edinburgh, Scotland, UK
Frontmatter
Welcome from the Chairs
Welcome to the 21st annual ACM/IEEE International Conference on Human-Robot Interaction: we are delighted to welcome the HRI community to Edinburgh, UK, to celebrate the conference's 21st anniversary. We are also pleased to announce that the HRI conference series has this year been awarded Category A status. Category A conferences are considered top-tier, prestigious, and highly influential, often serving as flagship events in their particular area of computer science.
Article: hri26foreword-fm001-p
Full Research Papers
Do Type and Importance of Agent’s Resource Matter? How Robots’ Helping Behavior Influences Human Trust to Them
Chenlin Hang,
Masahiro Shiomi,
Rui Prada, and
Seiji Yamada
(Kanagawa University, Kanagawa, Japan; ATR, Kyoto, Japan; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal)
With the rapid advancement of robotics, robots’ helping behaviors are increasingly framed not only as functional assistance but also as prosocially meaningful interaction. In this context, the resource cost borne by the help-provider is a critical factor, yet it has not been systematically explored in existing human-robot interaction (HRI) research. Understanding how humans perceive and respond to different types of helping is essential for building better human–robot relationships. This study addresses this gap through two experiments. Study 1 examined the role of agent resource type (robot’s own resources vs. external resources). Results showed that when robots shared their own resources, participants did not report significant differences in overall attitudes or prosocial behavior, but attributed higher performance trust and expressed stronger feelings of gratitude and guilt. Study 2 further examined the importance of agent resources (robot battery level: high vs. low). The results showed that even when relative costs were the same, participants tended to perceive sharing from a low-battery robot as more reliable, while variations in resource type or importance did not significantly change social responses. These findings suggest that human evaluations of robots are shaped not only by the outcomes of helping but also by the perceived cost and sacrifice underlying robot actions. Our work offers an initial direction for integrating resource cost considerations into the design of social robots.
Article: hri26main-p1173-p
Long-Term Integration of a Robot in an Inclusive Daycare: An Ethnographic Study Focused on Children and Caregivers
Jan Ole Rixen,
Kathrin Gerling, and
Barbara Bruno
(Karlsruhe Institute of Technology, Karlsruhe, Germany)
Embedding social robots in educational and/or childcare settings has potential to engage children while supporting caregivers. However, little is known about the practical, long-term integration of robots in such settings. Our work addresses this gap through an ethnographic study that adopts a year-long perspective toward a socially assistive robot in an inclusive daycare with a four-month robot deployment phase. Through Thematic Analysis, we highlight children's ability to develop a variety of self-determined interactions with the robot, while unveiling caregivers' need for its integration into structured routines such as meal times, as well as challenges for robustness of the robot in our research setting. On this basis, we make recommendations for the design of social robots for daycare environments.
Article: hri26main-p1199-p
(Em)Powering the End of Power: HRI for Safe Nuclear Decommissioning
Anne-Marie Oostveen,
Jingyi E. Zhang, and
Sarah Fletcher
(Cranfield University, Cranfield, UK)
As fusion research facilities like the Joint European Torus reach the end of their operational lives, the need for safe and precise decommissioning has become critical. Human-Robot Interaction (HRI) technologies are central to reducing human risk and enabling complex remote operations in hazardous environments. Based on an exploratory study with experts from the UK Atomic Energy Authority and the International Thermonuclear Experimental Reactor, this paper examines current and emerging uses of teleoperated manipulators, articulated booms, virtual reality, and force feedback systems in fusion decommissioning. The findings highlight key insights into training, cognitive load, and ethical considerations in operator data use. HRI emerges not just as a technical solution but as a vital enabler for empowering society to transition responsibly away from legacy energy infrastructures.
Article: hri26main-p1241-p
PEHRCIVE: Platform for Evaluating Human-Robot Collaboration and Interaction in Virtual Environments
Eduardo Araújo,
Paula Alexandra Silva,
Sergi Bermúdez i Badia,
Diogo Branco, and
Artur Pilacinski
(University of Coimbra, Coimbra, Portugal; University of Madeira, Funchal, Portugal; NOVA LINCS, Funchal, Portugal; ARDITI, Funchal, Portugal; Ruhr University Bochum, Bochum, Germany)
As Human-Robot Interaction (HRI) and Human-Robot Collaboration (HRC) integrate into society, there's a growing need for flexible, safe, and cost-effective methods to evaluate HRI and HRC from a human perspective. Current evaluation techniques, which often rely on physical hardware, are limited by high costs, safety risks, and low replicability. To address these challenges, we introduce Platform for Evaluating Human-Robot Collaboration and Interaction in Virtual Environments (PEHRCIVE), a software tool built with Unity that leverages Virtual Reality (VR) to provide a safe, flexible, and low-cost research platform. PEHRCIVE provides a customizable, immersive VR environment for HRI and HRC research, based on a collaborative interlocking block assembly task, three distinct collaborative robots (Kuka LBR iiwa, Rethink Robotics Sawyer, and Baxter), and sequential and simultaneous collaboration modes. Features include an easily modifiable external configuration file, a robust data logging system with timestamps and task-specific metrics, and customizable answered-in-VR questionnaires for data collection. PEHRCIVE was validated with 36 participants by collecting physiological and psychological data and responses to the Godspeed Questionnaire Series and VR Presence questionnaire. Participants reported a high sense of presence and a positive VR experience, confirming the platform's effectiveness as a research tool. PEHRCIVE facilitates the design, testing, and evaluation of HRI and HRC experiments more efficiently and safely, accelerating progress for the research community.
Article: hri26main-p1416-p
What Is a Robot? Understanding Baseball’s “Robot Umpire” through the Lens of Fluid Technology
Waki Kamino,
Andrea W. Wen-Yi,
Guy Hoffman,
Selma Šabanović, and
Malte F. Jung
(Cornell University, Ithaca, USA; Indiana University at Bloomington, Bloomington, USA)
The question “what is a robot?” has long been contested as automated embodied systems encompass many forms. We examine this fundamental question in Human-Robot Interaction through the case of Major League Baseball’s “robot umpire,” officially known as the Automated Ball-Strike System (ABS). Drawing on the concept of “fluid technology,” we analyze how the robot umpire is not a fixed technological artifact but a fluid sociotechnical assemblage whose definition and function are continuously negotiated. Through ethnographic fieldwork and interviews with stakeholders across the baseball ecosystem, we demonstrate that the robot umpire’s physical boundaries, operational parameters, and authorship remain contested and evolving, shaped by ongoing interactions between technology developers, league officials, umpires, players, and fans. Our findings reveal that treating robots as fluid technologies—rather than as discrete objects—opens new possibilities for understanding human-robot relationships. We contribute both theoretical insights regarding the ontological flexibility of “robots” and methodological approaches for studying and designing robots as sociotechnical assemblages.
Article: hri26main-p1508-p
Interface-Aware Trajectory Reconstruction of Limited Demonstrations for Robot Learning
Demiana R. Barsoum,
Mahdieh Nejati Javaremi,
Larisa Y.C. Loke, and
Brenna D. Argall
(Northwestern University, Evanston, USA; Shirley Ryan AbilityLab, Chicago, USA; Northwestern University, Chicago, USA)
Assistive robots offer agency to humans with severe motor impairments. Often, these users control high-DoF robots through low-dimensional interfaces—such as using a 1-D sip/puff interface to operate a 6-DoF robotic arm. This mismatch results in having access to only a subset of control dimensions at a given time, imposing unintended and artificial constraints on robot motion. As a result, interface-limited demonstrations embed suboptimal motions that reflect interface restrictions rather than user intent. To address this, we present a trajectory reconstruction algorithm that reasons about task, environment, and interface constraints to lift demonstrations into the robot’s full control space. We evaluate our approach using real-world demonstrations of ADL-inspired tasks performed via a 2-D joystick and 1-D sip/puff control interface, teleoperating two distinct 7-DoF robotic arms. Analyses of the reconstructed demonstrations and derived control policies show that lifted trajectories are faster and more efficient than their interface-constrained counterparts while respecting user preferences.
Article: hri26main-p1622-p
Reframing Conversational Design in HRI: Deliberate Design with AI Scaffolds
Shiye Cao,
Jiwon Moon,
Yifan Xu,
Anqi Liu, and
Chien-Ming Huang
(Johns Hopkins University, Baltimore, USA; University of Chicago, Chicago, USA)
Large language models (LLMs) have enabled conversational robots to shift toward free-form interaction. However, without context-specific adaptation, generic LLM outputs can be ineffective or inappropriate. This adaptation is often attempted through prompt engineering, which is non-intuitive and tedious. Moreover, predominant design practice in HRI relies on impression-based, trial-and-error refinement without structured methods or tools, making the process inefficient and inconsistent. To address this, we present AI-Aided Conversation Engine (ACE) to support deliberate design of human-robot conversations with three key innovations: 1) an LLM-powered voice agent that scaffolds initial prompt creation to overcome the "blank page problem," 2) an annotation interface that enables the collection of granular and grounded feedback on conversational transcripts, and 3) using LLMs to translate user feedback into prompt refinements. We evaluated ACE through two user studies, examining both designers' experiences and end users' interactions with robots designed using ACE. Results show that ACE facilitates the creation of robot behavior prompts with greater clarity and specificity, and that the prompts generated with ACE lead to higher-quality human-robot conversational interactions.
Article: hri26main-p1664-p
Open-Ended Goal Inference through Actions and Language for Human-Robot Collaboration
Debasmita Ghose,
Oz Gitelson,
Marynel Vázquez, and
Brian Scassellati
(Yale University, New Haven, USA)
To collaborate with humans, robots must infer goals that are often ambiguous, difficult to articulate, or not drawn from a fixed set. Prior approaches restrict inference to a predefined goal set, rely only on observed actions, or depend exclusively on explicit instructions, making them brittle in real-world interactions. We present BALI (Bidirectional Action–Language Inference) for goal prediction, a method that integrates natural language preferences with observed human actions in a receding-horizon planning tree. BALI combines language and action cues from the human, asks clarifying questions only when the expected information gain from the answer outweighs the cost of interruption, and selects supportive actions that align with inferred goals. We evaluate the approach in collaborative cooking tasks, where goals may be novel to the robot and unbounded. Compared to baselines, BALI yields more stable goal predictions and significantly fewer mistakes.
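The abstract's rule for when to ask a clarifying question — only when the expected information gain from the answer outweighs the cost of interruption — can be sketched as a simple entropy-based decision. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the goal distributions and the `should_ask` helper are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete goal distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def should_ask(prior, posteriors_by_answer, answer_probs, interrupt_cost):
    """Ask only if expected information gain about the goal exceeds
    the (hand-set) cost of interrupting the human collaborator."""
    expected_posterior_entropy = sum(
        answer_probs[a] * entropy(posteriors_by_answer[a])
        for a in answer_probs)
    info_gain = entropy(prior) - expected_posterior_entropy
    return info_gain > interrupt_cost
```

For example, with a uniform prior over two candidate goals and a question that fully disambiguates them, the gain is one bit, so the robot asks only if interrupting costs less than that.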
Preprint
Article: hri26main-p1722-p
WAFFLE: A Wearable Approach to Bite Timing Estimation in Robot-Assisted Feeding
Akhil Padmanabha,
Jessie Yuan,
Tanisha S. Mehta,
Rajat Kumar Jenamani,
Eric Hu,
Victoria de León,
Anthony Wertz,
Janavi Gupta,
Ben Dodson,
Yunting Yan,
Carmel Majidi,
Tapomayukh Bhattacharjee, and
Zackory Erickson
(Carnegie Mellon University, Pittsburgh, USA; Cornell University, Ithaca, USA)
Millions of people around the world need assistance with feeding. Robotic feeding systems offer the potential to enhance autonomy and quality of life for individuals with impairments and reduce caregiver workload. However, their widespread adoption has been limited by technical challenges such as estimating bite timing, the appropriate moment for the robot to transfer food to a user’s mouth. In this work, we introduce WAFFLE: Wearable Approach For Feeding with LEarned Bite Timing, a system that accurately predicts bite timing by leveraging wearable sensor data to be highly reactive to natural user cues such as head movements, chewing, and talking. We train a supervised regression model on bite timing data from 14 participants and incorporate a user-adjustable assertiveness threshold to convert predictions into proceed or stop commands. In a study with 15 participants without motor impairments with the Obi feeding robot, WAFFLE performs statistically on par with or better than baseline methods across measures of feeling of control, robot understanding, and workload, and is preferred by the majority of participants for both individual and social dining. We further demonstrate WAFFLE’s generalizability in a study with 2 participants with motor impairments in their home environments using a Kinova 7DOF robot. Our findings support WAFFLE’s effectiveness in enabling natural, reactive bite timing that generalizes across users, robot hardware, robot positioning, feeding trajectories, foods, and both individual and social dining contexts. Videos are located at https://sites.google.com/view/bitetiming/.
Video
Article: hri26main-p1765-p
A Nuanced Approach to Robotics and Aesthetics
Alexandre Colle and
Mauro Dragone
(Heriot-Watt University, Edinburgh, UK; University of Edinburgh, Edinburgh, UK)
This paper argues that aesthetics in Human–Robot Interaction (HRI) should be understood as an integrated system of perception, emotion, and meaning-making, rather than as surface-level styling. A concise review connects aesthetic theory, design studies, and branding research to show how form, material, motion, and context shape user judgement, trust, and desire. This review also highlights the prevailing tendencies in robotics research to focus on isolated visual features, while often overlooking the contextual and symbolic dimensions that influence user interpretation in their daily lives.
Brand is introduced as an aesthetic context that shapes expectation and value. To examine this, an online between-subjects study was conducted in which the same service robot was presented with identical descriptive features while only the brand label varied across five well-known technology and retail names, plus a fictitious control. Two hundred and fifty participants rated intention to use, perceived quality, aesthetics, performance, and expected price. Brand identity significantly influenced intention to use and perceived quality; expected price varied strongly by brand, while ratings of the visual design remained stable. Technology brands increased willingness to use, the fictitious brand led in aesthetics and quality, and weaker outcomes for some names were attributed to a poor fit with the domestic robot category.
These findings position brand identity as a contextual design variable in HRI. Effective adoption emerges when visual design, embodied behaviour, and symbolic framing are coherently aligned, enabling robots to support meaning-making and identity formation in everyday settings.
Article: hri26main-p1831-p
The Dynamics of Human Fairness Judgments towards a Robot
Houston Claure,
Austin Narcomey,
Kate Candon,
Inyoung Shin, and
Marynel Vázquez
(Yale University, New Haven, USA)
Fairness is critical for collaboration between humans, and recent research has shown its importance in human–robot collaboration. However, most human-robot interaction (HRI) studies probe fairness judgments toward a robot only at the conclusion of an interaction, overlooking the fact that perceptions of fairness can evolve over time. We present two studies of dynamic fairness that both leverage a Multiplayer Space Invaders game, where a robot controls a spaceship and distributes support across players' sides of the screen. The robot's support is at times biased in favor of one player or the other. In the first study, we examine how fairness perceptions are influenced by the timing of a robot's biased support (early vs. late in the interaction) and the beneficiary of this support (the participant vs. another agent). In the second study, we investigate how expectations of a robot's support behavior (biased vs. unbiased) interact with its actual behavior (biased vs. unbiased) in a setting where two participants each worked to reach an individual score threshold. We find that fairness judgments are dynamic: fairness falls after the robot's allocation of support becomes biased but is slower to recover once support becomes unbiased, and participants expecting unbiased behavior judge fairness more harshly when these expectations are violated. Our findings advance understanding of fairness in HRI by presenting it as a dynamic construct shaped not only by the actual behavior of the robot but also by the timing of robot actions and expectations of robot behavior.
Article: hri26main-p1834-p
Why Do Service Robots Fail? A Systematic Literature Review from a Service Design Perspective
Sanqi Yang,
Jaehyun Park,
Sangseok You, and
Younghoon Chang
(Hong Kong Polytechnic University, Hong Kong, China; Sungkyunkwan University, Seoul, Republic of Korea; University of Nottingham Ningbo China, Ningbo, China)
Service robots are widely deployed to address labor shortages and improve service quality, yet many still fail in practice. Most research has focused on positive outcomes, overlooking why these robots fail. To address this gap, our study examines service robot failures from a service design perspective, focusing on design features. We conducted a systematic literature review of 863 studies and selected 34 as our data sample. We identified 17 robot design features across three service design aspects and proposed an integrated framework that explains how these features lead to service failures. Theoretically, this study provides a framework for explaining service robot failures from a service design (S-D Logic) perspective, addressing a gap in HRI and service design research. In practice, this study offers actionable guidance for practitioners on improving the design of service robots, their human-robot interaction, and their system integration in real-world service environments.
Article: hri26main-p1937-p
Uncanny Touch? Investigating the Influence of Lifelike Haptic Cues on User Perceptions of a Social Robot
Jacqueline Borgstedt,
Shaun Macdonald,
Jacob Bhattacharyya,
Frank Pollick, and
Stephen Brewster
(ETH Zurich, Zurich, Switzerland; University of Glasgow, Glasgow, UK)
Touch is a central aspect of social human-human interaction, yet when designing social human-robot interaction, the experience of touch is very often neglected. This work explores whether a robot should feel alive by investigating the effect of life-like haptic stimuli inspired by bio-physiological signals (heartbeat, purring, and thermal feedback) on user perception. The results of an interaction study (n=42) showed that haptic cues which simulate bio-physiological signals significantly increased perceived anthropomorphism, social warmth, and likeability of the PARO robot compared to no additional haptics or abstract vibration. We discuss how multimodal haptic cues, despite increasing perceived eeriness, also increased perceived likeability of the robot, providing novel insights into whether the uncanny valley effect may extend into non-visual modalities. These results provide the first empirical evidence that life-like tactile cues can enhance user perceptions of social robots, offering design guidance for deploying effective and engaging touch interactions between humans and zoomorphic robots.
Article: hri26main-p2025-p
Perceiving Animacy in Robots: A Neuroimaging Study
Or Yizhar,
Amber Maimon,
Zohar Tal,
Iddo Yehoshua Wald,
Hadas Erel,
Doron Friedman,
Oren Zuckerman, and
Amir Amedi
(MPI for Human Development, Berlin, Germany; TU Dresden, Dresden, Germany; Ben Gurion University of the Negev, Be'er Sheva, Israel; Reichman University, Herzliya, Israel; University of Coimbra, Coimbra, Portugal; University of Bremen, Bremen, Germany)
Animacy is central to HRI, as it is critical for perceptions of intentionality and influences acceptance, interaction quality, and trust. In order to better understand the underlying mechanisms of the sense of animacy and its role in HRI, we conducted an experiment (n=13) using a social robotic object that is ambiguous with respect to its perceived animacy. We examined how brain activity related to a subjective animacy rating of the robot by participants after watching it perform simple gestures. Using functional imaging, we found that the ventral occipitotemporal (VOT) cortex, a region known to distinguish animate from inanimate entities, also responds to robots along this same organizational gradient. In an exploratory analysis, activity in the left VOT was associated with the individual differences in participants’ subjective animacy rating of the robot. This extends established principles of animacy perception to non-humanoid robots and suggests that individual differences in brain responses may shape how robots are experienced. We discuss implications for HRI, including guiding social cue design, informing gesture alignment with robot roles, and raising ethical considerations around attachment and trust.
Article: hri26main-p2050-p
Who Leads the Story? Comparing Autonomous vs. Adult-Supported Child-Robot Interactions
Dotun Olutunbi,
Riccardo Polvara,
Tom Davies,
Jenny Hamilton,
Niko Kargas, and
Francesco Del Duchetto
(University of Lincoln, Lincoln, UK; United Arab Emirates University, Al Ain, United Arab Emirates)
This study investigates the potential of large language model (LLM)-enabled robots to support inclusive child-robot storytelling activities in elementary education. 65 children (ages 6–11) engaged in story co-construction with an LLM-enabled Pepper robot across two experimental conditions. In the child-robot dyad, the robot autonomously guided the interaction through questioning and cartoon displays. In the child-adult-robot triad, Pepper displayed story visuals while an adult co-facilitated dialogue. Evidence-based, neuro-affirming strategies were embedded throughout.
Parents completed the Autism Spectrum Quotient (AQ-10), while children provided post-session self-reports on engagement and affect. Interaction transcripts were analysed for structural linguistic features. Results showed that both conditions effectively supported story recall and engagement. Affect toward the robot was consistently high, though a negative correlation with age in the dyad condition suggested increased awareness of technical limitations. Linguistic analyses revealed no differences in grammatical complexity between conditions, yet triadic sessions fostered broader vocabulary use and greater recall for older children. Correlations of outcomes with the AQ-10 indicate that both the autonomous robot and the human-robot team were effective and engaging across the neurodiverse spectrum.
Findings demonstrate that LLM-enabled robots can be both standalone educational partners and tools for hybrid human-robot facilitation.
We discuss implications for inclusive pedagogy, highlight technical challenges, and propose future directions for research.
Article: hri26main-p2085-p
“Meet My Sidekick!”: Effects of Separate Identities and Control of a Single Robot in HRI
Drake Moore,
Arushi Aggarwal,
Emily Taylor,
Sarah Zhang,
Taskin Padir, and
Xiang Zhi Tan
(Northeastern University, Boston, USA; Amazon Robotics, Westborough, USA)
The presentation of a robot's capability and identity directly influences a human collaborator's perception and implicit trust in the robot. Unlike humans, a physical robot can simultaneously present different identities and have them reside in and control different parts of the robot. This paper presents a novel study that investigates how users perceive a robot where different robot control domains (head and gripper) are presented as independent robots. We conducted a mixed design study where participants experienced one of three presentations: a single robot, two agents with shared full control (co-embodiment), or two agents with split control across robot control domains (split-embodiment). Participants underwent three distinct tasks: a mundane data entry task where the robot provides motivational support; an individual sorting task with isolated robot failures; and a collaborative arrangement task where the robot causes a failure that directly affects the human participant. Participants perceived the robot as residing in the different control domains and were able to associate robot failures with different identities. This work signals how future robots can leverage different embodiment configurations to obtain the benefit of multiple robots within a single body.
Article: hri26main-p2243-p
Elements of Robot Morphology: Supporting Designers in Robot Form Exploration
Amy Koike,
Serena Ge Guo,
Xinning He,
Callie Y. Kim,
Dakota Sullivan, and
Bilge Mutlu
(University of Wisconsin-Madison, Madison, USA)
Robot morphology (the form, shape, and structure of robots) is a key design space in human-robot interaction (HRI), shaping how robots function, express themselves, and interact with people. Yet, despite its importance, little is known about how design frameworks can guide systematic form exploration. To address this gap, we introduce Elements of Robot Morphology, a framework that identifies five fundamental elements: perception, articulation, end effectors, locomotion, and structure. Derived from an analysis of existing robots, the framework supports structured exploration of diverse robot forms. To operationalize the framework, we developed Morphology Exploration Blocks (MEB), a set of tangible blocks that enable hands-on, collaborative experimentation with robot morphologies. We evaluate the framework and toolkit through a case study and design workshops, showing how they support analysis, ideation, reflection, and collaborative robot design.
Article: hri26main-p2292-p
When Robots Say No: The Empathic Ethical Disobedience Benchmark
Dmytro Kuzmenko and
Nadiya Shvai
(National University of Kyiv-Mohyla Academy, Kyiv, Ukraine; Cyclope AI, Paris, France)
Robots must balance compliance with safety and social expectations, as blind obedience can cause harm while over-refusal erodes trust. Existing safe reinforcement learning (RL) benchmarks emphasize physical hazards, while human-robot interaction trust studies are small-scale and hard to reproduce. We present the Empathic Ethical Disobedience (EED) Gym, a standardized testbed that jointly evaluates refusal safety and social acceptability. Agents weigh risk, affect, and trust when choosing to comply, refuse (with or without explanation), clarify, or propose safer alternatives. EED Gym provides different scenarios, multiple persona profiles, and metrics for safety, calibration, and refusals, with trust and blame models grounded in a vignette study. Using EED Gym, we find that action masking eliminates unsafe compliance, while explanatory refusals help sustain trust. Constructive styles are rated as most trustworthy and empathic styles as most empathic, and safe RL methods improve robustness but also make agents more prone to overly cautious behavior. We release code, configurations, and reference policies at https://github.com/dmytro-kuzmenko/eed_gym to enable reproducible evaluation and systematic human-robot interaction research on refusal and trust.
Preprint
Artifacts Available
Article: hri26main-p2345-p
Crafting Companions: A Mixed Methods Exploration of Customization amongst Robot Owners
Amelie Voges,
Mary Ellen Foster, and
Emily S. Cross
(University of Glasgow, Glasgow, UK; ETH Zurich, Zurich, Switzerland)
A key challenge in social robotics is identifying the design features and social mechanisms that sustain long-term engagement with robots. Although mounting evidence suggests that end-user customization is a vital aspect of robot ownership “in the wild”, empirical research on this phenomenon and its psychological outcomes remains sparse. In this mixed methods study, we surveyed 113 robot owners and conducted semi-structured interviews with 13 more, providing a holistic perspective on customization practices among long-term users. Our findings show that customization is highly prevalent among robot owners, with quantitative results indicating that customization indirectly predicts robot attachment through self-extension and psychological ownership. Our interviews furthermore reveal a vibrant culture of customization in online and offline robotics spaces, which is sustained by strong community networks. Together, these results underscore the central role of customization in fostering enduring engagement with companion robots. Through customization, users imbue their robots with personally significant identities, form deeper attachments to them, and reinforce their own individuality as robot owners. Customization also embeds owners within a broader community of robot enthusiasts, promoting social connections, creative practice, and sustained use. Given its prevalence among robot owners, we conclude with recommendations for how robot designers and researchers can leverage customization to set the stage for long-term and personally meaningful human–robot bonds.
Article: hri26main-p2445-p
What You Reward Is What You Learn: Comparing Rewards for Online Speech Policy Optimization in Public HRI
Sichao Song,
Yuki Okafuji,
Kaito Ariu, and
Amy Koike
(CyberAgent, Tokyo, Japan; University of Wisconsin-Madison, Madison, USA)
Designing policies that are both efficient and acceptable for conversational service robots in open and diverse environments is non-trivial. Unlike fixed, hand-tuned parameters, online learning can adapt to non-stationary conditions. In this paper, we study how to adapt a social robot’s speech policy in the wild. During a 12-day in-situ deployment with over 1,400 public encounters, we cast online policy optimization as a multi-armed bandit problem and use Thompson sampling to select among six actions defined by speech rate (slow/normal/fast) and verbosity (concise/detailed). We compare three complementary binary rewards, Ru (user rating), Rc (conversation closure), and Rt (≥2 turns), and show that each induces distinct arm distributions and interaction behaviors. We complement the online results with offline evaluations that analyze contextual factors (e.g., crowd level, group size) using video-annotated data. Taken together, we distill ready-to-use design lessons for deploying online optimization of speech policies in real public HRI settings.
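The bandit formulation described in this abstract — Thompson sampling over six speech-policy arms with a binary reward — can be sketched with a standard Beta-Bernoulli sampler. This is a minimal illustrative sketch, not the authors' code; the arm encoding and class names are assumptions.

```python
import random

# Six arms: speech rate x verbosity, as described in the abstract.
ARMS = [(rate, verbosity)
        for rate in ("slow", "normal", "fast")
        for verbosity in ("concise", "detailed")]

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over speech-policy arms."""

    def __init__(self, arms):
        # Each arm starts with a uniform Beta(1, 1) prior: [alpha, beta].
        self.stats = {arm: [1, 1] for arm in arms}

    def select(self):
        # Draw a success probability from each arm's posterior; play the max.
        draws = {arm: random.betavariate(a, b)
                 for arm, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, arm, reward):
        # reward is binary, e.g. one of Ru, Rc, or Rt from the paper.
        if reward:
            self.stats[arm][0] += 1
        else:
            self.stats[arm][1] += 1
```

Each choice of reward signal (Ru, Rc, or Rt) would drive `update` differently, which is consistent with the paper's observation that the three rewards induce distinct arm distributions.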
Article: hri26main-p2470-p
Bystander Privacy Implications of Robots in Everyday Spaces: A Scoping Review
Manuel Dietrich,
Alan Sarkisian, and
Thomas H. Weisswange
(Honda Research Institute, Offenbach, Germany; Honda Research Institute, Wako, Japan)
The advancement of AI is driving the integration of robots into everyday environments. The acceptance of these robots depends not only on direct users but also on others who share these spaces, often referred to as bystanders or non-users. A frequently discussed prerequisite for acceptance is the proper handling of personal information, as robots are equipped with means for environmental awareness, data inference, and human interaction. Although bystanders are not the main target of such processing, they can be affected by robot operation. Despite their significance, bystander privacy concerns have received limited attention in prior robotics research. In this paper, we address bystander privacy in the context of robots operating in everyday environments. We conduct a scoping review of bystander privacy issues associated with related technologies exhibiting agentic qualities comparable to robots. We analyze how agency may reshape conventional attributions of actor roles and transmission principles within the established privacy framework of Contextual Integrity. This allows us to derive transferable insights about privacy expectations as well as unique opportunities and open research issues for robots in public spaces.
Article: hri26main-p2555-p
Breathe with Me: Synchronizing Biosignals for User Embodiment in Robots
Iddo Yehoshua Wald,
Amber Maimon,
Shiyao Zhang,
Dennis Küster,
Robert Porzel,
Tanja Schultz, and
Rainer Malaka
(University of Bremen, Bremen, Germany; University of Haifa, Haifa, Israel)
Embodiment of users within robotic systems has been explored in human-robot interaction, most often in telepresence and teleoperation. In these applications, synchronized visuomotor feedback can evoke a sense of body ownership and agency, contributing to the experience of embodiment. We extend this work by employing embreathment, the representation of the user's own breath in real time, as a means for enhancing user embodiment experience in robots. In a within-subjects experiment, participants controlled a robotic arm, while its movements were either synchronized or non-synchronized with their own breath. Synchrony was shown to significantly increase body ownership, and was preferred by most participants. We propose the representation of physiological signals as a novel interoceptive pathway for human–robot interaction, and discuss implications for telepresence, prosthetics, collaboration with robots, and shared autonomy.
Article: hri26main-p2605-p
Persona Non Graphica: Visual Representation Biases in Human-Robot Interaction Research
Katie Seaborn and
Özge Nilay Yalçın
(Institute of Science Tokyo, Tokyo, Japan; University of Cambridge, Cambridge, UK; Simon Fraser University, Surrey, Canada)
Visual representations of "the user" are a key part of academic papers in human-robot interaction (HRI). These visualizations can include photos of participants, drawings of users, or simulated personas. Critical analyses have revealed representation biases in scholarly work, often detectable in formal output like publications. Sampling biases, limitations in demographic reporting, and bias across researchers have been discovered. Yet, no work to date has considered the visualizations (drawings, photos, and other graphical representations) of the human side in the HRI equation. We surface representation biases in work from the flagship ACM/IEEE HRI conference: over-representation of younger light-skinned men with typical bodies, and under-representation of people with darker skin tones, women and gender-diverse people, people with disabilities, and people of various ages and sizes. We critically discuss these trends and offer suggestions for best practices in reporting.
Article: hri26main-p2624-p
LEGS-POMDP: Language and Gesture-Guided Object Search in Partially Observable Environments
Ivy Xiao He,
Stefanie Tellex, and
Jason Xinyu Liu
(Brown University, Providence, USA)
To assist humans in open-world environments, robots must accurately interpret ambiguous instructions to locate desired objects. Foundation model-based approaches excel at reference expression grounding and multimodal instruction understanding, but lack a principled mechanism to model uncertainty in long-horizon tasks. Conversely, Partially Observable Markov Decision Processes (POMDPs) provide a systematic framework for planning under uncertainty but are typically limited in modalities and environment assumptions. To achieve the best of both worlds, we introduce Language and Gesture-Guided Object Search in Partially Observable Environments (LEGS-POMDP), a modular POMDP system that integrates language, gesture, and visual observations for open-world object search. Unlike prior work, LEGS-POMDP explicitly models two sources of partial observability: uncertainty over the target object’s identity and its spatial location. Simulation results show that multimodal fusion significantly outperforms unimodal baselines, achieving an average success rate of 89%±7% across challenging environments and object categories. Finally, we demonstrate the full system on a quadruped mobile manipulator, where real-world experiments qualitatively validate robust multimodal perception and uncertainty reduction under ambiguous human instructions.
Article: hri26main-p2627-p
Robot Tutors or Peers? Evaluating Math Learning and Conformity with LLM-Powered Robots in Tanzanian Primary Schools
Edger P. Rutatola,
Elina C. Ntahomvukye,
Koen Stroeken, and
Tony Belpaeme
(Ghent University, Ghent, Belgium; Mzumbe University, Morogoro, Tanzania)
In the past decade, more than half of Tanzanian pupils have failed mathematics in the national Primary School Leaving Examinations (PSLEs), a problem often linked to large class sizes, limited resources, and a shortage of qualified teachers. Social robots have shown promise in supporting learning, and their integration with large language models (LLMs) enables advanced conversational tutoring capabilities. This study investigates the use of two LLM-powered NAO robots, one acting as a tutor and the other as a peer, to assist pupils in solving complex mathematics problems from past PSLEs. Recognising that LLMs are prone to errors in mathematical reasoning, the robots were deliberately programmed to make noticeable mistakes, allowing us to examine whether pupils detect these errors and how their responses shape the learning process. Data collected from 54 pupils across two Tanzanian primary schools indicate that LLM-powered robots can significantly enhance mathematics performance, with the robot tutor slightly outperforming the robot peer. However, results also reveal that pupils often accept robot-provided answers, even when recognised as incorrect, if they perceive the robot as being smart. These findings underscore both the potential and the risks of deploying autonomous robots in education, with the authority attributed to the robot being a double-edged sword, highlighting the need for designs that encourage pupils to question robot-provided solutions.
Article: hri26main-p2757-p
Should Robots Comply with Our Instructions or Intentions?
Tiffany Horter,
Andrew Markham,
Niki Trigoni, and
Serena Booth
(University of Oxford, Oxford, UK; Brown University, Providence, USA)
When people communicate, they often express their intent imperfectly, and human collaborators routinely compensate for these mistakes without issue. For example, if Alice asks for a spatula while serving soup, Bob may infer her intent and bring a ladle instead. This raises a key question for human–robot collaboration: should robots follow instructions literally or should they infer and act on human intent? We study how people expect robots to respond to ambiguous or incorrect instructions in collaborative kitchen scenarios. In this user study, participants either act directly on behalf of a robot or indirectly in observing a robot that may depart from literal instructions to pursue the inferred intent. We find that people generally prefer robots to take some action rather than refuse to comply, although people expect robots to attempt to satisfy the literal instruction (i.e., by thoroughly searching the scene) before taking an imperfect action to satisfy the intent. As large language models (LLMs) are increasingly used to model common sense, we conduct a pilot study to assess whether LLMs make the same decisions as human users about when robots should reinterpret requests.
Article: hri26main-p2821-p
Adding More Value Than Work: Practical Guidelines for Integrating Robots into Intercultural Competence Learning
Zhennan Yi,
Sophia Sakakibara Capello,
Randy Gomez, and
Selma Šabanović
(Indiana University at Bloomington, Bloomington, USA; University of Central Florida, Orlando, USA; Honda Research Institute, Wako, Japan)
While social robots have demonstrated effectiveness in supporting students' intercultural competence development, it is unclear how they can effectively be adopted for integrated use in K-12 schools. We conducted two phases of design workshops with teachers, where they co-designed robot-mediated intercultural activities while considering student needs and school integration concerns. Using thematic analysis, we identify appropriate scenarios and roles for classroom robots, explore how robots could complement rather than replace teachers, and consider how to address ethical and compliance considerations. Our findings provide practical design guidelines for the HRI community to develop social robots that can effectively support intercultural education in K-12 schools.
Article: hri26main-p2826-p
Social Robotics for Disabled Students: An Empirical Investigation of Embodiment, Roles, and Interaction
Alva Markelius,
Fethiye Irmak Doğan,
Julie Bailey,
Guy Laban,
Jenny L. Gibson, and
Hatice Gunes
(University of Cambridge, Cambridge, UK; Ben Gurion University of the Negev, Be'er Sheva, Israel)
Institutional and social barriers in higher education often prevent students with disabilities from effectively accessing support, including lengthy procedures, insufficient information, and high social-emotional demands. This study empirically explores how disabled students perceive robot-based support, comparing two interaction roles, one information based (signposting) and one disclosure based (sounding board), and two embodiment types (physical robot/disembodied voice agent). Participants assessed these systems across five dimensions: perceived understanding, social energy demands, information access/clarity, task difficulty, and data privacy concerns. The main findings of the study reveal that the physical robot was perceived as more understanding than the voice-only agent, with embodiment significantly shaping perceptions of sociability, animacy, and privacy. We also analyse differences between disability types. These results provide critical insights into the potential of social robots to mitigate accessibility barriers in higher education, while highlighting ethical, social and technical challenges.
Article: hri26main-p2878-p
Robot-Mediated Mutual Gaze: How a Mobile Robot with Actuated Mirrors Facilitates Encounters between Strangers
Serena Ge Guo,
Jenny J. Yu,
Wenqian Niu,
Yifei Gao,
Guy Hoffman,
Gilly Leshed, and
Keith Evan Green
(University of Wisconsin-Madison, Madison, USA; Cornell University, Ithaca, USA)
Brief eye contact with strangers can foster connection, belonging, and positive affect, yet such moments are often scarce in public spaces. This paper investigates how a spatially situated robot can reshape the visual field of a shared space to influence how strangers notice and respond to one another. We present MirrorBot, a mobile robot equipped with two actuated mirrors that dynamically redirect reflections to reshape sightlines between people. In a study with 32 strangers in 16 pairs in a waiting-room setting, MirrorBot elicited patterns such as low-stakes icebreaking, nonverbal synchrony, joint sensemaking, asymmetric engagement, and avoidance. Participants also attributed multiple roles to the robot, such as mediator, observer, magnifier, or disrupter, revealing that its social meaning was fluid and co-constructed. Our work extends HRI by showing that robots can act not only as conversational partners but also as spatial mediators, curating opportunities for human–human connection through the reconfiguration of spatial relationships.
Article: hri26main-p2889-p
Pluriversal Approach to Co-designing Delivery Robots with People with Disabilities
Abena Boadi-Agyemang,
Sanika Moharana,
Cynthia L. Bennett,
Elizabeth J. Carter,
Patrick Carrington, and
Aaron Steinfeld
(Carnegie Mellon University, Pittsburgh, USA; Google, New York, USA)
On-demand, last-mile delivery -- the transportation of goods from a local distribution center to the customer's door within a specified time window -- is used by people with disabilities (PwDs) for a variety of reasons. While delivery robots have potential in this space, they often widen disparities in access for PwDs. Inspired by the concept of pluriversality, which embraces the diversity of worldviews, we facilitated participatory design workshops with PwDs to co-design accessible and equitable delivery robots. Our findings support the importance of delivery robots with varied form factors that are robust and customizable to support diverse PwDs and their respective needs and preferences. We also provide broader considerations for automation in delivery ecosystems.
Article: hri26main-p2927-p
The Roles of Fairness and Effectiveness in Promoting Legitimacy and Cooperation with Security Robotic Authority
Xin Ye and
Lionel P. Robert
(University of Michigan, Ann Arbor, USA)
Security robots increasingly assume authoritative roles, but the underlying mechanisms for why humans cooperate with them are not well understood. This study proposed and tested a cooperation model based on legitimacy theory, focusing on how distributive fairness (outcome equity) and interactional fairness (treatment equity) influence robot legitimacy and cooperation. Using a 2 × 2 online video-based experiment with 372 U.S. participants, the authors found that both fairness types promote cooperation through value alignment, with a non-significant path through obligation to obey; meanwhile, perceived effectiveness was strongly associated with both value alignment and obligation to obey. These findings extend legitimacy theory to human–robot interaction in a U.S. context, emphasizing fairness and perceived effectiveness as key to fostering cooperation and informing ethical robot design.
Article: hri26main-p3013-p
Generative Encounters with Robin: Design through Adaptation and Appropriation of a Social Robot in Four Eldercare Facilities
Nan Hu and
Selma Šabanović
(Indiana University at Bloomington, Bloomington, USA)
Iterative engagement conceptualizes robot design as a continuous process that extends beyond initial deployment, using real-life feedback from "generative encounters" with residents and staff to guide ongoing technology improvements. This study examines iterative development of "Robin the Robot"—a socially assistive robot deployed across multiple eldercare facilities—to investigate how contextual factors influence robot use, appropriation, and collaborative adaptation by developers and users. Using ethnographic methods, we conducted interviews with robot developers and care facility staff, along with on-site observations of interactions between older adults, care staff, and the robot. Our analysis reveals four key contextual factors that affect user perceptions and robot use: resident types within each facility, staff mental models of the robot, communication practices between staff and developers, and users' awareness of remote teleoperators. Our findings contribute new insights into how contextual factors shape eldercare robot adaptation and appropriation across diverse care institutions, providing practical guidance for designing socially assistive robots that can be effectively integrated into varied care environments.
Article: hri26main-p3202-p
Unlocking Emotions: The Impact of Robot Question-Asking and Reciprocal Sharing on Self-Disclosure during Emotion Learning
Joana Brito,
Anouk Neerincx,
Antonio Soares,
Haohua Dong,
Ana Teresa Antunes,
Ana Paiva,
Maartje M.A. de Graaf, and
Joana Campos
(INESC-ID, Lisbon, Portugal; Instituto Superior Técnico - University of Lisbon, Lisbon, Portugal; HU University of Applied Sciences, Utrecht, Netherlands; Iscte-Instituto Universitário de Lisboa, Lisbon, Portugal; Utrecht University, Utrecht, Netherlands)
Social robots can support children’s emotional skills development through playful interactions, yet skills like emotional self-disclosure remain underexplored. This study investigates the impact of a social robot designed to encourage emotional self-disclosure during an emotion-identification game with children aged 6-10. In a between-subjects design with 28 participants across two local schools, we compared a Reflective condition, where the robot actively encouraged emotional self-disclosure through question-asking and reciprocal sharing, to a Control condition, where the robot did not. Children in the Reflective condition engaged in emotional self-disclosure when prompted, and showed higher engagement than those in Control. Direct question-asking was more effective than reciprocal self-disclosure. Results suggested that children who perceived the robot as kinder disclosed more, whereas those who viewed it as more real disclosed less. These findings highlight the potential of social robots to foster emotional skills in children and inform the design of future child-robot interaction research.
Article: hri26main-p3313-p
The Valley of Ontological Friction: Motivating, Framing, and Guiding HRI Research on Verisimilitude and Its Implications
Tom Williams and
Alexandra Bejarano
(Colorado School of Mines, Golden, USA; Virginia Tech, Blacksburg, USA)
Human-Robot Interaction (HRI) researchers use diverse methodologies for empirical, hypothesis-driven research, including laboratory experiments, longitudinal field deployments, and online experiments. Within these, field deployments are typically seen as suffering from lessened experimental control, and online experiments are seen as suffering from lessened ecological validity. Yet HRI researchers have largely ignored other threats to validity that uniquely emerge at the center of this spectrum.
In this work, we analyze (1) how HRI experiments differ in terms of verisimilitude; (2) the nonlinear relation between verisimilitude and ontological coherence; and (3) how this creates a Valley of Ontological Friction that presents unique threats to validity.
In doing so, we make several key contributions that motivate, frame, and guide future research: we (1) introduce novel theoretically-grounded concepts; (2) identify key constructs that must be operationalized and measured in future work; (3) draw theoretical distinctions that HRI researchers should observe when discussing the consequences of their choice of experimental methodology; and (4) identify key guiding questions that the field of HRI should aim to pursue.
Article: hri26main-p3342-p
Customizing Robot Personality: How Personality Control and Form Factor Shape Perceptions of a Robot as a Social Agent
Alex Wuqi Zhang,
Aaron Huang,
Allison J. Li, and
Sarah Sebo
(University of Chicago, Chicago, USA; Columbia University, New York, USA)
A robot's personality can shape user experience and acceptance in many social robot applications. Allowing users to customize robot personality could help them tailor robot products to their preferences, but it remains unclear whether this customization diminishes perceptions of the robot as a social agent and whether robot form factor influences these effects. We conducted a 2x2 between-subjects study (N = 79) examining robot form factor (humanoid NAO vs. non-humanoid TurtleBot) and personality customizability (customizable vs. non-customizable) during a collaborative event-planning task. Our results reveal that while customization reduced perceived social agency for both robot types, this reduction was particularly evident for humanoid robots. Conversely, personality customization significantly improved human-robot rapport, with this improvement driven primarily by non-humanoid robots. These findings reveal form factor-dependent effects in personality customization, indicating that robot form and customization capabilities yield differential impacts on perceived social agency and human-robot rapport in human-robot interaction design.
Article: hri26main-p3360-p
A Human-in-the-Loop Confidence-Aware Failure Recovery Framework for Modular Robot Policies
Rohan Banerjee,
Krishna Palempalli,
Bohan Yang,
Jiaying Fang,
Alif Abdullah,
Tom Silver,
Sarah Dean, and
Tapomayukh Bhattacharjee
(Cornell University, Ithaca, USA; Princeton University, Princeton, USA)
Robots operating in unstructured human environments inevitably encounter failures, especially in robot caregiving scenarios. While humans can often help robots recover, excessive or poorly targeted queries impose unnecessary cognitive and physical workload on the human partner. We present a human-in-the-loop failure-recovery framework for modular robotic policies, where a policy is composed of distinct modules such as perception, planning, and control, any of which may fail and often require different forms of human feedback. Our framework integrates calibrated estimates of module-level uncertainty with models of human intervention cost to decide which module to query and when to query the human. It separates these two decisions: a module selector identifies the module most likely responsible for failure, and a querying algorithm determines whether to solicit human input or act autonomously. We evaluate several module-selection strategies and querying algorithms in controlled synthetic experiments, revealing trade-offs between recovery efficiency, robustness to system and user variables, and user workload. Finally, we deploy the framework on a robot-assisted bite acquisition system and demonstrate, in studies involving individuals with both emulated and real mobility limitations, that it improves recovery success while reducing the workload imposed on users. Our results highlight how explicitly reasoning about both robot uncertainty and human effort can enable more efficient and user-centered failure recovery in collaborative robots. Supplementary materials and videos can be found at: emprise.cs.cornell.edu/modularhil.
Article: hri26main-p3538-p
Designing Robots for Families: In-Situ Prototyping for Contextual Reminders on Family Routines
Michael F. Xu,
Enhui Zhao,
Yawen Zhang,
Joseph E. Michaelis,
Sarah Sebo, and
Bilge Mutlu
(University of Wisconsin-Madison, Madison, USA; University of Illinois at Chicago, Chicago, USA; University of Chicago, Chicago, USA)
Robots are increasingly entering the daily lives of families, yet their successful integration into domestic life remains a challenge. We explore family routines as a critical entry point for understanding how robots might find a sustainable role in everyday family settings. Together with each of ten families, we co-designed robot interactions and behaviors, and a plan for the robot to support their chosen routines, accounting for contextual factors such as timing, participants, locations, and the activities in the environment. We then designed, prototyped, and deployed a mobile social robot in a four-day, in-home user study. Families welcomed the robot’s reminders, with parents especially appreciating the offloading of some reminding tasks. At the same time, interviews revealed tensions around timing, authority, and family dynamics, highlighting the complexity of integrating robots into households beyond the immediate task of reminders. Based on these insights, we offer design implications for robot-facilitated contextual reminders and discuss broader considerations for designing robots for family settings.
Article: hri26main-p3539-p
Dull, Dirty, Dangerous: Understanding the Past, Present, and Future of a Key Motivation for Robotics
Nozomi Nakajima,
Pedro Reynolds-Cuéllar,
Caitrin Lynch, and
Kate Darling
(Robotics and AI Institute, Cambridge, USA; Olin College of Engineering, Needham, USA)
In robotics, the concept of “dull, dirty, and dangerous” (DDD) work has been used to motivate where robots might be useful. In this paper, we conduct an empirical analysis of robotics publications between 1980 and 2024 that mention DDD, and find that only 2.7% of publications define DDD and 8.7% of publications provide concrete examples of tasks or jobs that are DDD. We then review the social science literature on “dull,” “dirty,” and “dangerous” work to provide definitions and guidance on how to conceptualize DDD for robotics. Finally, we propose a framework that helps the robotics community consider the job context for our technology, encouraging a more informed perspective on how robotics may impact human labor.
Article: hri26main-p3575-p
Task-Oriented Robot-Human Handovers on Legged Manipulators
Andreea Tulbure,
Carmen Scheidemann,
Elias Steiner, and
Marco Hutter
(ETH Zurich, Zurich, Switzerland)
Task-oriented handovers (TOH) are fundamental to effective human-robot collaboration, requiring robots to present objects in a way that supports the human’s intended post-handover use. Existing approaches are typically based on object- or task-specific affordances, but their ability to generalize to novel scenarios is limited. To address this gap, we present AFT-Handover, a framework that integrates large language model (LLM)-driven affordance reasoning with efficient texture-based affordance transfer to achieve zero-shot, generalizable TOH. Given a novel object-task pair, the method retrieves a proxy exemplar from a database, establishes part-level correspondences via LLM reasoning, and texturizes affordances for feature-based point cloud transfer. We evaluate AFT-Handover across diverse task-object pairs, showing improved handover success rates and stronger generalization compared to baselines. In a comparative user study, our framework is significantly preferred over the current state-of-the-art, effectively reducing human regrasping before tool use. Finally, we demonstrate TOH on legged manipulators, highlighting the potential of our framework for real-world robot-human handovers.
Article: hri26main-p3595-p
Speaking with Screens: Design Space and Guidelines for Informational Robot Screens
Yujin Kim,
Christine P Lee, and
Bilge Mutlu
(University of Wisconsin-Madison, Madison, USA)
Advances in AI have enabled robots to engage in flexible, multi-turn dialogue. Yet many scenarios require robots to utilize additional modalities that complement speech to convey complex information. Robot screens can meet this need by supporting parallel processing, quick verification, and richer representations. As robots are integrated into an increasing number of scenarios with complex communication requirements, there is a need to systematically examine how screens may be used and designed to complement and augment verbal communication. In this paper, through an analysis of 357 commercial and research robots, we outline an initial design space of robot screens. Building on this design space, we present findings from two studies: a co-design study (n = 12) that explored user preferences for screen designs and derived a set of design guidelines; and an online pilot study (n = 89) that integrated these guidelines into screen designs and evaluated how these designs shaped user perceptions of robot communication. Our contributions include a database of screen-equipped robots; the design space of and guidelines for informational robot screens; and an empirical understanding of user perceptions on the use of robot screens.
Article: hri26main-p3680-p
Who’s the Boss? Children Negotiate Robot Control across Role and Context
Isabella Pu,
Kantwon Rogers,
Linh Dieu Dinh,
Sharifa Alghowinem, and
Cynthia Breazeal
(Massachusetts Institute of Technology, Cambridge, USA; Wellesley College, Wellesley, USA)
Children regularly negotiate questions of authority and control in home and school life, but little is known about how they believe robots should fit into these dynamics. We conducted a 75-minute design session with 17 children (ages 6-9) to examine when robots should take, share, or defer control, and how expectations shift when robots are framed as teachers, classmates, or mentees. Children resisted robot control, particularly in adult-regulated domains and areas tied to personal skill or self-expression. They were more open to robot control in domains where they felt less competent, or where robots, perceived as less legitimate authorities than humans, could substitute for adult control. Role framing further shaped expectations: teacher robots were granted autonomy, classmate robots were expected to act as peers, and mentee robots were expected to defer. These findings show that children apply context- and role-sensitive rules when negotiating control with robots. We conclude with design considerations for robots in children's everyday lives that respect children's agency, calibrate autonomy by domain, and align behavior with children's context-sensitive expectations.
Article: hri26main-p3704-p
Reimagining Social Robots as Recommender Systems: Foundations, Framework, and Applications
Jin Huang,
Fethiye Irmak Doğan, and
Hatice Gunes
(University of Cambridge, Cambridge, UK)
Personalization in social robots refers to the ability of the robot to meet the needs and/or preferences of an individual user. Existing approaches typically rely on large language models (LLMs) to generate context-aware responses based on user metadata and historical interactions, or on adaptive methods such as reinforcement learning (RL) to learn from users’ immediate reactions in real time. However, these approaches fall short of comprehensively capturing user preferences (including long-term, short-term, and fine-grained aspects), and of using them to rank and select actions, proactively personalize interactions, and ensure ethically responsible adaptations. To address these limitations, we propose drawing on recommender systems (RSs), which specialize in modeling user preferences and providing personalized recommendations. To ensure the integration of RS techniques is well-grounded and seamless throughout the social robot pipeline, we (i) align the paradigms underlying social robots and RSs, (ii) identify key techniques that can enhance personalization in social robots, and (iii) design them as modular, plug-and-play components. This work not only establishes a framework for integrating RS techniques into social robots but also opens a pathway for deep collaboration between the RS and HRI communities, accelerating innovation in both fields.
Article: hri26main-p3751-p
Not the Intended User: Queer Perspectives on Identity, Risk, and Trust in Robot Companions
Kaylee Nam,
Jackie Yee,
Raitah A. Jinnat,
Keys K. Rigual,
Alexandria Thylane, and
Raj Korpan
(Rice University, Houston, USA; City University of New York, New York, USA)
Robot companions often default to cisnormative and heteronormative assumptions, undermining trust and inclusion for LGBTQIA+ (queer) communities. Although inclusivity is widely discussed in HRI, empirical, community-led insights into queer needs and risks remain limited. We address this gap through a mixed-methods study of how queer individuals envision affirming and trustworthy robot companions, combining an online survey of community priorities with participatory design workshops exploring lived experiences and design tensions. Although many participants felt they were not the "intended user" of current technologies, they outlined clear priorities for future robot companions, including accurate handling of names and pronouns, robust privacy and data safeguards, clear and bounded utilitarian roles, and user-controlled adaptability, while identifying risks such as misgendering, surveillance, stereotype reinforcement, and unhealthy dependence. We translate these findings into justice-oriented design requirements for robot companions serving queer and other marginalized communities.
Article: hri26main-p3833-p
Explainable OOHRI: Communicating Robot Capabilities and Limitations as Augmented Reality Affordances
Lauren W. Wang,
Mohamed Kari, and
Parastoo Abtahi
(Princeton University, Princeton, USA)
Human interaction is essential for issuing personalized instructions and assisting robots when failure is likely. However, robots remain largely black boxes, offering users little insight into their evolving capabilities and limitations. To address this gap, we present explainable object-oriented HRI (X-OOHRI), an augmented reality (AR) interface that conveys robot action possibilities and constraints through visual signifiers, radial menus, color coding, and explanation tags. Our system encodes object properties and robot limits into object-oriented structures using a vision-language model, allowing explanation generation on the fly and direct manipulation of virtual twins spatially aligned within a simulated environment. We integrate the end-to-end pipeline with a physical robot and showcase diverse use cases ranging from low-level pick-and-place to high-level instructions. Finally, we evaluate X-OOHRI through a user study and find that participants effectively issue object-oriented commands, develop accurate mental models of robot limitations, and engage in mixed-initiative resolution.
Article: hri26main-p3846-p
Context-Aware Generation and Modulation of Expressive Motion Behavior using Multimodal Foundation Models
Till Hielscher,
Fabio Scaparro, and
Kai O. Arras
(University of Stuttgart, Stuttgart, Germany)
Expressive robot motion positively impacts human-robot interaction by improving user engagement, likability, or task performance. We present a novel approach to automatically generate and modulate complex, expressive full-body gesture sequences from a multimodal (text, audio, video) context description. Based on a unified mathematical implementation of the Principles of Animation using Dynamic Movement Primitives, we use multimodal foundation models to generate such sequences along with parametric motion variations that are highly context-aware. This method extends the state of the art in terms of flexibility and generality by being interpretable, composable, and working across different robot morphologies. Moreover, we integrate the system into a continuous control framework and leverage knowledge distillation to learn a much smaller model, significantly improving token efficiency and system latency. Results from a user study with a human-like platform indicate that participants judged our system’s motions to be better aligned with the interaction context than motions produced under a non-modulated or API-level condition. We also demonstrate how the system generates expressive motion for robots with different kinematics, showcasing its versatility. Paper webpage: https://gen-mod-expressive-motion-behavior.github.io/.
Article: hri26main-p3919-p
Kept Alive, Bricked, Revived: Community Articulation Work and Value Renegotiation beyond a Robot’s Commercial Failure
Waki Kamino,
Bengisu Cagiltay,
Bilge Mutlu,
Malte F. Jung, and
Selma Šabanović
(Cornell University, Ithaca, USA; Koç University, Istanbul, Türkiye; University of Wisconsin-Madison, Madison, USA; Indiana University at Bloomington, Bloomington, USA)
This paper examines the community-driven sustenance of Moxie, a social robot that faced discontinuation when its parent company failed to secure funding. Through interviews and investigative digital ethnography, we study how users performed extensive articulation work to transition from corporate support to an open-source platform. Our findings reveal that invisible labor and value negotiation were central to Moxie's continued operation, simultaneously opening access for some users while excluding others. These processes also fundamentally reshaped the robot's desirability and meaning within the user community. This work demonstrates how socio-technical infrastructures, articulation work, and value renegotiation can sustain robots beyond their commercial lifecycles, while revealing the uneven distribution of both labor and access in community-driven technology repair and maintenance.
Article: hri26main-p3979-p
When Fatigue Shapes Trust: Perceptual Shifts beyond Performance in Physical Human-Robot Collaboration
Aakash Yadav,
Prabhakar Pagilla, and
Ranjana Mehta
(University of Wisconsin-Madison, Madison, USA; Texas A&M University, College Station, USA)
Human-Robot Collaboration (HRC) is increasingly common in physically demanding industrial settings. However, the impact of variable human states, such as physical fatigue, on trust and the quality of this collaboration remains unclear. This is especially critical during HRC that entails co-lifting of industrial materials. This study examines the impact of operator physical fatigue on trust in a robotic partner, as well as associated perceptions of robot effort, coordination, lift quality, and objective lift performance during a co-lifting task that requires complex maneuvering. In a within-subjects experiment, 40 healthy adults (20 females) performed a collaborative lifting task through an asymmetrical complex lift trajectory under two conditions: no-fatigue and fatigue, induced via physical exertion of the dominant arm. We collected subjective ratings of trust, perceived exertion, lift quality, robot effort, and coordination, alongside objective performance metrics based on movement trajectories, such as trajectory similarity (using dynamic time warping), acceleration, and jerk. The results demonstrate that physical fatigue, as evidenced by increased perceived exertion ratings, significantly decreased participants' trust in the robot (particularly in the late phase), despite no change in the robot's actual performance. Interestingly, fatigue shaped perceptions of effective coordination differently across phases: perceived coordination was higher under fatigue in the early phase but lower in the late phase, and participants viewed the robot's effort as less consistent only in the late phase when fatigued. Surprisingly, while fatigue did not affect any of the objective performance metrics based on movement trajectories, participants perceived an improvement in transport quality, though only in the early phase.
These findings indicate that physical fatigue not only diminishes the operator's physical capacity but also induces crucial perceptual shifts regarding the robot partner and perceived transport quality. This highlights the need for more objective metrics for evaluating HRC that are resilient to operator fatigue. Doing so may enable the development of robust HRC systems that ensure effective and trustworthy collaborations.
Article: hri26main-p4111-p
Can You Help Me? The Influence of Robot Requests for Help on Child-Robot Connection
Teresa Flanagan,
Justin Chenjia Zhang,
Lin Bian, and
Sarah Sebo
(University of Chicago, Chicago, USA)
Children are interacting with robots in sophisticated ways, to the extent that they may be establishing relationships with them. Forming appropriate levels of connection between children and robots could have significant impacts on the future of robot design, particularly in education. Despite this, little is known about the underlying mechanisms of such formation. In this work, we take an initial step by exploring what robot behaviors build a child-robot connection in a single interaction. Specifically, we investigated whether 6-10-year-old children feel more connected to a robot that responds to an issue by asking the child for help or simply disclosing the issue, and whether this is dependent on the valence of the response to the issue (emotional or mechanical). In a 2 x 2 between-subjects study (N=100), we found that children of all ages trusted a robot that asked for help more than a robot that simply disclosed the issue. Furthermore, children felt closer to an emotional robot that asked for help than an emotional robot that did not ask for help. Together, these findings suggest that asking for help builds trust between a robot and a child, and that expressing relatable vulnerability, via emotional help requests, creates further feelings of connection.
Article: hri26main-p4221-p
Explaining Why Things Go Where They Go: Interpretable Constructs of Human Organizational Preferences
Emmanuel Fashae,
Michael Burke,
Leimin Tian,
Lingheng Meng, and
Pamela Carreno-Medrano
(Monash University, Clayton, Australia; CSIRO Robotics, Melbourne, Australia; Monash University, Melbourne, Australia; CSIRO, Melbourne, Australia; CSIRO, Clayton, Australia)
Robotic systems for household object rearrangement often rely on latent preference models inferred from human demonstrations. While effective at prediction, these models offer limited insight into the interpretable factors that guide human decisions. We introduce an explicit formulation of object arrangement preferences along four interpretable constructs: spatial practicality (putting items where they naturally fit best in the space), habitual convenience (making frequently used items easy to reach), semantic coherence (placing items together if they are used for the same task or are contextually related), and commonsense appropriateness (putting things where people would usually expect to find them). To capture these constructs, we designed and validated a self-report questionnaire through a 63-participant online study. Results confirm the psychological distinctiveness of these constructs and their explanatory power across two scenarios (kitchen and living room). We demonstrate the utility of these constructs by integrating them into a Monte Carlo Tree Search (MCTS) planner and show that when guided by participant-derived preferences, our planner can generate reasonable arrangements that closely align with those generated by participants. This work contributes a compact, interpretable formulation of object arrangement preferences and a demonstration of how it can be operationalized for robot planning.
Article: hri26main-p4244-p
Estimation of Mobile Robot Waiting Locations using Fluid Simulation without Prior Observation
Ryoya Sakaki,
Yoshio Ishiguro,
Kento Ohtani,
Takanori Nishino, and
Kazuya Takeda
(Nagoya University, Aichi, Japan; University of Tokyo, Tokyo, Japan; Nagoya University, Nagoya, Japan; Meijo University, Nagoya, Japan)
The social implementation of small autonomous mobile robots is advancing in delivery and accompanying services. However, inappropriate waiting positions can obstruct pedestrian flow and impair facility operations. Conventional waiting location estimation methods require long-term observation of actual pedestrian walking history, resulting in high introduction costs and environmental dependence. This research proposes a fluid simulation-based method that estimates waiting locations using only floor maps without prior observation. By approximating pedestrian flow as a two-dimensional incompressible fluid, the method extracts waiting candidates from velocity fields determined by environmental geometry. Through comparative verification with agent-based simulation and human subject experiments, we confirmed that the proposed method achieves estimation accuracy equivalent to history-based approaches while requiring no prior observation. The method demonstrates environmental adaptability across simple corridors to complex spaces, with a practical threshold of 0.4 times maximum flow velocity effectively distinguishing appropriate waiting locations. The approach enables rapid deployment and adaptation to layout changes, addressing key challenges in mobile robot social implementation.
Article: hri26main-p4247-p
Robot-Assisted Medical Training for Safety-Critical Environments
Huajie Jay Cao,
Michael Joseph Sack,
Lili Mkrtchyan,
Kevin Ching,
Tariq Iqbal,
Hee Rin Lee, and
Angelique Taylor
(Michigan State University, East Lansing, USA; Cornell University, Ithaca, USA; Weill Cornell Medicine, New York City, USA; University of Virginia, Charlottesville, USA; Cornell University, New York, USA)
While resuscitation training is critical, healthcare workers (HCWs) with high workloads have limited opportunities to be trained and re-trained due to time and resource constraints. To address this gap, we engaged in a co-design process for robots that facilitate and prepare HCWs for resuscitation procedures (i.e., codes). First, we investigated what resuscitation training consists of, including challenges faced by trainees and trainers. Second, we collaboratively explored how a crash cart robot that guides users to medical supplies and equipment could assist trainers and trainees both synchronously, during team-based clinical simulations, and asynchronously, during one-on-one training. We found that robots could 1) serve as a learning assistant by providing real-time feedback and supporting personalized training needs; and 2) serve as an evaluating assistant by monitoring multiple trainees and tracking the critical timing of interventions during training. Through this new training paradigm, we hope to demonstrate opportunities for crash cart robots to support HCWs in sustainable training and reskilling. We discuss the role of robots in training beyond cognitive knowledge, situating them within two underexplored contexts: practical skill training and team-based training.
Article: hri26main-p4400-p
Embodied Referring Expression Comprehension in Human-Robot Interaction
Md Mofijul Islam,
Alexi Gladstone,
Sujan Sarker,
Ganesh Nanduru,
Md Fahim,
Keyan Du,
Aman Chadha, and
Tariq Iqbal
(University of Virginia, Charlottesville, USA; University of Dhaka, Dhaka, Bangladesh; Apple, Cupertino, USA)
As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to a lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view data collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present the Refer360 dataset, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four datasets, including our Refer360 dataset, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and exhibit the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.
Article: hri26main-p4516-p
Emergency Department Waiting Experience: The Impact of an Attentive Robot
Yuval Rubin Kopelman,
Dikla Dahan Shriki,
Roy Cohen Elias,
Tomer Leivy,
Julian Waksberg,
Mira Leybel,
Jalal Ashkar,
Mahmod Hamdan, and
Hadas Erel
(Reichman University, Herzliya, Israel; Hillel Yaffe Medical Center, Hadera, Israel)
It is well established that when people seek medical care for physical conditions, their affective state significantly impacts health and recovery. Despite this, patients’ emotional needs are often overlooked due to the prioritization of physical care and resource constraints. Robots’ social capabilities, especially robotic attentiveness, present an opportunity to address this gap and enhance patients’ experience in healthcare settings. While robots are already integrated into healthcare, they are largely leveraged for functional support, with limited attention to their potential role in addressing affective challenges. In this field study, we evaluated whether a simple attentive robot could improve the affective experience in the emergency department (ED) of a public general hospital. Participants’ waiting experience was compared across three conditions: (1) waiting next to an attentive robot; (2) waiting next to a non-attentive robot; and (3) waiting without a robot. Behavioral and self-report measures indicated that the attentive robotic behavior reduced the unpleasantness of the waiting experience. Our findings highlight the power of leveraging simple robotic attentiveness to support patients' affective state in stressful healthcare environments.
Article: hri26main-p4523-p
Fictional vs. Factual Robot Tutor Dialogue Can Shape Child Social-Emotional Learning
Lauren L. Wright,
Kaitlyn Li,
Hewitt Watkins,
Kiljoong Kim, and
Sarah Sebo
(University of Chicago, Chicago, USA; Chapin Hall at the University of Chicago, Chicago, USA)
Social-emotional learning (SEL) is an educational framework that helps children develop the skills necessary for academic and life success. However, limited resources restrict most schools to whole-group SEL instruction, which may not benefit all students. In this work, we explore using social robots to address this challenge and how a robot’s dialogue style can influence the effectiveness of one-on-one SEL lessons. The dialogue styles we investigate are (1) fictional dialogue, where the robot is human-like with emotions and discusses SEL scenarios as first-person anecdotes, and (2) factual dialogue, where the robot is transparent, lacks emotions, and discusses scenarios in the third person. In a between-subjects study (N=52) at Chicago schools, students aged 9-10 were either part of a control group, receiving no robot instruction, or received four SEL lessons across two weeks from either the fictional or factual robot. We found that students who had lessons with either robot improved more in lesson skill than students in the control. We also found that during lessons students spoke to the factual robot using more lesson concepts than those talking to the fictional robot, indicating that first-person storytelling and emotional disclosure from a robot may be unnecessary for, or even hinder, SEL learning with a robot.
Article: hri26main-p4565-p
Beyond Information Amount: Rethinking Transparency through Active Reasoning Demand
Xuedong Zhang,
Yilu Ye,
Yong Ma,
Zelun Tony Zhang, and
Andreas Butz
(LMU Munich, Munich, Germany; University of Bergen, Bergen, Norway; TU Munich, Munich, Germany; Munich Center for Machine Learning, Munich, Germany)
Transparency of a robot's actions and intentions is important for trust-based human-robot collaboration. Current approaches for creating transparency through explanations mostly follow the principle "more information creates more transparency", assuming cognitive load remains manageable. However, this ignores human active reasoning, which can, if it is not too demanding, create better understanding from less information, as found in the education literature. To explore this self-explanation effect, we compared three explanation structures (fully-specified, under-specified, no explanation) that induced different levels of active reasoning demand (low, medium, high). In a controlled laboratory study, 36 participants observed a robot completing rule-based classification tasks of varying difficulty (easy, moderate, hard) across all explanation conditions. We found that a moderate reasoning demand, elicited through under-specified explanations, produced the best understanding of the robot's actions compared to full or no explanations. This challenges the current "more helps more" approach and may help design more effective explanations for creating transparency.
Article: hri26main-p4641-p
Teaching the Teacher: Live Foundation Model and Augmented Reality Feedback for Human-to-Robot Skill Transfer
Nina Marie Moorman,
Matthew Luebbers,
Zhang Xi-Jia,
Yee Ching (Marcus) Lau,
Yixing Yao,
Megan Langwasser,
Zulfiqar Zaidi,
Letian Chen,
Sanne van Waveren, and
Matthew Gombolay
(Georgia Institute of Technology, Atlanta, USA)
Deploying robots in dynamic, human-populated environments will require techniques for adaptable robot skill acquisition that extend beyond pre-programmed functionality. Learning from demonstration (LfD) methods enable robots to learn skills from human-provided trajectories demonstrated in situ. However, prior work has shown non-expert end-users struggle to provide demonstrations that enable robots to perform complex, multi-step tasks, or to generalize skill knowledge beyond a specific environment and task context. This work enables robots to actively participate in the situated learning interaction by autonomously providing bespoke guidance in response to end-users' demonstrations, thus improving end-users' ability to teach robots useful skills via LfD. We introduce a novel LfD system integrating foundation model (FM)-based textual feedback and augmented reality (AR)-based visual feedback. The two feedback channels operate synergistically: FM feedback helps users break tasks down effectively, while AR feedback allows users to quickly evaluate how well demonstrations perform and generalize. This system provides targeted, actionable guidance throughout the demonstration process: it enhances users' ability to define, decompose, and demonstrate modular, repurposable skills capable of accomplishing complex tasks. We validate our system with a human-subjects experiment in which participants receive bespoke feedback as they teach a robot via kinesthetic demonstrations in a pair of robotic manipulation domains. From this study, we observe positive results demonstrating that the combination of AR and FM feedback improves the quality and generalizability of robot policies, compared to AR feedback alone, FM feedback alone, or a baseline system where learned skills can be played physically on the robot.
Article: hri26main-p4726-p
SCOPE: Real-Time Natural Language Camera Agent at the Edge: A Sim-to-Real Benchmark and Analysis of Open-Source Vision and Language Agents for PTZ Camera Tasks
Nikolaj Hindsbo,
Sina Ehsani, and
Pragyana Mishra
(Armada, Bellevue, USA)
Deploying language-driven agents in robotics requires evaluations that reflect real-world task demands: natural-language instructions with reproducible outcomes. Such agents must connect language models to callable perception and control tools, and be assessed using deployment-critical metrics including latency, accuracy, and error modes. We present SCOPE (Simulation and Camera Operations for Perception and Evaluation), a modular agent for natural-language, open-vocabulary pan–tilt–zoom (PTZ) camera control and visual scene understanding, designed explicitly for edge deployment. SCOPE operates both in a Blender-based simulation environment and on a physical PTZ camera, executing all perception, planning, and control locally at the deployment site using edge-accessible compute.
We introduce a Blender-based agent environment that exposes realistic PTZ control affordances and enables reproducible, language-driven tasks aligned with real-world camera operation. Using this environment, we release a 536-task benchmark spanning QA, single- and multi-step commands, counting, spatial reasoning, descriptions, and optical character recognition. Execution traces are combined with an LM-as-Judge to evaluate latency, accuracy, and error modes.
We evaluate 19 planner–perception model combinations pairing Qwen3 small language models (SLMs) with Moondream and Qwen vision–language models (VLMs) on our benchmark. Stronger SLMs substantially reduce hallucinations and improve tool routing, leading to more reliable closed-loop behavior. Once a sufficiently capable SLM is used, perception becomes the dominant performance bottleneck. Architectural choices further shape deployability: Mixture-of-Experts models on both the planning and perception side consistently match or exceed dense alternatives while operating at latencies and memory footprints comparable to much smaller networks. Quantization provides additional efficiency gains with minimal accuracy degradation. Together, these results identify a practical, sim-to-real–validated design point for real-time, edge-feasible language-driven PTZ control.
Article: hri26main-p4761-p
Robotic Bias and Its Dimensions
Tomislav Furlanis,
Dražen Brščić, and
Takayuki Kanda
(University of Ljubljana, Ljubljana, Slovenia; Kyoto University, Kyoto, Japan)
Bias in algorithmic systems is well documented, but the growing deployment of social robots in workplaces, homes, and public spaces poses a distinct challenge: their physical embodiment and social presence make bias visible in appearance, tangible in interaction, and consequential in everyday inclusion. This paper proposes a framework for understanding robotic bias across three dimensions: stereotypes, constraints, and treatments. The framework clarifies how different forms of bias emerge through embodiment and interaction, and highlights why they call for different strategies of intervention. In doing so, it provides a foundation for more systematic recognition and mitigation of bias in human-robot interaction.
Article: hri26main-p4776-p
Bidirectional Human-Robot Communication for Physical Human-Robot Interaction
Junxiang Wang,
Cindy Wang,
Rana Soltani Zarrin, and
Zackory Erickson
(Carnegie Mellon University, Pittsburgh, USA; Honda Research Institute, San Jose, USA)
Effective physical human-robot interaction requires systems that are not only adaptable to user preferences but also transparent about their actions. This paper introduces BRIDGE, a system for bidirectional human-robot communication in physical assistance. Our method allows users to modify a robot's planned trajectory (position, velocity, and force) in real time using natural language. We utilize a large language model (LLM) to interpret any trajectory modifications implied by user commands in the context of the planned motion and conversation history. Importantly, our system provides verbal feedback in response to the user, either confirming any resulting changes or posing a clarifying question. We evaluated our method in a user study with 18 older adults across three assistive tasks, comparing BRIDGE to an ablation without verbal feedback and a baseline. Results show that participants successfully used the system to modify trajectories in real time. Moreover, the bidirectional feedback led to significantly higher ratings of interactivity and transparency, demonstrating that the robot's verbal response is critical for a more intuitive user experience. Videos and code can be found on our project website.
Article: hri26main-p4917-p
From Human Bias to Robot Choice: How Occupational Contexts and Racial Priming Shape Robot Selection
Jiangen He,
Wanqi Zhang, and
Jessica K. Barfield
(University of Tennessee at Knoxville, Knoxville, USA; University of Kentucky, Lexington, USA)
As artificial agents increasingly integrate into professional environments, fundamental questions have emerged about how societal biases influence human-robot selection decisions. We conducted two comprehensive experiments (N = 1,038) examining how occupational contexts and stereotype activation shape robotic agent choices across construction, healthcare, educational, and athletic domains. Participants made selections from artificial agents that varied systematically in skin tone and anthropomorphic characteristics. Our study revealed distinct context-dependent patterns. Healthcare and educational scenarios demonstrated strong favoritism toward lighter-skinned artificial agents, while construction and athletic contexts showed greater acceptance of darker-toned alternatives. Participant race was associated with systematic differences in selection patterns across professional domains. The second experiment demonstrated that exposure to human professionals from specific racial backgrounds systematically shifted later robotic agent preferences in stereotype-consistent directions. These findings show that occupational biases and color-based discrimination transfer directly from human-human to human-robot evaluation contexts. The results highlight mechanisms through which robotic deployment may unintentionally perpetuate existing social inequalities.
Article: hri26main-p5069-p
RECS: LSTM-Based Cognitive Status Estimation for Human-Robot Interaction
Mark Higger,
Zhuoyi Wang,
Polina Rygina,
Lara Ferreira Bezerra,
Logan Daigler,
Zane Aloia,
Sheena Wu,
Ishani Pandey,
Amanda Chen, and
Tom Williams
(Colorado School of Mines, Golden, USA; University of Wisconsin, Madison, USA)
Effective communication is critical to the success of many types of human-robot interaction. A key capability for enabling effective communication is accurate modeling of the cognitive status that entities hold (e.g., modeling what objects interlocutors are currently thinking about, or are generally aware of). However, existing models of cognitive status estimation are not well suited for situated and embodied interactions, as they do not account for nonverbal cues, which are a key way in which humans moderate cognitive status. To address this gap, we make three primary contributions. First, we introduce the BOWTIE corpus of dialogues from a multi-modal open-world referential task, annotated with cognitive status, gesture type, linguistic roles, and grammatical roles of entities across utterances. Second, we introduce RECS, the first LSTM-based model of cognitive status, which we train on the BOWTIE corpus. Third, we present empirical evidence for the success of RECS.
These contributions stand to accelerate the future use of cognitively informed algorithms for robot language understanding and generation.
Article: hri26main-p5095-p
The RUSH Checklist: A Standardized Framework for Reporting User Studies in Human-Robot Interaction
Shruti Chandra,
Katie Seaborn,
Giulia Barbareschi,
Wing-Yue Geoffrey Louie,
Shelly Bagchi,
Sara Cooper,
Zhao Han, and
Daniel Tozadore
(University of Northern British Columbia, Prince George, Canada; Institute of Science Tokyo, Tokyo, Japan; University of Cambridge, Cambridge, UK; Keio University, Yokohama, Japan; University of Duisburg-Essen, Duisburg, Germany; Oakland University, Rochester, USA; National Institute of Standards and Technology, Gaithersburg, USA; Artificial Intelligence Institute - CSIC, Cerdanyola, Spain; University of South Florida, Tampa, USA; University College London, London, UK)
Transparent and consistent reporting of user studies is essential for advancing scientific knowledge. In human-robot interaction (HRI), studies are often reported incompletely, even in top-tier venues, limiting proper evaluation, replication, and application of findings in practice. This study aimed to generate expert consensus on a reporting checklist for HRI user studies and provide a validated tool to improve transparency, reproducibility, and methodological rigor in the field, leading to easier translation of research into practice. A two-round Delphi study was conducted with 34 HRI experts from academia and industry across more than 12 countries. An international panel of nine interdisciplinary experts first synthesized a preliminary list of 116 reporting items from the literature. Experts rated the importance of each item and provided qualitative feedback. Consensus was defined as 70% agreement, and items were iteratively refined through anonymous online surveys. Overall, consensus was achieved on 106 items, encompassing both essential and context-dependent elements in nine domains. The resulting RUSH checklist (Reporting User Studies in Human-Robot Interaction) provides the first community-endorsed, consensus-based reporting guideline for HRI user studies.
Article: hri26main-p5197-p
MotionBuddy: Exploring Tactile-Based Motion Learning with a Tabletop Humanoid Robot for Blind People
Kengo Tanaka,
Xiyue Wang,
Hironobu Takagi,
Yoichi Ochiai, and
Chieko Asakawa
(University of Tsukuba, Tsukuba, Japan; Miraikan - The National Museum of Emerging Science and Innovation, Tokyo, Japan; IBM Research, Tokyo, Japan; IBM, Yorktown Heights, USA)
Blind people face persistent challenges in learning body movements such as exercise, dance, and rehabilitation routines. Verbal instructions are widely used but often ambiguous, while tactile graphics or 3D models can illustrate static postures but not transitions. Humanoid robots can present dynamic motions, suggesting potential to convey trajectories and simultaneous limb actions. We conducted an exploratory study with 11 blind participants comparing humanoid robot demonstrations with audio instructions. Quantitative evaluation assessed reproduction accuracy, learning time, and usability ratings, while qualitative interviews captured perceived benefits and challenges. Results show that simple movements could be conveyed through both modalities, but robots were particularly effective for complex transitions and concurrent limb coordination that audio could not easily express. These findings highlight design opportunities for integrating multimodal instruction to support movement learning for blind people.
Article: hri26main-p5254-p
Enhancing Gesture-Based Human-Robot Interaction: Investigating the Role of Force Automation
Tonia Mielke,
Marilena Georgiades,
Oliver S. Großer,
Maciej Pech,
Christian Hansen, and
Florian Heinrich
(University of Magdeburg, Magdeburg, Germany; Forschungscampus STIMULATE, Magdeburg, Germany; University Hospital Magdeburg, Magdeburg, Germany)
Directly controlling robot motion through human–robot interaction enables the integration of human expertise into robot control. However, this requires efficient interaction methods. While previous work has shown that hand gestures offer potential for natural control, the lack of haptic feedback remains a key challenge. One solution is partial automation, in which the contact force is autonomously controlled.
We implemented force-automation-enhanced gesture control, as well as the state-of-the-art baseline of hand-guiding, to systematically investigate hand gesture interaction for robot control. A user study (n=28) was conducted to investigate the efficiency and workload of these interaction concepts. As interaction method efficiency may depend on task demands, two tasks with different precision requirements were evaluated in the context of robotic ultrasound.
The results indicate that force automation significantly reduces perceived workload and task duration when using hand gestures. Moreover, gesture-based interaction can match the efficiency of hand-guiding for broad tasks and even outperform it in precise tasks. These findings highlight the potential of gesture-based interaction for robot control, particularly when the absence of haptic feedback is compensated by partial automation.
Article: hri26main-p5302-p
Towards a Systematic Model of the Effects of Transparency Utterances on Calibrating Trust in Social Robots
Kerstin Fischer and
Matouš Jelínek
(University of Southern Denmark, Sønderborg, Denmark)
This paper presents the theoretical basis for understanding how transparency utterances in social robots can be systematically employed to regulate trust. Current approaches to trust calibration often lack more detailed insights into how different verbal explanations of actions or intentions shape users' mental models of the robot's capabilities. Drawing on pragmatic principles and mechanisms from human interaction research, we present a model of how transparency utterances contribute to mental models of the robotic interaction partner. We test the model in a user study on the effects of transparency utterances that allow inferences about higher- or lower-level capabilities. The in-person, between-subject experiment with N=47 shows that depending on the design of the transparency utterance, users perceive the competence, benevolence and transparency of the robot differently and infer more or fewer additional capabilities. The results confirm the effect of the proposed design of transparency utterances on (over-)trust. Our model thus offers a theoretical foundation for developing transparency strategies in various contexts of human-robot interaction.
Article: hri26main-p5368-p
Coordinating Speech with Touch Input and Visual Cues in Human–Robot Interaction: A Multimodal System Evaluated through Metamorphic Testing
Massimo Donini,
Paolo Arcaini,
Michael Oliverio,
Fuyuki Ishikawa,
Alessandro Mazzei,
Deyun Lyu, and
Cristina Gena
(University of Turin, Turin, Italy; National Institute of Informatics, Tokyo, Japan)
This paper presents a multimodal human-robot interaction (HRI) system for educational contexts implemented on the humanoid robot Pepper. The system leverages multiple communicative channels, allowing learners to combine speech with tablet interaction while the robot responds through synchronized speech, textual captions and dynamic visual cues. To ensure robustness and reliability, we introduce the use of Metamorphic Testing for multimodal HRI. By validating system behavior through systematic input transformations, we demonstrate how metamorphic testing can uncover inconsistencies across linguistic, visual and cross-modal interactions. This work contributes both a novel methodological framework for evaluating multimodal HRI systems and an application to educational robotics.
Article: hri26main-p5442-p
Mind the Seat: Passenger Compliance with a Social Robot Conductor’s Safety Announcements on Buses
Nihan Karatas,
Linjing Jiang,
Yuki Yoshihara,
Tetsuya Hirota,
Ryugo Fujita, and
Takahiro Tanaka
(Nagoya University, Nagoya, Japan; Tokai Rika, Aichi, Japan)
Standing passengers in public buses face increased fall risk during sudden braking and acceleration, yet they often remain standing despite available seats due to proxemic discomfort, seating norms, and trip goals. We present a two-day, in-the-wild deployment of a minimal social robot “conductor” that delivered real-time, context-aware greetings and seating/safety prompts on regular bus routes, driven by an AI-based electronic control unit (AI-ECU) pipeline that recognized passenger state and seat-zone availability (front, priority, rear) from onboard cameras. Using synchronized video and system logs, we analyzed 74 compliance-potential episodes among 670 boardings. Overall seat-taking compliance was 27.0% (20/74) and tended to be higher when passengers visibly attended to the robot. When passengers complied, they were more likely to choose rear seating, suggesting that compliance depended on the emergence of a socially unambiguous, low-friction option rather than availability alone. The findings show that robotic safety prompts are negotiated through situational constraints and normative seating logics, motivating designs that make socially acceptable seating options salient, low-friction, and well-timed in shared public environments.
Article: hri26main-p5489-p
From Voice to Form: How Gender-Ambiguous Voices Shape Physical Robot Design
Martina De Cet,
Negin Hashmati,
Mohammad Obaid, and
Ilaria Torre
(Chalmers University of Technology and University of Gothenburg, Gothenburg, Sweden)
Robot design often involves gendered choices that shape Human–Robot Interaction. Voice is a key channel through which gendering occurs, yet little is known about how it influences people’s mental images of robots. This study examines how ambiguous, feminine, and masculine voices affect physical robot design. Participants (N = 45) listened to robot voices (ambiguous, feminine, masculine), built a physical prototype, took part in an interview to explain their design process, and concluded by evaluating both the voice and the robot prototype they built. The findings show that although participants’ explicit ratings of the robots showed no differences across conditions, analyses of the physical prototypes and interview data revealed consistent patterns, suggesting that voice strongly shaped design choices. Specifically, we found that ambiguous voices led to less human-like forms and more hybrid human-machine-like and masculine forms, whereas masculine voices encouraged more human-like prototypes. The results suggest that starting robot design from voice, particularly ambiguous voices, helps reduce gendering and fosters more inclusive robots.
Article: hri26main-p5519-p
Exploring Labor Issues in Human–Robot Collaboration: Community-Based Research with Trade Unions in Manufacturing
Yuxuan Wang,
Wenlong Zhang, and
Hee Rin Lee
(Michigan State University, East Lansing, USA; Arizona State University, Mesa, USA)
Automation technologies are increasingly adopted in industrial workplaces, reshaping organizational structures and labor dynamics. Drawing on a community-based research study with production workers and union representatives from a large U.S. automotive manufacturing facility, this study shows that workers’ challenges with robots and the underlying causes of these challenges are closely tied to a historically developed division of labor. This division separates planning and decision-making, dominated by management and engineers, from execution, which is left to workers. These findings indicate that such challenges cannot be addressed through technological interventions alone but require sociopolitical approaches. We therefore propose integrating workers’ embodied knowledge into HRI research, alongside automation literacy education, as pathways to reconfigure existing labor relations.
Article: hri26main-p5565-p
Chirality-Aware Grammar-Guided Surgical Action Anticipation from Video
Md Rezowan Hossain Ferdous Shuvo,
M. S. Mekala, and
Eyad Elyan
(Robert Gordon University, Aberdeen, UK)
Anticipating surgical actions requires more than recognising motion patterns; it also demands adherence to procedural logic and the resolution of subtle ambiguities, such as distinguishing mirrored grasp–retract interactions. However, existing Transformer-based models often fall short in this domain, producing structurally invalid step sequences and misclassifying chirally opposite actions that appear visually similar. To address these limitations, we introduce a neuro-symbolic framework centred on a Probabilistic Temporal Grammar (PTG). The grammar was constructed from a unified corpus of ground-truth surgical data, encoding procedural structure, temporal priors, and chirality-aware terminals for opposite actions (e.g., push_needle <-> pull_suture) directly into its rules. To enforce causal consistency, the PTG incorporates a Goal-conditioned Multivariate Markov Chain (GcMMC) that models evolving object–action dependencies. Our framework employs a two-stage process: a V-JEPA-powered Transformer generates raw forecasts of future actions and durations, which are then refined by a constrained parsing algorithm guided by the PTG. Candidate futures are jointly scored for structural validity, temporal plausibility, and causal grounding. By explicitly encoding surgical logic into a unified neuro-symbolic system, our approach outperforms state-of-the-art anticipation models across three publicly available surgical datasets. Importantly, by generating interpretable and procedurally consistent forecasts of upcoming actions, PTG establishes the predictive foundation required for proactive robotic assistance and safe human–robot collaboration in the operating room.
Article: hri26main-p5571-p
The ‘Aww’ Factor: Robot Cuteness as a Catalyst for Emotional Responses and Caretaking Tendencies
Giulia Perugia,
Sascha Ankersmit,
Nadia Jansen, and
Stefano Guidi
(Eindhoven University of Technology, Eindhoven, Netherlands; University of Siena, Siena, Italy)
Cuteness is a key factor in human-human interaction. Infant cuteness, described by Lorenz’s baby schema, is thought to promote infant survival by eliciting caregiving behaviors and positive emotions. Despite evidence that people prefer to interact with cute robots, the specific mechanisms through which cuteness influences human–robot interaction remain poorly understood. This research investigated the relationship between perceived robot cuteness, emotional responses, and caretaking tendencies. In two online surveys, participants rated all robots from the ABOT database. In survey 1, 156 participants evaluated robots’ perceived cuteness and the extent to which they evoked positive and negative emotions. In survey 2, a separate pool of 152 participants rated the caretaking tendencies elicited by the robots. Results showed that cuteness was positively correlated with positive emotions and caretaking tendencies, and negatively correlated with negative emotions. Path analysis revealed that the effect of cuteness on caretaking tendencies was partially mediated by participants’ emotional responses. Consistent with the baby-schema hypothesis, cuteness was negatively associated with perceived robot age and positively with the presence of facial features. Interestingly, participants' individual characteristics, most notably their tendency to anthropomorphize, influenced the responses. Our findings confirm the importance of robot cuteness for HRI and extend theories of baby schema to artificial agents. They also raise ethical considerations: while cuteness is a powerful design feature that facilitates affective bonding with robots, its persuasive potential should not be used lightly. Cute robots may foster care and trust, but these same mechanisms could be exploited to manipulate users in harmful ways.
Article: hri26main-p5748-p
Learning Contextually-Adaptive Rewards via Calibrated Features
Alexandra Forsey-Smerek,
Julie Shah, and
Andreea Bobu
(Massachusetts Institute of Technology, Cambridge, USA)
A key challenge in reward learning from human input is that desired agent behavior often changes based on context. For example, a robot must adapt to avoid a stove once it becomes hot. We observe that while high-level preferences (e.g., prioritizing safety over efficiency) often remain constant, context alters the saliency, or importance, of reward features. For instance, stove heat changes the relevance of the robot’s proximity, not the underlying preference for safety. Moreover, these contextual effects recur across tasks, motivating the need for transferable representations to encode them. Existing multi-task and meta-learning methods simultaneously learn representations and task preferences, at best implicitly capturing contextual effects and requiring substantial data to separate them from task-specific preferences. Instead, we propose explicitly modeling and learning context-dependent feature saliency separately from context-invariant preferences. We introduce calibrated features (modular representations that capture contextual effects on feature saliency) and present specialized paired comparison queries that isolate saliency from preference for efficient learning. Simulated experiments show our method improves sample efficiency, requiring 10x fewer preference queries than baselines to achieve equivalent reward accuracy, with up to 15% better performance in low-data regimes (5–10 queries). An in-person user study (N=12) demonstrates that participants can effectively teach their personal contextual preferences with our method, enabling adaptable and personalized reward learning.
Article: hri26main-p5766-p
Pop-Up Encounters with Spot: Shaping Public Perceptions of Robots through Hands-On Experience
Hae Won Park,
Georgia D. Van de Zande,
Xiajie Zhang,
Dawn Wendell, and
Jessica Hodgins
(Massachusetts Institute of Technology, Cambridge, USA; Olin College of Engineering, Needham, USA; Robotics and AI Institute, Cambridge, USA)
Public attitudes toward robots are often shaped by indirect exposure (e.g., media, staged demos), leaving open how direct, hands-on experience influences acceptance. In this study, we investigate how interacting with Boston Dynamics’ Spot, an agile, state-of-the-art quadruped robot, in a public pop-up booth affects perceptions of comfort and suitability across everyday and high-stakes environments. In a walk-up, 10-week pop-up booth, participants (N=753) completed pre–post surveys before and after driving Spot within curated Drive Scenes (Factory, Home, Hospital, Outdoor/Disaster). Measures captured comfort encountering robots and perceived suitability across Rated Contexts (RCs), affective reactions, and open-ended reflections. Hands-on control significantly increased comfort across all RCs, with the largest gains in Outdoor/Disaster, and increased perceived suitability—most in Home/Office/Hospital where baselines were lower. Improvements generalized beyond the experienced Drive Scene to other contexts. Age, gender, and prior familiarity moderated baseline levels and some changes, but hands-on exposure raised scores for all groups and attenuated several gaps. Thematic analysis showed memorable moments tied to locomotion, terrain adaptation, and expressive tilt; imagined roles consistently emphasized domestic assistance (e.g., cleaning, mobility), with entertainment/play and companionship emerging post-interaction. Together, these results demonstrate that brief, agency-granting encounters with a high-capability quadruped can broaden where people see robots as appropriate and diversify envisioned roles, offering a scalable model for public-facing HRI that fosters comfort, enthusiasm, and acceptance.
Article: hri26main-p5775-p
Social Robots on the Loose: Supporting Children’s Independent Learning in the Wild
Ramin Kupaei,
Piyush Daryanani,
Anupriya Shaju,
Muhammad Alfatih Olaniyan,
Gerardo Chavez Castaneda, and
Shruti Chandra
(University of Northern British Columbia, Prince George, Canada)
Social robots have shown considerable potential in supporting children’s learning. However, most child-robot interaction (CRI) research has been conducted in supervised laboratory settings, leaving questions about how children engage with robots autonomously in real-world learning environments. We developed a fully autonomous social robotic system and deployed it in summer camp classrooms without direct adult supervision. Our study spanned two months across seven summer camps (each lasting 3–5 days) and examined children’s interactions with a humanoid robot through game-based learning. A total of 100 children from two age groups (6–8 and 9–12 years) engaged with ten educational games tailored to their developmental stages, guided by a QTrobot. Our findings show that autonomous social robots can be integrated into classroom settings without direct supervision and are well accepted by children across age groups, while revealing age-specific differences in perception, engagement, and social behavior, alongside practical challenges related to time, distraction, and technical robustness.
Article: hri26main-p5826-p
Anticipating Disengagement: Physiology-Informed Adaptation for Child–Robot Reading Companions
Zhao Zhao
(University of Guelph, Guelph, Canada)
Children’s attention during shared reading is fragile, yet most child–robot reading systems are reactive: they wait for visible lapses before intervening. We present a physiology-driven pipeline that anticipates near-term disengagement and adjusts delivery before attention is lost. A wrist-worn sensor streams EDA and HRV; short-window features feed a lightweight classifier that estimates the probability of disengagement 5–10 s ahead. When the probability exceeds a threshold and guardrails are satisfied, a bounded adaptation policy triggers subtle, delivery-only changes—slower pacing, clause-aligned pauses, and light prosodic emphasis—without altering script content. In a within-subjects study with 36 children (ages 3–6) reading with a Luka robot, the adaptive condition reduced disengagement events by about half (IRR≈0.49), improved comprehension (≈+0.86 points, especially sequencing/inference questions), and was preferred by most children (≈61%). The results show that anticipatory, physiology-aware control can stabilize attention and yield measurable learning benefits while preserving transparency and safety via simple rules. We discuss design implications for low-overhead sensing, auditable policies, and child-appropriate guardrails, and we release implementation details to support reproducibility.
Article: hri26main-p5982-p
Can VR Robots Stand in for the Real Thing? Comparing a Physical Cobot and Its Virtual Twin for User Perceptions, Experimental Effects, and Study Costs
Martina Mara,
Andreas Winklbauer,
Sandra Maria Siedl, and
Benedikt Leichtmann
(JKU Linz, Linz, Austria; LMU Munich, Munich, Germany)
Researchers in Human-Robot Interaction (HRI) increasingly consider Virtual Reality (VR) for running user studies that would otherwise require physical robots. Yet it remains unclear when VR can serve as a valid proxy for real-world interaction. We present a preregistered comparison between two studies in which participants completed tasks with either a physically present cobot (N = 61) or its virtual twin (N = 39) in an immersive collaboration game. Procedures, game environment, task flow, and robot behavior were held constant across settings; the primary difference was the robot's embodiment. Our work delivers (1) a direct comparison of user perceptions and behavioral outcomes (e.g., attitudes, trust, presence, task completion time); (2) a replication test of an experimental manipulation (two different introductory tutorials); and (3) an analysis of study execution costs. Our results show no significant differences in any of the subjective self-reports between the virtual and physical robot, but task durations were longer with the physical robot, and tutorial effects replicated only partially across settings. Study costs were substantially lower for VR. Together, these findings provide a holistic assessment of VR's suitability as a complementary tool for HRI research, offering guidance for future study design and resource planning.
Article: hri26main-p6063-p
GeoSACS: Geometric Shared Autonomy via Canal Surfaces
Shalutha Rajapakshe,
Atharva Dastenavar,
Michael Hagenow,
Jean-Marc Odobez, and
Emmanuel Senft
(Idiap Research Institute, Martigny, Switzerland; EPFL, Lausanne, Switzerland; University of Wisconsin-Madison, Madison, USA)
Shared autonomy (SA), which combines user inputs with autonomous capabilities, presents a significant opportunity for assistive robotics.
A key challenge in SA is the dimensionality gap: the mismatch between low-dimensional user inputs from familiar interfaces (e.g., 2D joysticks) and the high-dimensional control required by robot manipulators.
To enhance usability and acceptance, this mapping must be as simple and intuitive as possible. We introduce GeoSACS, a geometric framework for SA. GeoSACS uses canal surfaces to encode task structure with as few as two demonstrations. While the robot moves autonomously along the canal, users can then make corrections on the 2D planar circular cross-sections orthogonal to the robot motion.
By leveraging geometric structure to partition the 6D control space between the robot and the user, GeoSACS allows the intuitive mapping of 2D user inputs to 6D end-effector control. We describe GeoSACS and evaluate its underlying assumptions in a user study against two baselines.
Results from the study demonstrate reduced workload and improved performance, providing insights for the design of future SA systems.
Article: hri26main-p6149-p
Disability Justice in Human-Robot Interaction: Reflections on Paternalism, Autonomy, and Care for More Equitable Futures
Pratyusha Ghosh,
Belen Liedo, and
Laurel D. Riek
(University of California at San Diego, La Jolla, USA; Spanish National Research Council, Madrid, Spain)
When we design assistive robots, we are largely well-intentioned: we want to use our research and engineering skills to help others. However, to meet this end, as we move towards more disability- and social justice-oriented HRI research, it is important to critically examine the dominant social narratives of disability that shape how assistive robots for disabled people are conceptualized, designed, and deployed. In this paper, we introduce the concept of robot-mediated paternalism (RMP), drawing on a multidisciplinary body of literature, including critical HCI and crip technoscience. Then, we discuss how RMP may manifest when assistive robots provide unwanted assistance to disabled people and interfere with their decisions/actions based on assumptions about their autonomy and care needs. Finally, we reflect on and offer actionable suggestions for how HRI researchers might mitigate RMP by aligning their work with the principles of disability justice. Overall, our work advances disability-centered research on assistive robots and, in doing so, supports more equitable, inclusive futures for disabled communities.
Article: hri26main-p6225-p
Designing Persuasive Social Robots for Health Behavior Change: A Systematic Review of Behavior Change Strategies and Evaluation Methods
Jiaxin Xu,
Chao Zhang,
Raymond H. Cuijpers, and
Wijnand A. IJsselsteijn
(Eindhoven University of Technology, Eindhoven, Netherlands)
Social robots are increasingly applied as health behavior change interventions, yet actionable knowledge to guide their design and evaluation remains limited. This systematic review synthesizes (1) the behavior change strategies used in existing HRI studies employing social robots to promote health behavior change, and (2) the evaluation methods applied to assess behavior change outcomes. Relevant literature was identified through systematic database searches and hand searches. Analysis of 39 studies revealed four overarching categories of behavior change strategies: coaching strategies, counseling strategies, social influence strategies, and persuasion-enhancing strategies. These strategies highlight the unique affordances of social robots as behavior change interventions and offer valuable design heuristics. The review also identified key characteristics of current evaluation practices, including study designs, settings, durations, and outcome measures, on the basis of which we propose several directions for future HRI research.
Article: hri26main-p6247-p
Cognitive Trust in HRI: “Pay Attention to Me and I’ll Trust You Even If You Are Wrong”
Adi Manor,
Dan Cohen,
Ziv Keidar,
Avi Parush, and
Hadas Erel
(Technion - Israel Institute of Technology, Haifa, Israel; Reichman University, Herzliya, Israel)
Cognitive trust, the belief that a robot can accurately perform tasks, is crucial for effective human-robot interaction. While robot competence and reliability are known to build this trust, recent research shows that affective factors like attentiveness also matter. This study examines how competence and attentiveness interact to shape cognitive trust, specifically testing whether one factor can compensate for the other. Participants completed a search task with a robotic dog in a 2 × 2 design varying competence (high/low) and attentiveness (high/low). Results showed that high attentiveness compensates for low competence: participants working with an attentive but poorly performing robot reported trust levels similar to those working with a highly competent robot. These findings suggest that building cognitive trust involves emotional processes often overlooked in traditional competence-based models.
Article: hri26main-p6427-p
Speaking of Safety: Drone-Based Voice Communication for Industry
Chandhawat Boonyard,
Stine S. Johansen,
Christophe Jouffrais,
Anke M. Brock, and
Timothy Merritt
(Fédération ENAC ISAE-SUPAERO ONERA - Université de Toulouse, Toulouse, France; Aalborg University, Aalborg, Denmark; CNRS IPAL, Singapore, Singapore)
Ensuring the safety of personnel at large industrial sites is a significant challenge. Communication shortcomings contribute to misunderstandings and increased safety risks, as traditional non-targeted alerts such as sirens are often ineffective. To address this, we propose a drone-based system that uses voice interaction to deliver timely and targeted safety advisories. This paper presents a multi-stage, user-centered study, beginning with formative interviews with security and safety managers and culminating in an on-site evaluative study with personnel at two biomass power plants. Through our study, we identify key design insights for creating a drone that is perceived as an official, authoritative, and non-threatening entity. We provide design guidance for using sound and movement to signal intent, creating a voice persona that balances authority with clarity, and designing interaction flows to manage confusion and non-compliance. The results demonstrate the system's potential not only to enhance safety compliance, but also to mitigate interpersonal confrontation. First, we provide an empirically-grounded understanding of contextual safety challenges, which informs a set of design principles for intelligible and socially acceptable voice advisories from drones. Second, we present results from an in-situ evaluation of a working prototype that demonstrates its effectiveness in a real-world setting.
Article: hri26main-p6437-p
Lantern: A Minimalist Robotic Object Platform
Victor Nikhil Antony,
Zhili Gong,
Guanchen Li,
Clara Jeon, and
Chien-Ming Huang
(Johns Hopkins University, Baltimore, USA; Rice University, Houston, USA)
Robotic objects are simple actuated systems that subtly blend into human environments. We design and introduce Lantern, a minimalist robotic object platform to enable building simple robotic artifacts. We conducted in-depth design and engineering iterations of Lantern’s mechatronic architecture to meet specific design goals while maintaining a low build cost (~40 USD). As an extendable, open-source platform, Lantern aims to enable exploration of a range of HRI scenarios by leveraging the human tendency to assign social meaning to simple forms. To evaluate Lantern’s potential for HRI, we conducted a series of explorations: 1) a co-design workshop, 2) a sensory room case study, 3) distribution to external HRI labs, 4) integration into a graduate-level HRI course, and 5) public exhibitions with older adults and children. Our findings show that Lantern effectively evokes engagement, can support versatile applications ranging from emotion regulation to focused work, and serves as a viable platform for lowering barriers to HRI as a field.
Article: hri26main-p6555-p
Older Adult Perspectives on Home Monitoring Robots: User Preferences and Performance of Robot Observation Strategies
Nadira Mahamane and
Sonia Chernova
(Georgia Institute of Technology, Atlanta, USA)
In-home service robots embodied as mobile platforms with onboard cameras are increasingly being proposed for well-being monitoring and fall detection for older adults. Yet, how users perceive a robot’s movement and observation behavior within the home remains underexplored. This work examines user preferences for robot-based observation strategies in Human Activity Recognition (HAR) and evaluates how these strategies affect recognition performance. In a within-subject study with adults over 50, we compare stationary versus adaptive-distance observation behaviors. Results reveal that while stationary observation is generally preferred for being less intrusive, preferences vary depending on the activity context. HAR accuracy remains comparable across both strategies, and combining robot and ambient sensing enhances recognition of complex, temporally extended activities.
Article: hri26main-p6600-p
Clarifying Constraints in Interactive Robot Learning with Language Feedback
Hannah Kuehn,
Leonardo Santos, and
Iolanda Leite
(KTH Royal Institute of Technology, Stockholm, Sweden)
Using non-expert language feedback in learning is crucial to making robots successful in human-centered environments. While language feedback has shown potential to teach robots complex tasks, it also brings challenges: humans leave important context, details, and clarifications unspoken, making interactive approaches necessary to use the feedback effectively. In this work, we develop an interactive robot learning system that can ask clarifying questions to differentiate between hard and soft task constraints in user verbal feedback. The system uses feedback either as shields (hard constraints) or to shape the reward (soft constraints). We conducted a user study with 24 participants, comparing the use of both hard and soft constraints against two baseline conditions. We show that participants significantly prefer a system using a combination of both hard and soft constraints, or only soft constraints, over a system using only hard constraints.
Qualitative analysis of the participants' interactions with the system revealed common feedback types: spatial, temporal, and meta-level.
To evaluate the learning performance of the system, we conducted simulated experiments showing that combining both hard and soft constraints performs best in terms of reaching high rewards and finding an efficient solution. Additionally, we provide demonstrations of our system on real robot hardware.
Article: hri26main-p6657-p
Investigating the Impact of Robot Degree of Redundancy on Learning from Demonstration
Muhammad Bilal,
D. Antony Chacon,
Nir Lipovetzky,
Denny Oetomo, and
Wafa Johal
(University of Melbourne, Melbourne, Australia)
Learning from Demonstration allows robots to acquire skills from human demonstrations, making them more accessible to a wider range of users. Among different approaches, kinesthetic teaching allows humans to manipulate the robot’s joints directly, making it an effective method for demonstrating constrained tasks. However, robots with kinematic redundancy admit multiple joint configurations that achieve a desired task, which could influence human teaching performance. On the one hand, redundancy could make teaching easier by allowing more freedom to demonstrate the task; on the other, it increases the number of joints that need to be manipulated, potentially affecting the cognitive and physical load of the demonstrator. It is therefore crucial to investigate how the number of degrees of redundancy (DoR) impacts human performance during kinesthetic demonstrations, and how these demonstrations in turn influence robot performance. We simulated high and low DoR by locking one of the joints of a 7-DoF Panda robotic arm. We conducted a within-subject user study (N = 24) with two conditions: an unconstrained condition (high DoR) and a constrained condition (low DoR). We used a motion capture system to capture participants’ physical interaction with the robot while demonstrating two tasks: button pressing and cuboid block insertion. The results show that the robot’s DoR significantly affects mental workload, demonstration time, number of failed attempts, and physical interaction with the robot. Likewise, joint constraints significantly influence robot performance, measured by task completion using the learned model. These findings highlight the importance of considering robot DoR when demonstrating constrained tasks, enabling novice users to provide effective demonstrations.
Article: hri26main-p6774-p
On Being Guided: How People Follow a Robot-Guided Tour
Gisela Reyes-Cruz,
Stuart Reeves,
Andriana Boudouraki,
Dominic James Price,
Joel E. Fischer, and
Praminda Caleb-Solly
(University of Nottingham, Nottingham, UK)
Being guided from one place to another is a pervasive social practice that connects deeply with socially aware robot navigation. We examine how robots come to feature within the organisation of these established and well-worn leading and following practices, practices which are assembled 'in place' by the efforts of individuals and groups that are using robot guides. We deployed mobile robots in a museum context to provide additional information for visitors around multiple sequential exhibits. Our ethnomethodological video-based analysis of interaction centres on how the social organisation of being guided was practically managed by visitors: in initiation of following, doing following, and finding a place to stop. Our study shows how following and being led is more than just a mechanical activity, and we describe the implications for socially aware robot navigation in addressing the novel technical challenges that a shift in understanding following-leading phenomena presents.
Article: hri26main-p6820-p
Expensive, Limited, and Still Here: The Paradox of Weak Robots in Family Homes
Zhao Zhao
(University of Guelph, Guelph, Canada)
Social robots like Lovot are costly and capability-limited—more plush companion than helper—yet many remain in homes long after novelty should fade. We report a six-month, multi-sited qualitative study of 24 households (42 adults, 18 children) that independently purchased Lovot. Using onboarding/exit interviews, weekly diaries (text/photo/audio), bi-weekly check-ins, 88 hours of in-home observation, 750+ maintenance-log events, and triangulation with 152 public owner accounts, we trace how families make expensive, weak robots “stick.” We identify five dynamics: (1) ritual anchoring (greetings, bedtime, study-break “sessions” paced by short activity cycles); (2) expectation repair (glitches reframed as quirks; role shift from helper to mascot); (3) distributed care work (charging, cleaning, troubleshooting as shared household labor and responsibility pedagogy); (4) vulnerability–empathy loops (micro-failures elicit guidance, touch, and renewed interest); and (5) policy tethers (subscriptions, suspension, and memory continuity folded into moral narratives of care). We synthesize these into a Paradox of Weak Robots: limited function and high cost can coexist with durable attachment when weakness is staged, legible, and ritualizable. Contributions include: (i) a longitudinal, in-the-wild account of a purchased companion robot; (ii) a process model explaining persistence of “weak robots” in family homes; and (iii) design guidelines for affect-first companions: ritual-first roadmaps, legible limits, low-friction shared care, calibrated vulnerability, and humane business models (suspension-by-default, memory continuity, second-life pathways).
Article: hri26main-p6821-p
Communicating Object Relations through Robot Gestures
Xiang Pan,
Malcolm Doering, and
Takayuki Kanda
(Kyoto University, Kyoto, Japan)
We propose a system for generating relational gestures that convey semantic relations such as similarity and difference between two objects. To understand how humans naturally express such relations, we conducted an observational study with experienced shopkeepers, who frequently compare objects using both speech and gestures. Through analysis of their interactions, we identified four common types of object relations and extracted representative gesture patterns for each. For example, similarity was often conveyed through synchronized hand movements bringing both hands closer together, accompanied by alternating gaze between the two objects. Based on these findings, we developed a gesture generation system in which one large language model (LLM) infers the intended object relation from utterance text, and another LLM adapts gestures from a co-speech gesture system that aligns them with speech, integrating relational cues without disrupting this alignment. These modified gestures were automatically mapped onto a dual-arm robot. We evaluated the system through two user studies. In the first study, 20 participants were asked to identify object relations from 24 relational gestures performed by the robot without accompanying speech. They correctly identified the intended relations with an average accuracy of 89.8% across all relation types. In the second study, another 20 participants compared two robot conditions in a within-subjects design: one with relational gestures and one without. Results showed that the robot using relational gestures was perceived as more competent, sociable, and animate than the robot without them.
Article: hri26main-p6905-p
InterPReT: Interactive Policy Restructuring and Training Enable Effective Imitation Learning from Laypersons
Feiyu Gavin Zhu,
Jean Oh, and
Reid Simmons
(Carnegie Mellon University, Pittsburgh, USA)
Imitation learning has shown success in many tasks by learning from expert demonstrations. However, most existing work relies on large-scale demonstrations from technical professionals and close monitoring of the training process. These are challenging for laypersons who want to teach the agent new skills. To lower the barrier to teaching AI agents, we propose Interactive Policy Restructuring and Training (InterPReT), which takes user instructions to continually update the policy structure and optimize its parameters to fit user demonstrations. This enables end-users to interactively give instructions and demonstrations, monitor the agent's performance, and review the agent's decision-making strategies. A user study (N=34) on teaching an AI agent to drive in a racing game confirms that our approach yields more robust policies without impairing system usability, compared to a generic imitation learning baseline, when a layperson is responsible for both giving demonstrations and determining when to stop. This shows that our method is better suited for end-users without much technical background in machine learning to train a dependable policy.
Article: hri26main-p6921-p
POIROT: Investigating Direct Tangible vs. Digitally Mediated Interaction and Attitude Moderation in Multi-party Murder Mystery Games
Wen Chen,
Rongxi Chen,
Shankai Chen,
Huiyang Gong,
Minghui Guo,
Yingri Xu,
Xintong Wu, and
Xinyi Fu
(Tsinghua University, Beijing, China; University of Pennsylvania, Philadelphia, USA; Syracuse University, USA; National University of Singapore, Singapore, Singapore; Royal College of Art, London, UK)
As social robots take on increasingly complex roles like game masters (GMs) in multi-party games, the expectation that physicality universally enhances user experience remains debated. This study challenges the "one-size-fits-all" view of tangible interaction by identifying a critical boundary condition: users' Negative Attitudes towards Robots (NARS). In a between-subjects experiment (N = 67), a custom-built robot GM facilitated a multi-party murder mystery game (MMG) by delivering clues either through direct tangible interaction or a digitally mediated interface. Baseline multivariate analysis (MANOVA) showed no significant main effect of delivery modality, confirming that tangibility alone does not guarantee superior engagement. However, primary analysis using multilevel linear models (MLM) revealed a reliable moderation: participants high in NARS experienced markedly lower narrative immersion under tangible delivery, whereas those with low NARS scores showed no such decrement. Qualitative findings further illuminate this divergence: tangibility provides novelty and engagement for some but imposes excessive proxemic friction for anxious users, for whom the digital interface acts as a protective social buffer. These results advance a conditional model of HRI and emphasize the necessity for adaptive systems that can tailor interaction modalities to user predispositions.
Article: hri26main-p7113-p
From Metrics to Meaning: Insights from a Mixed-Methods Field Experiment on Retail Robot Deployment
Sichao Song,
Yuki Okafuji,
Takuya Iwamoto,
Jun Baba, and
Hiroshi Ishiguro
(CyberAgent, Tokyo, Japan; University of Osaka, Toyonaka-shi, Japan; Cyberagent, Shibuya, Japan; Osaka University, Toyonaka, Japan)
We report a mixed-methods field experiment of a conversational service robot deployed under everyday staffing discretion in a live bedding store. Over 12 days we alternated three conditions--Baseline (no robot), Robot-only, and Robot+Fixture--and video-annotated the service funnel from passersby to purchase. An explanatory sequential design then used six post-experiment staff interviews to interpret the quantitative patterns.
Quantitatively, the robot increased stopping per passerby (highest with the fixture), yet clerk-led downstream steps per stopper--clerk approach, store entry, assisted experience, and purchase--decreased. Interviews explained this divergence: clerks avoided interrupting ongoing robot-customer talk, struggled with ambiguous timing amid conversational latency, and noted child-centered attraction that often satisfied curiosity at the doorway. The fixture amplified visibility but also anchored encounters at the threshold, creating a well-defined micro-space where needs could "close" without moving inside.
We synthesize these strands into an integrative account spanning a customer's initial show of interest through their entry into the store, and derive actionable guidance. The results advance the understanding of interactions between customers, staff members, and the robot and offer practical recommendations for deploying service robots in high-touch retail.
Article: hri26main-p7195-p
“We Will Grow into the Age of Robots”: A Participatory Interview Study for Service Robots and Their Value for Care
Stina Klein,
Shuyuan Shen,
Elisabeth André, and
Matthias Kraus
(University of Augsburg, Augsburg, Germany)
The aging population and chronic staff shortages are prompting care facilities to use service robots (SR) as part of daily care. There are high hopes for support with physical workloads and routine tasks, but adoption often stalls due to technical complexity, poor integration into workflows, and fears that "support" could become "replacement". We address this problem by viewing care as a value-driven practice rather than a list of tasks. In a participatory interview study with caregivers and care recipients in three facilities, based on value-sensitive design, we identified expectations, non-negotiable boundaries, and the values that should guide robot behavior. Participants identified credible roles for SRs in logistics, documentation, reminders, and guidance, but rejected intimate or safety-critical care tasks. Acceptance depends on value-oriented and fluid adaptivity. Robots should dynamically modulate initiative, proactivity, and interaction modality to maintain human attentiveness and warmth, sustain independence, support control over workload, and take legal safeguards into account. We contribute (1) an empirically grounded overview of acceptable potentials and limitations guided by stakeholder values, (2) a value-sensitive-design-based framework for fluid adaptivity as a mechanism that operationalizes values in daily interaction, and (3) design requirements for user-centered, transparent, and context-sensitive SRs that reduce workload and create space for human care rather than replacing it.
Article: hri26main-p7288-p
If It’s Safe, Don’t Ask: Decreasing Frustration through User Involvement for Risky Robot Behaviors
Jan Leusmann,
Sarah Schömbs,
Maximilian Diedrich, and
Florian Müller
(LMU Munich, Munich, Germany; University of Melbourne, Melbourne, Australia; TU Darmstadt, Darmstadt, Germany)
Human–robot collaboration faces a speed–accuracy trade-off (SAT): higher speed lowers latency but increases errors; lower speed improves accuracy but extends waiting time. Both pathways can frustrate users in research and real-world deployments. Despite its importance, the impact of SAT on frustration and how to mitigate it remains underexplored. We conducted a user study (N=24) in which participants collaborated with a robot in an assembly task. We investigated three levels of SAT (conservative, moderate, risky) and examined how uncertainty communication and offering decision autonomy affect user frustration. Our results show that user frustration is highest for risky robot behavior. User involvement decreased frustration for risky behaviors but increased it for conservative ones, while verbal uncertainty communication had no effect. We further found that perceived transparency, agency, intelligence, and utility of the robot increase with conservative SAT, while user workload decreases. We propose that user involvement is advisable in higher-risk settings to mitigate user frustration, whereas autonomous operation is preferable in lower-risk scenarios.
Article: hri26main-p7295-p
RoboCritics: Enabling Reliable End-to-End LLM Robot Programming through Expert-Informed Critics
Callie Y. Kim,
Nathan Thomas White,
Evan He,
Frederic Sala, and
Bilge Mutlu
(University of Wisconsin-Madison, Madison, USA)
End-user robot programming grants users the flexibility to re-task robots in situ, yet it remains challenging for novices due to the need for specialized robotics knowledge.
Large Language Models (LLMs) hold the potential to lower the barrier to robot programming by enabling task specification through natural language.
However, current LLM-based approaches generate opaque, "black-box" code that is difficult to verify or debug, creating tangible safety and reliability risks in physical systems.
We present RoboCritics, an approach that augments LLM-based robot programming with expert-informed motion-level critics.
These critics encode robotics expertise to analyze motion-level execution traces for issues such as joint speed violations, collisions, and unsafe end-effector poses.
When violations are detected, critics surface transparent feedback and offer one-click fixes that forward structured messages back to the LLM, enabling iterative refinement while keeping users in the loop.
We instantiated RoboCritics in a web-based interface connected to a UR3e robot and evaluated it in a between-subjects user study (n=18).
Compared to a baseline LLM interface, RoboCritics reduced safety violations, improved execution quality, and shaped how participants verified and refined their programs.
Our findings demonstrate that RoboCritics enables more reliable and user-centered end-to-end robot programming with LLMs.
Article: hri26main-p7349-p
“What’s on your mind?”: Understanding the Development of Multidimensional Trust in Social Robots
Chih-Wei (Charlotte) Ning,
Carolina Centeio Jorge,
Myrthe L. Tielman, and
Mark A. Neerincx
(Delft University of Technology, Delft, Netherlands)
As robots and virtual agents are increasingly envisioned as long-term companions, understanding how trust develops becomes crucial for ensuring safe and appropriate human-robot relationships. This research investigates how affective and cognitive trust evolve in social human-robot interactions. Participants (n=40) engaged in a 2 (social attitude: social, baseline) × 3 (time: t1, t2, t3) mixed-design user study with a social robot, using a novel Card Divination Task developed to elicit both cognitive and affective trust dimensions. Results show that cognitive trust develops early while affective trust emerges gradually. Moreover, social cues enhance cognitive trust, affective trust, and participants’ certainty in their trust judgments. These findings provide empirical support for the theoretical distinction between trust dimensions and highlight the role of social behavior in shaping trust over repeated interactions.
Article: hri26main-p7366-p
Welcome to the Shop! Field Trial of a VLM-Powered Autonomous Shop Worker Robot
Sachi Edirisinghe,
Satoru Satake, and
Takayuki Kanda
(Kyoto University, Kyoto, Japan; ATR, Kyoto, Japan)
We integrated state-of-the-art vision–language models (VLMs) into a hat shop robot, enabling four visually grounded human shop-worker behaviors: 1) offering personalized greetings based on customers' visual aspects, 2) recognizing their shopping activities and delivering timely encouragement to try hats, 3) offering feedback on hat fit, and 4) responding to customer inquiries that contain deictic references to hats. We strategically utilized both local and cloud-based VLMs to develop the robot’s visual perception, leveraging their complementary capabilities. We report key insights from our development process, including our experience collaborating with a domain expert on robot interaction development. Results from an 8-day field trial at a hat shop show that the robot’s advanced visual perception capabilities made a positive impression on both customers and the shop manager. Customers were excited by the robot’s visually grounded remarks, and the manager adopted a complementary working style that leveraged these new capabilities.
Article: hri26main-p7476-p
An Ethnography of Restaurant Robots in Japan: Promises, Perceptions, and Impacts
Martim Brandão,
Anna Sharko,
Zoe Evans,
Wenxi Wu,
Atmadeep Ghoshal, and
Brian Tshuma
(King’s College London, London, UK; University of Oxford, Oxford, UK)
Robots are increasingly being used in restaurants to assist with service and increase efficiency. Yet, their impact on the daily work of restaurant workers, customers' perceptions, and robots' limitations are poorly understood - and so is the gap between these and official marketing narratives. In this paper we investigate the impact of restaurant robots in a set of restaurant chains in Japan, through a combination of in-person ethnography and analysis of online customer reviews and news articles. We show how robots are used in practice, how they structure work, and their impact on workers and customers. In particular, while we find robots to be well integrated and 'invisible', and largely well received by customers and management, we also find they lead to a customer-perceived loss of human contact, a restructuring of work, an incentive for short-staffing, deskilling of workers, and several technical challenges that are collectively addressed by workers and customers in work-like tasks. We compare these findings with marketing and management-led narratives, identifying gaps consistent with labor and power-centered critical studies.
Article: hri26main-p7760-p
Robots for Those in Transition: Exploring Mental Wellness through Design Probes
Minghe Lu,
Yu Xing,
Carlye Anne Lauff,
Hee Rin Lee, and
Ji Youn Shin
(University of Minnesota, Minneapolis, USA; Michigan State University, East Lansing, USA)
International students in emerging adulthood face unique challenges as they navigate life transitions and adapt to new environments. These challenges significantly affect their mental well-being. While previous HRI studies have explored robotic solutions to address anxiety through therapeutic guidance, positive psychology, and mood recognition, little is known about design features tailored to the challenges of transition. By deploying co-design probes with 18 international students experiencing mild symptoms of anxiety and depression, we investigate their lived experiences and coping strategies and translate these insights into design implications for social robots. Our findings indicate that participants envisioned robots that create a welcoming living environment through seamless design and provide a sense of agency through achievable tasks. These features were seen as particularly helpful in relieving day-to-day tension in effortless ways. Based on these findings, we discuss how social robots can better support transitions in emerging adulthood.
Article: hri26main-p7947-p
The Role of Agent’s Anthropomorphism in Shaping Phantom Costs
Benjamin Lebrun,
Christoph Bartneck, and
Andrew J. Vonasch
(University of Canterbury, Christchurch, New Zealand)
Individuals perceive phantom costs, such as ulterior motives and risks, when a person makes an unreasonably generous offer without sufficient explanation. Prior research relying exclusively on the Nao robot found similar effects, though smaller than with humans. To better understand these differences, we manipulated agent human-likeness across five robots and a human. Participants read a vignette in which the agent either offered a free parking spot (reasonable) or added an unjustified $10 incentive (unreasonably generous). They then decided whether to accept the offer, explained why they thought the agent made this offer, and rated the agent's anthropomorphism and perceived phantom costs. Results showed that unreasonably generous offers prompted participants to attribute more mind to the agents, increasing perceived phantom costs. Anthropomorphism also influenced phantom costs: Animacy and Disturbance increased them, while Intentionality and Sociability decreased them. This study advances our knowledge of phantom costs in HRI, suggesting that people adopt the intentional stance to explain a robot's behaviour—especially when it deviates from social norms—highlighting the need for careful anthropomorphic design of social robots to minimize the perception of phantom costs.
Article: hri26main-p8039-p
Aligning Task Goals before Execution: Insights from Diverse User Groups into Human–Robot Communication in Domestic Settings
Lesong Jia,
Yang Ye,
Breelyn Kane Styler, and
Na Du
(University of Pittsburgh, Pittsburgh, USA; Veterans Affairs Pittsburgh Healthcare System, Pittsburgh, USA)
Integrating domestic robots into everyday life requires not only reliable execution but also prior alignment of task goals between humans and robots. While prior research has examined input interfaces and feedback strategies, it has largely focused on objective performance metrics and often overlooked user variability. To address this gap, we conducted a survey study with 113 participants across four groups: adolescents, younger adults, older adults, and wheelchair users. The survey captured participants’ expectations of future robots (roles, embodiments, and concerns) and their preferences for instruction delivery and robot feedback before execution. Our results reveal both shared and group-specific patterns. Across groups, participants prioritized efficiency in instruction delivery and reliability in robot feedback. Regarding group differences: adolescents emphasized efficiency, wheelchair users valued transparency, and older adults may benefit from additional explanations of novel interaction technologies. Based on these findings, we derive stage-aware, context-sensitive, and group-adaptive design principles and recommendations to guide future robot interfaces.
Article: hri26main-p8152-p
DiSCo: Diffusion Sequence Copilots for Shared Autonomy
Andy Wang,
Xu Yan,
Brandon McMahan,
Michael Zhou,
Yuyang Yuan,
Johannes Y. Lee,
Ali Shreif,
Matthew Li,
Zhenghao Peng,
Bolei Zhou,
Yuchen Cui, and
Jonathan C. Kao
(University of California at Los Angeles, Los Angeles, USA)
Shared autonomy combines human user and AI copilot actions to control complex systems such as robotic arms. When a task is challenging, requires high dimensional control, or is subject to corruption, shared autonomy can significantly increase task performance by using a trained copilot to effectively correct user actions in a manner consistent with the user’s goals. To significantly improve the performance of shared autonomy, we introduce Diffusion Sequence Copilots (DiSCo): a method of shared autonomy with diffusion policy that plans action sequences consistent with past user actions. DiSCo seeds and inpaints the diffusion process with user-provided actions with hyperparameters to balance conformity to expert actions, alignment with user intent, and perceived responsiveness. We demonstrate that DiSCo substantially improves task performance in simulated driving and robotic arm tasks. Project website: https://sites.google.com/view/disco-shared-autonomy/
Article: hri26main-p8166-p
Don’t Say That! Proactive Support for Appropriate Power Use with Avatar Robot
Rui Chen,
Takashi Minato,
Jani Even, and
Takayuki Kanda
(Kyoto University, Kyoto, Japan; Advanced Telecommunications Research Institute International, Kyoto, Japan)
We propose a proactive teleoperation support system to help high-power individuals exercise power appropriately during hierarchical interactions. In our study, seniors (high-power individuals) remotely operated an avatar robot as operators, while juniors (low-power individuals) took part as exercisers. To ground the design, we conducted an observational study of hierarchical interactions, identifying three recurring challenges: perceived loss of power, escalation of negativity, and abandonment of power. Based on these findings, we proposed interaction policies and integrated them into a teleoperation system that provides operators with guidance for the appropriate use of power, aiming to prevent conflict and foster positive communication. We evaluated this system in a 2-hour study with 17 senior–junior pairs, comparing it to a baseline teleoperation system in which operators controlled the avatar robot directly without such guidance. Results showed that our system significantly improved operator satisfaction and reduced workload. Exercisers reported greater enjoyment and acceptance, with similar exercise counts in both conditions. Interviews revealed that the system broadened operators’ communication strategies and fostered more positive, supportive environments for low-power participants.
Article: hri26main-p8175-p
Risk-Taking Behavior in Human-Robot Teams: Collaboration vs. Competition
Katharina Wille,
Eva Wiese, and
Jairo Perez-Osorio
(KTH Royal Institute of Technology, Stockholm, Sweden; TU Berlin, Berlin, Germany; George Mason University, Fairfax, USA)
Robots are increasingly joining human teams, where collaboration and competition often coexist. However, their impact on human risk-taking remains unclear. We investigated whether collaborating with or competing against a humanoid robot influences risk-taking behavior and performance using the Balloon Analogue Risk Task (BART). In this task, participants pump a virtual balloon to earn points, but risk losing all their points if the balloon bursts. They interacted with a NAO robot in a pre-registered, mixed-design study (N=43), with interaction type (collaboration vs. competition) as a between-subjects factor and team context (solo vs. social) as a within-subjects factor. We found that social interaction increased risk behavior compared to playing alone. Competition fostered strategic risk-taking, with more pumps, fewer bursts, and higher scores resulting from optimal strategy adoption. In contrast, collaboration fostered exploratory risk-taking, with increased risk but without performance gains. Additional exploratory analysis revealed that men took more risks in collaboration. Our findings demonstrate that the type of social interaction, rather than the mere presence of a robot, shapes how humans take risks with robots.
Article: hri26main-p8191-p
The Robot Bookworm: Fostering Children’s Reading Motivation through Personalized Book Discussions
Elena Malnatsky,
Sobhaan ul Husan,
Kuhu Sinha,
Sofie Veld,
Rafaella van Nee,
Daniël Wijnhorst,
Shenghui Wang,
Koen Hindriks, and
Mike E.U. Ligthart
(Vrije Universiteit Amsterdam, Amsterdam, Netherlands; University of Twente, Twente, Netherlands)
We present the Robot Bookworm, a multi-session intervention co-designed with children and educators to foster reading motivation through personalized book discussions. The robot assigned each child a personally fitting book and engaged them in pedagogically structured discussions, with personalized book-aligned dialogic content selectively generated offline by a language model and moderated by people to ensure safety.
We compared a personalized book discussion condition with a book-neutral control in a four-session, large-scale user study in two primary schools (N = 101, 8-11 y.o.). The intervention significantly increased reader-book relatedness and reading enjoyment, particularly for children with below-ceiling baseline enjoyment, but had no effect on intrinsic motivation. At a one-year follow-up, the quantitative effects were not sustained. However, children reported perceived positive shifts in attitudes towards reading, which they attributed to the Robot Bookworm.
Article: hri26main-p8345-p
Is Robot Labor Labor? Delivery Robots and the Politics of Work in Public Space
EunJeong Cheon and
Do Yeon Shin
(Syracuse University, Syracuse, USA; University of Illinois at Chicago, Chicago, USA)
As sidewalk delivery robots become increasingly integrated into urban life, this paper begins with a critical provocation: Is robot labor labor? More than a rhetorical question, this inquiry invites closer attention to the social and political arrangements that robot labor entails. Drawing on ethnographic fieldwork across two smart-city districts in Seoul, we examine how delivery robot labor is collectively sustained. While robotic actions are often framed as autonomous and efficient, we show that each successful delivery is in fact a distributed sociotechnical achievement—reliant on human labor, regulatory coordination, and social accommodations. We argue that delivery robots do not replace labor but reconfigure it—rendering some forms more visible (robotic performance) while obscuring others (human and institutional support). Unlike industrial robots, delivery robots operate in shared public space, engage everyday passersby, and are embedded in policy and progress narratives. In these spaces, we identify robot privilege—humans routinely yielding to robots—and distinct perceptions between casual observers (“cute”) and everyday coexisters (“admirable”). We contribute a conceptual reframing of robot labor as a collective assemblage, empirical insights into South Korea’s smart-city automation, and a call for HRI to engage more deeply with labor and spatial politics to better theorize public-facing robots.
Article: hri26main-p8415-p
Plant-Inspired Robot Design Metaphors for Ambient HRI
Victor Nikhil Antony,
Adithya R N,
Sarah Derrick,
Zhili Gong,
Peter M. Donley, and
Chien-Ming Huang
(Johns Hopkins University, Baltimore, USA; Rice University, Houston, USA)
Plants offer a paradoxical model for interaction: they are ambient, low-demand presences that nonetheless shape atmosphere, routines, and relationships through temporal rhythms and subtle expressions. In contrast, most human–robot interaction (HRI) has been grounded in anthropomorphic and zoomorphic paradigms, producing overt, high-demand forms of engagement. Using a Research through Design (RtD) methodology, we explore plants as metaphoric inspiration for HRI; we conducted iterative cycles of ideation, prototyping, and reflection to investigate what design primitives emerge from plant metaphors and morphologies, and how these primitives can be combined into expressive robotic forms. We present a suite of speculative, open-source prototypes that help probe plant-inspired presence, temporality, form, and gestures. We deepened our learnings from design and prototyping through prototype-centered workshops that explored people’s perceptions and imaginaries of plant-inspired robots. This work contributes: (1) a set of plant-inspired robotic artifacts; (2) designerly insights on how people perceive plant-inspired robots; and (3) design considerations to inform how to use plant metaphors to reshape HRI.
Article: hri26main-p8635-p
Learning Human Preferences over a Human-Robot Collaboration Based on Explicit and Implicit Human Feedback
Kate Candon,
Qiping Zhang,
Alexander Lew,
Houston Claure,
Lena Qian,
Alyssa Quarles,
Chayan Sarkar, and
Marynel Vázquez
(Yale University, New Haven, USA; TCS Research, New Delhi, India)
There is significant interest in enabling robots to learn to perform tasks directly from interactions with non-expert users. Typically, a human serves as a teacher whose only task is to provide feedback to a robot learner. However, in real-world human-robot collaborations, the human often assists with the task while also offering feedback. Our key insight is that we can extract additional, implicit feedback from the human’s actions during the collaboration to augment the robot learning process. Under the assumption of fixed-role assignments, we first propose to formalize human preferences over a human-robot collaboration as a shared set of parameters encoding alignment between two reward functions: one that drives human behavior, and another that should direct robot behavior. This allows us to extract implicit feedback from an interaction by reasoning about the human’s actions in the task as actions that reveal the human’s preferences. Then, we combine this implicit feedback with traditional explicit human feedback to facilitate estimating the human’s preferences. We evaluated our proposed approach for Preference learning from Implicit and Explicit feedback (PIE) in simulations and with real users in a cooking scenario. Our simulation results indicate that combining multiple modalities of human feedback improves a robot’s ability to estimate human preferences over the collaboration, with a similar trend observed in real-world evaluations. These findings highlight a promising direction for enabling robots to adapt to a user’s preference model more quickly, thereby reducing the amount of time a person must spend teaching a robot.
Article: hri26main-p8760-p
Social Robots as Active Safeguards for Children’s Welfare: Community Stakeholder Insights and Design Recommendations
Nida Itrat Abbasi,
Leigh Levinson,
Selma Šabanović, and
Hatice Gunes
(University of Cambridge, Cambridge, UK; Indiana University at Bloomington, Bloomington, USA)
Integrating robots into children's spaces demands careful attention to the likelihood that children may disclose information indicating that their welfare is at risk. To consider the active involvement of social robots in safeguarding children, we interviewed 22 community stakeholders who work with children professionally in education, social services, healthcare or legal services in the UK and the USA. All stakeholders saw value in having robots create playful, safe spaces that can help alleviate the anxiety and reduce the emotional burden of the safeguarding process. However, they worried about a robot's ability to handle disclosures, interpret context and not overburden welfare services. We present hypothetical roles that robots could adopt within the safeguarding pipeline and map them to design considerations that emphasise the need for relational understanding, structured thresholds, and transparent data collection. Together, these considerations provide directions for ethical, child-centred robot technologies in safeguarding youth.
Article: hri26main-p8782-p
How Human Motion Prediction Quality Shapes Social Robot Navigation Performance in Constrained Spaces
Andrew Stratton,
Phani Teja Singamaneni,
Pranav Goyal,
Rachid Alami, and
Christoforos Mavrogiannis
(University of Michigan at Ann Arbor, Ann Arbor, USA; LAAS - CNRS - University of Toulouse, Toulouse, France; Inria, Nancy, France)
Motivated by the vision of integrating mobile robots closer to humans in warehouses, hospitals, manufacturing plants, and the home, we focus on robot navigation in dynamic and spatially constrained environments. Ensuring human safety, comfort, and efficiency in such settings requires that robots are endowed with a model of how humans move around them. Human motion prediction around robots is especially challenging due to the stochasticity of human behavior, differences in user preferences, and data scarcity. In this work, we perform a methodical investigation of the effects of human motion prediction quality on robot navigation performance, as well as human productivity and impressions. We design a scenario involving robot navigation among two human subjects in a constrained workspace and instantiate it in a user study (N=80) involving two different robot platforms, conducted across two sites from different world regions. Key findings include evidence that: 1) the widely adopted average displacement error is not a reliable predictor of robot navigation performance and human impressions; 2) the common assumption of human cooperation breaks down in constrained environments, with users often not reciprocating robot cooperation, and causing performance degradations; 3) more efficient robot navigation often comes at the expense of human efficiency and comfort.
Article: hri26main-p8825-p
Sympathy as a Lens for Human–Robot Interaction: Analysing YouTube Responses to Robot Abuse
Vlatka Tolj,
Caterina Neef, and
Barbara Bruno
(Karlsruhe Institute of Technology, Karlsruhe, Germany)
When witnessing the abuse of others, humans generally exhibit emotional responses. A large number of studies in Human-Robot Interaction (HRI) have shown that humans also react with sympathy when robots are abused, but most of these insights come from controlled laboratory studies using short videos and student samples. To complement existing research with observations drawn from real-world online discussions, this paper presents a sentiment analysis of 103,413 YouTube comments on videos depicting abuse of animal-like, humanoid, and cart-shaped robots. To validate our sentiment classification, we analysed the comments using a lexicon-based tool, two fine-tuned language models, and three general-purpose state-of-the-art large language models (LLMs). The comparison yielded interesting results: LLMs generally classified science-fiction–related comments, e.g., references to dystopian TV shows, as negative, while lexicon and fine-tuned models mainly labelled them as neutral. The six models agreed on the classification of a total of 27,427 comments, which we used to explore the sentiment expressions occurring across videos featuring robots with different physical forms.
Our findings provide large-scale, ecologically valid insights into how emotional responses to robot abuse are expressed and analysed in online video platforms.
Article: hri26main-p8853-p
Learning by Doing: Teacher Professional Development Research in the Age of Social Robots
Alice Nardelli,
Anna Allegra Bixio,
Alice Stopponi,
Maria Filomia,
Alessia Bartolini,
Marco Milella,
Antonio Sgorbissa, and
Carmine Tommaso Recchiuto
(University of Genoa, Genoa, Italy; University of Perugia, Perugia, Italy)
The introduction of social robots in preschool settings has become a common research strategy for addressing educational challenges. Although teachers and educators play a central role in classroom dynamics, they are often underrepresented in studies on educational robots. Often, robots are presented as “black-boxes”, with little attention paid to providing teachers with dedicated training. This study describes the design and implementation of the Teacher Professional Development Research (TPDR) as a structured method for integrating social robots into early education, supporting teachers and educators. TPDR is an established educational practice that addresses pedagogical issues by engaging teachers as actors in the research process. Our project involved deploying a robot in four preschools and one nursery with a multicultural setting, primarily to foster intercultural integration. Both quantitative and qualitative data were collected to evaluate the impact of this approach on teachers' attitudes and willingness to adopt the robot. Findings indicate that the teachers gained a greater awareness of the robot’s social presence and a clearer understanding of its educational potential. There was also an overall positive shift in their intercultural sensitivity.
Article: hri26main-p8927-p
“I trust you more than me on this”: Collaborating with a Social Robot during Open-Ended Problem Solving
Veronica Grosso,
Kerou Zhou,
Lakshmi Sanjana Challagundla, and
Joseph E. Michaelis
(University of Illinois at Chicago, Chicago, USA)
In human–robot collaboration, people's perceptions of robot abilities can shape how they interact with the robot. Drawing on insights from a pilot study, we hypothesized that a robot's backstory and gaze patterns might strengthen perceptions of competence. To examine this, we conducted a 2×2 (backstory × gaze patterns) mixed-method study in a museum-based lab setting, where 66 participants completed two open-ended gift-box assembly tasks (makeup vs. pet-owner items) with a Misty robot.
Contrary to our expectations and much of the HRI literature, we found no significant differences across experimental conditions. However, participants' interactions revealed important insights about collaboration dynamics. Two distinct collaboration levels emerged: while a quarter of participants maintained low levels of collaboration, the majority engaged in highly collaborative exchanges with the robot.
These collaboration levels were associated with different perceptions of warmth and competence and different interaction patterns. A thematic analysis showed that in low-collaboration cases, participants' self-efficacy (i.e. the confidence they held in their own task competence) did not appear to influence the interaction. However, within high-collaboration cases, self-efficacy shaped how participants engaged with the robot. Those with high self-efficacy critically evaluated the robot's contributions against their own expertise, whereas those with low self-efficacy leaned more heavily on the robot's guidance.
Across both groups, several factors seemed to enhance collaboration: the robot's answer justifications, occasional pushback from the robot, and participants' growing familiarity with the interaction. We discuss how future systems should account for users' confidence levels, balance responsiveness with constructive disagreement, and support adaptation to foster more balanced and effective collaboration.
Article: hri26main-p9050-p
Emotional Entanglements and Emotional Sustainability in HRI
Hugo Simão,
Long-Jing Hsu,
Bengisu Cagiltay,
Isabel Neto,
Christopher D. Wallbridge,
Laura Santos,
Filipa Rocha,
Leigh Levinson,
Joao Silva Sequeira,
Tiago Guerreiro, and
Patricia Alves-Oliveira
(University of Lisbon, Lisbon, Portugal; University of Michigan at Ann Arbor, Ann Arbor, USA; Koç University, Istanbul, Türkiye; Cardiff University, Cardiff, UK; Politecnico di Milano, Milan, Italy; Indiana University at Bloomington, Bloomington, USA; Instituto Superior Técnico, Lisbon, Portugal)
Human-Robot Interaction (HRI) research in real-world settings may lead to unanticipated, emotionally charged moments. While impacts of these moments on participants are reported in the literature, researchers’ emotions, which can affect participants’ experiences, are often left unreported. Learning from these moments is essential for advancing HRI quality and real-world deployment success. We introduce "Emotional Entanglements" as a lens in HRI to define a researcher's capacity to anticipate, absorb, respond to, and recover from emotionally impactful events. Collecting testimonials using collaborative autoethnography from eleven researchers, we surface recurring emotional entanglements experienced in HRI studies, including tears with mixed meaning, participant attachment and loss upon robot withdrawal, and consequential participant decisions attributed to the robot, as well as how researchers navigated them amidst protocol constraints. This paper provides an actionable guide to "Emotional Sustainability in HRI", raising awareness of these often unreported situations and offering strategies for mitigation.
Article: hri26main-p9068-p
Beyond Conventional Robotic Forms: A Scoping Review of Artefact-Inspired Robots
Majken Kirkegaard Rasmussen and
Eléni Economidou
(Aarhus University, Aarhus, Denmark)
Most robotics frameworks classify robot appearance into three morphological categories: 1) anthropomorphic, 2) zoomorphic, and 3) technical, while often overlooking a fourth category: artefact-inspired robots. In this scoping review, we foreground this underrepresented category, bringing together research that exemplifies it as an alternative design strategy, challenging the assumed inevitability of the three established robot design morphologies. The review offers an overview of artefact-inspired robots, contributing insights on the types of artefacts the robots were inspired by, the output modalities they primarily use to communicate, and an analysis of the visibility of the robotic capabilities based on affordances. Through this work, we invite scholars and practitioners to expand the design space of robot appearance by drawing inspiration from the forms and materialities of domestic objects, moving towards robots that blend more seamlessly into everyday environments.
Article: hri26main-p9117-p
Towards Intelligible Human-Robot Interaction: An Active Inference Approach to Occluded Pedestrian Scenarios
Kai Chen,
Yuyao Huang, and
Guang Chen
(Tongji University, Shanghai, China)
The sudden appearance of occluded pedestrians presents a critical safety challenge in autonomous driving. Conventional rule-based or purely data-driven approaches struggle with the inherent high uncertainty of these long-tail scenarios. To tackle this challenge, we propose a novel framework grounded in Active Inference, which endows the agent with a human-like, belief-driven mechanism. Our framework leverages a Rao-Blackwellized Particle Filter (RBPF) to efficiently estimate the pedestrian's hybrid state. To emulate human-like cognitive processes under uncertainty, we introduce a Conditional Belief Reset mechanism and a Hypothesis Injection technique to explicitly model beliefs about the pedestrian's multiple latent intentions. Planning is achieved via a Cross-Entropy Method (CEM) enhanced Model Predictive Path Integral (MPPI) controller, which synergizes the efficient, iterative search of CEM with the inherent robustness of MPPI. Simulation experiments demonstrate that our approach significantly reduces the collision rate compared to reactive, rule-based, and reinforcement learning (RL) baselines, while also exhibiting explainable and human-like driving behavior that reflects the agent's internal belief state.
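As background on the planning component named in this abstract: the cross-entropy method iteratively samples candidates, keeps an elite subset, and refits its sampling distribution to those elites. Below is a minimal generic sketch of that loop only; all function names and parameters are our own, and the paper's controller additionally fuses this search with MPPI's importance-weighted updates, which are omitted here.

```python
import numpy as np

def cem_plan(cost_fn, horizon, iters=5, samples=64, elite_frac=0.1, seed=0):
    """Cross-entropy method (CEM) planner sketch: sample candidate action
    sequences from a Gaussian, keep the lowest-cost "elite" fraction, and
    refit the Gaussian to those elites before sampling again."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(horizon), np.ones(horizon)
    n_elite = max(1, int(samples * elite_frac))
    for _ in range(iters):
        candidates = rng.normal(mean, std, size=(samples, horizon))
        costs = np.array([cost_fn(c) for c in candidates])
        elites = candidates[np.argsort(costs)[:n_elite]]
        mean = elites.mean(axis=0)        # refit toward the low-cost region
        std = elites.std(axis=0) + 1e-6   # keep a little exploration
    return mean

# Toy usage: refine a 10-step action sequence toward a known target profile.
target = np.linspace(0.0, 1.0, 10)
plan = cem_plan(lambda a: float(np.sum((a - target) ** 2)), horizon=10)
```

In a driving stack the cost function would encode collision risk under the agent's current beliefs about pedestrian intent, rather than a fixed target as in this toy.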
Artifacts Available
Article: hri26main-p9133-p
GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant for Blind Travelers
Hochul Hwang,
Soowan Yang,
Jahir Sadik Monon,
Nicholas A. Giudice,
Sunghoon Ivan Lee,
Joydeep Biswas, and
Donghyun Kim
(University of Massachusetts at Amherst, Amherst, USA; DGIST, Daegu, Republic of Korea; University of Maine, Orono, USA; University of Texas at Austin, Austin, USA)
While commendable progress has been made in user-centric research on mobile assistive systems for blind and low-vision (BLV) individuals, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, four white cane users, nine guide dog trainers, and one O&M trainer, along with 15+ hours of observing guide dog–assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using a robot. Specifically, the system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a relative pose estimator to compute navigation actions—all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite noticeable scene variations between teach and repeat runs. A user study with 3 guide dog handlers and 1 guide dog trainer further confirmed the system’s feasibility, marking, to our knowledge, the first demonstration of a quadruped mobile system guiding a route in a manner comparable to guide dogs.
Preprint
Video
Article: hri26main-p9148-p
Practical Insights into Designing Context-Aware Robot Voice Parameters in the Wild
Amy Koike,
Yuki Okafuji, and
Sichao Song
(University of Wisconsin-Madison, Madison, USA; CyberAgent, Tokyo, Japan)
Voice is an essential modality for human-robot interaction (HRI). The way a robot sounds plays a central role in shaping how humans perceive and engage with it, influencing factors such as intelligibility, understandability, and likability. Although prior work has examined voice design, most studies occur in controlled labs, leaving uncertainty about how results translate to real-world settings. To address this gap, we conducted two naturalistic deployment studies with a guidance robot in a shopping mall: (1) in-depth interviews with six participants, and (2) an eight-day field deployment using a 3×3 design varying speech rate and volume, yielding 725 survey responses. Our results show how real-world context shapes voice perception and inform adaptive, context-aware voice design for social robots in public spaces.
Article: hri26main-p9334-p
Don’t Park There! Learning Socially-Appropriate Robot Parking Spots in the Home
De’Aira Bryant,
Apaar Sadhwani,
Hanxiao Fu,
William D. Smart, and
Dylan F. Glas
(Amazon Lab126, Sunnyvale, USA; Oregon State University, Corvallis, USA)
As autonomous social robots become more prevalent in home environments, they must decide where to position themselves within many different types of rooms or spaces, balancing accessibility with staying out of the way. This paper presents a machine learning approach to modeling user preferences for robot parking spots in the home using standard 2D occupancy maps. Our method learns spatial patterns from the information available in the occupancy maps and user-annotated floorplans without requiring specialized inputs. We evaluate the approach using floorplan data from 84 users who provided parking spot preferences after living with and evaluating a social robot in their homes for at least two weeks. Our method significantly outperforms a state-of-the-art baseline focused exclusively on avoiding walking paths. We demonstrate how the approach extends to additional map features and share insights about the types of preference patterns learned by the model. This contribution provides a framework that can incorporate new environmental inputs as robot perception capabilities evolve.
Article: hri26main-p9335-p
Input Matters: How Telepresence Control Devices Affect Performance, Sense of Control, and User Experience
Pratyusha Ghosh,
Sachiko Matsumoto,
Vivek Gupte,
Robert Bloom,
Alex Chow,
Nandini Desai,
Donovan Le,
Tania K. Morimoto, and
Laurel D. Riek
(University of California at San Diego, La Jolla, USA)
Control devices are essential in shaping the user experience of telepresence robot operators. This is especially true for mobile telemanipulator robots (MTRs), which offer greater opportunities for social interaction compared to mobile or tabletop telepresence robots, but are more difficult to control. This increased difficulty can diminish key aspects of user experience, such as sense of control (SoC), which many users value over performance. However, the usability of telepresence control devices has been critically overlooked in HRI research. In a between-subjects study (n = 63), we investigate how three widely used control devices (mouse, gamepad, haptic controller) affect critical aspects of teleoperation: perceived SoC, safety, usability, and cognitive load; and task performance. Participants remotely controlled an MTR (Stretch) to wait on a customer in a social telepresence setting (cafe). We found that the type of control device significantly impacts both task performance and SoC in direct teleoperation, with the mouse having the best performance/SoC. There were notable tradeoffs in speeds, errors, and task completion times, and participants with higher SoC performed significantly better than those with lower SoC. We provide participant-informed suggestions for future control device design, such as allowing users to reconfigure its physical form, input sensitivity, and controls to align with video game conventions. By foregrounding the role of control devices, our work contributes to ongoing conversations in HRI around the tradeoffs between autonomy, usability, and user agency in social telepresence.
Article: hri26main-p9383-p
Opportunities to Talk, Negotiate, and Laugh: Robot Behaviors That Shape Repeated Interactions in Groups of Older Adults
Sarah Gillet,
Yujing Zhang,
Donald McMillan,
Nicole Salomons, and
Iolanda Leite
(KTH Royal Institute of Technology, Stockholm, Sweden; Stockholm University, Stockholm, Sweden; Imperial College London, London, UK)
Feeling socially connected is important for personal well-being, yet many older adults report increasing loneliness and decreasing social connections. We explored how robots and their behaviors can support group interactions and foster social participation among older adults in a community center setting over repeated interactions. We developed a semi-autonomous collaborative and discussion-based variant of the game "With Other Words" for groups of three to four older adults and two robots. A facilitator robot (Furhat) mediated discussions using gaze and verbal support, while a guesser robot (Misty) attempted to guess the words that group members described 'with other words'. We invited 34 older adults aged 65+ to play the game in groups of three or four, three times over two to five weeks. An explorative mixed-method analysis, combining quantitative metrics with Ethnomethodological Conversation Analysis (EMCA), shows that robot gaze and verbal behaviors as well as negotiations around "wrangling" the guesser robot encouraged participation in the game. Further, verbal supporting behaviors elicited shared laughter but also led to breakdowns. While no direct significant improvement in social connectedness was observed, this work contributes to our understanding of how robot behaviors might shape interactions among older adults.
Article: hri26main-p9450-p
Real-Time Hand Pose Tracking using 6-Axis IMUs
Anik Sarker,
Ziyi Kou,
Ergys Ristani,
Li Guan, and
Taylor Niehues
(Meta Reality Labs Research, Redmond, USA)
We introduce a real-time system for tracking hand pose using 6-axis inertial measurement units (IMUs) without requiring magnetometers or external sensors. Accurate hand pose tracking with only 6-axis IMUs is known to be fundamentally challenging due to the absence of a shared heading reference, leading to severe drift and inter-sensor misalignment. To overcome these limitations, we propose a hybrid method that combines a learning-based pose estimation approach with a late-stage Extended Kalman Filter (EKF). The learning-based model estimates noisy yet reasonable hand poses and is trained with drift-insensitive features like gravity vectors and wrist-relative gyroscope signals. The EKF, in turn, appropriately filters the noise from the pose estimates, leading to robust tracking. Evaluated on a 12-hour dataset spanning 23 interaction tasks across 10 participants, our system improves joint angle accuracy by 40% over an EKF-only baseline and by 18% over a learning-only approach, achieving a mean joint error below 10°. The resulting framework enables real-time hand tracking invariant to magnetic perturbations, occlusion, or lighting changes, and is well suited for robotics, human–robot interaction (HRI), and human-computer interaction (HCI) applications.
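The hybrid recipe described here (a learned estimator producing noisy but reasonable poses, followed by a filter that smooths them) can be illustrated with a deliberately simplified scalar filter. This is not the paper's EKF: a real EKF propagates a state covariance and derives its gain from the noise models, whereas this sketch uses a fixed gain purely to show the predict/correct smoothing idea.

```python
def smooth_pose_stream(noisy_estimates, gain=0.3):
    """Constant-gain predict/correct filter over per-frame pose estimates:
    each step nudges the previous state toward the new noisy measurement.
    A deliberate simplification of a late-stage EKF, for illustration only."""
    state = noisy_estimates[0]
    smoothed = [state]
    for z in noisy_estimates[1:]:
        state = state + gain * (z - state)  # blend prediction with measurement
        smoothed.append(state)
    return smoothed

# A step change in the measurements is tracked gradually rather than jumped
# to, suppressing frame-to-frame jitter at the cost of some lag.
```

The gain sets the jitter/lag trade-off; an EKF computes this blend weight adaptively per step from the predicted state covariance and measurement noise.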
Video
Article: hri26main-p9491-p
Robot-Assisted Exploration Decisions during a Planetary Analog Field Science Campaign
Ian C. Rankin,
Shipeng Liu,
Freya Whittaker,
Sean Buchmeier,
Liam Bouffard, and
Cristina G. Wilson
(Oregon State University, Corvallis, USA; University of Southern California, Los Angeles, USA)
We present a field deployment of a collaborative scientist-robot system that enables improved scientific gain for planetary science missions. In this system, the scientist interacts with the autonomy through a computer interface to specify prior disciplinary knowledge and hypothesis information, and to refine the mission objectives using preferences and ratings. The goal of this interaction is to use autonomy where it works best, optimizing robot data collection paths under a set of constraints, while enabling scientists to communicate priors and objectives in a straightforward and efficient manner. The system was deployed with a quadruped during a planetary analog field science mission at White Sands National Park, New Mexico, to support the investigation of the surface and shallow subsurface properties of the Martian analog dunes. The system was used to generate robot paths for three different mission scientists, who had varying priors and objectives. A quadruped robot executed the path, providing data on the stiffness of the surface measured through ground-leg interactions. The generated dense stiffness measurements were used by scientists to select refined locations for follow-up data collection. While we found that more work is needed to fully incorporate human priors, our deployed system enabled scientists to collect mission-critical data more efficiently, and they were able to adapt the system to their science needs.
Article: hri26main-p9591-p
Navigation beyond Wayfinding: Robots Collaborating with Visually Impaired Users for Environmental Interactions
Shaojun Cai,
Nuwan Janaka,
Ashwin Ram,
Janidu Shehan,
Yingjia Wan,
Kotaro Hara, and
David Hsu
(National University of Singapore, Singapore, Singapore; City University of Hong Kong, Kowloon Tong, Hong Kong; Saarland University - Saarland Informatics Campus, Saarbrücken, Germany; University of Moratuwa, Moratuwa, Sri Lanka; Institute of Psychology at Chinese Academy of Sciences, Beijing, China; Singapore Management University, Singapore, Singapore)
Robotic guidance systems have shown promise in supporting blind and visually impaired (BVI) individuals with wayfinding and obstacle avoidance. However, most existing systems assume a clear path and do not support a critical aspect of navigation—environmental interactions that require manipulating objects to enable movement. These interactions are challenging for a human–robot pair because they demand (i) precise localization and manipulation of interaction targets (e.g., pressing elevator buttons) and (ii) dynamic coordination between the user’s and robot’s movements (e.g., pulling out a chair to sit). We present a collaborative human–robot approach that combines our robotic guide dog’s precise sensing and localization capabilities with the user’s ability to perform physical manipulation. The system alternates between two modes: lead mode, where the robot detects and guides the user to the target, and adaptation mode, where the robot adjusts its motion as the user interacts with the environment (e.g., opening a door). Evaluation results show that our system enables navigation that is safer, smoother, and more efficient than both a traditional white cane and a non-adaptive guiding system, with the performance gap widening as tasks demand higher precision in locating interaction targets. These findings highlight the promise of human–robot collaboration in advancing assistive technologies toward more generalizable and realistic navigation support.
Article Search
Article: hri26main-p9603-p
Desirability of Proactive Robots: A User Study on Spoken Interaction Initiation
Anargh Viswanath and
Hendrik Buschmeier
(Bielefeld University, Bielefeld, Germany)
A critical aspect in proactive human–robot interaction is deciding when and how a robot should initiate spoken interaction. The nature of this initiation can shape whether an encounter feels natural and supportive, or intrusive and unwelcome. Often, technological development prioritises system capabilities, paying limited attention to the subjective experience of communication, which is vital for designing sociable robots. This paper presents the findings from an empirical study investigating user preferences and the perceived desirability of spoken interaction initiation with a service robot in a household setting. In a video-based study with 239 participants, we compared reactive person-initiated, reactive robot-initiated, and proactive robot-initiated modes. The results revealed clear patterns, with more than half of participants favouring reactive person-initiated interaction, followed by proactive robot-initiated interaction. By combining thematic and statistical analyses, we found that user preferences are shaped more by a general propensity towards interaction itself than by demographics or specific personality traits. The analysis also reflected a tension in user preferences between the comfort of privacy and control and the appeal of naturalness and intelligent support. These results provide evidence for the design of initiation strategies that flexibly adapt to different user profiles and contexts.
Article Search
Article: hri26main-p9668-p
Immersive Social Teleoperation Interface with Semi-automatic Ingroup Navigation for Intuitive Communication
Akitomo Takeda,
Stela Hanbyeol Seo,
Satoru Satake, and
Takayuki Kanda
(Kyoto University, Kyoto, Japan; ATR, Souraku-gun, Japan)
We introduce a novel immersive social teleoperation interface that supports intuitive social interaction and semi-automatic locomotion simultaneously. Teleoperating robots in complex, human-centric social environments poses a significant challenge: the operator must manage intricate navigation while engaging in natural social interaction. This dual task imposes a high cognitive load, hindering the fluidity and quality of social interaction. Existing systems typically prioritize either task-oriented teleoperation control or stationary social interaction, failing to integrate dynamic locomotion with social engagement effectively. To overcome these limitations, our system combines a head-mounted display with a wide-angle, 270-degree video feed, supporting extensive situation awareness and a strong sense of presence to foster an immersive and socially aware experience. Our interface handles low-level navigation of the robot: the operator points at a place to go or selects a person to follow, and can thereby focus on high-level goals (social interactions). We evaluated our interface through a rigorous field experiment, using a testbed (remote guide) scenario developed through iterative pilot studies in a real-world shopping mall. Our findings demonstrate a statistically significant improvement in the operator's task performance. We report further findings and discuss limitations and future improvements. In short, our novel interface, integrating immersive visualization with autonomous navigation, enables operators to achieve more intuitive and engaging social interaction in dynamic remote social environments.
Article Search
Article: hri26main-p9759-p
Designing Care-fully: Robots for Acute Cancer Care
Sandhya Jayaraman,
Pratyusha Ghosh,
Soyon Kim,
Soham Satyadharma,
Angelique Taylor,
Christopher Coyne, and
Laurel D. Riek
(University of California at San Diego, La Jolla, USA; Cornell University, New York, USA; University of California at San Diego, San Diego, USA)
Patients with cancer (PwC) have a hard time getting prompt treatment in acute care settings, and feel unseen, unheard, and neglected. This is due to systemic problems: worldwide, Emergency Department (ED) healthcare workers (HCWs) are overworked and EDs are understaffed. Robots will not fix these problems; however, prior work suggests if well-designed and contextualized, they may support cancer care. Based on longstanding collaborations with PwC and ED HCWs, in this paper we report on an exploration of the design space of social robots for acute cancer care. Using a care ethics lens, we found robots can be uniquely positioned to amplify compassion within deeply human care relationships through their social presence, while performing routine tasks, such as patient monitoring. However, participants suggested the human experiences of pain and distress may remain elusive for robots to engage with meaningfully. Our work reveals HCWs and PwC saw robots as means to expand relational care in the ED, and explores how future HRI research may meaningfully support these care relationships.
Article Search
Article: hri26main-p9835-p
Comparing Robots and Non-robot Phygital Artefacts in Children’s Storytelling via a Systematic Review
Rosella Gennari,
Muhammad Bilal Khan,
Alessandra Melonio, and
Maria Angela Pellegrino
(Free University of Bozen-Bolzano, Bolzano, Italy; Ca’ Foscari University of Venice, Venice, Italy; Università degli Studi di Salerno, Fisciano, Italy)
Storytelling is a widely explored educational practice. Within Human–Robot Interaction (HRI), robots are extensively employed for storytelling. In parallel, non-robot phygital artefacts — physical objects augmented with digital technologies — have been considered for supporting interaction and collaboration. However, systematic comparisons between these approaches remain limited. This paper presents a systematic review of 135 out of 1,040 studies (2014–2025) involving participants under 18. The review compares robots and non-robot phygital artefacts used in storytelling. Studies were coded across analytic lenses on media, participation, collaboration, learning goals, and Artificial Intelligence support, enabling age-stratified comparison. Findings reveal that non-robot phygital artefacts more consistently foster peer collaboration and creative ownership, while robots primarily contribute social presence and novelty, often structuring participation sequentially. The review highlights implications for future HRI design, suggesting that robots should complement alternative phygital artefacts by promoting openness, adaptability, and collaboration in storytelling.
Article Search
Info
Article: hri26main-p9966-p
Short Paper Contributions
BehaviorKit: A Multi-modal Real-Time Behavior Analysis Library for Robots
William Valentine,
Selma Šabanović,
David Crandall, and
Weslie Khoo
(Rose-Hulman Institute of Technology, USA; Indiana University, USA)
We present BehaviorKit, an open-source plug-and-play module that enables the addition of real-time behavior analysis using deep learning to any robot with internet access. The library bundles together gaze tracking, real-time transcription, textual sentiment analysis, facial emotion (valence and arousal) estimation, and face and pose landmark localization. The library is designed to be run on a GPU-powered computer, either on the robot or externally, to process the incoming visual and auditory inputs that the robot receives. The library connects via WebSocket to the robot, which receives the processed outputs from all of the models. The WebSocket client can easily connect to a ROS-powered robot, or a custom client can be written to adapt to any robot; alternatively, the library can also be run between two laptop computers. A small dataset is provided to test the framework. By packaging together and optimizing many commonly used models, we hope to enable easier access to high-performing behavior models for the HRI community. The code is available at https://github.com/IUB-RHouse/BehaviorKit.
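The plug-and-play pattern the abstract describes — a robot-side client receiving processed model outputs and routing them to behavior code — can be sketched as follows. This is a minimal illustration only: the message keys (`gaze`, `sentiment`) and handler names are hypothetical, not BehaviorKit's actual schema, and the WebSocket transport is omitted for brevity.

```python
import json

# Hypothetical handlers a robot client might register per model output.
def on_gaze(payload):
    return f"gaze at {payload['target']}"

def on_sentiment(payload):
    return f"sentiment {payload['label']}"

HANDLERS = {"gaze": on_gaze, "sentiment": on_sentiment}

def dispatch(message: str):
    """Parse one JSON message from the analysis server and route each
    model output to its registered handler, ignoring unknown keys."""
    outputs = json.loads(message)
    return [HANDLERS[k](v) for k, v in outputs.items() if k in HANDLERS]

# Example: a single frame of combined model outputs.
msg = json.dumps({"gaze": {"target": "user"},
                  "sentiment": {"label": "positive"},
                  "pose": {"landmarks": []}})
print(dispatch(msg))
```

Ignoring unknown keys lets the analysis server add models without breaking older robot clients.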
Article Search
Article: hri26short-phrisc1010-p
A Custom Web Application to Control NAO using Hypertext Transfer Protocol Secure
Trenton Schulz and
Claudia-Andreea Badescu
(Norwegian Computing Center, Norway)
We present a web application for controlling NAO V5 and NAO V6 robots using Hypertext Transfer Protocol Secure (HTTPS). The application was designed for a study in a special school. The school staff have found the application easy to use and versatile, and it may be useful to other researchers or people interested in controlling NAO. The HTTPS constraint and the locked-down nature of NAO introduced additional development challenges, and the solutions to these challenges are worth sharing with the HRI community. We present the characteristics of the web application, implementation details, how it has been set up, and how to use it. Although the application is usable in its current form, there are still things that can be improved to make it more useful in other contexts. We therefore document how the remote control can be extended and potential starting points for improvement.
Article Search
Info
Article: hri26short-phrisc1016-p
ZTL: Lightweight Communication Patterns for HRI
Patrick Holthaus,
Trenton Schulz,
Lewis Riches,
Claudia-Andreea Badescu, and
Farshid Amirabdollahian
(University of Hertfordshire, UK; Norwegian Computing Center, Norway)
Human-robot interaction (HRI) programmers often struggle to operate older robot hardware due to the short support period provided by manufacturers and difficulties integrating modern software solutions. This paper introduces the ZTL Task Library (ZTL), a lightweight communication framework and protocol designed to decouple robot hardware from the operating platform via socket communication, thereby extending robot lifetime. We present a task-based communication protocol that facilitates the co-design of robot behaviours with non-programming experts. Across different platforms, our approach has been shown to effectively mitigate incompatibilities between middlewares, simplify control and usability, and allow multiple devices to be addressed simultaneously.
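A task-based protocol of the kind the abstract describes can be reduced to named tasks with parameters serialised over a socket. The sketch below is illustrative only — the field names and wire format are assumptions, not ZTL's actual protocol.

```python
import json

def encode_task(name: str, **params) -> bytes:
    """Serialise a task request as one newline-delimited JSON message.
    (Field names here are illustrative, not ZTL's actual protocol.)"""
    return (json.dumps({"task": name, "params": params}) + "\n").encode()

def decode_task(raw: bytes):
    """Inverse of encode_task: recover the task name and its parameters."""
    msg = json.loads(raw.decode())
    return msg["task"], msg["params"]

# A behaviour co-designed with a non-programmer might reduce to a named
# task plus parameters, sent to whichever robot platform is listening:
wire = encode_task("say", text="Hello", volume=0.8)
task, params = decode_task(wire)
print(task, params)
```

Because only the wire format is shared, the same task message can be replayed against different middlewares, which is the decoupling the library aims for.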
Article Search
Article: hri26short-phrisc1032-p
Neutral by Default? Replicating User Vocal Responses to Negative Affective Cues in Conversational Agents
Yong Ma,
Yuchong Zhang,
Di Fu,
Stephanie Zubicueta Portales, and
Morten Fjeld
(University of Bergen, Norway; KTH Royal Institute of Technology, Sweden; University of Surrey, UK; NTNU, Norway; Chalmers University of Technology, Sweden)
Conversational agents (CAs) increasingly detect users’ emotions, yet deciding how to respond, especially to negative affect, remains a central design challenge. We conducted a role-switching study in which participants replied as the CA to simulated users expressing anger, sadness, or fear. Results reveal systematic, gender-linked patterns: most male participants favored a neutral, affect-balanced stance and prioritized clarification or task progress, whereas most female participants produced a wider range of non-neutral responses, more often using explicit empathy, reassurance, and reflective listening. We also observe differences in de-escalation phrasing, validation timing, and follow-up questioning across scenarios. These findings indicate that strategies for handling negative emotions vary with user characteristics and context. Based on these findings, we argue for adaptive CA response policies that calibrate first-turn acknowledgment and information-gathering, tailoring prosody and wording to emotional context in order to support de-escalation, perceived understanding, and user trust.
Article Search
Info
Article: hri26short-phrisc1036-p
Seeing Eye to Eye Again: In-the-Wild Replication Study with an Expressive Eye Display for the Stretch Mobile Manipulator
Antara Shah and
Naomi T. Fitter
(Oregon State University, USA)
In human-robot interaction, nonverbal robot cues such as gaze and emotional expression can potentially enhance coordination with nearby people via advantages like conveying a shared focus. At the same time, although laboratory studies have shown that expressive eye behavior increases the overall task performance while improving social perception of the robot, it is unclear whether these effects resoundingly generalize beyond the controlled laboratory environment into an unstructured public environment. We conducted an in-the-wild study to replicate the results of a previous in-lab study on expressive eye behavior (i.e., gaze and emotional expression) using a Hello Robot Stretch mobile manipulator. N = 55 participants interacted with the robot in a guided block placement task under one of four experiences: control, gaze only, emotion only, and both gaze and emotion combined. Participants in the gaze condition required significantly fewer attempts to select the correct block and reported the robot to be significantly more socially warm, suggesting that gaze may be a strong cue for directing collaboration as well as social perception in natural settings, while emotional expression could have more subtle impacts. These results can help robot practitioners create systems that fit more seamlessly and effectively into real-world settings.
Article Search
Article: hri26short-phrisc1046-p
Plausible Explanations Reduce Phantom Cost Perception in HRI
Benjamin Lebrun,
Christoph Bartneck, and
Andrew J. Vonasch
(University of Canterbury, New Zealand)
Recent studies found that people imagine phantom costs—bad intentions and risks—when a human or a robot makes an overly generous offer without sufficient explanation. However, these studies used a paradigm in which the agent justified an offer of a cookie plus $2 by saying they had eaten cookies with friends, a scenario we think implausible for robots. The present study replicated this paradigm while measuring the perceived plausibility of the agent's justification. Results indicated that the justification of having eaten cookies with friends was perceived as implausible when given by a robot, but not when given by a human. This perceived implausibility increased perceived phantom costs, reduced trust in the robot, and decreased offer acceptance. This study suggests that phantom costs arise when explanations are both insufficient and implausible, highlighting the need for sufficient and plausible explanations to promote effective HRI.
Article Search
Article: hri26short-phrisc1050-p
Particle-Driven Robot Placement: A ROS 2 Plugin-Based Framework
Jesus Enrique Aleman Gallegos and
Sven Wachsmuth
(Bielefeld University, Germany)
The present work introduces a ROS 2 plugin-based framework for robot placement in dynamic 2D environments, aimed at Human–Robot Interaction (HRI) scenarios such as approaching and person transfer, while remaining applicable to broader service robot tasks. Candidate poses for the robot are represented as particles and evaluated against a multi-constraint specification composed of critical (hard) and non-critical (soft/scoring) constraints. Constraints are declared in XML using a concise syntax that supports logical (∧, ∨, ¬) and algebraic (∘, +) operators. The particle distribution adapts online to environmental changes via weighting and resampling, yielding feasible goal poses that respect the specified constraints. The implementation is written in C++, integrates natively with ROS 2 through a plugin architecture, and exposes an action server interface for seamless use within higher-level planners.
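The weight-and-resample loop the abstract describes can be sketched in a few lines. This is a generic particle-filter illustration, not the paper's C++ implementation: the specific constraints (a hard minimum distance to a person, a soft preference for poses near a goal) are hypothetical examples.

```python
import random

def hard_ok(pose, person):
    # Critical (hard) constraint: keep at least 0.5 m from the person.
    x, y = pose
    px, py = person
    return (x - px) ** 2 + (y - py) ** 2 >= 0.5 ** 2

def soft_score(pose, goal):
    # Non-critical (soft/scoring) constraint: prefer poses near the goal.
    x, y = pose
    gx, gy = goal
    return 1.0 / (1.0 + (x - gx) ** 2 + (y - gy) ** 2)

def resample(particles, person, goal, rng):
    """One weight-and-resample step: zero out particles violating the
    hard constraint, weight the rest by the soft score, then draw a new
    population proportionally to those weights."""
    weights = [soft_score(p, goal) if hard_ok(p, person) else 0.0
               for p in particles]
    if sum(weights) == 0.0:
        return particles  # no feasible particle yet; keep exploring
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
particles = [(rng.uniform(0, 3), rng.uniform(0, 3)) for _ in range(200)]
for _ in range(5):
    particles = resample(particles, person=(1.0, 1.0), goal=(2.0, 2.0), rng=rng)
best = max(particles, key=lambda p: soft_score(p, (2.0, 2.0)))
print(best)
```

After a few rounds the population concentrates on poses that satisfy the hard constraint and score well on the soft one; when the person or goal moves, the same loop re-adapts the distribution online.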
Article Search
Info
Article: hri26short-phrisc1052-p
Quest2ROS2: A ROS 2 Framework for Bi-manual VR Teleoperation
Jialong Li,
Zhenguo Wang,
Tianci Wang,
Maj Stenmark, and
Volker Krueger
(Lund University, Sweden)
Quest2ROS2 is an open-source ROS 2 framework for bi-manual teleoperation designed to scale robot data collection. Extending Quest2ROS, it overcomes workspace limitations via relative motion-based control, calculating robot movement from VR controller pose changes to enable intuitive, pose-independent operation. The framework integrates essential usability and safety features, including real-time RViz visualization, streamlined gripper control, and a pause-and-reset function for smooth transitions. We detail a modular architecture that supports “Side-by-Side” and “Mirror” control modes to optimize operator experience across diverse platforms. Code is available at: https://github.com/Taokt/Quest2ROS2.
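The core idea of relative motion-based control — applying the controller's pose *change* to the robot rather than mapping absolute poses — can be sketched as below. The function name and signature are an illustrative assumption, not Quest2ROS2's API, and orientation is omitted for brevity.

```python
def relative_target(robot_pose, ctrl_prev, ctrl_now, scale=1.0):
    """Relative motion-based control: add the VR controller's pose change
    (not its absolute pose) to the robot's current pose, so the operator
    can stand anywhere and re-grip after a pause-and-reset."""
    return tuple(r + scale * (n - p)
                 for r, n, p in zip(robot_pose, ctrl_now, ctrl_prev))

robot = (0.4, 0.0, 0.3)      # robot end-effector position (x, y, z), metres
prev = (0.10, 0.20, 1.00)    # controller position at the previous frame
now = (0.12, 0.20, 0.95)     # controller moved 2 cm in x, 5 cm down in z
print(relative_target(robot, prev, now))
```

Because only deltas matter, pausing the stream and resuming from a new controller pose leaves the robot where it was, which is what makes the pause-and-reset function natural in this scheme.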
Article Search
Info
Article: hri26short-phrisc1064-p
The Action-Engagement-Collaboration Triad: A Multimodal Analytical Framework for Human-Robot Collaboration
Arvind,
Naval Kishore Mehta,
Himanshu Kumar,
Sumeet Saurav, and
Sanjay Singh
(Academy of Scientific and Innovative Research, India; Central Electronics Engineering Research Institute, India)
Industrial human-robot collaboration (HRC) depends on understanding how people act, stay engaged, and work together with robots during shared tasks. In most prior work, these aspects are studied in isolation, which makes it hard to see the full picture of real collaboration. This study tackles that problem with an improved multimodal framework that jointly captures fine-grained human actions, engagement levels, and collaboration outcomes in an industrial assembly setting. It introduces a three-layer analytical model called MICRO-MESO-MACRO (M3), which links detailed action patterns to engagement dynamics and overall system efficiency. All data streams are aligned in time using a custom synchronization process so that visual, motion, and behavioral signals can be analyzed together in a consistent way. The results show clear relationships between varied action patterns, stable engagement, and better collaboration efficiency, providing a solid foundation for designing adaptive, human-aware robotic partners. The annotated dataset and code are available at: https://github.com/arvindsihag/m3_analyzer.
Article Search
Info
Article: hri26short-phrisc1065-p
A Framework for Low-Latency, LLM-Driven Multimodal Interaction on the Pepper Robot
Erich Studerus,
Vivienne Jia Zhong, and
Stephan Vonschallen
(University of Applied Sciences and Arts Northwestern Switzerland, Switzerland; Zurich University of Applied Sciences, Switzerland)
Despite recent advances in integrating Large Language Models (LLMs) into social robotics, two weaknesses persist. First, existing implementations on platforms like Pepper often rely on cascaded Speech-to-Text (STT)→LLM→Text-to-Speech (TTS) pipelines, resulting in high latency and the loss of paralinguistic information. Second, most implementations fail to fully leverage the LLM’s capabilities for multimodal perception and agentic control. We present an open-source Android framework for the Pepper robot that addresses these limitations through two key innovations. First, we integrate end-to-end Speech-to-Speech (S2S) models to achieve low-latency interaction while preserving paralinguistic cues and enabling adaptive intonation. Second, we implement extensive Function Calling capabilities that elevate the LLM to an agentic planner, orchestrating robot actions (navigation, gaze control, tablet interaction) and integrating diverse multimodal feedback (vision, touch, system state). The framework runs on the robot’s tablet but can also be built to run on regular Android smartphones or tablets, decoupling development from robot hardware. This work provides the HRI community with a practical, extensible platform for exploring advanced LLM-driven embodied interaction.
Article Search
Info
Article: hri26short-phrisc1067-p
IVoice: An Open-Source Real-Time Closed-Loop Voice Monitoring and Feedback Testbed for Cognitively Assistive Robots in Therapy
Iman Noferesti and
Rahul Singh
(University of Iowa, USA)
IVoice is an open-source, real-time closed-loop voice monitoring testbed. It can be used to test and assess Cognitively Assistive Robots (CARs) that target voice rehabilitation. Built on a Unity–Python architecture, the testbed supports modules that continuously analyze pitch, loudness, and voice quality, providing adaptive multimodal feedback for voice training. IVoice includes a fully functional prototype that supports voice capture, calibration, a feedback module, and modules for data collection. All these modules interact with each other based on well-defined interfaces. As long as the interface definitions are supported, users of the testbed can modify one or more of the constituent modules.
Article Search
Info
Article: hri26short-phrisc1068-p
A Wizard for Kids: A Platform for Improvised Child–Robot Interactions
Davide Frova,
Monica Landoni,
Simone Arreghini, and
Antonio Paolillo
(USI Lugano, Switzerland; USI-SUPSI, Switzerland; Dalle Molle Institute for Artificial Intelligence, Switzerland)
We present an interface designed to operate social robots in a highly unpredictable context: a classroom, supporting user-centered design of innovative child–robot interactions.
Having a functional prototype enables rich user data elicitation and analysis, essential for understanding user needs and deriving meaningful requirements.
Deploying robots outside controlled laboratory conditions into a classroom, however, introduces challenges related to safety, robustness, and managing multiple, often noisy, interactions.
Our system addresses these challenges by providing a flexible and resilient interface that ensures safe operation while allowing improvisation and adaptation to children's unpredictable behavior.
The interface aims to be intuitive for non-expert users and to support everyday teaching and learning activities in the classroom.
By prioritizing usability, modularity, and robustness, our approach facilitates iterative design, accelerates the transition from Wizard-of-Oz prototyping to autonomous behaviors, and contributes to making child–robot interaction technologies more accessible and practical for diverse application domains.
Article Search
Article: hri26short-phrisc1071-p
alt.HRI
alt.HRI Papers
Clankers in the Cultural Imagination: Online Robophobia and Its Implications for Human-Robot Interaction
Julia Rosén,
Phillip Bach-Luong Tran Jr., and
Denise Y. Geiskkovitch
(McMaster University, Hamilton, Canada; Stockholm University, Stockholm, Sweden)
Robophobia is a recent and growing trend on social media, where users create humorous videos that play on the general fear of robots. While framed as jokes, these videos contribute to the public’s attitudes towards and expectations of robots, influencing what is socially and culturally acceptable. To investigate this emerging trend, we conducted a thematic analysis of 200 English-language TikTok videos using online ethnography to explore how users engage with robophobia and what implications this may have for the Human–Robot Interaction (HRI) field. Our findings show that robophobia on TikTok is predominantly expressed through humorous skits and verbal abuse of real-world robots in public spaces. Common themes include fears about humans in romantic relationships with robots, and frequent use of derogatory terms such as “clanker” to verbally “dehumanize” robots. The robophobia trend is culturally embedded and reveals people’s underlying attitudes towards and fears of robots in society, both now and in the future. We discuss the implications of these findings for the field of HRI, emphasizing how public expectations are shaped by cultural narratives, and stress the need for culturally sensitive, expectation-aware HRI research and robot design.
Article Search
Article: hri26althri-p2099-p
Robot Vandalism: A Senseless Act?
Anna Dobrosovestnova,
David J. Bailey,
Ralf Vetter, and
Masoumeh Mansouri
(Interdisciplinary Transformation University, Linz, Austria; University of Birmingham, Birmingham, UK)
In Human–Robot Interaction (HRI), vandalization of robots is predominantly framed as senseless or immoral behaviors, with mitigation efforts focused on system design improvements or user education. Drawing on sociological theories of vandalism, we examine three cases of robot destruction in public spaces: the burning of Waymo vehicles in Los Angeles (2025), the vandalism of a Knightscope K5 robot in San Francisco (2017), and the destruction of the hitchhiking robot hitchBOT in Philadelphia (2015). We argue that not all acts of robot vandalism are instances of "malicious" destruction. Rather, some can be understood as ideological or political vandalism - expressive or strategic acts embedded within broader social and political struggles. By situating these events in their urban and discursive contexts, our analysis moves beyond explanations grounded solely in individual psychology or system design, and invites a broader reflection within HRI on how robots as socio-technical artifacts become implicated in the politics of public space, power, and collective life.
Article Search
Article: hri26althri-p3015-p
Whose Robots? A Scientific Oligarchy in HRI Research?
Amol Deshmukh
(ETH Zurich, Zurich, Switzerland; University of Glasgow, Glasgow, UK)
This scientometric study provides a comprehensive analysis mapping the geographic power structure that governs human-robot interaction (HRI) research. This paper analyses the geographic distribution of HRI research from 2005 to 2025 (n = 4,461 publications) using Scopus-indexed data from six core HRI venues. The findings reveal a research field characterised by profound geographic concentration and scientific oligarchy, with Europe and North America collectively accounting for 78.9% of publications and 79.2% of citations. The analysis identifies three structural paradoxes that define the field’s economy: (1) a ‘Maturation Paradox’ emerging from the tension between rapid volume growth and declining impact among established leaders; (2) a ‘Collaboration Paradox’ where international partnerships yield asymmetric benefits, favouring emerging regions while offering minimal gains for established leaders; and (3) a ‘Concentration Paradox’, revealing the contradiction between HRI’s global aspirations and a consolidated power structure where influence remains fixed in culturally similar subregions. Trend analysis raises critical questions about intellectual diversity and how geographic concentration shapes the fundamental assumptions and future trajectories of HRI research.
Article Search
Article: hri26althri-p3484-p
Responsible Humanoids: A Contradiction in Terms?
Séverin Lemaignan,
AJung Moon,
Simon Coghlan,
Emily C. Collins,
Vanessa Evers,
Nico Hochgeschwender,
Sara Ljungblad,
Michael Milford,
Sarah Moth-Lund Christensen,
Francisco J. Rodríguez Lera,
Pericle Salvini, and
Yi Yang
(PAL Robotics, Barcelona, Spain; CSIC, Barcelona, Spain; McGill University, Montreal, Canada; University of Melbourne, Melbourne, Australia; University of Manchester, Manchester, UK; University of Twente, Enschede, Netherlands; CWI, Amsterdam, Netherlands; Nanyang Technological University, Singapore, Singapore; University of Bremen, Bremen, Germany; University of Gothenburg, Gothenburg, Sweden; Chalmers University of Technology, Gothenburg, Sweden; Queensland University of Technology, Brisbane, Australia; University of Sheffield, Sheffield, UK; University of León, León, Spain; University of Oxford, Oxford, UK; KU Leuven, Leuven, Belgium)
In this paper, we critically examine the current "humanoid hype" in robotics, questioning its alignment with responsible robotics principles. While technical challenges drive internal fascination, the pervasive public image of humanoids demands deeper HRI engagement. We explore how responsible robotics concepts, such as privacy, dignity, and trust, are uniquely challenged or overlooked in the pursuit of anthropomorphic robot forms. By dissecting this hype, and mapping the main findings of the recently-published Roadmap for Responsible Robotics to the humanoids field, we aim to move beyond technical form-factor obsessions to understand the true societal implications and identify potential blind spots for the HRI community.
Article Search
Article: hri26althri-p4215-p
Improvisational Participatory Storming: A Toolkit of Improvisational Design Methods for Human-Robot Interaction
Katie Schneider Assaf,
Sawyer Collins,
Kevin Rich,
James Walker,
Nicholle Harris, and
Tom Williams
(Colorado School of Mines, Golden, USA; University of Colorado Boulder, Boulder, USA; Licensed Social Worker, Denver, USA)
Theatre-based design methods have become recognized as highly effective for robot interaction design. Yet there are many domains in which it would be inappropriate or ineffective for designers to role-play stakeholders, such as when working with vulnerable populations. In such cases, researchers typically engage in participatory methods, so those populations can directly contribute to the design process. We see a key design gap created by this tension: How might community members be effectively involved in theatre-based design methods? In this work, we bring together academics and practitioners across HRI, Theatre, Drama Therapy, and Applied Improvisation to address this challenge, and present Improvisational Participatory Storming (IPS) --- a novel Theatre-based Participatory Design Method that is uniquely well suited for Human-Robot Interaction. In presenting IPS, we make seven key contributions. Specifically, we identify
(1) a concrete three-section structure for IPS workshops;
(2) a novel reuse of Tabletop Role-Playing Game safety tools to mitigate risks in IPS activities;
(3) key design objectives to be met through IPS;
(4) context-specific constraints that inform which theatre-based design activities to use to meet those objectives;
(5) seven key roles that participants may take on in IPS activities;
(6) key dimensions of IPS activities; and
(7) three ways that IPS activities can be sequenced to scaffold participation.
Article Search
Article: hri26althri-p5228-p
We Cannot Outsource What We Value Most: Toward Deployable Research Products in HRI
Kayla Matheus and
Brian Scassellati
(Yale University, New Haven, USA)
Human-Robot Interaction (HRI) continues to rely on commercial social robot platforms to support academic research. Yet again and again, these systems prove short-lived, inaccessible, or misaligned with research needs. We argue that this is not an industry problem – the goals, needs, and constraints of industry are inherently distinct. Instead, this is a fundamental structural problem in HRI research, and one that must be solved from within. In short, HRI researchers must build their own products. In this paper, we trace the recent problems of industry-supplied robots and frame a new type of HRI research artifact in response: Deployable Research Products (DRPs), which bridge the gap between lab prototypes and commercial products. Drawing on mental models from business and innovation theory, we outline the mindset shifts that HRI must embody to move towards DRPs. We conclude with three emerging examples of this alternative path in the HRI community. These projects differ in scope and approach but share a common thread: to ensure the longevity of our science, we cannot outsource what we value most.
Article Search
Article: hri26althri-p7730-p
Theory of Whose Mind? Exposing the Shortcomings of One of HRI’s Core Concepts
Cansu Elmadagli and
Jennifer Renoux
(Örebro University, Örebro, Sweden)
The concept of Theory of Mind (ToM) is central to many social robotics studies. It may be invoked during the design of social robots as a way to improve collaboration or create a form of "social intelligence". It is often treated as an established fact and rarely called into question.
However, many scholars have analysed the concept of ToM from a critical perspective and argued that it is, in fact, a theory with significant shortcomings that rests on neuronormative and neuroprivileged grounds.
In this paper, we explore these arguments and what this change of perspective means for the field of Social Robotics. We argue that the field should abandon the concept of ToM and move forward to more appropriate and inclusive models and research questions, and propose some directions to do so.
Article: hri26althri-p7858-p
Robot Fashion: The Social Psychology of Robot Clothing
Irvin Steve Cardenas,
Jong Hoon Kim,
Linda Ohrn-McDaniel,
Krissi R. Riewe Stevenson,
J.R. Campbell, and
Natalie Yeo
(Kent State University, Kent, USA)
Most robots today are naked, and nobody seems to mind! We argue they should. Robot attire is not cosmetic: clothing systematically shapes social cognition, cueing role expectations, warmth and competence inferences, touch norms, and trust. We introduce the Robot Fashion Psychology Model (RFPM), a mid-level theory linking what robots wear to how people interact with them, moderated by morphology and context. Grounded in fashion-led research-through-design that triangulates in-the-wild exemplars, a capsule collection spanning humanoids to drones, and reflexive critique, we translate RFPM into morphology-aware garment guidelines and ethical guardrails against "trust-washing" and capability deception. We then ask: What happens when robots dress for each other? When fashion becomes visible only to machines? Robot wardrobes are coming. With them come new forms of designed influence. We offer foundations for designing them with intention rather than improvisation, while norms are still forming.
Article: hri26althri-p7897-p
Post-growth Perspectives in HRI
Sofia Thunberg,
Mafalda Gamboa,
Ilaria Torre, and
Birgit Penzenstadler
(Chalmers University of Technology, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden; Uppsala University, Uppsala, Sweden; Lappeenranta University of Technology, Lappeenranta, Finland)
Human–Robot Interaction (HRI) research is starting to engage with sustainability, yet the field remains tied to economic models that assume continual growth, rapid technological development, and market expansion. This economic growth orientation raises questions about whether HRI can genuinely support ecological responsibility, given the resource intensity of robotics research, production, and deployment. In this contribution, we introduce a post-growth perspective to reframe the relationship between robotics, sustainability, and society. We argue that rather than striving for "green growth" within existing economic structures, HRI should engage critically with concepts such as degrowth and post-capitalism. By shifting attention from growth to development, we invite the community to consider what robotic futures are worth pursuing and for whom.
Article: hri26althri-p7936-p
Let’s Talk about (Our and Robots’) Death: Mortality as a Core Principle in Human-Robot Interaction
Waki Kamino,
Long-Jing Hsu,
Selma Šabanović, and
Malte F. Jung
(Cornell University, Ithaca, USA; University of Michigan at Ann Arbor, Ann Arbor, USA; Indiana University, Bloomington, USA)
As robots achieve product longevity and increasingly enter intimate spaces of human life, from healthcare facilities to homes, they inevitably encounter contexts involving death, dying, grief, and loss. Nevertheless, mortality as a fundamental relational dimension has received limited systematic attention within Human-Robot Interaction (HRI) research. Drawing from empirical vignettes across multiple research projects, from robots at deathbeds and participants who passed away, to owners planning for robots they will leave behind, we demonstrate how death already surfaces in HRI, even when not designed for it. Through analysis of these encounters, we develop a framework for mortality-aware HRI that identifies key dimensions for designing and researching robots that acknowledge human and robotic finitude. We argue that as robots achieve long-term integration into people's lives, creating intertwined lifespans of humans and machines, the field must move beyond treating mortality as an edge case and instead recognize it as a core principle shaping how people relate to and make meaning with robots.
Article: hri26althri-p9407-p
When Robots Should Break the Rules
Rebecca Ramnauth and
Brian Scassellati
(Yale University, New Haven, USA)
The fields of human-robot interaction (HRI) and robotics at large have developed around a stable set of assumptions about what robots are and how they should behave. These assumptions arise from the constitutive traits of robots, which together shape social expectations. Over time, these expectations have hardened into tacit rules that quietly govern research and design: robots should always engage, help, be productive, remain polite, never lie, never err, and never model harm. While these prevailing norms have merit, they also constrain the field's imagination of the interactions robots can meaningfully support. We propose rule-breaking as a generative design strategy and illustrate how deliberate violations—robots that interrupt, refuse, mislead, or err—can produce interactions that are more ethical, effective, and socially intelligent. In doing so, we argue for a more reflexive and imaginative HRI that learns as much from breaking the rules as from following them.
Article: hri26althri-p9437-p
alt.HRI Pictorial
Robot as Self, Blurred Boundaries, and the Auxthetic Mind-Body: A Speculative Design through Poetry
Lux Miranda,
Ginevra Castellano, and
Katie Winkle
(Uppsala University, Uppsala, Sweden)
We present the concept of the auxthetic mind-body (AM): a system which extends the human mind and body to include robot bodies, artificial “thoughts,” and artificial feelings as part of the perception of “self.” While human-robot interaction research has long grappled with embodiment, the AM represents an as-yet unexplored space in this realm and raises a host of questions around its uses, consequences, and preservation of human agency. We explore the concept through speculative sociotechnical design, foregrounding how the technology might make us feel (rather than what it might do) as a guiding foundation for further development. Through poetry and marginalia, we invite readers to reflect on what it might mean to think, feel, and be with an AM. In doing so, we sketch both technical possibility and future worth longing for—one where the dissolution of the human-machine boundary can be meaningful, grounded, and less frightening than it may seem.
Article: hri26althri-p7202-p