ICMI 2017 – Proceedings

19th ACM International Conference on Multimodal Interaction (ICMI 2017), November 13–17, 2017, Glasgow, UK

Main Track

Oral Session 1: Children and Interaction

Tablets, Tabletops, and Smartphones: Cross-Platform Comparisons of Children’s Touchscreen Interactions
Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)
Toward an Efficient Body Expression Recognition Based on the Synthesis of a Neutral Movement
Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, and Saida Bouakaz
(University of Lyon, France; University of Saint-Etienne, France)
Interactive Narration with a Child: Impact of Prosody and Facial Expressions
Ovidiu Șerban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, and Emilie Chanoni
(Normandy University, France; University of Rouen, France)
Comparing Human and Machine Recognition of Children’s Touchscreen Stroke Gestures
Alex Shaw, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)

Oral Session 2: Understanding Human Behaviour

Virtual Debate Coach Design: Assessing Multimodal Argumentation Performance
Volha Petukhova, Tobias Mayer, Andrei Malchanau, and Harry Bunt
(Saarland University, Germany; Tilburg University, Netherlands)
Predicting the Distribution of Emotion Perception: Capturing Inter-rater Variability
Biqiao Zhang, Georg Essl, and Emily Mower Provost
(University of Michigan, USA; University of Wisconsin-Milwaukee, USA)
Automatically Predicting Human Knowledgeability through Non-verbal Cues
Abdelwahab Bourai, Tadas Baltrušaitis, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
Pooling Acoustic and Lexical Features for the Prediction of Valence
Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, and Emily Mower Provost
(University of Michigan, USA; IBM Research, USA)

Oral Session 3: Touch and Gesture

Hand-to-Hand: An Intermanual Illusion of Movement
Dario Pittera, Marianna Obrist, and Ali Israr
(Disney Research, USA; University of Sussex, UK)
An Investigation of Dynamic Crossmodal Instantiation in TUIs
Feng Feng and Tony Stockman
(Queen Mary University of London, UK)
“Stop over There”: Natural Gesture and Speech Interaction for Non-critical Spontaneous Intervention in Autonomous Driving
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, and Jörn Hurtienne
(University of Würzburg, Germany)
Pre-touch Proxemics: Moving the Design Space of Touch Targets from Still Graphics towards Proxemic Behaviors
Ilhan Aslan and Elisabeth André
(University of Augsburg, Germany)
Freehand Grasping in Mixed Reality: Analysing Variation during Transition Phase of Interaction
Maadh Al-Kalbani, Maite Frutos-Pascual, and Ian Williams
(Birmingham City University, UK)
Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster
(University of Glasgow, UK)

Oral Session 4: Sound and Interaction

Evaluation of Psychoacoustic Sound Parameters for Sonification
Jamie Ferguson and Stephen A. Brewster
(University of Glasgow, UK)
Utilising Natural Cross-Modal Mappings for Visual Control of Feature-Based Sound Synthesis
Augoustinos Tsiros and Grégory Leplâtre
(Edinburgh Napier University, UK)

Oral Session 5: Methodology

Automatic Classification of Auto-correction Errors in Predictive Text Entry Based on EEG and Context Information
Felix Putze, Maik Schünemann, Tanja Schultz, and Wolfgang Stuerzlinger
(University of Bremen, Germany; Simon Fraser University, Canada)
Cumulative Attributes for Pain Intensity Estimation
Joy O. Egede and Michel Valstar
(University of Nottingham at Ningbo, China; University of Nottingham, UK)
Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
Rémy Siegfried, Yu Yu, and Jean-Marc Odobez
(Idiap, Switzerland; EPFL, Switzerland)
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrušaitis, Amir Zadeh, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
IntelliPrompter: Speech-Based Dynamic Note Display Interface for Oral Presentations
Reza Asadi, Ha Trinh, Harriet J. Fell, and Timothy W. Bickmore
(Northeastern University, USA)

Oral Session 6: Artificial Agents and Wearable Sensors

Head and Shoulders: Automatic Error Detection in Human-Robot Interaction
Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, and Manfred Tscheligi
(University of Salzburg, Austria; University of the West of England, UK; Austrian Institute of Technology, Austria)
The Reliability of Non-verbal Cues for Situated Reference Resolution and Their Interplay with Language: Implications for Human Robot Interaction
Stephanie Gross, Brigitte Krenn, and Matthias Scheutz
(Austrian Research Institute for Artificial Intelligence, Austria; Tufts University, USA)
Do You Speak to a Human or a Virtual Agent? Automatic Analysis of User’s Social Cues during Mediated Communication
Magalie Ochs, Nathan Libermann, Axel Boidin, and Thierry Chaminade
(Aix-Marseille University, France; University of Toulon, France; Picxel, France)
Estimating Verbal Expressions of Task and Social Cohesion in Meetings by Quantifying Paralinguistic Mimicry
Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, and Hayley Hung
(Delft University of Technology, Netherlands; University of Amsterdam, Netherlands; VU University Amsterdam, Netherlands)
Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks
Terry T. Um, Franz M. J. Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić
(University of Waterloo, Canada; LMU Munich, Germany; TU Munich, Germany; Schön Klinik München Schwabing, Germany)

Poster Session 1

Automatic Assessment of Communication Skill in Non-conventional Interview Settings: A Comparative Study
Pooja Rao S. B, Sowmya Rasipuram, Rahul Das, and Dinesh Babu Jayagopi
(IIIT Bangalore, India)
Low-Intrusive Recognition of Expressive Movement Qualities
Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, and Antonio Camurri
(University of Genoa, Italy)
Digitising a Medical Clerking System with Multimodal Interaction Support
Harrison South, Martin Taylor, Huseyin Dogan, and Nan Jiang
(Bournemouth University, UK; Royal Bournemouth and Christchurch Hospital, UK)
GazeTap: Towards Hands-Free Interaction in the Operating Room
Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, and Christian Hansen
(University of Magdeburg, Germany; University of Waterloo, Canada; Fraunhofer IFF, Germany)
Boxer: A Multimodal Collision Technique for Virtual Objects
Byungjoo Lee, Qiao Deng, Eve Hoggan, and Antti Oulasvirta
(Aalto University, Finland; KAIST, South Korea; Aarhus University, Denmark)
Trust Triggers for Multimodal Command and Control Interfaces
Helen Hastie, Xingkun Liu, and Pedro Patron
(Heriot-Watt University, UK; SeeByte, UK)
TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
(Bauhaus-Universität Weimar, Germany)
A Multimodal System to Characterise Melancholia: Cascaded Bag of Words Approach
Shalini Bhatia, Munawar Hayat, and Roland Goecke
(University of Canberra, Australia)
Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls
Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, and Keelan Evanini
(ETS at San Francisco, USA; ETS at Princeton, USA)
Modelling Fusion of Modalities in Multimodal Interactive Systems with MMMM
Bruno Dumas, Jonathan Pirau, and Denis Lalanne
(University of Namur, Belgium; University of Fribourg, Switzerland)
Temporal Alignment using the Incremental Unit Framework
Casey Kennington, Ting Han, and David Schlangen
(Boise State University, USA; Bielefeld University, Germany)
Multimodal Gender Detection
Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, and Mihai Burzo
(University of Michigan, USA)
How May I Help You? Behavior and Impressions in Hospitality Service Encounters
Skanda Muralidhar, Marianne Schmid Mast, and Daniel Gatica-Perez
(Idiap, Switzerland; EPFL, Switzerland; University of Lausanne, Switzerland)
Tracking Liking State in Brain Activity while Watching Multiple Movies
Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, and Satoshi Nakamura
(NAIST, Japan)
Does Serial Memory of Locations Benefit from Spatially Congruent Audiovisual Stimuli? Investigating the Effect of Adding Spatial Sound to Visuospatial Sequences
Benjamin Stahl and Georgios Marentakis
(Graz University of Technology, Austria)
ZSGL: Zero Shot Gestural Learning
Naveen Madapana and Juan Wachs
(Purdue University, USA)
Markov Reward Models for Analyzing Group Interaction
Gabriel Murray
(University of the Fraser Valley, Canada)
Analyzing First Impressions of Warmth and Competence from Observable Nonverbal Cues in Expert-Novice Interactions
Beatrice Biancardi, Angelo Cafaro, and Catherine Pelachaud
(CNRS, France; UPMC, France)
The NoXi Database: Multimodal Recordings of Mediated Novice-Expert Interactions
Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth André, and Michel Valstar
(CNRS, France; UPMC, France; University of Augsburg, Germany; University of Nottingham, UK)
Head-Mounted Displays as Opera Glasses: Using Mixed-Reality to Deliver an Egalitarian User Experience during Live Events
Carl Bishop, Augusto Esteves, and Iain McGregor
(Edinburgh Napier University, UK)

Poster Session 2

Analyzing Gaze Behavior during Turn-Taking for Estimating Empathy Skill Level
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
(NTT, Japan)
Text Based User Comments as a Signal for Automatic Language Identification of Online Videos
A. Seza Doğruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, and Christoph Oehler
(Xoogler, Turkey; Google, USA; Google, France; Google, Switzerland)
Gender and Emotion Recognition with Implicit User Signals
Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, and Ramanathan Subramanian
(IIIT Hyderabad, India; Radboud University, Netherlands; IIT Gandhinagar, India; National University of Singapore, Singapore; University of Glasgow, UK; Advanced Digital Sciences Center, Singapore)
Animating the Adelino Robot with ERIK: The Expressive Robotics Inverse Kinematics
Tiago Ribeiro and Ana Paiva
(INESC-ID, Portugal; University of Lisbon, Portugal)
Automatic Detection of Pain from Spontaneous Facial Expressions
Fatma Meawad, Su-Yin Yang, and Fong Ling Loy
(University of Glasgow, UK; Tan Tock Seng Hospital, Singapore)
Evaluating Content-Centric vs. User-Centric Ad Affect Recognition
Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan Kankanhalli, and Ramanathan Subramanian
(IIIT Hyderabad, India; Indian Institute of Science, India; Delft University of Technology, Netherlands; National University of Singapore, Singapore; University of Glasgow at Singapore, Singapore)
A Domain Adaptation Approach to Improve Speaker Turn Embedding using Face Representation
Nam Le and Jean-Marc Odobez
(Idiap, Switzerland)
Computer Vision Based Fall Detection by a Convolutional Neural Network
Miao Yu, Liyun Gong, and Stefanos Kollias
(University of Lincoln, UK)
Predicting Meeting Extracts in Group Discussions using Multimodal Convolutional Neural Networks
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
(Seikei University, Japan)
The Relationship between Task-Induced Stress, Vocal Changes, and Physiological State during a Dyadic Team Task
Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, and Stefan Scherer
(Army Research Lab at Playa Vista, USA; University of Southern California, USA)
Meyendtris: A Hands-Free, Multimodal Tetris Clone using Eye Tracking and Passive BCI for Intuitive Neuroadaptive Gaming
Laurens R. Krol, Sarah-Christin Freytag, and Thorsten O. Zander
(TU Berlin, Germany)
AMHUSE: A Multimodal dataset for HUmour SEnsing
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, and Raffaella Lanzarotti
(University of Milan, Italy; University of Tours, France)
GazeTouchPIN: Protecting Sensitive Data on Mobile Devices using Secure Multimodal Authentication
Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, and Florian Alt
(LMU Munich, Germany; Max Planck Institute for Informatics, Germany)
Multi-task Learning of Social Psychology Assessments and Nonverbal Features for Automatic Leadership Identification
Cigdem Beyan, Francesca Capozzi, Cristina Becchio, and Vittorio Murino
(IIT Genoa, Italy; McGill University, Canada; University of Turin, Italy; University of Verona, Italy)
Multimodal Analysis of Vocal Collaborative Search: A Public Corpus and Results
Daniel McDuff, Paul Thomas, Mary Czerwinski, and Nick Craswell
(Microsoft Research, USA; Microsoft Research, Australia)
UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
Atef Ben-Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim
(Telecom ParisTech, France; University of Paris-Saclay, France; SoftBank Robotics, France)
Mining a Multimodal Corpus of Doctor’s Training for Virtual Patient’s Feedbacks
Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, and Roxane Bertrand
(Aix-Marseille University, France; CNRS, France; ENSAM, France; University of Toulon, France)
Multimodal Affect Recognition in an Interactive Gaming Environment using Eye Tracking and Speech Signals
Ashwaq Alhargan, Neil Cooke, and Tareq Binjammaz
(University of Birmingham, UK; De Montfort University, UK)
