
19th ACM International Conference on Multimodal Interaction (ICMI 2017), November 13–17, 2017, Glasgow, UK

ICMI 2017 – Proceedings


Frontmatter

Title Page
Message from the Chairs
ICMI 2017 Organization
Supporters and Sponsors

Invited Talks

Gastrophysics: Using Technology to Enhance the Experience of Food and Drink (Keynote)
Charles Spence
(University of Oxford, UK)
Collaborative Robots: From Action and Interaction to Collaboration (Keynote)
Danica Kragic
(KTH, Sweden)
Situated Conceptualization: A Framework for Multimodal Interaction (Keynote)
Lawrence Barsalou
(University of Glasgow, UK)
Steps towards Collaborative Multimodal Dialogue (Sustained Contribution Award)
Phil Cohen
(Voicebox Technologies, USA)

Main Track

Oral Session 1: Children and Interaction

Tablets, Tabletops, and Smartphones: Cross-Platform Comparisons of Children’s Touchscreen Interactions
Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)
Toward an Efficient Body Expression Recognition Based on the Synthesis of a Neutral Movement
Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, and Saida Bouakaz
(University of Lyon, France; University of Saint-Etienne, France)
Interactive Narration with a Child: Impact of Prosody and Facial Expressions
Ovidiu Șerban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, and Emilie Chanoni
(Normandy University, France; University of Rouen, France)
Comparing Human and Machine Recognition of Children’s Touchscreen Stroke Gestures
Alex Shaw, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)

Oral Session 2: Understanding Human Behaviour

Virtual Debate Coach Design: Assessing Multimodal Argumentation Performance
Volha Petukhova, Tobias Mayer, Andrei Malchanau, and Harry Bunt
(Saarland University, Germany; Tilburg University, Netherlands)
Predicting the Distribution of Emotion Perception: Capturing Inter-rater Variability
Biqiao Zhang, Georg Essl, and Emily Mower Provost
(University of Michigan, USA; University of Wisconsin-Milwaukee, USA)
Automatically Predicting Human Knowledgeability through Non-verbal Cues
Abdelwahab Bourai, Tadas Baltrušaitis, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
Pooling Acoustic and Lexical Features for the Prediction of Valence
Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, and Emily Mower Provost
(University of Michigan, USA; IBM Research, USA)

Oral Session 3: Touch and Gesture

Hand-to-Hand: An Intermanual Illusion of Movement
Dario Pittera, Marianna Obrist, and Ali Israr
(Disney Research, USA; University of Sussex, UK)
An Investigation of Dynamic Crossmodal Instantiation in TUIs
Feng Feng and Tony Stockman
(Queen Mary University of London, UK)
“Stop over There”: Natural Gesture and Speech Interaction for Non-critical Spontaneous Intervention in Autonomous Driving
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, and Jörn Hurtienne
(University of Würzburg, Germany)
Pre-touch Proxemics: Moving the Design Space of Touch Targets from Still Graphics towards Proxemic Behaviors
Ilhan Aslan and Elisabeth André
(University of Augsburg, Germany)
Freehand Grasping in Mixed Reality: Analysing Variation during Transition Phase of Interaction
Maadh Al-Kalbani, Maite Frutos-Pascual, and Ian Williams
(Birmingham City University, UK)
Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster
(University of Glasgow, UK)

Oral Session 4: Sound and Interaction

Evaluation of Psychoacoustic Sound Parameters for Sonification
Jamie Ferguson and Stephen A. Brewster
(University of Glasgow, UK)
Utilising Natural Cross-Modal Mappings for Visual Control of Feature-Based Sound Synthesis
Augoustinos Tsiros and Grégory Leplâtre
(Edinburgh Napier University, UK)

Oral Session 5: Methodology

Automatic Classification of Auto-correction Errors in Predictive Text Entry Based on EEG and Context Information
Felix Putze, Maik Schünemann, Tanja Schultz, and Wolfgang Stuerzlinger
(University of Bremen, Germany; Simon Fraser University, Canada)
Cumulative Attributes for Pain Intensity Estimation
Joy O. Egede and Michel Valstar
(University of Nottingham at Ningbo, China; University of Nottingham, UK)
Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
Rémy Siegfried, Yu Yu, and Jean-Marc Odobez
(Idiap, Switzerland; EPFL, Switzerland)
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrušaitis, Amir Zadeh, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
IntelliPrompter: Speech-Based Dynamic Note Display Interface for Oral Presentations
Reza Asadi, Ha Trinh, Harriet J. Fell, and Timothy W. Bickmore
(Northeastern University, USA)

Oral Session 6: Artificial Agents and Wearable Sensors

Head and Shoulders: Automatic Error Detection in Human-Robot Interaction
Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, and Manfred Tscheligi
(University of Salzburg, Austria; University of the West of England, UK; Austrian Institute of Technology, Austria)
The Reliability of Non-verbal Cues for Situated Reference Resolution and Their Interplay with Language: Implications for Human Robot Interaction
Stephanie Gross, Brigitte Krenn, and Matthias Scheutz
(Austrian Research Institute for Artificial Intelligence, Austria; Tufts University, USA)
Do You Speak to a Human or a Virtual Agent? Automatic Analysis of User’s Social Cues during Mediated Communication
Magalie Ochs, Nathan Libermann, Axel Boidin, and Thierry Chaminade
(Aix-Marseille University, France; University of Toulon, France; Picxel, France)
Estimating Verbal Expressions of Task and Social Cohesion in Meetings by Quantifying Paralinguistic Mimicry
Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, and Hayley Hung
(Delft University of Technology, Netherlands; University of Amsterdam, Netherlands; VU University Amsterdam, Netherlands)
Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks
Terry T. Um, Franz M. J. Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić
(University of Waterloo, Canada; LMU Munich, Germany; TU Munich, Germany; Schön Klinik München Schwabing, Germany)

Poster Session 1

Automatic Assessment of Communication Skill in Non-conventional Interview Settings: A Comparative Study
Pooja Rao S. B, Sowmya Rasipuram, Rahul Das, and Dinesh Babu Jayagopi
(IIIT Bangalore, India)
Low-Intrusive Recognition of Expressive Movement Qualities
Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, and Antonio Camurri
(University of Genoa, Italy)
Digitising a Medical Clerking System with Multimodal Interaction Support
Harrison South, Martin Taylor, Huseyin Dogan, and Nan Jiang
(Bournemouth University, UK; Royal Bournemouth and Christchurch Hospital, UK)
GazeTap: Towards Hands-Free Interaction in the Operating Room
Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, and Christian Hansen
(University of Magdeburg, Germany; University of Waterloo, Canada; Fraunhofer IFF, Germany)
Boxer: A Multimodal Collision Technique for Virtual Objects
Byungjoo Lee, Qiao Deng, Eve Hoggan, and Antti Oulasvirta
(Aalto University, Finland; KAIST, South Korea; Aarhus University, Denmark)
Trust Triggers for Multimodal Command and Control Interfaces
Helen Hastie, Xingkun Liu, and Pedro Patron
(Heriot-Watt University, UK; SeeByte, UK)
TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
(Bauhaus-Universität Weimar, Germany)
A Multimodal System to Characterise Melancholia: Cascaded Bag of Words Approach
Shalini Bhatia, Munawar Hayat, and Roland Goecke
(University of Canberra, Australia)
Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls
Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, and Keelan Evanini
(ETS at San Francisco, USA; ETS at Princeton, USA)
Modelling Fusion of Modalities in Multimodal Interactive Systems with MMMM
Bruno Dumas, Jonathan Pirau, and Denis Lalanne
(University of Namur, Belgium; University of Fribourg, Switzerland)
Temporal Alignment using the Incremental Unit Framework
Casey Kennington, Ting Han, and David Schlangen
(Boise State University, USA; Bielefeld University, Germany)
Multimodal Gender Detection
Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, and Mihai Burzo
(University of Michigan, USA)
How May I Help You? Behavior and Impressions in Hospitality Service Encounters
Skanda Muralidhar, Marianne Schmid Mast, and Daniel Gatica-Perez
(Idiap, Switzerland; EPFL, Switzerland; University of Lausanne, Switzerland)
Tracking Liking State in Brain Activity while Watching Multiple Movies
Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, and Satoshi Nakamura
(NAIST, Japan)
Does Serial Memory of Locations Benefit from Spatially Congruent Audiovisual Stimuli? Investigating the Effect of Adding Spatial Sound to Visuospatial Sequences
Benjamin Stahl and Georgios Marentakis
(Graz University of Technology, Austria)
ZSGL: Zero Shot Gestural Learning
Naveen Madapana and Juan Wachs
(Purdue University, USA)
Markov Reward Models for Analyzing Group Interaction
Gabriel Murray
(University of the Fraser Valley, Canada)
Analyzing First Impressions of Warmth and Competence from Observable Nonverbal Cues in Expert-Novice Interactions
Beatrice Biancardi, Angelo Cafaro, and Catherine Pelachaud
(CNRS, France; UPMC, France)
The NoXi Database: Multimodal Recordings of Mediated Novice-Expert Interactions
Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth André, and Michel Valstar
(CNRS, France; UPMC, France; University of Augsburg, Germany; University of Nottingham, UK)
Head-Mounted Displays as Opera Glasses: Using Mixed-Reality to Deliver an Egalitarian User Experience during Live Events
Carl Bishop, Augusto Esteves, and Iain McGregor
(Edinburgh Napier University, UK)

Poster Session 2

Analyzing Gaze Behavior during Turn-Taking for Estimating Empathy Skill Level
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
(NTT, Japan)
Text Based User Comments as a Signal for Automatic Language Identification of Online Videos
A. Seza Doğruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, and Christoph Oehler
(Xoogler, Turkey; Google, USA; Google, France; Google, Switzerland)
Gender and Emotion Recognition with Implicit User Signals
Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, and Ramanathan Subramanian
(IIIT Hyderabad, India; Radboud University, Netherlands; IIT Gandhinagar, India; National University of Singapore, Singapore; University of Glasgow, UK; Advanced Digital Sciences Center, Singapore)
Animating the Adelino Robot with ERIK: The Expressive Robotics Inverse Kinematics
Tiago Ribeiro and Ana Paiva
(INESC-ID, Portugal; University of Lisbon, Portugal)
Automatic Detection of Pain from Spontaneous Facial Expressions
Fatma Meawad, Su-Yin Yang, and Fong Ling Loy
(University of Glasgow, UK; Tan Tock Seng Hospital, Singapore)
Evaluating Content-Centric vs. User-Centric Ad Affect Recognition
Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan Kankanhalli, and Ramanathan Subramanian
(IIIT Hyderabad, India; Indian Institute of Science, India; Delft University of Technology, Netherlands; National University of Singapore, Singapore; University of Glasgow at Singapore, Singapore)
A Domain Adaptation Approach to Improve Speaker Turn Embedding using Face Representation
Nam Le and Jean-Marc Odobez
(Idiap, Switzerland)
Computer Vision Based Fall Detection by a Convolutional Neural Network
Miao Yu, Liyun Gong, and Stefanos Kollias
(University of Lincoln, UK)
Predicting Meeting Extracts in Group Discussions using Multimodal Convolutional Neural Networks
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
(Seikei University, Japan)
The Relationship between Task-Induced Stress, Vocal Changes, and Physiological State during a Dyadic Team Task
Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, and Stefan Scherer
(Army Research Lab at Playa Vista, USA; University of Southern California, USA)
Meyendtris: A Hands-Free, Multimodal Tetris Clone using Eye Tracking and Passive BCI for Intuitive Neuroadaptive Gaming
Laurens R. Krol, Sarah-Christin Freytag, and Thorsten O. Zander
(TU Berlin, Germany)
AMHUSE: A Multimodal dataset for HUmour SEnsing
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, and Raffaella Lanzarotti
(University of Milan, Italy; University of Tours, France)
GazeTouchPIN: Protecting Sensitive Data on Mobile Devices using Secure Multimodal Authentication
Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, and Florian Alt
(LMU Munich, Germany; Max Planck Institute for Informatics, Germany)
Multi-task Learning of Social Psychology Assessments and Nonverbal Features for Automatic Leadership Identification
Cigdem Beyan, Francesca Capozzi, Cristina Becchio, and Vittorio Murino
(IIT Genoa, Italy; McGill University, Canada; University of Turin, Italy; University of Verona, Italy)
Multimodal Analysis of Vocal Collaborative Search: A Public Corpus and Results
Daniel McDuff, Paul Thomas, Mary Czerwinski, and Nick Craswell
(Microsoft Research, USA; Microsoft Research, Australia)
UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
Atef Ben-Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim
(Telecom ParisTech, France; University of Paris-Saclay, France; SoftBank Robotics, France)
Mining a Multimodal Corpus of Doctor’s Training for Virtual Patient’s Feedbacks
Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, and Roxane Bertrand
(Aix-Marseille University, France; CNRS, France; ENSAM, France; University of Toulon, France)
Multimodal Affect Recognition in an Interactive Gaming Environment using Eye Tracking and Speech Signals
Ashwaq Alhargan, Neil Cooke, and Tareq Binjammaz
(University of Birmingham, UK; De Montfort University, UK)

Demonstrations

Demonstrations 1

Multimodal Interaction in Classrooms: Implementation of Tangibles in Integrated Music and Math Lessons
Jennifer Müller, Uwe Oestermeier, and Peter Gerjets
(University of Tübingen, Germany; Leibniz-Institut für Wissensmedien, Germany)
Web-Based Interactive Media Authoring System with Multimodal Interaction
Bok Deuk Song, Yeon Jun Choi, and Jong Hyun Park
(ETRI, South Korea)
Textured Surfaces for Ultrasound Haptic Displays
Euan Freeman, Ross Anderson, Julie Williamson, Graham Wilson, and Stephen A. Brewster
(University of Glasgow, UK)
Rapid Development of Multimodal Interactive Systems: A Demonstration of Platform for Situated Intelligence
Dan Bohus, Sean Andrist, and Mihai Jalobeanu
(Microsoft, USA; Microsoft Research, USA)
MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems
Helen Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patron, and Atanas Laskov
(Heriot-Watt University, UK; SeeByte, UK)
SAM: The School Attachment Monitor
Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, and Stephen A. Brewster
(University of Glasgow, UK)
The Boston Massacre History Experience
David Novick, Laura Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Ivan Gris Sepulveda, Olivia Rodriguez-Herrera, and Enrique Ponce
(University of Texas at El Paso, USA; Black Portal Productions, USA)
Demonstrating TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
(Bauhaus-Universität Weimar, Germany)
The MULTISIMO Multimodal Corpus of Collaborative Interactions
Maria Koutsombogera and Carl Vogel
(Trinity College, Ireland)
Using Mobile Virtual Reality to Empower People with Hidden Disabilities to Overcome Their Barriers
Matthieu Poyade, Glyn Morris, Ian Taylor, and Victor Portela
(Glasgow School of Art, UK; Friendly Access, UK; Crag3D, UK)

Demonstrations 2

Bot or Not: Exploring the Fine Line between Cyber and Human Identity
Mirjam Wester, Matthew P. Aylett, and David A. Braude
(CereProc, UK)
Modulating the Non-verbal Social Signals of a Humanoid Robot
Amol Deshmukh, Bart Craenen, Alessandro Vinciarelli, and Mary Ellen Foster
(University of Glasgow, UK)
Thermal In-Car Interaction for Navigation
Patrizia Di Campli San Vito, Stephen A. Brewster, Frank Pollick, and Stuart White
(University of Glasgow, UK; Jaguar Land Rover, UK)
AQUBE: An Interactive Music Reproduction System for Aquariums
Daisuke Sasaki, Musashi Nakajima, and Yoshihiro Kanno
(Waseda University, Japan; Tokyo Polytechnic University, Japan)
Real-Time Mixed-Reality Telepresence via 3D Reconstruction with HoloLens and Commodity Depth Sensors
Michal Joachimczak, Juan Liu, and Hiroshi Ando
(National Institute of Information and Communications Technology, Japan; Osaka University, Japan)
Evaluating Robot Facial Expressions
Ruth Aylett, Frank Broz, Ayan Ghosh, Peter McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, and Alessandro Vinciarelli
(Heriot-Watt University, UK; University of Glasgow, UK)
Bimodal Feedback for In-Car Mid-Air Gesture Interaction
Gözel Shakeri, John H. Williamson, and Stephen A. Brewster
(University of Glasgow, UK)
A Modular, Multimodal Open-Source Virtual Interviewer Dialog Agent
Kirby Cofino, Vikram Ramanarayanan, Patrick Lange, David Pautler, David Suendermann-Oeft, and Keelan Evanini
(American University, USA; ETS at San Francisco, USA; ETS at Princeton, USA)
Wearable Interactive Display for the Local Positioning System (LPS)
Daniel M. Lofaro, Christopher Taylor, Ryan Tse, and Donald Sofge
(US Naval Research Lab, USA; George Mason University, USA; Thomas Jefferson High School for Science and Technology, USA)

Grand Challenge

From Individual to Group-Level Emotion Recognition: EmotiW 5.0
Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, and Tom Gedeon
(IIT Ropar, India; University of Canberra, Australia; University of Waterloo, Canada; Australian National University, Australia)
Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild
Dae Ha Kim, Min Kyu Lee, Dong Yoon Choi, and Byung Cheol Song
(Inha University, South Korea)
Modeling Multimodal Cues in a Deep Learning-Based Framework for Emotion Recognition in the Wild
Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, and Benoit Huet
(University of Modena and Reggio Emilia, Italy; EURECOM, France)
Group-Level Emotion Recognition using Transfer Learning from Face Identification
Alexandr Rassadin, Alexey Gruzdev, and Andrey Savchenko
(National Research University Higher School of Economics, Russia)
Group Emotion Recognition with Individual Facial Emotion CNNs and Global Image Based CNNs
Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, and Yu Qiao
(SIAT at Chinese Academy of Sciences, China; National Taiwan University, Taiwan)
Learning Supervised Scoring Ensemble for Emotion Recognition in the Wild
Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, and Yurong Chen
(Intel Labs, China)
Group Emotion Recognition in the Wild by Combining Deep Neural Networks for Facial Expression Classification and Scene-Context Analysis
Asad Abbas and Stephan K. Chalup
(University of Newcastle, Australia)
Temporal Multimodal Fusion for Video Emotion Classification in the Wild
Valentin Vielzeuf, Stéphane Pateux, and Frédéric Jurie
(Orange Labs, France; Normandy University, France; CNRS, France)
Audio-Visual Emotion Recognition using Deep Transfer Learning and Multiple Temporal Models
Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, and Dong-Yan Huang
(Panasonic R&D Center, Singapore; Central China Normal University, China; Institute for Infocomm Research at A*STAR, Singapore)
Multi-Level Feature Fusion for Group-Level Emotion Recognition
B. Balaji and V. Ramana Murthy Oruganti
(Amrita University at Coimbatore, India)
A New Deep-Learning Framework for Group Emotion Recognition
Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, and Bo Sun
(Beijing Normal University, China)
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, and Angelo Cangelosi
(University of Calabria, Italy; Plymouth University, UK; Istanbul Bilgi University, Turkey)
Emotion Recognition with Multimodal Features and Temporal Models
Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, and Yong Qin
(Renmin University of China, China; IBM Research Lab, China)
Group-Level Emotion Recognition using Deep Models on Image Scene, Faces, and Skeletons
Xin Guo, Luisa F. Polanía, and Kenneth E. Barner
(University of Delaware, USA; American Family Mutual Insurance Company, USA)

Doctoral Consortium

Towards Designing Speech Technology Based Assistive Interfaces for Children's Speech Therapy
Revathy Nayar
(University of Strathclyde, UK)
Social Robots for Motivation and Engagement in Therapy
Katie Winkle
(Bristol Robotics Laboratory, UK)
Immersive Virtual Eating and Conditioned Food Responses
Nikita Mae B. Tuanquin
(University of Canterbury, New Zealand)
Towards Edible Interfaces: Designing Interactions with Food
Tom Gayler
(Lancaster University, UK)
Towards a Computational Model for First Impressions Generation
Beatrice Biancardi
(CNRS, France; UPMC, France)
A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach
Esma Mansouri-Benssassi
(University of St. Andrews, UK)
Human-Centered Recognition of Children's Touchscreen Gestures
Alex Shaw
(University of Florida, USA)
Cross-Modality Interaction between EEG Signals and Facial Expression
Soheil Rayatdoost
(University of Geneva, Switzerland)
Hybrid Models for Opinion Analysis in Speech Interactions
Valentin Barriere
(Telecom ParisTech, France; University of Paris-Saclay, France)
Evaluating Engagement in Digital Narratives from Facial Data
Rui Huan
(University of Glasgow, UK)
Social Signal Extraction from Egocentric Photo-Streams
Maedeh Aghaei
(University of Barcelona, Spain)
Multimodal Language Grounding for Improved Human-Robot Collaboration: Exploring Spatial Semantic Representations in the Shared Space of Attention
Dimosthenis Kontogiorgos
(KTH, Sweden)

Workshop Summaries

ISIAA 2017: 1st International Workshop on Investigating Social Interactions with Artificial Agents (Workshop Summary)
Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, and Magalie Ochs
(Aix-Marseille University, France; CNRS, France; University of Avignon, France)
WOCCI 2017: 6th International Workshop on Child Computer Interaction (Workshop Summary)
Keelan Evanini, Maryam Najafian, Saeid Safavi, and Kay Berkling
(ETS at Princeton, USA; Massachusetts Institute of Technology, USA; University of Hertfordshire, UK; DHBW Karlsruhe, Germany)
MIE 2017: 1st International Workshop on Multimodal Interaction for Education (Workshop Summary)
Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, and Erica Volta
(University of Genoa, Italy; IIT Genoa, Italy; University College London, UK)
Playlab: Telling Stories with Technology (Workshop Summary)
Julie Williamson, Tom Flint, and Chris Speed
(University of Glasgow, UK; Edinburgh Napier University, UK; Edinburgh College of Art, UK)
MHFI 2017: 2nd International Workshop on Multisensorial Approaches to Human-Food Interaction (Workshop Summary)
Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, and Charles Spence
(BI Norwegian Business School, Norway; University of Twente, Netherlands; University of Sussex, UK; Yokohama National University, Japan; TU Delft, Netherlands; University of Oxford, UK)
