18th ACM International Conference on Multimodal Interaction (ICMI 2016), November 12–16, 2016, Tokyo, Japan

ICMI 2016 – Proceedings


Frontmatter

Title Page
Message from the Chairs
ICMI 2016 Organization
ICMI 2016 Sponsors and Supporters

Invited Talks

Understanding People by Tracking Their Word Use (Keynote)
James W. Pennebaker
(University of Texas at Austin, USA)
Learning to Generate Images and Their Descriptions (Keynote)
Richard Zemel
(University of Toronto, Canada)
Embodied Media: Expanding Human Capacity via Virtual Reality and Telexistence (Keynote)
Susumu Tachi
(University of Tokyo, Japan)
Help Me If You Can: Towards Multiadaptive Interaction Platforms (ICMI Awardee Talk)
Wolfgang Wahlster
(DFKI, Germany)

Main Track

Oral Session 1: Multimodal Social Agents
Sun, Nov 13, 10:45 - 12:25, Miraikan Hall (Chair: Elisabeth André (University of Augsburg))

Trust Me: Multimodal Signals of Trustworthiness
Gale Lucas, Giota Stratou, Shari Lieblich, and Jonathan Gratch
(University of Southern California, USA; Temple College, USA)
Semi-situated Learning of Verbal and Nonverbal Content for Repeated Human-Robot Interaction
Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, and Jill Fain Lehman
(Disney Research, USA)
Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Audio-Visual Feedback Tokens
Catharine Oertel, José Lopes, Yu Yu, Kenneth A. Funes Mora, Joakim Gustafson, Alan W. Black, and Jean-Marc Odobez
(KTH, Sweden; EPFL, Switzerland; Idiap, Switzerland; Carnegie Mellon University, USA)
Sequence-Based Multimodal Behavior Modeling for Social Agents
Soumia Dermouche and Catherine Pelachaud
(CNRS, France; Telecom ParisTech, France)

Oral Session 2: Physiological and Tactile Modalities
Sun, Nov 13, 14:00 - 15:30, Miraikan Hall (Chair: Jonathan Gratch (University of Southern California))

Adaptive Review for Mobile MOOC Learning via Implicit Physiological Signal Sensing
Phuong Pham and Jingtao Wang
(University of Pittsburgh, USA)
Visuotactile Integration for Depth Perception in Augmented Reality
Nina Rosa, Wolfgang Hürst, Peter Werkhoven, and Remco Veltkamp
(Utrecht University, Netherlands)
Exploring Multimodal Biosignal Features for Stress Detection during Indoor Mobility
Kyriaki Kalimeri and Charalampos Saitis
(ISI Foundation, Italy; TU Berlin, Germany)
An IDE for Multimodal Controls in Smart Buildings
Sebastian Peters, Jan Ole Johanssen, and Bernd Bruegge
(TU Munich, Germany)

Poster Session 1
Sun, Nov 13, 16:00 - 18:00, Conference Room 3

Personalized Unknown Word Detection in Non-native Language Reading using Eye Gaze
Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, and Satoshi Nakamura
(Nara Institute of Science and Technology, Japan)
Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired
Daniel McDuff
(Affectiva, USA; Microsoft Research, USA)
Do Speech Features for Detecting Cognitive Load Depend on Specific Languages?
Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, and Ningjiu Tang
(Sichuan University, China; University of Maryland, Baltimore County, USA)
Training on the Job: Behavioral Analysis of Job Interviews in Hospitality
Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, and Daniel Gatica-Perez
(Idiap, Switzerland; EPFL, Switzerland; University of Lausanne, Switzerland)
Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions
Yelin Kim and Emily Mower Provost
(SUNY Albany, USA; University of Michigan, USA)
Semi-supervised Model Personalization for Improved Detection of Learner's Emotional Engagement
Nese Alyuz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, and Asli Arslan Esme
(Intel, Turkey; Bogazici University, Turkey)
Driving Maneuver Prediction using Car Sensor and Driver Physiological Signals
Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda, Chihiro Suga, and Kikuo Fujimura
(Honda Research Institute, USA)
On Leveraging Crowdsourced Data for Automatic Perceived Stress Detection
Jonathan Aigrain, Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson, Marcin Detyniecki, and Mohamed Chetouani
(UPMC, France; CNRS, France; LIP6, France; Polish Academy of Sciences, Poland)
Investigating the Impact of Automated Transcripts on Non-native Speakers' Listening Comprehension
Xun Cao, Naomi Yamashita, and Toru Ishida
(Kyoto University, Japan; NTT, Japan)
Speaker Impact on Audience Comprehension for Academic Presentations
Keith Curtis, Gareth J. F. Jones, and Nick Campbell
(Dublin City University, Ireland; Trinity College Dublin, Ireland)
EmoReact: A Multimodal Approach and Dataset for Recognizing Emotional Responses in Children
Behnaz Nojavanasghari, Tadas Baltrušaitis, Charles E. Hughes, and Louis-Philippe Morency
(University of Central Florida, USA; Carnegie Mellon University, USA)
Bimanual Input for Multiscale Navigation with Pressure and Touch Gestures
Sebastien Pelurson and Laurence Nigay
(University of Grenoble, France; LIG, France; CNRS, France)
Intervention-Free Selection using EEG and Eye Tracking
Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, and Tanja Schultz
(University of Bremen, Germany; KIT, Germany; Fraunhofer IOSB, Germany)
Automated Scoring of Interview Videos using Doc2Vec Multimodal Feature Extraction Paradigm
Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle Martin-Raugh, Harrison Kell, Chong Min Lee, and Su-Youn Yoon
(Educational Testing Service, USA)
Estimating Communication Skills using Dialogue Acts and Nonverbal Features in Multiple Discussion Datasets
Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, and Katsumi Nitta
(Tokyo Institute of Technology, Japan; Seikei University, Japan; Osaka Prefecture University, Japan; Ritsumeikan University, Japan)
Multi-Sensor Modeling of Teacher Instructional Segments in Live Classrooms
Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew M. Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, and Sidney K. D'Mello
(University of Notre Dame, USA; University of Memphis, USA; University of Wisconsin-Madison, USA; University of Pittsburgh, USA)

Oral Session 3: Groups, Teams, and Meetings
Mon, Nov 14, 10:20 - 12:00, Miraikan Hall (Chair: Nick Campbell (Trinity College Dublin))

Meeting Extracts for Discussion Summarization Based on Multimodal Nonverbal Information
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
(Seikei University, Japan)
Getting to Know You: A Multimodal Investigation of Team Behavior and Resilience to Stress
Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, and Stefan Scherer
(University of Southern California, USA; University of California at San Francisco, USA; Army Research Lab at Los Angeles, USA)
Measuring the Impact of Multimodal Behavioural Feedback Loops on Social Interactions
Ionut Damian, Tobias Baur, and Elisabeth André
(University of Augsburg, Germany)
Analyzing Mouth-Opening Transition Pattern for Predicting Next Speaker in Multi-party Meetings
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
(NTT, Japan)

Oral Session 4: Personality and Emotion
Mon, Nov 14, 13:20 - 14:40, Miraikan Hall (Chair: Jill Fain Lehman (Disney Research))

Automatic Recognition of Self-Reported and Perceived Emotion: Does Joint Modeling Help?
Biqiao Zhang, Georg Essl, and Emily Mower Provost
(University of Michigan, USA; University of Wisconsin-Milwaukee, USA)
Personality Classification and Behaviour Interpretation: An Approach Based on Feature Categories
Sheng Fang, Catherine Achard, and Séverine Dubuisson
(UPMC, France)
Multiscale Kernel Locally Penalised Discriminant Analysis Exemplified by Emotion Recognition in Speech
Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, and Björn Schuller
(Southeast University, China; TU Munich, Germany; University of Passau, Germany; Imperial College London, UK)
Estimating Self-Assessed Personality from Body Movements and Proximity in Crowded Mingling Scenarios
Laura Cabrera-Quiros, Ekin Gedik, and Hayley Hung
(Delft University of Technology, Netherlands; Instituto Tecnológico de Costa Rica, Costa Rica)

Poster Session 2
Mon, Nov 14, 15:00 - 17:00, Conference Room 3

Deep Learning Driven Hypergraph Representation for Image-Based Emotion Recognition
Yuchi Huang and Hanqing Lu
(Chinese Academy of Sciences, China)
Towards a Listening Agent: A System Generating Audiovisual Laughs and Smiles to Show Interest
Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, and Thierry Dutoit
(University of Mons, Belgium; Trinity College Dublin, Ireland)
Sound Emblems for Affective Multimodal Output of a Robotic Tutor: A Perception Study
Helen Hastie, Pasquale Dente, Dennis Küster, and Arvid Kappas
(Heriot-Watt University, UK; Jacobs University, Germany)
Automatic Detection of Very Early Stage of Dementia through Multimodal Interaction with Computer Avatars
Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, and Satoshi Nakamura
(Nara Institute of Science and Technology, Japan; Osaka University Health Care Center, Japan)
MobileSSI: Asynchronous Fusion for Social Signal Interpretation in the Wild
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André
(University of Augsburg, Germany)
Language Proficiency Assessment of English L2 Speakers Based on Joint Analysis of Prosody and Native Language
Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, and Björn Schuller
(Imperial College London, UK; Nuance Communications, Germany; University of Passau, Germany; University of Erlangen-Nuremberg, Germany)
Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution
Emad Barsoum, Cha Zhang, Cristian Canton Ferrer, and Zhengyou Zhang
(Microsoft Research, USA)
Deep Multimodal Fusion for Persuasiveness Prediction
Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrušaitis, and Louis-Philippe Morency
(University of Central Florida, USA; Carnegie Mellon University, USA)
Comparison of Three Implementations of HeadTurn: A Multimodal Interaction Technique with Gaze and Head Turns
Oleg Špakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, and Roope Raisamo
(University of Tampere, Finland)
Effects of Multimodal Cues on Children's Perception of Uncanniness in a Social Robot
Maike Paetzel, Christopher Peters, Ingela Nyström, and Ginevra Castellano
(Uppsala University, Sweden; KTH, Sweden)
Multimodal Feedback for Finger-Based Interaction in Mobile Augmented Reality
Wolfgang Hürst and Kevin Vriens
(Utrecht University, Netherlands; TWNKLS, Netherlands)
Smooth Eye Movement Interaction using EOG Glasses
Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, and Woontack Woo
(Georgia Institute of Technology, USA; KAIST, South Korea; Keio University, Japan; Max Planck Institute for Informatics, Germany)
Active Speaker Detection with Audio-Visual Co-training
Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, and Hugo Van hamme
(KU Leuven, Belgium; iMinds, Belgium)
Detecting Emergent Leader in a Meeting Environment using Nonverbal Visual Features Only
Cigdem Beyan, Nicolò Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, and Vittorio Murino
(IIT Genoa, Italy; McGill University, Canada; University of Venice, Italy; Sapienza University of Rome, Italy; University of Turin, Italy)
Stressful First Impressions in Job Interviews
Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, and Daniel Gatica-Perez
(Fondazione Bruno Kessler, Italy; CIMeC, Italy; Idiap, Switzerland; EPFL, Switzerland; EIT Digital, Italy)

Oral Session 5: Gesture, Touch, and Haptics
Tue, Nov 15, 11:00 - 12:30, Miraikan Hall (Chair: Sharon Oviatt (Incaa Designs))

Analyzing the Articulation Features of Children's Touchscreen Gestures
Alex Shaw and Lisa Anthony
(University of Florida, USA)
Reach Out and Touch Me: Effects of Four Distinct Haptic Technologies on Affective Touch in Virtual Reality
Imtiaj Ahmed, Ville Harjunen, Giulio Jacucci, Eve Hoggan, Niklas Ravaja, and Michiel M. Spapé
(Aalto University, Finland; University of Helsinki, Finland; Liverpool Hope University, UK)
Using Touchscreen Interaction Data to Predict Cognitive Workload
Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, and Wolfgang Rosenstiel
(Leibniz-Institut für Wissensmedien, Germany; University of Tübingen, Germany)
Exploration of Virtual Environments on Tablet: Comparison between Tactile and Tangible Interaction Techniques
Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, and Mehdi Ammi
(CNRS, France; University of Paris-Saclay, France)

Oral Session 6: Skill Training and Assessment
Tue, Nov 15, 14:00 - 15:10, Miraikan Hall (Chair: Catherine Pelachaud (ISIR, University of Paris 6))

Understanding the Impact of Personal Feedback on Face-to-Face Interactions in the Workplace
Afra Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, and Fahim Kawsar
(Nokia Bell Labs, Ireland; Nokia Bell Labs, Belgium)
Asynchronous Video Interviews vs. Face-to-Face Interviews for Communication Skill Measurement: A Systematic Study
Sowmya Rasipuram, Pooja Rao S. B., and Dinesh Babu Jayagopi
(IIIT Bangalore, India)
Context and Cognitive State Triggered Interventions for Mobile MOOC Learning
Xiang Xiao and Jingtao Wang
(University of Pittsburgh, USA)
Native vs. Non-native Language Fluency Implications on Multimodal Interaction for Interpersonal Skills Training
Mathieu Chollet, Helmut Prendinger, and Stefan Scherer
(University of Southern California, USA; National Institute of Informatics, Japan)

Demonstrations

Demo Session 1
Sun, Nov 13, 16:00 - 18:00, Innovation Hall

Social Signal Processing for Dummies
Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André
(University of Augsburg, Germany)
Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization
Michael Cohen, Yousuke Nagayama, and Bektur Ryskeldiev
(University of Aizu, Japan)
Towards a Multimodal Adaptive Lighting System for Visually Impaired Children
Euan Freeman, Graham Wilson, and Stephen Brewster
(University of Glasgow, UK)
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
Graham Wilson, Euan Freeman, and Stephen Brewster
(University of Glasgow, UK)
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano
(University of Southern California, USA; Honda Research Institute, Japan)
A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission
Helen Hastie, Xingkun Liu, and Pedro Patron
(Heriot-Watt University, UK; SeeByte, UK)
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André
(University of Augsburg, Germany)

Demo Session 2
Mon, Nov 14, 15:00 - 17:00, Innovation Hall

Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective
Fiona Dermody and Alistair Sutherland
(Dublin City University, Ireland)
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya
(Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan)
A Telepresence System using a Flexible Textile Display
Kana Kushida and Hideyuki Nakanishi
(Osaka University, Japan)
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka
(Yamagata University, Japan)
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu
(National Cheng Kung University, Taiwan)
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara
(Kyoto University, Japan)
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot
(University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands)
Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects
Anmol Srivastava and Pradeep Yammiyavar
(IIT Guwahati, India)
AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements
Phuong Pham and Jingtao Wang
(University of Pittsburgh, USA)
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick
(Inmerssion, USA)

EmotiW Challenge
Sat, Nov 12, 09:00 - 17:30, Time24: Room 183

EmotiW 2016: Video and Group-Level Emotion Recognition Challenges
Abhinav Dhall, Roland Goecke, Jyoti Joshi, Jesse Hoey, and Tom Gedeon
(University of Waterloo, Canada; University of Canberra, Australia; Australian National University, Australia)
Emotion Recognition in the Wild from Videos using Images
Sarah Adel Bargal, Emad Barsoum, Cristian Canton Ferrer, and Cha Zhang
(Boston University, USA; Microsoft Research, USA)
A Deep Look into Group Happiness Prediction from Images
Aleksandra Cerekovic
Video-Based Emotion Recognition using CNN-RNN and C3D Hybrid Networks
Yin Fan, Xiangju Lu, Dian Li, and Yuanliu Liu
(iQiyi, China)
LSTM for Dynamic Emotion and Group Emotion Recognition in the Wild
Bo Sun, Qinglan Wei, Liandong Li, Qihua Xu, Jun He, and Lejun Yu
(Beijing Normal University, China)
Multi-clue Fusion for Emotion Recognition in the Wild
Jingwei Yan, Wenming Zheng, Zhen Cui, Chuangao Tang, Tong Zhang, Yuan Zong, and Ning Sun
(Southeast University, China; Nanjing University of Posts and Telecommunications, China)
Multi-view Common Space Learning for Emotion Recognition in the Wild
Jianlong Wu, Zhouchen Lin, and Hongbin Zha
(Peking University, China; Shanghai Jiao Tong University, China)
HoloNet: Towards Robust Emotion Recognition in the Wild
Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Liang Sha, and Yurong Chen
(Intel Labs, China; Beihang University, China)
Group Happiness Assessment using Geometric Features and Dataset Balancing
Vassilios Vonikakis, Yasin Yazici, Viet Dung Nguyen, and Stefan Winkler
(Advanced Digital Sciences Center at University of Illinois, Singapore; Nanyang Technological University, Singapore)
Happiness Level Prediction with Sequential Inputs via Multiple Regressions
Jianshu Li, Sujoy Roy, Jiashi Feng, and Terence Sim
(National University of Singapore, Singapore; SAP, Singapore)
Video Emotion Recognition in the Wild Based on Fusion of Multimodal Features
Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, and Yong Qin
(Renmin University of China, China; IBM Research, China)
Wild Wild Emotion: A Multimodal Ensemble Approach
John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, and Emily Mower Provost
(University of Michigan, USA; SUNY Albany, USA)
Audio and Face Video Emotion Recognition in the Wild using Deep Neural Networks and Small Datasets
Wan Ding, Mingyu Xu, Dongyan Huang, Weisi Lin, Minghui Dong, Xinguo Yu, and Haizhou Li
(Central China Normal University, China; University of British Columbia, Canada; A*STAR, Singapore; Nanyang Technological University, Singapore; National University of Singapore, Singapore)
Automatic Emotion Recognition in the Wild using an Ensemble of Static and Dynamic Representations
Mostafa Mehdipour Ghazi and Hazım Kemal Ekenel
(Sabanci University, Turkey; Istanbul Technical University, Turkey)

Doctoral Consortium
Sat, Nov 12, 09:00 - 17:30, Time24: Room 182 (Chairs: Dirk Heylen (University of Twente); Samer Al Moubayed (KTH))

The Influence of Appearance and Interaction Strategy of a Social Robot on the Feeling of Uncanniness in Humans
Maike Paetzel
(Uppsala University, Sweden)
Viewing Support System for Multi-view Videos
Xueting Wang
(Nagoya University, Japan)
Engaging Children with Autism in a Shape Perception Task using a Haptic Force Feedback Interface
Alix Pérusseau-Lambert
(CEA LIST, France)
Modeling User's Decision Process through Gaze Behavior
Kei Shimonishi
(Kyoto University, Japan)
Multimodal Positive Computing System for Public Speaking with Real-Time Feedback
Fiona Dermody
(Dublin City University, Ireland)
Prediction/Assessment of Communication Skill using Multimodal Cues in Social Interactions
Sowmya Rasipuram
(IIIT Bangalore, India)
Player/Avatar Body Relations in Multimodal Augmented Reality Games
Nina Rosa
(Utrecht University, Netherlands)
Computational Model for Interpersonal Attitude Expression
Soumia Dermouche
(CNRS, France; Telecom ParisTech, France)
Assessing Symptoms of Excessive SNS Usage Based on User Behavior and Emotion
Ploypailin Intapong, Tipporn Laohakangvalvit, Tiranee Achalakul, and Michiko Ohkura
(Shibaura Institute of Technology, Japan; King Mongkut’s University of Technology Thonburi, Thailand)
Kawaii Feeling Estimation by Product Attributes and Biological Signals
Tipporn Laohakangvalvit, Tiranee Achalakul, and Michiko Ohkura
(Shibaura Institute of Technology, Japan; King Mongkut’s University of Technology Thonburi, Thailand)
Multimodal Sensing of Affect Intensity
Shalini Bhatia
(University of Canberra, Australia)
Enriching Student Learning Experience using Augmented Reality and Smart Learning Objects
Anmol Srivastava
(IIT Guwahati, India)
Automated Recognition of Facial Expressions Authenticity
Krystian Radlak and Bogdan Smolka
(Silesian University of Technology, Poland)
Improving the Generalizability of Emotion Recognition Systems: Towards Emotion Recognition in the Wild
Biqiao Zhang
(University of Michigan, USA)

Summaries

Grand Challenge Summary
Tue, Nov 15, 16:00 - 16:15, Miraikan Hall

Emotion Recognition in the Wild Challenge 2016
Abhinav Dhall, Roland Goecke, Jyoti Joshi, and Tom Gedeon
(University of Waterloo, Canada; University of Canberra, Australia; Australian National University, Australia)

Workshop Summaries

1st International Workshop on Embodied Interaction with Smart Environments (Workshop Summary)
Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, and Britta Wrede
(Bielefeld University, Germany)
ASSP4MI2016: 2nd International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (Workshop Summary)
Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, and Mohamed Chetouani
(University of Twente, Netherlands; Kyoto University, Japan; UPMC, France)
ERM4CT 2016: 2nd International Workshop on Emotion Representations and Modelling for Companion Systems (Workshop Summary)
Kim Hartmann, Ingo Siegert, Ali Albert Salah, and Khiet P. Truong
(University of Magdeburg, Germany; Bogazici University, Turkey; University of Twente, Netherlands)
International Workshop on Multimodal Virtual and Augmented Reality (Workshop Summary)
Wolfgang Hürst, Daisuke Iwai, and Prabhakaran Balakrishnan
(Utrecht University, Netherlands; Osaka University, Japan; University of Texas at Dallas, USA)
International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents (Workshop Summary)
Mohamed Chetouani, Salvatore M. Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, and Gentiane Venture
(UPMC, France; Paris 8 University, France; Uppsala University, Sweden; SoftBank Robotics, France; Tokyo University of Agriculture and Technology, Japan)
1st International Workshop on Multi-sensorial Approaches to Human-Food Interaction (Workshop Summary)
Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, and Gijs Huisman
(University of Twente, Netherlands; BI Norwegian Business School, Norway; University of Oxford, UK; Imagineering Institute, Malaysia)
International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction (Workshop Summary)
Ronald Böck, Francesca Bonin, Nick Campbell, and Ronald Poppe
(University of Magdeburg, Germany; IBM Research, Ireland; Trinity College Dublin, Ireland; Utrecht University, Netherlands)
