ICMI 2016 – Author Index
André, Elisabeth |
ICMI '16-DEMO: "Laughter Detection in the ..."
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André (University of Augsburg, Germany) In this demo, we present MobileSSI, a flexible software framework for Android and embedded Linux platforms, that provides developers with tools to record, analyze and recognize human behavior in real-time on mobile devices. To illustrate the benefits of the framework for the analysis of social group dynamics in naturalistic mobile settings, we present a demonstrator for laughter recognition that was implemented with MobileSSI. The demonstrator makes use of smartphones for sensing and analyzing data and employs smartwatches and tablets for visualizing the results and providing user feedback. To enable communication within the resulting ecology of mobile devices, MobileSSI includes a web socket plugin. @InProceedings{ICMI16p406, author = {Simon Flutura and Johannes Wagner and Florian Lingenfelser and Andreas Seiderer and Elisabeth André}, title = {Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {406--407}, doi = {}, year = {2016}, } Info ICMI '16-DEMO: "Social Signal Processing for ..." Social Signal Processing for Dummies Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André (University of Augsburg, Germany) We introduce SSJ Creator, a modern Android GUI enabling users to design and execute social signal processing pipelines using nothing but their smartphones and without writing a single line of code. It is based on a modular Java-based social signal processing framework (SSJ), which is able to perform realtime multimodal behaviour analysis on Android devices using both device internal and external sensors. @InProceedings{ICMI16p394, author = {Ionut Damian and Michael Dietz and Frank Gaibler and Elisabeth André}, title = {Social Signal Processing for Dummies}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {394--395}, doi = {}, year = {2016}, } Info ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..." Ask Alice: An Artificial Retrieval of Information Agent Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. 
@InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
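The Ask Alice entry above notes that the behaviour generation module can deal gracefully with interruptions of the agent. A minimal sketch of what such barge-in handling can look like is given below; the class and the tts object are invented for illustration and are not the ARIA framework's actual interfaces.

# Hypothetical sketch of barge-in handling for an embodied conversational agent.
# The names used here (InterruptibleSpeaker, tts) are illustrative only; they are
# not the ARIA framework's real API.
import threading

class InterruptibleSpeaker:
    def __init__(self, tts):
        self.tts = tts                        # assumed text-to-speech wrapper with say()/stop()
        self.user_speaking = threading.Event()

    def on_user_speech_started(self):
        """Called by the audio front end when the user barges in."""
        self.user_speaking.set()

    def speak(self, utterance):
        """Deliver an utterance sentence by sentence so it can be cut short."""
        for sentence in utterance.split(". "):
            if self.user_speaking.is_set():
                self.tts.stop()               # stop talking instead of speaking over the user
                self.user_speaking.clear()
                return "yielded"              # the dialogue manager can re-plan from here
            self.tts.say(sentence)
        return "completed"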
Artstein, Ron |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
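The ranking difference mentioned in the entry above can be made concrete. The sketch below shows one simple way to score social influence, as the sum of absolute rank changes between a participant's individual ranking and the post-discussion team ranking; the specific metric and the example items are assumptions for illustration, not details taken from the demo paper.

# Illustration of a rank-difference influence score for a survival task.
# The metric (Spearman footrule distance) and the example items are invented
# for this sketch.

def influence_score(individual_rank, team_rank):
    """Sum of absolute rank changes; both arguments map item -> rank (1 = most important)."""
    assert individual_rank.keys() == team_rank.keys()
    return sum(abs(individual_rank[item] - team_rank[item]) for item in individual_rank)

before = {"water": 1, "mirror": 2, "map": 3, "flashlight": 4}
after_discussion = {"mirror": 1, "water": 2, "flashlight": 3, "map": 4}
print(influence_score(before, after_discussion))  # -> 4; larger values mean more influence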
Aylett, Matthew |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Baur, Tobias |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Boberg, Jill |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
Brewster, Stephen |
ICMI '16-DEMO: "Multimodal Affective Feedback: ..."
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
Graham Wilson, Euan Freeman, and Stephen Brewster (University of Glasgow, UK) In this paper we describe a demonstration of our multimodal affective feedback designs, used in research to expand the emotional expressivity of interfaces. The feedback leverages inherent associations and reactions to thermal, vibrotactile, auditory and abstract visual designs to convey a range of affective states without any need for learning feedback encoding. All combinations of the different feedback channels can be utilised, depending on which combination best conveys a given state. All the signals are generated from a mobile phone augmented with thermal and vibrotactile stimulators, which will be available to conference visitors to see, touch, hear and, importantly, feel. @InProceedings{ICMI16p400, author = {Graham Wilson and Euan Freeman and Stephen Brewster}, title = {Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {400--401}, doi = {}, year = {2016}, }
ICMI '16-DEMO: "Towards a Multimodal Adaptive ..."
Towards a Multimodal Adaptive Lighting System for Visually Impaired Children
Euan Freeman, Graham Wilson, and Stephen Brewster (University of Glasgow, UK) Visually impaired children often have difficulty with everyday activities like locating items, e.g. favourite toys, and moving safely around the home. It is important to assist them during activities like these because it can promote independence from adults and helps to develop skills. Our demonstration shows our work towards a multimodal sensing and output system that adapts the lighting conditions at home to help visually impaired children with such tasks. @InProceedings{ICMI16p398, author = {Euan Freeman and Graham Wilson and Stephen Brewster}, title = {Towards a Multimodal Adaptive Lighting System for Visually Impaired Children}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {398--399}, doi = {}, year = {2016}, } |
Cafaro, Angelo |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Camacho, Adriana |
ICMI '16-DEMO: "Young Merlin: An Embodied ..."
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick (Inmerssion, USA) This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire—built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time. @InProceedings{ICMI16p425, author = {Ivan Gris and Diego A. Rivera and Alex Rayon and Adriana Camacho and David Novick}, title = {Young Merlin: An Embodied Conversational Agent in Virtual Reality}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {425--426}, doi = {}, year = {2016}, } |
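The Young Merlin entry mentions a declarative approach that supports animations, markup language, and gesture recognition. Purely as an illustration of what a declarative scene description can look like, here is a hypothetical format with a tiny interpreter; the field names and the example dialogue are invented and do not reflect Inmerssion's actual specification.

# Hypothetical declarative scene description for an embodied conversational agent:
# each state pairs speech and an animation with the user intents that trigger a
# transition. The format is invented for illustration only.
SCENE = {
    "greeting": {"say": "Welcome, apprentice. Are you ready for the trial by fire?",
                 "animate": "wave", "on": {"yes": "trial_intro", "no": "reassure"}},
    "reassure": {"say": "There is nothing to fear. Shall we begin?",
                 "animate": "nod", "on": {"yes": "trial_intro"}},
    "trial_intro": {"say": "Then follow me.", "animate": "walk_forward", "on": {}},
}

def run(scene, recognize_intent, perform):
    """Tiny interpreter: perform each state, then branch on the recognized intent."""
    state = "greeting"
    while True:
        node = scene[state]
        perform(node["say"], node["animate"])   # assumed speech + animation callback
        if not node["on"]:
            break
        state = node["on"].get(recognize_intent(), state)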
Chen, Yu-Cheng |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system which integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system solves the common problems in most VR systems: (1) the very limited playing area due to the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive way of controlling virtual objects. We use Unreal Engine 4 to develop an immersive VR game with 6 interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select the virtual objects to grab and throw with the Myo armband. The experiment shows that our approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
Cohen, Michael |
ICMI '16-DEMO: "Metering "Black Holes": ..."
Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization
Michael Cohen, Yousuke Nagayama, and Bektur Ryskeldiev (University of Aizu, Japan) We have developed a phantom GUI emulator that can read from otherwise stand-alone applications, complementing a separate parallel program that can write to such applications. In conjunction with the “Alice” desktop VR system and the previously developed “Collaborative Virtual Environment”, both of which are freely available, virtual scene exploration can synchronize with multimodal peers, including panoramic browsers, spatial sound renderers, and smartphone and tablet interfaces. @InProceedings{ICMI16p396, author = {Michael Cohen and Yousuke Nagayama and Bektur Ryskeldiev}, title = {Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {396--397}, doi = {}, year = {2016}, } |
Coutinho, Eduardo |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Damian, Ionut |
ICMI '16-DEMO: "Social Signal Processing for ..."
Social Signal Processing for Dummies
Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André (University of Augsburg, Germany) We introduce SSJ Creator, a modern Android GUI enabling users to design and execute social signal processing pipelines using nothing but their smartphones and without writing a single line of code. It is based on a modular Java-based social signal processing framework (SSJ), which is able to perform realtime multimodal behaviour analysis on Android devices using both device internal and external sensors. @InProceedings{ICMI16p394, author = {Ionut Damian and Michael Dietz and Frank Gaibler and Elisabeth André}, title = {Social Signal Processing for Dummies}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {394--395}, doi = {}, year = {2016}, } Info |
Dermody, Fiona |
ICMI '16-DEMO: "Multimodal System for Public ..."
Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective
Fiona Dermody and Alistair Sutherland (Dublin City University, Ireland) A multimodal system for public speaking with real time feedback has been developed using the Microsoft Kinect. The system has been developed within the paradigm of positive computing which focuses on designing for user wellbeing. The system detects body pose, facial expressions and voice. Visual feedback is displayed to users on their speaking performance in real time. Users can view statistics on their utilisation of speaking modalities. The system also has a mentor avatar which appears alongside the user avatar to facilitate user training. Autocue mode allows a user to practice with set text from a chosen speech. @InProceedings{ICMI16p408, author = {Fiona Dermody and Alistair Sutherland}, title = {Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {408--409}, doi = {}, year = {2016}, } |
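The entry above states that users can view statistics on their utilisation of speaking modalities. A minimal sketch of how per-frame detections could be aggregated into such statistics follows; the frame layout is an assumption for illustration, not the paper's actual data format.

# Illustrative aggregation of per-frame detections (e.g. from the Kinect) into
# modality-utilisation percentages for a public-speaking trainer.
# The frame dictionary layout is an assumption made for this sketch.

def utilisation(frames):
    """frames: list of dicts such as {"gesturing": bool, "facing_audience": bool, "speaking": bool}."""
    if not frames:
        return {}
    return {key: 100.0 * sum(frame[key] for frame in frames) / len(frames)
            for key in frames[0]}

frames = [
    {"gesturing": True,  "facing_audience": True,  "speaking": True},
    {"gesturing": False, "facing_audience": True,  "speaking": True},
    {"gesturing": True,  "facing_audience": False, "speaking": False},
    {"gesturing": False, "facing_audience": True,  "speaking": True},
]
print(utilisation(frames))  # -> {'gesturing': 50.0, 'facing_audience': 75.0, 'speaking': 75.0}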
Dermouche, Soumia |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Dietz, Michael |
ICMI '16-DEMO: "Social Signal Processing for ..."
Social Signal Processing for Dummies
Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André (University of Augsburg, Germany) We introduce SSJ Creator, a modern Android GUI enabling users to design and execute social signal processing pipelines using nothing but their smartphones and without writing a single line of code. It is based on a modular Java-based social signal processing framework (SSJ), which is able to perform realtime multimodal behaviour analysis on Android devices using both device internal and external sensors. @InProceedings{ICMI16p394, author = {Ionut Damian and Michael Dietz and Frank Gaibler and Elisabeth André}, title = {Social Signal Processing for Dummies}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {394--395}, doi = {}, year = {2016}, } Info |
Durieu, Laurent |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Flutura, Simon |
ICMI '16-DEMO: "Laughter Detection in the ..."
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André (University of Augsburg, Germany) In this demo, we present MobileSSI, a flexible software framework for Android and embedded Linux platforms that provides developers with tools to record, analyze and recognize human behavior in real-time on mobile devices. To illustrate the benefits of the framework for the analysis of social group dynamics in naturalistic mobile settings, we present a demonstrator for laughter recognition that was implemented with MobileSSI. The demonstrator makes use of smartphones for sensing and analyzing data and employs smartwatches and tablets for visualizing the results and providing user feedback. To enable communication within the resulting ecology of mobile devices, MobileSSI includes a web socket plugin. @InProceedings{ICMI16p406, author = {Simon Flutura and Johannes Wagner and Florian Lingenfelser and Andreas Seiderer and Elisabeth André}, title = {Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {406--407}, doi = {}, year = {2016}, } Info
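MobileSSI's actual transport is the web socket plugin mentioned in the abstract. Purely to illustrate the underlying idea of pushing recognition events from the sensing phone to watches and tablets on the same network, here is a sketch using a plain UDP broadcast from the Python standard library; the event fields and the port are assumptions.

# Illustration only: push laughter-recognition events to companion devices on the
# same LAN (e.g. a smartwatch or tablet running a visualization). MobileSSI itself
# uses a web socket plugin; this sketch substitutes a simple UDP broadcast.
import json
import socket
import time

PORT = 9999  # arbitrary port chosen for this example

def broadcast_event(label, confidence):
    """Sensing-device side: announce a recognition result to all listeners."""
    event = {"time": time.time(), "label": label, "confidence": confidence}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(event).encode("utf-8"), ("255.255.255.255", PORT))

def listen():
    """Companion-device side: print incoming events for visualization."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        while True:
            data, _addr = sock.recvfrom(4096)
            print(json.loads(data.decode("utf-8")))

# broadcast_event("laughter", 0.87)   # on the sensing phone
# listen()                            # on the watch or tablet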
Freeman, Euan |
ICMI '16-DEMO: "Multimodal Affective Feedback: ..."
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
Graham Wilson, Euan Freeman, and Stephen Brewster (University of Glasgow, UK) In this paper we describe a demonstration of our multimodal affective feedback designs, used in research to expand the emotional expressivity of interfaces. The feedback leverages inherent associations and reactions to thermal, vibrotactile, auditory and abstract visual designs to convey a range of affective states without any need for learning feedback encoding. All combinations of the different feedback channels can be utilised, depending on which combination best conveys a given state. All the signals are generated from a mobile phone augmented with thermal and vibrotactile stimulators, which will be available to conference visitors to see, touch, hear and, importantly, feel. @InProceedings{ICMI16p400, author = {Graham Wilson and Euan Freeman and Stephen Brewster}, title = {Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {400--401}, doi = {}, year = {2016}, }
ICMI '16-DEMO: "Towards a Multimodal Adaptive ..."
Towards a Multimodal Adaptive Lighting System for Visually Impaired Children
Euan Freeman, Graham Wilson, and Stephen Brewster (University of Glasgow, UK) Visually impaired children often have difficulty with everyday activities like locating items, e.g. favourite toys, and moving safely around the home. It is important to assist them during activities like these because it can promote independence from adults and helps to develop skills. Our demonstration shows our work towards a multimodal sensing and output system that adapts the lighting conditions at home to help visually impaired children with such tasks. @InProceedings{ICMI16p398, author = {Euan Freeman and Graham Wilson and Stephen Brewster}, title = {Towards a Multimodal Adaptive Lighting System for Visually Impaired Children}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {398--399}, doi = {}, year = {2016}, } |
Gaibler, Frank |
ICMI '16-DEMO: "Social Signal Processing for ..."
Social Signal Processing for Dummies
Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André (University of Augsburg, Germany) We introduce SSJ Creator, a modern Android GUI enabling users to design and execute social signal processing pipelines using nothing but their smartphones and without writing a single line of code. It is based on a modular Java-based social signal processing framework (SSJ), which is able to perform realtime multimodal behaviour analysis on Android devices using both device internal and external sensors. @InProceedings{ICMI16p394, author = {Ionut Damian and Michael Dietz and Frank Gaibler and Elisabeth André}, title = {Social Signal Processing for Dummies}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {394--395}, doi = {}, year = {2016}, } Info |
Gainer, Alesia |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
Ghitulescu, Alexandru |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Gratch, Jonathan |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
Gris, Ivan |
ICMI '16-DEMO: "Young Merlin: An Embodied ..."
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick (Inmerssion, USA) This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire—built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time. @InProceedings{ICMI16p425, author = {Ivan Gris and Diego A. Rivera and Alex Rayon and Adriana Camacho and David Novick}, title = {Young Merlin: An Embodied Conversational Agent in Virtual Reality}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {425--426}, doi = {}, year = {2016}, } |
Hashiguchi, Wataru |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system integrated with low-cost sensors and applied the system to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography, heart rate variability and eye tracking. The first two biosignals are used to assess mental stress during meditation, and eye tracking is used to detect the intervals in which the user is engaged in meditation by monitoring the open/closed states of the eyes. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
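As described above, eye tracking is used to detect the interval in which the user is actually meditating by monitoring whether the eyes are open or closed. A minimal sketch of that idea follows; the sampling rate and the minimum-duration threshold are assumptions for illustration, not values from the paper.

# Illustrative detection of meditation intervals from per-sample eye state.
# eyes_closed is a boolean sequence sampled at a fixed rate; a run of closed
# eyes longer than min_duration_s is treated as a meditation interval.
# Sampling rate and threshold are assumptions made for this sketch.

def meditation_intervals(eyes_closed, sample_rate_hz=30, min_duration_s=10.0):
    min_samples = int(min_duration_s * sample_rate_hz)
    intervals, start = [], None
    for i, closed in enumerate(eyes_closed):
        if closed and start is None:
            start = i
        elif not closed and start is not None:
            if i - start >= min_samples:
                intervals.append((start / sample_rate_hz, i / sample_rate_hz))
            start = None
    if start is not None and len(eyes_closed) - start >= min_samples:
        intervals.append((start / sample_rate_hz, len(eyes_closed) / sample_rate_hz))
    return intervals  # list of (start_s, end_s) pairs

# Example: 20 s of open eyes followed by 60 s of closed eyes, sampled at 30 Hz.
signal = [False] * 600 + [True] * 1800
print(meditation_intervals(signal))  # -> [(20.0, 80.0)]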
Hastie, Helen |
ICMI '16-DEMO: "A Demonstration of Multimodal ..."
A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission
Helen Hastie, Xingkun Liu, and Pedro Patron (Heriot-Watt University, UK; SeeByte, UK) A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal output in the form of mission reports containing natural language and visual elements. Specifically, the system takes time-series sensor data, mission logs, together with mission plans as its input, and generates descriptions of the missions in natural language, which would be verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, thus improving situation awareness of the operator and increasing trust in the system during missions. @InProceedings{ICMI16p404, author = {Helen Hastie and Xingkun Liu and Pedro Patron}, title = {A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {404--405}, doi = {}, year = {2016}, } |
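To make the idea of turning mission logs into a natural-language debrief concrete, below is a deliberately simple, hypothetical template-based sketch; the event structure and templates are invented, since the abstract does not describe the system's actual log format or generation method.

# Hypothetical sketch of turning AUV mission-log events into a short
# natural-language debrief. Event fields and templates are invented for
# illustration; the real system's formats are not shown in the abstract.

TEMPLATES = {
    "survey": "The vehicle surveyed {area} for {minutes} minutes.",
    "detect": "A possible {object} was detected at {depth} m depth.",
    "abort":  "The {phase} phase was aborted because of {reason}.",
}

def debrief(events):
    sentences = [TEMPLATES[e["type"]].format(**e) for e in events if e["type"] in TEMPLATES]
    return " ".join(sentences)

log = [
    {"type": "survey", "area": "sector A", "minutes": 42},
    {"type": "detect", "object": "mine-like object", "depth": 18},
    {"type": "abort", "phase": "reacquire", "reason": "low battery"},
]
print(debrief(log))
# The vehicle surveyed sector A for 42 minutes. A possible mine-like object was
# detected at 18 m depth. The reacquire phase was aborted because of low battery.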
Heylen, Dirk |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
Hirayama, Takatsugu |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system integrated with low-cost sensors and applied the system to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography, heart rate variability and eye tracking. The first two biosignals are used to assess mental stress during meditation, and eye tracking is used to detect the intervals in which the user is engaged in meditation by monitoring the open/closed states of the eyes. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
Hsu, You-Lun |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system which integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system solves the common problems in most VR systems: (1) the very limited playing area due to the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive way of controlling virtual objects. We use Unreal Engine 4 to develop an immersive VR game with 6 interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select the virtual objects to grab and throw with the Myo armband. The experiment shows that our approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
Hu, Min-Chun |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system which integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system solves the common problems in most VR systems: (1) the very limited playing area due to the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive way of controlling virtual objects. We use Unreal Engine 4 to develop an immersive VR game with 6 interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select the virtual objects to grab and throw with the Myo armband. The experiment shows that our approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
Inoue, Koji |
ICMI '16-DEMO: "Multimodal Interaction with ..."
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara (Kyoto University, Japan) We demonstrate an interactive conversation with an android named ERICA. In this demonstration the user can converse with ERICA on a number of topics. We demonstrate both the dialog management system and the eye gaze behavior of ERICA used for indicating attention and turn taking. @InProceedings{ICMI16p417, author = {Divesh Lala and Pierrick Milhorat and Koji Inoue and Tianyu Zhao and Tatsuya Kawahara}, title = {Multimodal Interaction with the Autonomous Android ERICA}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {417--418}, doi = {}, year = {2016}, } |
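The ERICA entry mentions eye-gaze behaviour for indicating attention and turn taking. One common heuristic in the human-robot interaction literature, not necessarily the one used for ERICA, is to hold gaze on the interlocutor while listening and when yielding the turn, and to avert gaze briefly while holding the floor, as in this rough sketch:

# Rough sketch of a gaze heuristic for signalling attention and turn taking.
# This is a generic illustration, not ERICA's actual behaviour model.

def gaze_target(agent_speaking, user_speaking, about_to_yield_turn):
    if user_speaking:
        return "user"      # look at the user to show attention while listening
    if agent_speaking and not about_to_yield_turn:
        return "averted"   # brief aversion signals that the agent is holding the floor
    return "user"          # look back at the user to yield or invite the next turn

print(gaze_target(agent_speaking=True, user_speaking=False, about_to_yield_turn=False))  # averted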
Inoue, Masashi |
ICMI '16-DEMO: "Large-Scale Multimodal Movie ..."
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka (Yamagata University, Japan) We present an outline of our newly created multimodal dialogue corpus that is constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns. In addition, they can be used to train machine-learning-based dialogue processing systems. However, the movie files are processing intensive and they contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only dialogue segments from movies. The corpus contains 165,368 dialogue segments taken from 1,722 movies. These dialogues are automatically segmented by using deep neural network-based voice activity detection with filtering rules. Our corpus can reduce the human workload and machine-processing effort required to analyze human dialogue behavior by using movies. @InProceedings{ICMI16p414, author = {Ryu Yasuhara and Masashi Inoue and Ikuya Suga and Tetsuo Kosaka}, title = {Large-Scale Multimodal Movie Dialogue Corpus}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {414--415}, doi = {}, year = {2016}, } Info |
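The corpus entries above are produced by deep neural network-based voice activity detection with filtering rules. The post-processing half of such a pipeline can be sketched as follows; the threshold, frame length, gap, and minimum-duration values are assumptions for illustration, not the authors' settings.

# Illustrative post-processing for VAD-based dialogue segmentation: frame-level
# speech probabilities (e.g. from a DNN) are thresholded, short gaps are bridged,
# and only sufficiently long segments are kept. All numeric values are assumptions.

def dialogue_segments(speech_probs, frame_s=0.01, threshold=0.5, max_gap_s=0.3, min_dur_s=1.0):
    # 1. threshold frame probabilities into speech / non-speech
    speech = [p >= threshold for p in speech_probs]
    # 2. collect raw runs of consecutive speech frames
    runs, start = [], None
    for i, is_speech in enumerate(speech):
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            runs.append([start, i])
            start = None
    if start is not None:
        runs.append([start, len(speech)])
    # 3. filtering rules: merge runs separated by short gaps, drop short segments
    merged = []
    for run in runs:
        if merged and (run[0] - merged[-1][1]) * frame_s <= max_gap_s:
            merged[-1][1] = run[1]
        else:
            merged.append(run)
    return [(a * frame_s, b * frame_s) for a, b in merged if (b - a) * frame_s >= min_dur_s]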
Johnson, Emmanuel |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
Kawahara, Tatsuya |
ICMI '16-DEMO: "Multimodal Interaction with ..."
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara (Kyoto University, Japan) We demonstrate an interactive conversation with an android named ERICA. In this demonstration the user can converse with ERICA on a number of topics. We demonstrate both the dialog management system and the eye gaze behavior of ERICA used for indicating attention and turn taking. @InProceedings{ICMI16p417, author = {Divesh Lala and Pierrick Milhorat and Koji Inoue and Tianyu Zhao and Tatsuya Kawahara}, title = {Multimodal Interaction with the Autonomous Android ERICA}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {417--418}, doi = {}, year = {2016}, } |
Kosaka, Tetsuo |
ICMI '16-DEMO: "Large-Scale Multimodal Movie ..."
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka (Yamagata University, Japan) We present an outline of our newly created multimodal dialogue corpus that is constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns. In addition, they can be used to train machine-learning-based dialogue processing systems. However, the movie files are processing intensive and they contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only dialogue segments from movies. The corpus contains 165,368 dialogue segments taken from 1,722 movies. These dialogues are automatically segmented by using deep neural network-based voice activity detection with filtering rules. Our corpus can reduce the human workload and machine-processing effort required to analyze human dialogue behavior by using movies. @InProceedings{ICMI16p414, author = {Ryu Yasuhara and Masashi Inoue and Ikuya Suga and Tetsuo Kosaka}, title = {Large-Scale Multimodal Movie Dialogue Corpus}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {414--415}, doi = {}, year = {2016}, } Info |
Kushida, Kana |
ICMI '16-DEMO: "A Telepresence System using ..."
A Telepresence System using a Flexible Textile Display
Kana Kushida and Hideyuki Nakanishi (Osaka University, Japan) In this study, we developed a telepresence system which has a flexible and deformable screen. The screen deforms in synchronization with the movement of the objects in the projected video. This deformation of the display surface provides the perception of depth for the projected video. We attempted to add the perception of depth to the video of the remote person and extend a video conferencing system. The results of the experiment suggested that the perception of depth provided by the system strengthens the presence of a remote person and an object on the video. @InProceedings{ICMI16p412, author = {Kana Kushida and Hideyuki Nakanishi}, title = {A Telepresence System using a Flexible Textile Display}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {412--413}, doi = {}, year = {2016}, } |
Lala, Divesh |
ICMI '16-DEMO: "Multimodal Interaction with ..."
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara (Kyoto University, Japan) We demonstrate an interactive conversation with an android named ERICA. In this demonstration the user can converse with ERICA on a number of topics. We demonstrate both the dialog management system and the eye gaze behavior of ERICA used for indicating attention and turn taking. @InProceedings{ICMI16p417, author = {Divesh Lala and Pierrick Milhorat and Koji Inoue and Tianyu Zhao and Tatsuya Kawahara}, title = {Multimodal Interaction with the Autonomous Android ERICA}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {417--418}, doi = {}, year = {2016}, } |
Leuski, Anton |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
Lin, Chi-Po |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system which integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system solves the common problems in most VR systems: (1) the very limited playing area due to the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive way of controlling virtual objects. We use Unreal Engine 4 to develop an immersive VR game with 6 interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select the virtual objects to grab and throw with the Myo armband. The experiment shows that our approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
|
Lingenfelser, Florian |
ICMI '16-DEMO: "Laughter Detection in the ..."
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André (University of Augsburg, Germany) In this demo, we present MobileSSI, a flexible software framework for Android and embedded Linux platforms, that provides developers with tools to record, analyze and recognize human behavior in real-time on mobile devices. To illustrate the benefits of the framework for the analysis of social group dynamics in naturalistic mobile settings, we present a demonstrator for laughter recognition that was implemented with MobileSSI. The demonstrator makes use of smartphones for sensing and analyzing data and employs smartwatches and tablets for visualizing the results and providing user feedback. To enable communication within the resulting ecology of mobile devices, MobileSSI includes a web socket plugin. @InProceedings{ICMI16p406, author = {Simon Flutura and Johannes Wagner and Florian Lingenfelser and Andreas Seiderer and Elisabeth André}, title = {Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {406--407}, doi = {}, year = {2016}, } Info |
|
Liu, Xingkun |
ICMI '16-DEMO: "A Demonstration of Multimodal ..."
A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission
Helen Hastie, Xingkun Liu, and Pedro Patron (Heriot-Watt University, UK; SeeByte, UK) A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal output in the form of mission reports containing natural language and visual elements. Specifically, the system takes time-series sensor data and mission logs, together with mission plans, as its input and generates natural-language descriptions of the missions, which would be verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, thus improving the operator’s situation awareness and increasing trust in the system during missions. @InProceedings{ICMI16p404, author = {Helen Hastie and Xingkun Liu and Pedro Patron}, title = {A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {404--405}, doi = {}, year = {2016}, } |
|
Mase, Kenji |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system that integrates low-cost sensors and applied it to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography (EEG), heart rate variability, and eye tracking. The first two biosignals are used to assess mental stress during meditation, while eye tracking detects the intervals in which the user is meditating by monitoring whether the eyes are open or closed. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
|
Milhorat, Pierrick |
ICMI '16-DEMO: "Multimodal Interaction with ..."
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara (Kyoto University, Japan) We demonstrate an interactive conversation with an android named ERICA. In this demonstration, the user can converse with ERICA on a number of topics. We show both the dialog management system and the eye-gaze behavior ERICA uses to indicate attention and turn-taking. @InProceedings{ICMI16p417, author = {Divesh Lala and Pierrick Milhorat and Koji Inoue and Tianyu Zhao and Tatsuya Kawahara}, title = {Multimodal Interaction with the Autonomous Android ERICA}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {417--418}, doi = {}, year = {2016}, } |
|
Morita, Junya |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system that integrates low-cost sensors and applied it to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography (EEG), heart rate variability, and eye tracking. The first two biosignals are used to assess mental stress during meditation, while eye tracking detects the intervals in which the user is meditating by monitoring whether the eyes are open or closed. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
|
Nagayama, Yousuke |
ICMI '16-DEMO: "Metering "Black Holes": ..."
Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization
Michael Cohen, Yousuke Nagayama, and Bektur Ryskeldiev (University of Aizu, Japan) We have developed a phantom GUI emulator that can read from otherwise stand-alone applications, complementing a separate parallel program that can write to such applications. In conjunction with the “Alice” desktop VR system and the previously developed “Collaborative Virtual Environment,” both of which are freely available, virtual scene exploration can synchronize with multimodal peers, including panoramic browsers, spatial sound renderers, and smartphone and tablet interfaces. @InProceedings{ICMI16p396, author = {Michael Cohen and Yousuke Nagayama and Bektur Ryskeldiev}, title = {Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {396--397}, doi = {}, year = {2016}, } |
|
Nakanishi, Hideyuki |
ICMI '16-DEMO: "A Telepresence System using ..."
A Telepresence System using a Flexible Textile Display
Kana Kushida and Hideyuki Nakanishi (Osaka University, Japan) In this study, we developed a telepresence system with a flexible, deformable screen. The screen deforms in synchronization with the movement of objects in the projected video, and this deformation of the display surface conveys a perception of depth. We attempted to add this perception of depth to the video of a remote person and thereby extend a video conferencing system. The results of the experiment suggest that the perception of depth provided by the system strengthens the presence of the remote person and of objects in the video. @InProceedings{ICMI16p412, author = {Kana Kushida and Hideyuki Nakanishi}, title = {A Telepresence System using a Flexible Textile Display}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {412--413}, doi = {}, year = {2016}, } |
|
Nakano, Mikio |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
|
Novick, David |
ICMI '16-DEMO: "Young Merlin: An Embodied ..."
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick (Inmerssion, USA) This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire—built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time. @InProceedings{ICMI16p425, author = {Ivan Gris and Diego A. Rivera and Alex Rayon and Adriana Camacho and David Novick}, title = {Young Merlin: An Embodied Conversational Agent in Virtual Reality}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {425--426}, doi = {}, year = {2016}, } |
|
Patron, Pedro |
ICMI '16-DEMO: "A Demonstration of Multimodal ..."
A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission
Helen Hastie, Xingkun Liu, and Pedro Patron (Heriot-Watt University, UK; SeeByte, UK) A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal output in the form of mission reports containing natural language and visual elements. Specifically, the system takes time-series sensor data and mission logs, together with mission plans, as its input and generates natural-language descriptions of the missions, which would be verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, thus improving the operator’s situation awareness and increasing trust in the system during missions. @InProceedings{ICMI16p404, author = {Helen Hastie and Xingkun Liu and Pedro Patron}, title = {A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {404--405}, doi = {}, year = {2016}, } |
|
Pelachaud, Catherine |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Pham, Phuong |
ICMI '16-DEMO: "AttentiveVideo: Quantifying ..."
AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements
Phuong Pham and Jingtao Wang (University of Pittsburgh, USA) This demo presents AttentiveVideo, a multimodal video player that can collect and infer viewers’ emotional responses to video advertisements on unmodified smartphones. When a subsidized video advertisement is playing, AttentiveVideo uses on-lens finger gestures for tangible video control and employs implicit photoplethysmography (PPG) sensing to infer viewers' attention, engagement, and sentimentality toward advertisements. Through a 24-participant pilot study, we found that AttentiveVideo is easy to learn and intuitive to use. More importantly, AttentiveVideo achieved good accuracy on a wide range of emotional measures (best average accuracy = 65.9%, kappa = 0.30 across 9 metrics). Our preliminary results show the potential for both low-cost collection and deep understanding of emotional responses to mobile video advertisements. @InProceedings{ICMI16p423, author = {Phuong Pham and Jingtao Wang}, title = {AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {423--424}, doi = {}, year = {2016}, } Video |
|
Potard, Blaise |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Rayon, Alex |
ICMI '16-DEMO: "Young Merlin: An Embodied ..."
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick (Inmerssion, USA) This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire—built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time. @InProceedings{ICMI16p425, author = {Ivan Gris and Diego A. Rivera and Alex Rayon and Adriana Camacho and David Novick}, title = {Young Merlin: An Embodied Conversational Agent in Virtual Reality}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {425--426}, doi = {}, year = {2016}, } |
|
Rivera, Diego A. |
ICMI '16-DEMO: "Young Merlin: An Embodied ..."
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego A. Rivera, Alex Rayon, Adriana Camacho, and David Novick (Inmerssion, USA) This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire—built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time. @InProceedings{ICMI16p425, author = {Ivan Gris and Diego A. Rivera and Alex Rayon and Adriana Camacho and David Novick}, title = {Young Merlin: An Embodied Conversational Agent in Virtual Reality}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {425--426}, doi = {}, year = {2016}, } |
|
Ryskeldiev, Bektur |
ICMI '16-DEMO: "Metering "Black Holes": ..."
Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization
Michael Cohen, Yousuke Nagayama, and Bektur Ryskeldiev (University of Aizu, Japan) We have developed a phantom GUI emulator that can read from otherwise stand-alone applications, complementing a separate parallel program that can write to such applications. In conjunction with the “Alice” desktop VR system and the previously developed “Collaborative Virtual Environment,” both of which are freely available, virtual scene exploration can synchronize with multimodal peers, including panoramic browsers, spatial sound renderers, and smartphone and tablet interfaces. @InProceedings{ICMI16p396, author = {Michael Cohen and Yousuke Nagayama and Bektur Ryskeldiev}, title = {Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {396--397}, doi = {}, year = {2016}, } |
|
Schuller, Björn |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Seiderer, Andreas |
ICMI '16-DEMO: "Laughter Detection in the ..."
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André (University of Augsburg, Germany) In this demo, we present MobileSSI, a flexible software framework for Android and embedded Linux platforms, that provides developers with tools to record, analyze and recognize human behavior in real-time on mobile devices. To illustrate the benefits of the framework for the analysis of social group dynamics in naturalistic mobile settings, we present a demonstrator for laughter recognition that was implemented with MobileSSI. The demonstrator makes use of smartphones for sensing and analyzing data and employs smartwatches and tablets for visualizing the results and providing user feedback. To enable communication within the resulting ecology of mobile devices, MobileSSI includes a web socket plugin. @InProceedings{ICMI16p406, author = {Simon Flutura and Johannes Wagner and Florian Lingenfelser and Andreas Seiderer and Elisabeth André}, title = {Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {406--407}, doi = {}, year = {2016}, } Info |
|
Srivastava, Anmol |
ICMI '16-DEMO: "Design of Multimodal Instructional ..."
Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects
Anmol Srivastava and Pradeep Yammiyavar (IIT Guwahati, India) This demo presents a novel technique for enriching students’ learning experience in electronic engineering laboratories, along with the basis for its design. The system employs mobile augmented reality (AR) and physical smart objects that can be used in conjunction to assist students in the laboratory. Such systems can provide just-in-time information and sense errors made while prototyping specific electronic circuits, helping to reduce students’ cognitive load and bridge the gap between theory and practical application. Two prototypes have been developed: (i) an intelligent breadboard that can sense errors such as loose wiring and wrong connections for a specific experiment, and (ii) an AR application that provides visualization and instruction for circuit assembly and operating test equipment. The intelligent breadboard acts as a smart learning object. Design methods were used to conceptualize and build these systems. The idea is to merge practices of Human-Computer Interaction with those of machine learning to design highly situated, physically located tutoring systems for students. Such systems can support innovative teaching and learning in engineering laboratories. @InProceedings{ICMI16p421, author = {Anmol Srivastava and Pradeep Yammiyavar}, title = {Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {421--422}, doi = {}, year = {2016}, } Info |
|
Suga, Ikuya |
ICMI '16-DEMO: "Large-Scale Multimodal Movie ..."
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka (Yamagata University, Japan) We present an outline of our newly created multimodal dialogue corpus constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns, and they can be used to train machine-learning-based dialogue processing systems. However, movie files are processing-intensive and contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only the dialogue segments from movies. The corpus comprises 165,368 dialogue segments taken from 1,722 movies, segmented automatically using deep neural network-based voice activity detection with filtering rules. Our corpus reduces the human workload and machine-processing effort required to analyze human dialogue behavior using movies. @InProceedings{ICMI16p414, author = {Ryu Yasuhara and Masashi Inoue and Ikuya Suga and Tetsuo Kosaka}, title = {Large-Scale Multimodal Movie Dialogue Corpus}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {414--415}, doi = {}, year = {2016}, } Info |
|
Sutherland, Alistair |
ICMI '16-DEMO: "Multimodal System for Public ..."
Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective
Fiona Dermody and Alistair Sutherland (Dublin City University, Ireland) A multimodal system for public speaking with real-time feedback has been developed using the Microsoft Kinect. The system has been developed within the paradigm of positive computing, which focuses on designing for user wellbeing. The system detects body pose, facial expressions, and voice, and displays visual feedback on speaking performance to users in real time. Users can view statistics on their utilisation of speaking modalities. The system also has a mentor avatar that appears alongside the user avatar to facilitate user training. An autocue mode allows a user to practice with set text from a chosen speech. @InProceedings{ICMI16p408, author = {Fiona Dermody and Alistair Sutherland}, title = {Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {408--409}, doi = {}, year = {2016}, } |
|
Theune, Mariët |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Traum, David |
ICMI '16-DEMO: "Niki and Julie: A Robot and ..."
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano (University of Southern California, USA; Honda Research Institute, Japan) We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant. @InProceedings{ICMI16p402, author = {Ron Artstein and David Traum and Jill Boberg and Alesia Gainer and Jonathan Gratch and Emmanuel Johnson and Anton Leuski and Mikio Nakano}, title = {Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {402--403}, doi = {}, year = {2016}, } |
|
Tsai, Wan-Lun |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system that integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system addresses two common problems of VR systems: (1) the very limited playing area imposed by the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive control of virtual objects. We use Unreal Engine 4 to develop an immersive VR game with six interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select virtual objects to grab and throw with the Myo armband. Our experiments show that the approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
|
Valstar, Michel |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Wagner, Johannes |
ICMI '16-DEMO: "Laughter Detection in the ..."
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André (University of Augsburg, Germany) In this demo, we present MobileSSI, a flexible software framework for Android and embedded Linux platforms, that provides developers with tools to record, analyze and recognize human behavior in real-time on mobile devices. To illustrate the benefits of the framework for the analysis of social group dynamics in naturalistic mobile settings, we present a demonstrator for laughter recognition that was implemented with MobileSSI. The demonstrator makes use of smartphones for sensing and analyzing data and employs smartwatches and tablets for visualizing the results and providing user feedback. To enable communication within the resulting ecology of mobile devices, MobileSSI includes a web socket plugin. @InProceedings{ICMI16p406, author = {Simon Flutura and Johannes Wagner and Florian Lingenfelser and Andreas Seiderer and Elisabeth André}, title = {Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {406--407}, doi = {}, year = {2016}, } Info ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..." Ask Alice: An Artificial Retrieval of Information Agent Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Wang, Jingtao |
ICMI '16-DEMO: "AttentiveVideo: Quantifying ..."
AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements
Phuong Pham and Jingtao Wang (University of Pittsburgh, USA) This demo presents AttentiveVideo, a multimodal video player that can collect and infer viewers’ emotional responses to video advertisements on unmodified smartphones. When a subsidized video advertisement is playing, AttentiveVideo uses on-lens finger gestures for tangible video control and employs implicit photoplethysmography (PPG) sensing to infer viewers' attention, engagement, and sentimentality toward advertisements. Through a 24-participant pilot study, we found that AttentiveVideo is easy to learn and intuitive to use. More importantly, AttentiveVideo achieved good accuracy on a wide range of emotional measures (best average accuracy = 65.9%, kappa = 0.30 across 9 metrics). Our preliminary results show the potential for both low-cost collection and deep understanding of emotional responses to mobile video advertisements. @InProceedings{ICMI16p423, author = {Phuong Pham and Jingtao Wang}, title = {AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {423--424}, doi = {}, year = {2016}, } Video |
|
Waterschoot, Jelte van |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Wilson, Graham |
ICMI '16-DEMO: "Multimodal Affective Feedback: ..."
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
Graham Wilson, Euan Freeman, and Stephen Brewster (University of Glasgow, UK) In this paper we describe a demonstration of our multimodal affective feedback designs, used in research to expand the emotional expressivity of interfaces. The feedback leverages inherent associations and reactions to thermal, vibrotactile, auditory and abstract visual designs to convey a range of affective states without any need for learning feedback encoding. All combinations of the different feedback channels can be utilised, depending on which combination best conveys a given state. All the signals are generated from a mobile phone augmented with thermal and vibrotactile stimulators, which will be available to conference visitors to see, touch, hear and, importantly, feel. @InProceedings{ICMI16p400, author = {Graham Wilson and Euan Freeman and Stephen Brewster}, title = {Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {400--401}, doi = {}, year = {2016}, } ICMI '16-DEMO: "Towards a Multimodal Adaptive ..." Towards a Multimodal Adaptive Lighting System for Visually Impaired Children Euan Freeman, Graham Wilson, and Stephen Brewster (University of Glasgow, UK) Visually impaired children often have difficulty with everyday activities like locating items, e.g. favourite toys, and moving safely around the home. It is important to assist them during activities like these because it can promote independence from adults and helps to develop skills. Our demonstration shows our work towards a multimodal sensing and output system that adapts the lighting conditions at home to help visually impaired children with such tasks. @InProceedings{ICMI16p398, author = {Euan Freeman and Graham Wilson and Stephen Brewster}, title = {Towards a Multimodal Adaptive Lighting System for Visually Impaired Children}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {398--399}, doi = {}, year = {2016}, } |
|
Yamada, Kazunori |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system that integrates low-cost sensors and applied it to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography (EEG), heart rate variability, and eye tracking. The first two biosignals are used to assess mental stress during meditation, while eye tracking detects the intervals in which the user is meditating by monitoring whether the eyes are open or closed. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
|
Yammiyavar, Pradeep |
ICMI '16-DEMO: "Design of Multimodal Instructional ..."
Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects
Anmol Srivastava and Pradeep Yammiyavar (IIT Guwahati, India) This demo presents a novel technique for enriching students’ learning experience in electronic engineering laboratories, along with the basis for its design. The system employs mobile augmented reality (AR) and physical smart objects that can be used in conjunction to assist students in the laboratory. Such systems can provide just-in-time information and sense errors made while prototyping specific electronic circuits, helping to reduce students’ cognitive load and bridge the gap between theory and practical application. Two prototypes have been developed: (i) an intelligent breadboard that can sense errors such as loose wiring and wrong connections for a specific experiment, and (ii) an AR application that provides visualization and instruction for circuit assembly and operating test equipment. The intelligent breadboard acts as a smart learning object. Design methods were used to conceptualize and build these systems. The idea is to merge practices of Human-Computer Interaction with those of machine learning to design highly situated, physically located tutoring systems for students. Such systems can support innovative teaching and learning in engineering laboratories. @InProceedings{ICMI16p421, author = {Anmol Srivastava and Pradeep Yammiyavar}, title = {Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {421--422}, doi = {}, year = {2016}, } Info |
|
Yasuhara, Ryu |
ICMI '16-DEMO: "Large-Scale Multimodal Movie ..."
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka (Yamagata University, Japan) We present an outline of our newly created multimodal dialogue corpus constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns, and they can be used to train machine-learning-based dialogue processing systems. However, movie files are processing-intensive and contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only the dialogue segments from movies. The corpus comprises 165,368 dialogue segments taken from 1,722 movies, segmented automatically using deep neural network-based voice activity detection with filtering rules. Our corpus reduces the human workload and machine-processing effort required to analyze human dialogue behavior using movies. @InProceedings{ICMI16p414, author = {Ryu Yasuhara and Masashi Inoue and Ikuya Suga and Tetsuo Kosaka}, title = {Large-Scale Multimodal Movie Dialogue Corpus}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {414--415}, doi = {}, year = {2016}, } Info |
|
Yokoya, Mayu |
ICMI '16-DEMO: "Multimodal Biofeedback System ..."
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya (Nagoya University, Japan; Shizuoka University, Japan; Panasonic, Japan) We built a multimodal biofeedback system that integrates low-cost sensors and applied it to support meditation training by providing multimodal biofeedback to the user. Our meditation support system employs electroencephalography (EEG), heart rate variability, and eye tracking. The first two biosignals are used to assess mental stress during meditation, while eye tracking detects the intervals in which the user is meditating by monitoring whether the eyes are open or closed. @InProceedings{ICMI16p410, author = {Wataru Hashiguchi and Junya Morita and Takatsugu Hirayama and Kenji Mase and Kazunori Yamada and Mayu Yokoya}, title = {Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {410--411}, doi = {}, year = {2016}, } |
|
Zhang, Yue |
ICMI '16-DEMO: "Ask Alice: An Artificial Retrieval ..."
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, and Jelte van Waterschoot (University of Nottingham, UK; University of Augsburg, Germany; CNRS, France; Cereproc, UK; Cantoche, France; Imperial College London, UK; University of Twente, Netherlands) We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent. @InProceedings{ICMI16p419, author = {Michel Valstar and Tobias Baur and Angelo Cafaro and Alexandru Ghitulescu and Blaise Potard and Johannes Wagner and Elisabeth André and Laurent Durieu and Matthew Aylett and Soumia Dermouche and Catherine Pelachaud and Eduardo Coutinho and Björn Schuller and Yue Zhang and Dirk Heylen and Mariët Theune and Jelte van Waterschoot}, title = {Ask Alice: An Artificial Retrieval of Information Agent}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {419--420}, doi = {}, year = {2016}, } |
|
Zhao, Tianyu |
ICMI '16-DEMO: "Multimodal Interaction with ..."
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara (Kyoto University, Japan) We demonstrate an interactive conversation with an android named ERICA. In this demonstration, the user can converse with ERICA on a number of topics. We show both the dialog management system and the eye-gaze behavior ERICA uses to indicate attention and turn-taking. @InProceedings{ICMI16p417, author = {Divesh Lala and Pierrick Milhorat and Koji Inoue and Tianyu Zhao and Tatsuya Kawahara}, title = {Multimodal Interaction with the Autonomous Android ERICA}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {417--418}, doi = {}, year = {2016}, } |
|
Zhu, Chen-Yu |
ICMI '16-DEMO: "Immersive Virtual Reality ..."
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu (National Cheng Kung University, Taiwan) In this demo, we present an immersive virtual reality (VR) system that integrates multimodal interaction sensors (i.e., smartphone, Kinect v2, and Myo armband) and streaming technology to improve the VR experience. The integrated system addresses two common problems of VR systems: (1) the very limited playing area imposed by the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive control of virtual objects. We use Unreal Engine 4 to develop an immersive VR game with six interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select virtual objects to grab and throw with the Myo armband. Our experiments show that the approach provides a workable VR experience. @InProceedings{ICMI16p416, author = {Wan-Lun Tsai and You-Lun Hsu and Chi-Po Lin and Chen-Yu Zhu and Yu-Cheng Chen and Min-Chun Hu}, title = {Immersive Virtual Reality with Multimodal Interaction and Streaming Technology}, booktitle = {Proc.\ ICMI}, publisher = {ACM}, pages = {416--416}, doi = {}, year = {2016}, } Video |
74 authors