2017 IEEE Virtual Reality (VR),
March 18-22, 2017,
Los Angeles, CA, USA
Research Demos
FACETEQ Interface Demo for Emotion Expression in VR
Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka
(Bournemouth University, UK; Sussex Innovation Centre, UK)
Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental Virtual Reality studies. Developed in the Emteq Ltd laboratory, Faceteq can open new avenues for virtual reality research through the combination of high-performance patented dry-sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing a human-centered additional tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year.
@InProceedings{VR17p441,
author = {Ifigeneia Mavridou and James T. McGhee and Mahyar Hamedi and Mohsen Fatoorechi and Andrew Cleal and Emili Ballaguer-Balester and Ellen Seiss and Graeme Cox and Charles Nduka},
title = {FACETEQ Interface Demo for Emotion Expression in VR},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {441--442},
doi = {},
year = {2017},
}
Diminished Hand: A Diminished Reality-Based Work Area Visualization
Shohei Mori, Momoko Maezawa, Naoto Ienaga, and Hideo Saito
(Keio University, Japan)
Live instructor’s perspective videos are useful to present intuitive visual instructions for trainees in medical and industrial settings. In such videos, the instructor’s hands often hide the work area. In this demo, we present a diminished hand for visualizing the work area hidden by hands by capturing the work area with multiple cameras. To achieve the diminished reality, we use a light field rendering technique, in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from the multiple viewpoint images.
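As a rough illustration of the penalty-point idea, the sketch below (plain Python; data layout and function names are our assumptions, not the authors' implementation) scores candidate light rays by their closest approach to penalty points placed in the hand region and keeps the ray that best avoids them:

```python
import math

def point_line_distance(p, a, b):
    # Distance from point p to the infinite 3D line through a and b.
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    # Cross product ab x ap; its norm over |ab| is the distance.
    c = [ab[1] * ap[2] - ab[2] * ap[1],
         ab[2] * ap[0] - ab[0] * ap[2],
         ab[0] * ap[1] - ab[1] * ap[0]]
    return math.sqrt(sum(x * x for x in c)) / math.sqrt(sum(x * x for x in ab))

def select_ray(candidate_rays, penalty_points, min_clearance):
    # Prefer the ray whose closest approach to any penalty point is largest,
    # i.e. the light ray that best avoids the occluding hand.
    def clearance(ray):
        origin, through = ray  # origin and a second point on the ray
        return min(point_line_distance(p, origin, through)
                   for p in penalty_points)
    best = max(candidate_rays, key=clearance)
    return best if clearance(best) >= min_clearance else None
```

In the actual unstructured light field, the candidates would be rays from the multi-camera capture that hit the same work-area point; this sketch only shows the avoidance criterion.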
@InProceedings{VR17p443,
author = {Shohei Mori and Momoko Maezawa and Naoto Ienaga and Hideo Saito},
title = {Diminished Hand: A Diminished Reality-Based Work Area Visualization},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {443--444},
doi = {},
year = {2017},
}
Jogging with a Virtual Runner using a See-Through HMD
Takeo Hamada, Michio Okada, and Michiteru Kitazaki
(Toyohashi University of Technology, Japan)
We present a novel assistive method for pacing casual joggers by showing a virtual runner on a see-through head-mounted display. The runner moves at a constant pace specified in advance by the user, and its motion is synchronized with the user's. Joggers can visually check their pace at any time by looking at the runner as a personal pacemaker, and they are motivated to keep running by regarding it as a jogging companion. Moreover, the proposed method addresses the safety problem of AR applications: most of the runner's body parts are rendered transparent so that it does not obstruct the user's view. This study may thus contribute to augmenting the daily jogging experience.
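The pacemaker behavior can be sketched in a few lines (hypothetical function and parameter names; the minimum lead distance is our assumption):

```python
def runner_position(pace_m_per_s, elapsed_s, user_pos_m, lead_m=2.0):
    # The virtual runner advances along the course at the pre-set
    # constant pace...
    ideal = pace_m_per_s * elapsed_s
    # ...but never drops behind the jogger: keep it at least `lead_m`
    # ahead so it stays visible as a pacemaker and companion.
    return max(ideal, user_pos_m + lead_m)
```

Called once per frame with the jogger's tracked position, this keeps the runner ahead while still conveying the target pace.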
@InProceedings{VR17p445,
author = {Takeo Hamada and Michio Okada and Michiteru Kitazaki},
title = {Jogging with a Virtual Runner using a See-Through HMD},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {445--446},
doi = {},
year = {2017},
}
Demonstration: Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell
(Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA)
In this demonstration, we showcase a system for the rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body, and face shape parameters and generating a custom face texture.
@InProceedings{VR17p447,
author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell},
title = {Demonstration: Rapid One-Shot Acquisition of Dynamic VR Avatars},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {447--448},
doi = {},
year = {2017},
}
Application of Redirected Walking in Room-Scale VR
Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke
(University of Hamburg, Germany; University of Central Florida, USA)
Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and to apply RDW to room-scale VR, i.e., up to approximately 5m × 5m. This is done by using curved paths in the VE instead of straight paths, and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25m × 25m.
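The geometric intuition behind the radius trade-off can be checked directly: walking along a circular physical path of radius r turns the user by 1/r radians per meter walked, so a room-scale radius injects far more rotation than the 22 m detection-threshold radius (the function name is ours):

```python
import math

def curvature_gain_deg_per_m(radius_m):
    # Walking along a circle of radius r turns the user by 1/r radians
    # per meter walked; convert to degrees per meter.
    return math.degrees(1.0 / radius_m)

# Detection-threshold radius from prior work vs. a radius that fits a
# 5m x 5m room (2.5 m).
threshold = curvature_gain_deg_per_m(22.0)   # about 2.6 deg/m
room_scale = curvature_gain_deg_per_m(2.5)   # about 22.9 deg/m
```

The gap between the two values is why the demo relies on curved virtual paths rather than straight ones: the manipulation no longer has to stay below the straight-walking detection threshold.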
@InProceedings{VR17p449,
author = {Eike Langbehn and Paul Lubos and Gerd Bruder and Frank Steinicke},
title = {Application of Redirected Walking in Room-Scale VR},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {449--450},
doi = {},
year = {2017},
}
Immersive Virtual Training for Substation Electricians
Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone
(Eldorado Research Institute, Brazil)
This research demonstration presents an immersive virtual substation for training electricians. A substation is one of the most critical facilities of the electrical distribution system, so keeping it in normal operation is mandatory to deliver high standards of service and power quality and to avoid blackouts for consumers. It is therefore necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed immersive virtual environment is to give trainees a realistic experience in a safe environment where they can interact with equipment, explore the facility and, above all, practice basic and complex maneuvers to recover substation operations. Users can interact with the environment using HMDs, joysticks, or even an ordinary keyboard, mouse, and monitor. Feedback from trainees and instructors who used the environment was very positive, indicating that the objectives were fully achieved.
@InProceedings{VR17p451,
author = {Eduardo H. Tanaka and Juliana A. Paludo and Rafael Bacchetti and Edgar V. Gadbem and Leonardo R. Domingues and Carlúcio S. Cordeiro and Olavo Giraldi Jr. and Guilherme Alcarde Gallo and Adam Mendes da Silva and Marcos H. Cascone},
title = {Immersive Virtual Training for Substation Electricians},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {451--452},
doi = {},
year = {2017},
}
Experiencing Guidance in 3D Spaces with a Vibrotactile Head-Mounted Display
Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel
(Federal University of Rio Grande do Sul, Brazil; Fondazione Istituto Italiano di Tecnologia, Italy)
Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication, and attentional redirection, or to enhance the sense of presence in virtual environments. We therefore aim to add a haptic component to the most popular wearable used in VR applications: the VR headset. After studying vibrotactile acuity around the head and trying different parameters, actuators, and configurations, we developed a haptic guidance technique for a vibrotactile head-mounted display (HMD). Our vibrotactile HMD renders the position of objects in the 3D space around the subject by varying both the stimulus locus and the vibration frequency. In this demonstration, participants interact with different scenarios in which the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining.
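A minimal sketch of the cue mapping described above, assuming a hypothetical ring of eight evenly spaced actuators around the head (the actuator layout and frequency range are our assumptions, not the demo's actual values):

```python
# Hypothetical ring of 8 actuators around the head, 45 degrees apart.
ACTUATOR_ANGLES = [i * 45.0 for i in range(8)]

def render_cue(target_azimuth_deg, target_distance_m, max_distance_m=5.0):
    # Stimulus locus: pick the actuator closest to the target direction,
    # using the signed angular difference wrapped to [-180, 180).
    diffs = [abs((a - target_azimuth_deg + 180.0) % 360.0 - 180.0)
             for a in ACTUATOR_ANGLES]
    locus = diffs.index(min(diffs))
    # Vibration frequency: closer targets vibrate faster (50-250 Hz here).
    closeness = max(0.0, 1.0 - target_distance_m / max_distance_m)
    freq_hz = 50.0 + closeness * (250.0 - 50.0)
    return locus, freq_hz
```

A real system would also interpolate between adjacent actuators for directions that fall between loci; this sketch only shows the two-parameter encoding (locus plus frequency).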
@InProceedings{VR17p453,
author = {Victor Adriel de Jesus Oliveira and Luca Brayda and Luciana Nedel and Anderson Maciel},
title = {Experiencing Guidance in 3D Spaces with a Vibrotactile Head-Mounted Display},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {453--454},
doi = {},
year = {2017},
}
3DPS: An Auto-calibrated Three-Dimensional Perspective-Corrected Spherical Display
Qian Zhou, Kai Wu, Gregor Miller, Ian Stavness, and Sidney Fels
(University of British Columbia, Canada; University of Saskatchewan, Canada)
We describe an auto-calibrated 3D perspective-corrected spherical display that uses multiple rear-projected pico-projectors. The display system is auto-calibrated via 3D reconstruction of each projected pixel on the display using a single inexpensive camera. With this automatic calibration, the multi-projector system produces seamless blended imagery on the spherical screen. Furthermore, we incorporate head tracking with the display to present 3D content with motion parallax by rendering perspective-corrected images based on the viewpoint. To show the effectiveness of this design, we implemented a view-dependent application that allows walk-around visualization from all angles for a single head-tracked user. We also implemented a view-independent application that supports wall-papered rendering for multi-user viewing. Thus, both view-dependent 3D VR content and spherical 2D content, such as a globe, can be easily experienced with this display.
@InProceedings{VR17p455,
author = {Qian Zhou and Kai Wu and Gregor Miller and Ian Stavness and Sidney Fels},
title = {3DPS: An Auto-calibrated Three-Dimensional Perspective-Corrected Spherical Display},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {455--456},
doi = {},
year = {2017},
}
WebVR Meets WebRTC: Towards 360-Degree Social VR Experiences
Simon Gunkel, Martin Prins, Hans Stokking, and Omar Niamut
(TNO, Netherlands)
Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016, new 360-degree cameras and VR headsets entered the consumer market, distribution platforms are being established, and new production studios are emerging. VR is ever more becoming a hot topic in research and industry, and much new and exciting interactive VR content is emerging. The biggest gap we see in these experiences is their social and shared aspects. In this demo, we present our ongoing efforts towards social and shared VR: a modular web-based VR framework that extends current video conferencing capabilities with new Virtual and Mixed Reality functionality. It allows us to connect two people for mediated audio-visual interaction while they engage with interactive content. The framework lets us run extensive technological and user-based trials to evaluate VR experiences and to build immersive multi-user interaction spaces. Our first results indicate that a high level of engagement and interaction between users is possible in our 360-degree VR set-up using current web technologies.
@InProceedings{VR17p457,
author = {Simon Gunkel and Martin Prins and Hans Stokking and Omar Niamut},
title = {WebVR Meets WebRTC: Towards 360-Degree Social VR Experiences},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {457--458},
doi = {},
year = {2017},
}
mpCubee: Towards a Mobile Perspective Cubic Display using Mobile Phones
Jens Grubert and Matthias Kranz
(Coburg University, Germany; University of Passau, Germany)
While we witness significant changes in display technologies, to date, the majority of display form factors remain flat. The research community has investigated other geometric display configurations, giving rise to cubic displays that create the illusion of a 3D virtual scene within the cube.
We present a self-contained mobile perspective cubic display (mpCubee) assembled from multiple smartphones. We achieve perspective-correct projection of 3D content through head tracking using the smartphones' built-in cameras. Furthermore, our prototype allows users to spatially manipulate 3D objects along individual axes thanks to the orthogonal configuration of the touch displays.
@InProceedings{VR17p459,
author = {Jens Grubert and Matthias Kranz},
title = {mpCubee: Towards a Mobile Perspective Cubic Display using Mobile Phones},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {459--460},
doi = {},
year = {2017},
}
Towards Ad Hoc Mobile Multi-display Environments on Commodity Mobile Devices
Jens Grubert and Matthias Kranz
(Coburg University, Germany; University of Passau, Germany)
We present a demonstration of HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as an external reference frame for registering multiple mobile devices into a common coordinate system. Our approach allows for dynamic repositioning of devices at runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. This way, HeadPhones enables spatially aware multi-display applications in mobile contexts.
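The head-as-reference-frame idea reduces to transform composition: if each device estimates the head pose in its own camera frame, one device's pose in another's frame follows directly. A sketch with 4×4 homogeneous transforms (NumPy; function names are ours, and real systems must also handle tracking noise and latency):

```python
import numpy as np

def pose_inverse(T):
    # Invert a 4x4 rigid transform [R | t; 0 | 1].
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def device_b_in_device_a(T_head_in_a, T_head_in_b):
    # Both devices observe the same head with their front-facing cameras,
    # so the head is the shared external reference frame:
    #   T_b_in_a = T_head_in_a @ inv(T_head_in_b)
    return T_head_in_a @ pose_inverse(T_head_in_b)
```

Because the chain is re-evaluated every frame, devices can be repositioned at runtime and the common coordinate system updates automatically, with no fixed fiducials.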
@InProceedings{VR17p461,
author = {Jens Grubert and Matthias Kranz},
title = {Towards Ad Hoc Mobile Multi-display Environments on Commodity Mobile Devices},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {461--462},
doi = {},
year = {2017},
}
ArcheoVR: Exploring Itapeva's Archeological Site
Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper
(University of São Paulo, Brazil; Duke University, USA)
This demo presents a fully immersive and interactive virtual environment (VE), ArcheoVR, which represents the Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. Our workflow started with real-world data capture using laser scanners, drones, and photogrammetry. The captured information was transformed into a carefully designed, realistic 3D scene with interactive features that allow users to experience the virtual archeological site in real time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.
@InProceedings{VR17p463,
author = {Eduardo Zilles Borba and Andre Montes and Marcio Almeida and Mario Nagamura and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper},
title = {ArcheoVR: Exploring Itapeva's Archeological Site},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {463--464},
doi = {},
year = {2017},
}
NIVR: Neuro Imaging in Virtual Reality
Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga
(University of Southern California, USA)
Visualization is a critical component of neuroimaging, and how best to view data that is naturally three-dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that, with its recent commercialization and popularization, VR can offer the next step in data visualization and exploration.
Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad view of how VR can be leveraged to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind.
NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to inform and educate the public about recent advancements in the field of neuroimaging. By providing an engaging experience for exploring new techniques and discoveries in neuroimaging, we hope to spark scientific interest in a broad audience.
Furthermore, neuroimaging produces deep and expansive datasets; a single scan can comprise several gigabytes of information. Visualization and exploration of such data are challenging, and real-time exploration in VR even more so. NIVR explores pathways that make this possible and offers preliminary stereo visualizations of these massive datasets.
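As an illustration of the raymarching volume visualization mentioned above, here is a minimal single-ray emission-absorption compositor over a scalar volume (NumPy; the step size and absorption coefficient are arbitrary, and this is a textbook sketch, not the NIVR renderer):

```python
import numpy as np

def raymarch(volume, origin, direction, step=0.5, absorption=0.1):
    # Front-to-back emission-absorption compositing along one ray through
    # a scalar volume (nearest-lower-voxel sampling, fixed step size).
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    radiance, transmittance = 0.0, 1.0
    for _ in range(1000):
        i, j, k = (int(c) for c in pos)
        if not (0 <= i < volume.shape[0]
                and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break  # ray left the volume
        density = float(volume[i, j, k])
        radiance += transmittance * density * step       # emission
        transmittance *= float(np.exp(-absorption * density * step))
        pos += d * step
    return radiance
```

A GPU implementation evaluates this loop per pixel in a shader; the gigabyte-scale datasets mentioned above are what make the sampling strategy and memory layout the hard part in practice.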
@InProceedings{VR17p465,
author = {Tyler Ard and David M. Krum and Thai Phan and Dominique Duncan and Ryan Essex and Mark Bolas and Arthur Toga},
title = {NIVR: Neuro Imaging in Virtual Reality},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {465--466},
doi = {},
year = {2017},
}
VRAIN: Virtual Reality Assisted Intervention for Neuroimaging
Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga
(University of Southern California, USA; RareFaction Interactive, USA)
The USC Stevens Neuroimaging and Informatics Institute, home of the Laboratory of Neuro Imaging (http://loni.usc.edu), has the largest collection of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer, currently the best software for this purpose). The algorithm is imprecise, and users must tediously correct errors by hand, using a mouse and keyboard to edit individual MRI slices one at a time. We demonstrate preliminary work to improve the efficiency of this task by translating it into three dimensions and using virtual reality user interfaces to edit multiple slices of data simultaneously.
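The gain over slice-at-a-time editing can be illustrated by a 3D spherical "brush" that relabels voxels across many slices in a single stroke (NumPy sketch; function name and label values are hypothetical, not the actual VRAIN tool):

```python
import numpy as np

def edit_label_sphere(labels, center, radius, new_label):
    # Relabel every voxel within `radius` of `center` (voxel coordinates).
    # One stroke touches many MRI slices at once, instead of editing a
    # single 2D slice at a time.
    zz, yy, xx = np.indices(labels.shape)
    mask = ((zz - center[0]) ** 2
            + (yy - center[1]) ** 2
            + (xx - center[2]) ** 2) <= radius ** 2
    labels[mask] = new_label
    return int(mask.sum())  # number of voxels changed
```

In a VR interface, `center` would follow a tracked controller, so the correction gesture operates directly in the 3D volume the segmentation lives in.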
@InProceedings{VR17p467,
author = {Dominique Duncan and Bradley Newman and Adam Saslow and Emily Wanserski and Tyler Ard and Ryan Essex and Arthur Toga},
title = {VRAIN: Virtual Reality Assisted Intervention for Neuroimaging},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {467--468},
doi = {},
year = {2017},
}
Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan
(University of California at Santa Barbara, USA)
Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and multi-user collaboration by annotating in the real world.
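Anchoring an annotation drawn from a distance means intersecting the hand ray with the approximate scene model. If the surface is locally approximated by a plane, the query is a standard ray-plane intersection (plain Python; this is our simplification of the scene-model lookup, not the HoloLens API):

```python
def anchor_annotation(ray_origin, ray_dir, plane_point, plane_normal):
    # Intersect the hand ray with a surface plane from the scene model to
    # anchor a drawn annotation point on a distant physical object.
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the surface
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, ray_origin, plane_normal))
    t /= denom
    if t < 0:
        return None  # surface is behind the user
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```

Running this per sample of a drawing gesture yields a 3D polyline on the object's surface, which is what makes the annotation usable as a shared spatial reference for multi-user collaboration.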
@InProceedings{VR17p469,
author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer and John O’Donovan},
title = {Gesture-Based Augmented Reality Annotation},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {469--470},
doi = {},
year = {2017},
}
Virtual Field Trips with Networked Depth-Camera-Based Teacher, Heterogeneous Displays, and Example Energy Center Application
Jason W. Woodworth, Sam Ekong, and Christoph W. Borst
(University of Louisiana at Lafayette, USA)
This demo presents an approach to networked educational virtual reality for virtual field trips and guided exploration. It shows an asymmetric collaborative interface in which a remote teacher stands in front of a large display and depth camera (Kinect) while students are immersed in HMDs. The teacher's front-facing mesh is streamed into the environment to assist students and deliver instruction. Our project uses commodity virtual reality hardware and high-performance networks to provide students who are unable to visit a real facility with an alternative that offers similar educational benefits. Virtual facilities can further be augmented with educational content through interactables or small games. We discuss motivation, features, interface challenges, and ongoing testing.
@InProceedings{VR17p471,
author = {Jason W. Woodworth and Sam Ekong and Christoph W. Borst},
title = {Virtual Field Trips with Networked Depth-Camera-Based Teacher, Heterogeneous Displays, and Example Energy Center Application},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {471--472},
doi = {},
year = {2017},
}
Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras
Chih-Fan Chen, Mark Bolas, and Evan Suma Rosenberg
(University of Southern California, USA)
Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy: handcrafting a detailed 3D model of a real object can be time- and labor-intensive. An alternative is to reconstruct the model with a structured camera array such as a light stage, but such setups are very expensive and impractical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and camera trajectory are automatically reconstructed from an RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
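View-dependent texturing of the kind described can be sketched as ranking the captured camera views by angular closeness to the current HMD viewing direction and blending the best few (NumPy; the cosine-similarity criterion is our simplification of the pipeline's selection step):

```python
import numpy as np

def select_texture_views(capture_dirs, hmd_dir, k=3):
    # Rank captured camera viewing directions by angular closeness to the
    # current HMD viewing direction; the k best images are blended for
    # view-dependent texturing.
    d = np.asarray(hmd_dir, dtype=float)
    d = d / np.linalg.norm(d)
    C = np.asarray(capture_dirs, dtype=float)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    scores = C @ d  # cosine similarity with the HMD view direction
    return [int(i) for i in np.argsort(-scores)[:k]]
```

Re-running the selection as the head moves is what produces the specular and light-burst effects the abstract mentions: the texture follows the viewpoint instead of being baked in.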
@InProceedings{VR17p473,
author = {Chih-Fan Chen and Mark Bolas and Evan Suma Rosenberg},
title = {Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {473--474},
doi = {},
year = {2017},
}
Travel in Large-Scale Head-Worn VR: Pre-oriented Teleportation with WIMs and Previews
Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky
(Columbia University, USA; Teachers College, USA)
We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast the technique with alternative approaches to virtual travel.
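Mapping the avatar placed in the world-in-miniature to a full-scale preview camera pose is a scale-and-orient computation. A sketch under a y-up convention (function and parameter names, WIM scale, and eye height are our assumptions):

```python
import math

def preview_pose(avatar_pos_wim, yaw_deg, pitch_deg, wim_scale,
                 eye_height=1.7):
    # Scale the avatar's WIM position up to full-scale world coordinates
    # and keep the chosen yaw/pitch for the preview view direction.
    x, y, z = (c / wim_scale for c in avatar_pos_wim)
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # Forward vector: yaw about the y (up) axis, pitch about the x axis.
    forward = (math.cos(pitch) * math.sin(yaw),
               math.sin(pitch),
               math.cos(pitch) * math.cos(yaw))
    eye = (x, y + eye_height, z)
    return eye, forward
```

Rendering the full-scale scene from `eye` along `forward` gives exactly the interactive post-teleport preview the demo describes; committing the teleport just moves the real camera to that pose.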
@InProceedings{VR17p475,
author = {Carmine Elvezio and Mengu Sukan and Steven Feiner and Barbara Tversky},
title = {Travel in Large-Scale Head-Worn VR: Pre-oriented Teleportation with WIMs and Previews},
booktitle = {Proc.\ VR},
publisher = {IEEE},
pages = {475--476},
doi = {},
year = {2017},
}