VR 2017 – Author Index

Azmandian, Mahdi
VR '17: "An Evaluation of Strategies ..."
An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces
Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg (University of Southern California, USA) As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we focus on defining the extent of these challenges in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting the available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space. @InProceedings{VR17p91, author = {Mahdi Azmandian and Timofey Grechkin and Evan Suma Rosenberg}, title = {An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {91--98}, doi = {}, year = {2017}, }
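A minimal sketch of the kind of intervention logic the abstract describes, choosing among steering, forced stopping, and re-orientation from a constant-velocity collision prediction. This is an illustration, not the authors' algorithm; the thresholds and function names are assumptions.

```python
import numpy as np

def time_to_closest_approach(p1, v1, p2, v2):
    """Time at which two walkers, moving at constant velocity, are closest."""
    dp, dv = p1 - p2, v1 - v2
    denom = float(dv @ dv)
    return 0.0 if denom < 1e-9 else max(0.0, -float(dp @ dv) / denom)

def choose_intervention(p1, v1, p2, v2, safety_radius=1.0, horizon=3.0):
    """Pick the least disruptive intervention for a predicted user-to-user collision."""
    t = time_to_closest_approach(p1, v1, p2, v2)
    if t > horizon:
        return "none"
    gap = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t))
    if gap > safety_radius:
        return "none"
    if t > 1.5:
        return "steer_apart"      # inject subtle redirection gains early
    if gap > 0.5 * safety_radius:
        return "forced_stop"      # halt one user briefly
    return "reorient"             # overt reset as a last resort
```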
Bazin, Jean-Charles
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, }
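The landmark-driven fitting step can be illustrated as a small regularized least-squares solve, under the assumption that identity blendshapes act linearly on landmark positions; the array shapes and the [0, 1] weight range below are assumptions, not the paper's specification.

```python
import numpy as np

def fit_identity_blendweights(neutral_lm, blendshape_deltas, target_lm, reg=1e-3):
    """Least-squares fit of identity blendweights so the rig's landmark
    positions match facial landmarks detected in the subject's photo.

    neutral_lm:        (L, 3) landmarks of the neutral template
    blendshape_deltas: (K, L, 3) per-blendshape landmark displacements
    target_lm:         (L, 3) detected facial landmarks
    """
    K = blendshape_deltas.shape[0]
    A = blendshape_deltas.reshape(K, -1).T          # (3L, K) basis matrix
    b = (target_lm - neutral_lm).ravel()            # (3L,) residual to explain
    # Tikhonov regularization keeps weights small and the solve well-posed.
    w = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return np.clip(w, 0.0, 1.0)                     # assume weights live in [0, 1]
```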
Beck, Stephan
VR '17: "Sweeping-Based Volumetric ..."
Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems
Stephan Beck and Bernd Froehlich (Bauhaus-Universität Weimar, Germany) The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth and the color image as well as the 3D world positions can be automatically established. In order to obtain temporally synchronized correspondences between an RGBD-sensor’s data streams and the tracked target’s positions, we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table which is used during runtime to transform depth and color information into the application’s world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel. @InProceedings{VR17p167, author = {Stephan Beck and Bernd Froehlich}, title = {Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {167--176}, doi = {}, year = {2017}, }
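A sketch of how a runtime 3D look-up table of the kind described might be queried; the grid layout and trilinear interpolation here are assumptions, not the authors' exact scheme.

```python
import numpy as np

def world_from_depth(lut, u, v, d, d_min, d_max):
    """Trilinear lookup of a calibrated world-space position for one depth sample.

    lut: (U, V, D, 3) grid mapping (u, v, depth-bin) -> world xyz, filled
         offline from the sweeping correspondences (layout assumed).
    """
    U, V, D, _ = lut.shape
    # Continuous grid coordinates of the sample, clamped inside the grid.
    gu = np.clip(u, 0, U - 1.001)
    gv = np.clip(v, 0, V - 1.001)
    gd = np.clip((d - d_min) / (d_max - d_min) * (D - 1), 0, D - 1.001)
    i0, j0, k0 = int(gu), int(gv), int(gd)
    fu, fv, fd = gu - i0, gv - j0, gd - k0
    out = np.zeros(3)
    # Blend the eight surrounding grid cells.
    for di, wi in ((0, 1 - fu), (1, fu)):
        for dj, wj in ((0, 1 - fv), (1, fv)):
            for dk, wk in ((0, 1 - fd), (1, fd)):
                out += wi * wj * wk * lut[i0 + di, j0 + dj, k0 + dk]
    return out
```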
Bodenheimer, Bobby
VR '17: "Prism Aftereffects for Throwing ..."
Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. @InProceedings{VR17p141, author = {Bobby Bodenheimer and Sarah Creem-Regehr and Jeanine Stefanucci and Elena Shemetova and William B. Thompson}, title = {Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {141--147}, doi = {}, year = {2017}, }
Bouyer, Guillaume
VR '17: "Inducing Self-Motion Sensations ..."
Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion
Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed notably to match the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigation in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts. @InProceedings{VR17p84, author = {Guillaume Bouyer and Amine Chellali and Anatole Lécuyer}, title = {Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {84--90}, doi = {}, year = {2017}, }
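The core mapping is simple enough to sketch: a force proportional to the virtual vehicle's acceleration, clamped to the device's safe output range. The gain and limit values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def motion_cue_force(accel_long, accel_lat, gain=0.8, f_max=3.0):
    """Map the virtual vehicle's acceleration to a device-space force.

    accel_long: longitudinal acceleration in m/s^2 (braking < 0 < throttle)
    accel_lat:  lateral acceleration in m/s^2 (left < 0 < right)
    Returns (fx, fy) in newtons, clamped to an assumed safe device range.
    """
    fx = np.clip(gain * accel_lat, -f_max, f_max)   # left/right turns
    fy = np.clip(gain * accel_long, -f_max, f_max)  # acceleration/braking
    return fx, fy
```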
Bruder, Gerd
VR '17: "Exploring the Effect of Vibrotactile ..."
Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment
Myungho Lee, Gerd Bruder, and Gregory F. Welch (University of Central Florida, USA) We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions as follows: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; while participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space. @InProceedings{VR17p105, author = {Myungho Lee and Gerd Bruder and Gregory F. Welch}, title = {Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {105--111}, doi = {}, year = {2017}, }
Ceylan, Duygu
VR '17: "6-DOF VR Videos with a Single ..."
6-DOF VR Videos with a Single 360-Camera
Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base demanding immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full six degrees of freedom (6-DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views both with rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. @InProceedings{VR17p37, author = {Jingwei Huang and Zhili Chen and Duygu Ceylan and Hailin Jin}, title = {6-DOF VR Videos with a Single 360-Camera}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {37--44}, doi = {}, year = {2017}, }
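A sketch of the basic reprojection underlying such a warp: lift an equirectangular pixel to 3D using an estimated depth, then project it into a rotated and translated head pose. It assumes the source view sits at the origin with identity orientation; it is not the paper's full warping algorithm.

```python
import numpy as np

def warp_equirect_pixel(u, v, depth, width, height, R_new, t_new):
    """Reproject one equirectangular pixel into a new 6-DOF viewpoint.

    (u, v): pixel in the source 360-frame; depth: per-pixel scene depth
    R_new (3x3), t_new (3,): rotation and translation of the new head pose
    Returns the pixel coordinates of the same scene point in the new view.
    """
    # Pixel -> spherical direction (longitude/latitude parameterization).
    lon = (u / width) * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v / height) * np.pi
    ray = np.array([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)])
    # Lift to a 3D point, then express it in the new camera frame.
    p = R_new.T @ (ray * depth - t_new)
    # 3D point -> spherical coordinates of the new view.
    lon2 = np.arctan2(p[0], p[2])
    lat2 = np.arcsin(np.clip(p[1] / np.linalg.norm(p), -1.0, 1.0))
    return ((lon2 + np.pi) / (2 * np.pi) * width,
            (np.pi / 2 - lat2) / np.pi * height)
```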
Cha, Young-Woon
VR '17: "Optimizing Placement of Commodity ..."
Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. @InProceedings{VR17p157, author = {Rohan Chabra and Adrian Ilie and Nicholas Rewkowski and Young-Woon Cha and Henry Fuchs}, title = {Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {157--166}, doi = {}, year = {2017}, }
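The optimization stage can be sketched as a generic simulated-annealing loop over placements, with the paper's fitness metric (coverage, resolution, interference) abstracted behind a callable; the cooling schedule and step count are illustrative assumptions.

```python
import math
import random

def anneal_placements(initial, fitness, perturb, steps=5000, t0=1.0, t1=1e-3):
    """Generic simulated annealing over depth-sensor configurations.

    initial: starting list of sensor poses
    fitness: callable scoring coverage/resolution minus interference penalties
    perturb: callable returning a slightly modified copy of a configuration
    """
    current, best = initial, initial
    f_cur = f_best = fitness(initial)
    for i in range(steps):
        temp = t0 * (t1 / t0) ** (i / steps)   # geometric cooling schedule
        candidate = perturb(current)
        f_cand = fitness(candidate)
        # Accept improvements always; accept regressions with a probability
        # that shrinks as the temperature drops.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / temp):
            current, f_cur = candidate, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
    # A greedy outer loop (not shown) would add sensors until fitness saturates.
    return best, f_best
```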
Chabra, Rohan
VR '17: "Optimizing Placement of Commodity ..."
Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. @InProceedings{VR17p157, author = {Rohan Chabra and Adrian Ilie and Nicholas Rewkowski and Young-Woon Cha and Henry Fuchs}, title = {Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {157--166}, doi = {}, year = {2017}, }
Chaudhary, Aashish
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, }
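A minimal example of driving a standard VTK pipeline through the OpenVR classes, assuming a VTK build with the OpenVR module enabled (Python shown; the C++ API is analogous). This is a generic usage sketch, not the paper's specific enhancements.

```python
import vtk  # requires a VTK build with the OpenVR module enabled

# OpenVR-flavored renderer, window, and interactor replace the desktop ones.
renderer = vtk.vtkOpenVRRenderer()
window = vtk.vtkOpenVRRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(window)

# Standard VTK visualization pipeline: source -> mapper -> actor.
source = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer.AddActor(actor)

renderer.ResetCamera()
window.Render()
interactor.Start()  # hands control to the headset's event loop
```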
Chellali, Amine
VR '17: "Inducing Self-Motion Sensations ..."
Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion
Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed notably to match the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigation in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts. @InProceedings{VR17p84, author = {Guillaume Bouyer and Amine Chellali and Anatole Lécuyer}, title = {Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {84--90}, doi = {}, year = {2017}, }
Chen, Zhili
VR '17: "6-DOF VR Videos with a Single ..."
6-DOF VR Videos with a Single 360-Camera
Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base demanding immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full six degrees of freedom (6-DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views both with rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. @InProceedings{VR17p37, author = {Jingwei Huang and Zhili Chen and Duygu Ceylan and Hailin Jin}, title = {6-DOF VR Videos with a Single 360-Camera}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {37--44}, doi = {}, year = {2017}, }
Cordar, Andrew
VR '17: "Repeat after Me: Using Mixed ..."
Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed-loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed-loop communication behavior. Our results showed that residents' closed-loop communication behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. @InProceedings{VR17p148, author = {Andrew Cordar and Adam Wendling and Casey White and Samsun Lampotang and Benjamin Lok}, title = {Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {148--156}, doi = {}, year = {2017}, }
Creem-Regehr, Sarah
VR '17: "Prism Aftereffects for Throwing ..."
Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. @InProceedings{VR17p141, author = {Bobby Bodenheimer and Sarah Creem-Regehr and Jeanine Stefanucci and Elena Shemetova and William B. Thompson}, title = {Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {141--147}, doi = {}, year = {2017}, }
Fang, Qiang
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike the existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, }
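The acoustic-to-EMA mapping stage can be sketched with an off-the-shelf multilayer perceptron. The feature dimensionality, coil count, and network size below are assumptions, and the real system trains on the 1,108 recorded utterances rather than the random stand-in data used here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-ins for the real data: acoustic feature frames (e.g., MFCCs)
# paired with EMA coil positions recorded during the utterances.
X = np.random.randn(2000, 39)   # 39-dim acoustic features per frame (assumed)
Y = np.random.randn(2000, 18)   # 6 EMA coils x 3D position per frame (assumed)

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
model.fit(X, Y)

def ema_from_audio(frame):
    """Predict EMA coil positions for one acoustic frame; these predictions
    would then drive the reduced physics-based tongue model."""
    return model.predict(frame.reshape(1, -1))[0].reshape(6, 3)
```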
Feng, Lele
VR '17: "MagicToon: A 2D-to-3D Creative ..."
MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR
Lele Feng, Xubo Yang, and Shuangjiu Xiao (Shanghai Jiao Tong University, China) We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor to construct more complicated AR scenes. The model creator can generate textured 3D cartoon models according to 2D drawings automatically and overlay them on the real world, bringing life to flat cartoon drawings. With our interactive model editor, the user can perform several optional operations on 3D models, such as copying and animating, in AR context through the touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study show that our system is easier to use than traditional sketch-based modeling systems and gives more room to children's creativity than AR coloring books. @InProceedings{VR17p195, author = {Lele Feng and Xubo Yang and Shuangjiu Xiao}, title = {MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {195--204}, doi = {}, year = {2017}, }
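One common way to realize 2D-to-3D cartoon "inflation" is to raise the drawing's interior by its distance to the contour; a sketch of that general idea follows, which is not necessarily the system's actual model creator.

```python
import cv2
import numpy as np

def inflate_drawing(mask, height_scale=0.5):
    """Turn a binary cartoon silhouette into a simple height field by
    'inflating' it: interior pixels rise with distance from the contour."""
    dist = cv2.distanceTransform(mask.astype(np.uint8), cv2.DIST_L2, 5)
    dist /= max(float(dist.max()), 1e-6)          # normalize to [0, 1]
    height = height_scale * np.sqrt(dist)         # rounded, balloon-like profile
    return np.where(mask > 0, height, 0.0)        # zero outside the character
```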
Froehlich, Bernd
VR '17: "Sweeping-Based Volumetric ..."
Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems
Stephan Beck and Bernd Froehlich (Bauhaus-Universität Weimar, Germany) The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth and the color image as well as the 3D world positions can be automatically established. In order to obtain temporally synchronized correspondences between an RGBD-sensor’s data streams and the tracked target’s positions, we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table which is used during runtime to transform depth and color information into the application’s world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel. @InProceedings{VR17p167, author = {Stephan Beck and Bernd Froehlich}, title = {Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {167--176}, doi = {}, year = {2017}, }
Fuchs, Henry
VR '17: "Optimizing Placement of Commodity ..."
Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. @InProceedings{VR17p157, author = {Rohan Chabra and Adrian Ilie and Nicholas Rewkowski and Young-Woon Cha and Henry Fuchs}, title = {Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {157--166}, doi = {}, year = {2017}, }
Gatterer, Clemens
VR '17: "VRRobot: Robot Actuated Props ..."
VRRobot: Robot Actuated Props in an Infinite Virtual Environment
Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann (Vienna University of Technology, Austria) We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and is based on off-the-shelf components. A robotic arm moves physical props, dynamically matching the pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept and the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants that indicates promising results, and discuss the potential of our system. @InProceedings{VR17p74, author = {Emanuel Vonach and Clemens Gatterer and Hannes Kaufmann}, title = {VRRobot: Robot Actuated Props in an Infinite Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {74--83}, doi = {}, year = {2017}, }
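The prop-matching control loop can be sketched as a rate-limited servo that only engages when the user's hand approaches the virtual object. The reach radius and per-tick step limit are illustrative safety parameters, not the paper's validated guidelines.

```python
import numpy as np

def step_toward(current, target, max_step=0.02):
    """Rate-limited step of the robot's end effector toward the virtual
    object's position (meters per control tick), a simple safety bound
    so the arm never lunges at the user."""
    delta = target - current
    dist = np.linalg.norm(delta)
    if dist <= max_step:
        return target
    return current + delta * (max_step / dist)

def control_tick(commanded, virtual_pos, hand_pos, reach_radius=0.8):
    """Move the prop only while the user's hand is near the virtual object."""
    if np.linalg.norm(hand_pos - virtual_pos) > reach_radius:
        return commanded            # hold position; no interaction expected
    return step_toward(commanded, virtual_pos)
```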
Grechkin, Timofey
VR '17: "An Evaluation of Strategies ..."
An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces
Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg (University of Southern California, USA) As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we focus on defining the extent of these challenges in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting the available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space. @InProceedings{VR17p91, author = {Mahdi Azmandian and Timofey Grechkin and Evan Suma Rosenberg}, title = {An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {91--98}, doi = {}, year = {2017}, }
Gutenko, Ievgeniia
VR '17: "Automatic Speed and Direction ..."
Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, abrogating the need for any navigational inputs from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observe the effect of automatic speed adjustment compared to traditional methods. We observed no negative impact from automatic navigation, and users performed as well as with manual navigation. @InProceedings{VR17p29, author = {Seyedkoosha Mirhosseini and Ievgeniia Gutenko and Sushant Ojal and Joseph Marino and Arie E. Kaufman}, title = {Automatic Speed and Direction Control along Constrained Navigation Paths}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {29--36}, doi = {}, year = {2017}, }
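A sketch of the head-orientation speed policy described: speed scales with how well the gaze aligns with the path tangent, with smoothing to avoid jerks. The parameter values are assumptions, not the paper's tuned settings.

```python
import numpy as np

def path_speed(gaze_dir, path_tangent, prev_speed=None,
               v_min=0.2, v_max=1.0, smooth=0.1):
    """Scale fly-through speed by how closely the user looks along the path.

    Looking down the path -> full speed; looking off-axis (examining the
    scene) -> slow down toward v_min. An exponential filter avoids jerks.
    """
    g = gaze_dir / np.linalg.norm(gaze_dir)
    t = path_tangent / np.linalg.norm(path_tangent)
    alignment = max(0.0, float(g @ t))             # cos(angle), clamped at 0
    target = v_min + (v_max - v_min) * alignment
    if prev_speed is None:
        return target
    return prev_speed + smooth * (target - prev_speed)
```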
Huang, Jingwei
VR '17: "6-DOF VR Videos with a Single ..."
6-DOF VR Videos with a Single 360-Camera
Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base demanding immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full six degrees of freedom (6-DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views both with rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. @InProceedings{VR17p37, author = {Jingwei Huang and Zhili Chen and Duygu Ceylan and Hailin Jin}, title = {6-DOF VR Videos with a Single 360-Camera}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {37--44}, doi = {}, year = {2017}, }
Huerta, Ivan
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, }
Hulin, Thomas
VR '17: "Evaluation of a Penalty and ..."
Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values
Mikel Sagardia and Thomas Hulin (German Aerospace Center, Germany) This work presents an evaluation study in which the effects of a penalty-based and a constraint-based haptic rendering algorithm on user performance and perception are analyzed. A total of N = 24 participants performed, in a within-subjects study, three variations of peg-in-hole tasks in a virtual environment after trials in an identically replicated real scenario as a reference. In addition to the two mentioned haptic rendering paradigms, two haptic devices were used, the HUG and a Sigma.7, and the force stiffness was also varied between the maximum and half the maximum value possible for each device. Both objective measures (time and trajectory, collision performance, and muscular effort) and subjective ratings (contact perception, ergonomics, and workload) were recorded and statistically analyzed. The results show that the constraint-based haptic rendering algorithm with a lower stiffness than the maximum possible yields the most realistic contact perception, while keeping the visual inter-penetration between the objects at roughly 15% of that caused by the penalty-based algorithm (i.e., not perceptible in many cases). This result is even more evident with the HUG, the haptic device with the highest force display capabilities, although user ratings point to the Sigma.7 as the device with the highest usability and lowest workload indicators. Altogether, the paper provides qualitative and quantitative guidelines for mapping properties of haptic algorithms and devices to user performance and perception. @InProceedings{VR17p64, author = {Mikel Sagardia and Thomas Hulin}, title = {Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {64--73}, doi = {}, year = {2017}, }
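The two rendering paradigms compared can be contrasted in a few lines: penalty methods push back proportionally to penetration depth, while constraint (god-object/proxy) methods spring the device toward a proxy kept on the object surface. The stiffness values are placeholders, not the study's settings.

```python
import numpy as np

def penalty_force(device_pos, surface_point, normal, k=1500.0):
    """Penalty rendering: force proportional to penetration depth along the
    outward surface normal; the device is allowed to sink into the object."""
    depth = float((surface_point - device_pos) @ normal)
    return k * depth * normal if depth > 0 else np.zeros(3)

def constraint_force(device_pos, proxy_pos, k=1500.0):
    """Constraint (god-object/proxy) rendering: a proxy point is constrained
    to remain on the object surface and a virtual spring pulls the device
    toward it, so the rendered contact never visibly inter-penetrates."""
    return k * (proxy_pos - device_pos)
```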
Ilie, Adrian
VR '17: "Optimizing Placement of Commodity ..."
Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. @InProceedings{VR17p157, author = {Rohan Chabra and Adrian Ilie and Nicholas Rewkowski and Young-Woon Cha and Henry Fuchs}, title = {Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {157--166}, doi = {}, year = {2017}, }
Itoh, Yuta
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience. A virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used for training the neural network to estimate the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, }
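The regression step that maps sensor distances to avatar expression weights can be sketched with ridge regression; the sensor count, weight layout, and random stand-in data are assumptions, and the paper's own regressor may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins: frames of photo-reflective distance readings recorded while
# the wearer holds each trained expression, paired with the avatar blendshape
# weights that should be active for that expression.
sensor_readings = np.random.rand(500, 16)   # 16 sensors inside the HMD (assumed)
avatar_weights = np.random.rand(500, 5)     # Neutral/Happy/Angry/Surprised/Sad

reg = Ridge(alpha=1.0).fit(sensor_readings, avatar_weights)

def expression_weights(frame):
    """Map one frame of sensor distances to avatar expression weights,
    clamped to a valid [0, 1] range before driving the avatar rig."""
    return np.clip(reg.predict(frame.reshape(1, -1))[0], 0.0, 1.0)
```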
Jhaveri, Sankhesh
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, }
Jin, Hailin
VR '17: "6-DOF VR Videos with a Single ..."
6-DOF VR Videos with a Single 360-Camera
Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base demanding immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full six degrees of freedom (6-DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views both with rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. @InProceedings{VR17p37, author = {Jingwei Huang and Zhili Chen and Duygu Ceylan and Hailin Jin}, title = {6-DOF VR Videos with a Single 360-Camera}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {37--44}, doi = {}, year = {2017}, }
Kajimoto, Hiroyuki
VR '17: "Wearable Tactile Device using ..."
Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World
Vibol Yem and Hiroyuki Kajimoto (University of Electro-Communications, Japan) We developed “Finger Glove for Augmented Reality” (FinGAR), which combines electrical and mechanical stimulation to selectively stimulate skin sensory mechanoreceptors and provide tactile feedback of virtual objects. A DC motor provides high-frequency vibration and shear deformation to the whole finger, and an array of electrodes provides pressure and low-frequency vibration with high spatial resolution. FinGAR devices are attached to the thumb, index finger and middle finger. The device is lightweight, simple in mechanism, easy to wear, and does not disturb the natural movements of the hand. All of these attributes are necessary for a general-purpose virtual reality system. A user study was conducted to evaluate its ability to reproduce sensations in four tactile dimensions: macro roughness, friction, fine roughness and hardness. Results indicated that skin deformation and cathodic stimulation affect macro roughness and hardness, whereas high-frequency vibration and anodic stimulation affect friction and fine roughness. @InProceedings{VR17p99, author = {Vibol Yem and Hiroyuki Kajimoto}, title = {Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {99--104}, doi = {}, year = {2017}, }
Kaufman, Arie E.
VR '17: "Automatic Speed and Direction ..."
Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, abrogating the need for any navigational inputs from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observe the effect of automatic speed adjustment compared to traditional methods. We observed no negative impact from automatic navigation, and users performed as well as with manual navigation. @InProceedings{VR17p29, author = {Seyedkoosha Mirhosseini and Ievgeniia Gutenko and Sushant Ojal and Joseph Marino and Arie E. Kaufman}, title = {Automatic Speed and Direction Control along Constrained Navigation Paths}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {29--36}, doi = {}, year = {2017}, }
Kaufmann, Hannes
VR '17: "VRRobot: Robot Actuated Props ..."
VRRobot: Robot Actuated Props in an Infinite Virtual Environment
Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann (Vienna University of Technology, Austria) We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and is based on off-the-shelf components. A robotic arm moves physical props, dynamically matching the pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept and the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants that indicates promising results, and discuss the potential of our system. @InProceedings{VR17p74, author = {Emanuel Vonach and Clemens Gatterer and Hannes Kaufmann}, title = {VRRobot: Robot Actuated Props in an Infinite Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {74--83}, doi = {}, year = {2017}, }
Klaudiny, Martin
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, }
|
Kopper, Regis |
VR '17: "Emotional Qualities of VR ..."
Emotional Qualities of VR Space
Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support the new, minimalist lifestyles of occupants, defined as neo-nomads, by generating emotional experiences of spaces aligned with their work experience in the digital domain. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. @InProceedings{VR17p3, author = {Asma Naz and Regis Kopper and Ryan P. McMahan and Mihai Nadin}, title = {Emotional Qualities of VR Space}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {3--11}, doi = {}, year = {2017}, } Info |
|
Kosek, Maggie |
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, } |
|
Lampotang, Samsun |
VR '17: "Repeat after Me: Using Mixed ..."
Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made in how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communication behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. @InProceedings{VR17p148, author = {Andrew Cordar and Adam Wendling and Casey White and Samsun Lampotang and Benjamin Lok}, title = {Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {148--156}, doi = {}, year = {2017}, } Video |
|
Lécuyer, Anatole |
VR '17: "Inducing Self-Motion Sensations ..."
Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion
Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists of applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed notably to match the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigations in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts. @InProceedings{VR17p84, author = {Guillaume Bouyer and Amine Chellali and Anatole Lécuyer}, title = {Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {84--90}, doi = {}, year = {2017}, } |
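The core mapping here is simple enough to state in a few lines: render a force on the controller proportional to the virtual vehicle's acceleration, with separate longitudinal and lateral gains. The gain values and axis convention below are illustrative guesses, not the study's parameters.

```python
K_LONG = 2.0   # N per m/s^2, acceleration/braking (assumed gain)
K_LAT = 1.5    # N per m/s^2, left/right turns (assumed gain)

def haptic_force(accel_longitudinal, accel_lateral):
    """Return (fx, fy) force in newtons to send to the haptic display,
    proportional to the virtual vehicle's acceleration."""
    return (K_LAT * accel_lateral, K_LONG * accel_longitudinal)
```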
|
Lee, Myungho |
VR '17: "Exploring the Effect of Vibrotactile ..."
Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment
Myungho Lee, Gerd Bruder, and Gregory F. Welch (University of Central Florida, USA) We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space. @InProceedings{VR17p105, author = {Myungho Lee and Gerd Bruder and Gregory F. Welch}, title = {Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {105--111}, doi = {}, year = {2017}, } |
|
Lok, Benjamin |
VR '17: "Repeat after Me: Using Mixed ..."
Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made in how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communication behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. @InProceedings{VR17p148, author = {Andrew Cordar and Adam Wendling and Casey White and Samsun Lampotang and Benjamin Lok}, title = {Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {148--156}, doi = {}, year = {2017}, } Video |
|
Lombart, Cindy |
VR '17: "A Study on the Use of an Immersive ..."
A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables
Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau (Ecole Centrale de Nantes, France; Audencia Business School, France) In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e., misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' "level of abnormality" that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that perform user studies within real shops, since fresh produce such as FaVs tends to rot rapidly, preventing studies from being repeated or run over a long period. In order to overcome those limitations, we created a virtual grocery store with a fresh FaVs section where 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented with either "normal", "slightly misshaped", "misshaped" or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs regardless of their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality. @InProceedings{VR17p55, author = {Adrien Verhulst and Jean-Marie Normand and Cindy Lombart and Guillaume Moreau}, title = {A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {55--63}, doi = {}, year = {2017}, } |
|
Lonie, David |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
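For readers who want to try the OpenVR path, a minimal hookup in VTK's Python bindings looks roughly like the sketch below. It assumes a VTK build with the OpenVR module enabled (class names as in VTK 8.x; verify against your build), and the paper's own enhancements go well beyond this boilerplate.

```python
import vtk

# OpenVR-specific renderer, window, and interactor stand in for the usual
# vtkRenderer / vtkRenderWindow / vtkRenderWindowInteractor trio.
renderer = vtk.vtkOpenVRRenderer()
window = vtk.vtkOpenVRRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(window)

# Any standard VTK pipeline can feed the immersive renderer.
source = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer.AddActor(actor)

window.Render()
interactor.Start()  # hands control to the HMD's render/interaction loop
```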
|
Lu, Wenhuan |
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. Directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is known to be challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage deep neural networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors, based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Instead, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, } Video |
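As a sketch of the acoustic-to-EMA stage, a small PyTorch MLP mapping per-frame acoustic features to sensor positions is shown below. The feature type, dimensions, and architecture are assumptions; the abstract specifies only that a deep network maps acoustic signals to EMA positions.

```python
import torch
import torch.nn as nn

N_ACOUSTIC = 39   # assumed per-frame feature dimension (e.g., MFCC stack)
N_SENSORS = 6     # assumed number of EMA sensors

# Hypothetical regression network; the paper's architecture is unspecified.
model = nn.Sequential(
    nn.Linear(N_ACOUSTIC, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_SENSORS * 3),  # 3D position per sensor
)

def predict_ema(frame_features):
    """frame_features: (N_ACOUSTIC,) tensor -> (N_SENSORS, 3) positions
    that would then drive the reduced physics-based tongue model."""
    with torch.no_grad():
        return model(frame_features).reshape(N_SENSORS, 3)
```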
|
Luo, Ran |
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. Directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is known to be challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage deep neural networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors, based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Instead, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, } Video |
|
MacQuarrie, Andrew |
VR '17: "Cinematic Virtual Reality: ..."
Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video
Andrew MacQuarrie and Anthony Steed (University College London, UK) The proliferation of head-mounted displays (HMDs) in the market means that cinematic virtual reality (CVR) is an increasingly popular format. We explore several metrics that may indicate advantages and disadvantages of CVR compared to traditional viewing formats such as TV. We explored the consumption of panoramic videos in three different display systems: an HMD, a SurroundVideo+ (SV+), and a standard 16:9 TV. The SV+ display features a TV with projected peripheral content. A between-groups experiment with 63 participants was conducted, in which participants watched panoramic videos in one of these three display conditions. Aspects examined in the experiment were spatial awareness, narrative engagement, enjoyment, memory, fear, attention, and a viewer’s concern about missing something. Our results indicated that the HMD offered a significant benefit in terms of enjoyment and spatial awareness, and our SV+ display offered a significant improvement in enjoyment over traditional TV. We were unable to confirm the findings of a previous study that showed incidental memory may be lower with an HMD than with a TV. Drawing attention and a viewer’s concern about missing something were also not significantly different between display conditions. It is clear that passive media viewing consists of a complex interplay of factors, such as the media itself, the characteristics of the display, and human aspects including perception and attention. While passive media viewing presents many challenges for evaluation, identifying a number of broadly applicable metrics will aid our understanding of these experiences and allow the creation of better, more engaging CVR content and displays. @InProceedings{VR17p45, author = {Andrew MacQuarrie and Anthony Steed}, title = {Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {45--54}, doi = {}, year = {2017}, } |
|
Malleson, Charles |
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, } |
|
Marino, Joseph |
VR '17: "Automatic Speed and Direction ..."
Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, removing the need for any navigational input from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, as well as graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed no negative impact from automatic navigation, and users performed as well as they did with manual navigation. @InProceedings{VR17p29, author = {Seyedkoosha Mirhosseini and Ievgeniia Gutenko and Sushant Ojal and Joseph Marino and Arie E. Kaufman}, title = {Automatic Speed and Direction Control along Constrained Navigation Paths}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {29--36}, doi = {}, year = {2017}, } |
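One plausible reading of the head-orientation speed rule: the farther the gaze strays from the path tangent (off-axis examination), the slower the fly-through. The linear-in-cosine falloff below is our assumption, not the authors' published function.

```python
import numpy as np

def camera_speed(gaze_dir, path_dir, v_max=1.0, v_min=0.05):
    """Scale fly-through speed by how closely the user's gaze follows
    the path direction; both inputs are 3-vectors in world space."""
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    path = path_dir / np.linalg.norm(path_dir)
    alignment = max(0.0, float(np.dot(gaze, path)))  # 1 = looking along path
    return v_min + (v_max - v_min) * alignment
```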
|
Martin, Ken |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
|
Masai, Katsutoshi |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
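The recognition stage reduces to a small supervised-learning problem: sensor distance vectors in, one of five expression labels out. The scikit-learn MLP below is a hypothetical stand-in for the paper's unspecified network.

```python
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]

def train_expression_model(X_train, y_train):
    """X_train: (n_samples, n_sensors) distance readings from the
    photo-reflective sensors; y_train: integer labels 0-4."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    clf.fit(X_train, y_train)
    return clf

# Usage (d is a (n_sensors,) array of current readings):
#   label = EXPRESSIONS[clf.predict(d.reshape(1, -1))[0]]
```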
|
McKenzie, Sandy |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
|
McMahan, Ryan P. |
VR '17: "Emotional Qualities of VR ..."
Emotional Qualities of VR Space
Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support the new, minimalist lifestyles of occupants, defined as neo-nomads, by generating emotional experiences of spaces aligned with their work experience in the digital domain. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. @InProceedings{VR17p3, author = {Asma Naz and Regis Kopper and Ryan P. McMahan and Mihai Nadin}, title = {Emotional Qualities of VR Space}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {3--11}, doi = {}, year = {2017}, } Info |
|
Mehra, Ravish |
VR '17: "Efficient Construction of ..."
Efficient Construction of the Spatial Room Impulse Response
Carl Schissler, Peter Stirling, and Ravish Mehra (Oculus, USA; Facebook, USA) An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR), which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observed a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications. @InProceedings{VR17p122, author = {Carl Schissler and Peter Stirling and Ravish Mehra}, title = {Efficient Construction of the Spatial Room Impulse Response}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {122--130}, doi = {}, year = {2017}, } |
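To make the adaptive-order idea concrete, the sketch below picks a spherical-harmonic order per fixed-length RIR partition. The paper uses a perceptually-driven metric; the energy-ratio heuristic here is only a crude stand-in to show the control flow.

```python
import numpy as np

MAX_ORDER = 4  # assumed upper bound on SH order

def sh_order_for_partition(partition, total_energy, frac_per_order=0.05):
    """Pick an SH order for one fixed-length RIR partition: quieter, later
    partitions get lower order, cutting HRTF-convolution cost."""
    energy = float(np.sum(partition ** 2))
    ratio = energy / max(total_energy, 1e-12)
    # More relative energy -> higher order, capped at MAX_ORDER.
    return int(min(MAX_ORDER, np.ceil(ratio / frac_per_order)))
```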
|
Mine, Mark |
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, } |
|
Mirhosseini, Seyedkoosha |
VR '17: "Automatic Speed and Direction ..."
Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, removing the need for any navigational input from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, as well as graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed no negative impact from automatic navigation, and users performed as well as they did with manual navigation. @InProceedings{VR17p29, author = {Seyedkoosha Mirhosseini and Ievgeniia Gutenko and Sushant Ojal and Joseph Marino and Arie E. Kaufman}, title = {Automatic Speed and Direction Control along Constrained Navigation Paths}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {29--36}, doi = {}, year = {2017}, } |
|
Mitchell, Kenny |
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, } |
|
Moghadam, Kasra Rahimi |
VR '17: "Guided Head Rotation and Amplified ..."
Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality
Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric D. Ragan (Texas A&M University, USA) Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so that physically turning within a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers. @InProceedings{VR17p19, author = {Shyam Prathish Sargunam and Kasra Rahimi Moghadam and Mohamed Suhail and Eric D. Ragan}, title = {Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {19--28}, doi = {}, year = {2017}, } Video |
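Both techniques come down to yaw bookkeeping: amplified rotation scales physical yaw so a comfortable seated range covers a full virtual turn, and guided rotation additionally drains the accumulated offset back to neutral during virtual travel. The constants below are illustrative, not the study's settings.

```python
PHYSICAL_RANGE = 180.0   # degrees a seated user can comfortably turn (assumed)
VIRTUAL_RANGE = 360.0
GAIN = VIRTUAL_RANGE / PHYSICAL_RANGE   # amplification factor (here 2.0)
REALIGN_RATE = 5.0       # deg/s of offset removed while moving (assumed)

def virtual_yaw(physical_yaw):
    """Amplified rotation: map physical yaw to virtual yaw."""
    return GAIN * physical_yaw

def guided_realign(offset_deg, moving, dt):
    """Guided rotation: gradually steer the head offset back toward the
    neutral, straight-ahead position, but only during virtual movement."""
    if not moving or offset_deg == 0.0:
        return offset_deg
    step = min(abs(offset_deg), REALIGN_RATE * dt)
    return offset_deg - step if offset_deg > 0 else offset_deg + step
```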
|
Money, James |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
|
Moreau, Guillaume |
VR '17: "A Study on the Use of an Immersive ..."
A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables
Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau (Ecole Centrale de Nantes, France; Audencia Business School, France) In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e., misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' "level of abnormality" that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that perform user studies within real shops, since fresh produce such as FaVs tends to rot rapidly, preventing studies from being repeated or run over a long period. In order to overcome those limitations, we created a virtual grocery store with a fresh FaVs section where 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented with either "normal", "slightly misshaped", "misshaped" or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs regardless of their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality. @InProceedings{VR17p55, author = {Adrien Verhulst and Jean-Marie Normand and Cindy Lombart and Guillaume Moreau}, title = {A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {55--63}, doi = {}, year = {2017}, } |
|
Nadin, Mihai |
VR '17: "Emotional Qualities of VR ..."
Emotional Qualities of VR Space
Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support the new, minimalist lifestyles of occupants, defined as neo-nomads, by generating emotional experiences of spaces aligned with their work experience in the digital domain. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. @InProceedings{VR17p3, author = {Asma Naz and Regis Kopper and Ryan P. McMahan and Mihai Nadin}, title = {Emotional Qualities of VR Space}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {3--11}, doi = {}, year = {2017}, } Info |
|
Nakamura, Fumihiko |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
|
Naz, Asma |
VR '17: "Emotional Qualities of VR ..."
Emotional Qualities of VR Space
Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support the new, minimalist lifestyles of occupants, defined as neo-nomads, by generating emotional experiences of spaces aligned with their work experience in the digital domain. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. @InProceedings{VR17p3, author = {Asma Naz and Regis Kopper and Ryan P. McMahan and Mihai Nadin}, title = {Emotional Qualities of VR Space}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {3--11}, doi = {}, year = {2017}, } Info |
|
Normand, Jean-Marie |
VR '17: "A Study on the Use of an Immersive ..."
A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables
Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau (Ecole Centrale de Nantes, France; Audencia Business School, France) In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e., misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' "level of abnormality" that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that perform user studies within real shops, since fresh produce such as FaVs tends to rot rapidly, preventing studies from being repeated or run over a long period. In order to overcome those limitations, we created a virtual grocery store with a fresh FaVs section where 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented with either "normal", "slightly misshaped", "misshaped" or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs regardless of their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality. @InProceedings{VR17p55, author = {Adrien Verhulst and Jean-Marie Normand and Cindy Lombart and Guillaume Moreau}, title = {A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {55--63}, doi = {}, year = {2017}, } |
|
Ojal, Sushant |
VR '17: "Automatic Speed and Direction ..."
Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, removing the need for any navigational input from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, as well as graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed no negative impact from automatic navigation, and users performed as well as they did with manual navigation. @InProceedings{VR17p29, author = {Seyedkoosha Mirhosseini and Ievgeniia Gutenko and Sushant Ojal and Joseph Marino and Arie E. Kaufman}, title = {Automatic Speed and Direction Control along Constrained Navigation Paths}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {29--36}, doi = {}, year = {2017}, } |
|
O'Leary, Patrick |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
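For readers who want a concrete starting point, the following minimal Python sketch shows the kind of VTK/OpenVR combination the paper describes; it assumes a VTK build with the OpenVR rendering module enabled (class names per that module) and a running SteamVR runtime:

import vtk

# Ordinary VTK pipeline: a cone source rendered through the OpenVR backend.
source = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkOpenVRRenderer()
window = vtk.vtkOpenVRRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(window)

renderer.AddActor(actor)
renderer.ResetCamera()
window.Render()
interactor.Start()  # hands the event loop over to the HMD compositor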
|
Otsuka, Jiu |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can serve as a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables the estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
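The regression idea, distance readings in, expression estimate out, can be sketched as follows; the 16-channel sensor layout and the use of scikit-learn's MLPRegressor on random stand-in data are assumptions for illustration, not the authors' network or training set:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 16))   # 500 frames of 16 photo-reflective distance readings
y = rng.random((500, 5))    # target weights for 5 expression blendshapes

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

frame = rng.random((1, 16))
weights = np.clip(model.predict(frame), 0.0, 1.0)  # keep weights in valid range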
|
Ragan, Eric D. |
VR '17: "Guided Head Rotation and Amplified ..."
Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality
Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric D. Ragan (Texas A&M University, USA) Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control that will work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so physically turning in a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we also use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers. @InProceedings{VR17p19, author = {Shyam Prathish Sargunam and Kasra Rahimi Moghadam and Mohamed Suhail and Eric D. Ragan}, title = {Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {19--28}, doi = {}, year = {2017}, } Video |
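The two mechanisms, amplification and guided realignment, can be sketched in a few lines; the comfortable turning range and realignment rate below are illustrative assumptions, not the parameters evaluated in the study:

def amplified_yaw(physical_yaw_deg, comfortable_range_deg=120.0):
    # Map a comfortable seated turning range onto the full 360-degree
    # virtual range (here a 3x rotation gain).
    gain = 360.0 / comfortable_range_deg
    return (physical_yaw_deg * gain) % 360.0

def guided_realign(offset_deg, dt, rate_deg_per_s=5.0):
    # During virtual travel, gradually drain the accumulated neck offset
    # back toward the neutral straight-ahead position.
    step = min(abs(offset_deg), rate_deg_per_s * dt)
    return offset_deg - step if offset_deg > 0 else offset_deg + step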
|
Rewkowski, Nicholas |
VR '17: "Optimizing Placement of Commodity ..."
Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments, to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference, compared with a reconstruction of the same scene using manual sensor placement. @InProceedings{VR17p157, author = {Rohan Chabra and Adrian Ilie and Nicholas Rewkowski and Young-Woon Cha and Henry Fuchs}, title = {Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {157--166}, doi = {}, year = {2017}, } Video Info |
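A toy version of the simulated-annealing stage; it assumes an externally supplied fitness(placements) function combining coverage, resolution, and interference, as the paper's metric does, while the cooling schedule, acceptance rule, and step count are generic textbook choices:

import math, random

def anneal(placements, fitness, perturb, steps=10000, t0=1.0, t_end=1e-3):
    current, best = list(placements), list(placements)
    f_cur = f_best = fitness(current)
    for i in range(steps):
        t = t0 * (t_end / t0) ** (i / steps)      # geometric cooling
        candidate = perturb(current)              # e.g. jitter one sensor pose
        f_cand = fitness(candidate)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            current, f_cur = candidate, f_cand
            if f_cur > f_best:
                best, f_best = list(current), f_cur
    return best, f_best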
|
Sagardia, Mikel |
VR '17: "Evaluation of a Penalty and ..."
Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values
Mikel Sagardia and Thomas Hulin (German Aerospace Center, Germany) This work presents an evaluation study in which the effects of a penalty-based and a constraint-based haptic rendering algorithm on user performance and perception are analyzed. A total of N = 24 participants performed three variations of peg-in-hole tasks in a virtual environment in a within-subjects design, after trials in an identically replicated real scenario as a reference. In addition to the two haptic rendering paradigms mentioned above, two haptic devices were used, the HUG and a Sigma.7, and the force stiffness was also varied between the maximum and half the maximum value possible for each device. Both objective measures (time and trajectory, collision performance, and muscular effort) and subjective ratings (contact perception, ergonomics, and workload) were recorded and statistically analyzed. The results show that the constraint-based haptic rendering algorithm with a lower stiffness than the maximum possible yields the most realistic contact perception, while keeping the visual inter-penetration between the objects at roughly 15% of that caused by the penalty-based algorithm (i.e., not perceptible in many cases). This result is even more evident with the HUG, the haptic device with the highest force display capabilities, although user ratings point to the Sigma.7 as the device with the highest usability and lowest workload indicators. Altogether, the paper provides qualitative and quantitative guidelines for mapping properties of haptic algorithms and devices to user performance and perception. @InProceedings{VR17p64, author = {Mikel Sagardia and Thomas Hulin}, title = {Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {64--73}, doi = {}, year = {2017}, } |
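The two rendering paradigms compared above differ in where the rendered force comes from, as this minimal sketch illustrates (stiffness values and vector conventions are illustrative only):

import numpy as np

def penalty_force(penetration_depth, contact_normal, k=2000.0):
    # Penalty rendering: force grows with interpenetration depth,
    # so some visible penetration is inherent to the method.
    return k * penetration_depth * contact_normal

def constraint_force(device_pos, proxy_pos, k=2000.0):
    # Constraint-based ('god object'/proxy) rendering: a proxy point is
    # kept on the object surface and a virtual spring pulls the device
    # toward it, so the displayed tool never visibly sinks into the object.
    return k * (proxy_pos - device_pos)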
|
Sargunam, Shyam Prathish |
VR '17: "Guided Head Rotation and Amplified ..."
Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality
Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric D. Ragan (Texas A&M University, USA) Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control that will work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so physically turning in a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we also use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers. @InProceedings{VR17p19, author = {Shyam Prathish Sargunam and Kasra Rahimi Moghadam and Mohamed Suhail and Eric D. Ragan}, title = {Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {19--28}, doi = {}, year = {2017}, } Video |
|
Scevak, Jill |
VR '17: "Asking Ethical Questions in ..."
Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth
Erica Southgate, Shamus P. Smith, and Jill Scevak (University of Newcastle, Australia) The increasing availability of intensely immersive virtual, augmented and mixed reality experiences using head-mounted displays (HMD) has prompted deliberations about the ethical implications of using such technology to resolve technical issues and explore the complex cognitive, behavioral and social dynamics of human `virtuality'. However, little is known about the impact such immersive experiences will have on children (aged 0-18 years). This paper outlines perspectives on child development to present conceptual and practical frameworks for conducting ethical research with children using immersive HMD technologies. The paper addresses not only procedural ethics (gaining institutional approval) but also ethics-in-practice (on-going ethical decision-making). @InProceedings{VR17p12, author = {Erica Southgate and Shamus P. Smith and Jill Scevak}, title = {Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {12--18}, doi = {}, year = {2017}, } |
|
Schissler, Carl |
VR '17: "Efficient Construction of ..."
Efficient Construction of the Spatial Room Impulse Response
Carl Schissler, Peter Stirling, and Ravish Mehra (Oculus, USA; Facebook, USA) An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH orders for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observed a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications. @InProceedings{VR17p122, author = {Carl Schissler and Peter Stirling and Ravish Mehra}, title = {Efficient Construction of the Spatial Room Impulse Response}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {122--130}, doi = {}, year = {2017}, } |
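The adaptive-order idea can be shown with a simple stand-in rule: later, quieter RIR partitions get cheaper, lower SH orders. The paper derives its per-partition threshold perceptually; this energy-based heuristic is only an illustration of the mechanism:

import numpy as np

def partition_sh_orders(rir, partition_len, max_order=4, floor_db=-60.0):
    n = len(rir) // partition_len
    energies = np.array([np.sum(rir[i * partition_len:(i + 1) * partition_len] ** 2)
                         for i in range(n)])
    level_db = 10.0 * np.log10(np.maximum(energies / energies.max(), 1e-12))
    # 0 dB (loudest partition) -> full order; at floor_db or below -> order 0.
    frac = np.clip(1.0 - level_db / floor_db, 0.0, 1.0)
    return np.round(frac * max_order).astype(int)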
|
Shemetova, Elena |
VR '17: "Prism Aftereffects for Throwing ..."
Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the direction opposite the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. @InProceedings{VR17p141, author = {Bobby Bodenheimer and Sarah Creem-Regehr and Jeanine Stefanucci and Elena Shemetova and William B. Thompson}, title = {Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {141--147}, doi = {}, year = {2017}, } |
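The prism manipulation itself amounts to a fixed yaw applied to the rendered view, as in this small sketch (axis and sign conventions are illustrative):

import numpy as np

def prism_rotate(view_dir, degrees=17.0):
    # Rotate the visual field about the vertical (y) axis, simulating
    # the study's 17-degree prism displacement.
    a = np.radians(degrees)
    yaw = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return yaw @ view_dir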
|
Sherman, William |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
|
Smith, Shamus P. |
VR '17: "Asking Ethical Questions in ..."
Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth
Erica Southgate, Shamus P. Smith, and Jill Scevak (University of Newcastle, Australia) The increasing availability of intensely immersive virtual, augmented and mixed reality experiences using head-mounted displays (HMD) has prompted deliberations about the ethical implications of using such technology to resolve technical issues and explore the complex cognitive, behavioral and social dynamics of human `virtuality'. However, little is known about the impact such immersive experiences will have on children (aged 0-18 years). This paper outlines perspectives on child development to present conceptual and practical frameworks for conducting ethical research with children using immersive HMD technologies. The paper addresses not only procedural ethics (gaining institutional approval) but also ethics-in-practice (on-going ethical decision-making). @InProceedings{VR17p12, author = {Erica Southgate and Shamus P. Smith and Jill Scevak}, title = {Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {12--18}, doi = {}, year = {2017}, } |
|
Sorkine-Hornung, Alexander |
VR '17: "Rapid One-Shot Acquisition ..."
Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10~seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. @InProceedings{VR17p131, author = {Charles Malleson and Maggie Kosek and Martin Klaudiny and Ivan Huerta and Jean-Charles Bazin and Alexander Sorkine-Hornung and Mark Mine and Kenny Mitchell}, title = {Rapid One-Shot Acquisition of Dynamic VR Avatars}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {131--140}, doi = {}, year = {2017}, } |
|
Southgate, Erica |
VR '17: "Asking Ethical Questions in ..."
Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth
Erica Southgate, Shamus P. Smith, and Jill Scevak (University of Newcastle, Australia) The increasing availability of intensely immersive virtual, augmented and mixed reality experiences using head-mounted displays (HMD) has prompted deliberations about the ethical implications of using such technology to resolve technical issues and explore the complex cognitive, behavioral and social dynamics of human `virtuality'. However, little is known about the impact such immersive experiences will have on children (aged 0-18 years). This paper outlines perspectives on child development to present conceptual and practical frameworks for conducting ethical research with children using immersive HMD technologies. The paper addresses not only procedural ethics (gaining institutional approval) but also ethics-in-practice (on-going ethical decision-making). @InProceedings{VR17p12, author = {Erica Southgate and Shamus P. Smith and Jill Scevak}, title = {Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {12--18}, doi = {}, year = {2017}, } |
|
Steed, Anthony |
VR '17: "Cinematic Virtual Reality: ..."
Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video
Andrew MacQuarrie and Anthony Steed (University College London, UK) The proliferation of head-mounted displays (HMDs) in the market means that cinematic virtual reality (CVR) is an increasingly popular format. We explore several metrics that may indicate advantages and disadvantages of CVR compared to traditional viewing formats such as TV. We explored the consumption of panoramic videos in three different display systems: an HMD, a SurroundVideo+ (SV+), and a standard 16:9 TV. The SV+ display features a TV with projected peripheral content. A between-groups experiment of 63 participants was conducted, in which participants watched panoramic videos in one of these three display conditions. Aspects examined in the experiment were spatial awareness, narrative engagement, enjoyment, memory, fear, attention, and a viewer’s concern about missing something. Our results indicated that the HMD offered a significant benefit in terms of enjoyment and spatial awareness, and our SV+ display offered a significant improvement in enjoyment over traditional TV. We were unable to confirm the work of a previous study that showed incidental memory may be lower with an HMD than with a TV. Drawing attention and a viewer’s concern about missing something were also not significantly different between display conditions. It is clear that passive media viewing consists of a complex interplay of factors, such as the media itself, the characteristics of the display, as well as human aspects including perception and attention. While passive media viewing presents many challenges for evaluation, identifying a number of broadly applicable metrics will aid our understanding of these experiences, and allow the creation of better, more engaging CVR content and displays. @InProceedings{VR17p45, author = {Andrew MacQuarrie and Anthony Steed}, title = {Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {45--54}, doi = {}, year = {2017}, } |
|
Stefanucci, Jeanine |
VR '17: "Prism Aftereffects for Throwing ..."
Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the direction opposite the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. @InProceedings{VR17p141, author = {Bobby Bodenheimer and Sarah Creem-Regehr and Jeanine Stefanucci and Elena Shemetova and William B. Thompson}, title = {Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {141--147}, doi = {}, year = {2017}, } |
|
Stirling, Peter |
VR '17: "Efficient Construction of ..."
Efficient Construction of the Spatial Room Impulse Response
Carl Schissler, Peter Stirling, and Ravish Mehra (Oculus, USA; Facebook, USA) An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH orders for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observed a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications. @InProceedings{VR17p122, author = {Carl Schissler and Peter Stirling and Ravish Mehra}, title = {Efficient Construction of the Spatial Room Impulse Response}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {122--130}, doi = {}, year = {2017}, } |
|
Sugimoto, Maki |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can serve as a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables the estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
|
Sugiura, Yuta |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can serve as a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables the estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
|
Suhail, Mohamed |
VR '17: "Guided Head Rotation and Amplified ..."
Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality
Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric D. Ragan (Texas A&M University, USA) Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control that will work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so physically turning in a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we also use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers. @InProceedings{VR17p19, author = {Shyam Prathish Sargunam and Kasra Rahimi Moghadam and Mohamed Suhail and Eric D. Ragan}, title = {Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {19--28}, doi = {}, year = {2017}, } Video |
|
Suma Rosenberg, Evan |
VR '17: "An Evaluation of Strategies ..."
An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces
Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg (University of Southern California, USA) As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-on-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space. @InProceedings{VR17p91, author = {Mahdi Azmandian and Timofey Grechkin and Evan Suma Rosenberg}, title = {An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {91--98}, doi = {}, year = {2017}, } |
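One building block of such collision prevention, deciding when to trigger a stop or steering event, can be sketched as a time-to-collision test; constant-velocity extrapolation and the per-user safety radius are illustrative assumptions, not the paper's algorithm:

import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=0.5):
    # Earliest time at which two users (each padded by `radius`) come
    # into contact, assuming straight-line motion; None if they never do.
    dp, dv = p2 - p1, v2 - v1
    a = float(np.dot(dv, dv))
    b = 2.0 * float(np.dot(dp, dv))
    c = float(np.dot(dp, dp)) - (2.0 * radius) ** 2
    if a < 1e-9:                      # same velocity: distance is constant
        return 0.0 if c <= 0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else (0.0 if c <= 0 else None)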
|
Suzuki, Katsuhiro |
VR '17: "Recognition and Mapping of ..."
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can serve as a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables the estimation and reconstruction of facial expressions that correspond to the user's emotional changes. @InProceedings{VR17p177, author = {Katsuhiro Suzuki and Fumihiko Nakamura and Jiu Otsuka and Katsutoshi Masai and Yuta Itoh and Yuta Sugiura and Maki Sugimoto}, title = {Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {177--185}, doi = {}, year = {2017}, } |
|
Thompson, William B. |
VR '17: "Prism Aftereffects for Throwing ..."
Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the direction opposite the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. @InProceedings{VR17p141, author = {Bobby Bodenheimer and Sarah Creem-Regehr and Jeanine Stefanucci and Elena Shemetova and William B. Thompson}, title = {Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {141--147}, doi = {}, year = {2017}, } |
|
Verhulst, Adrien |
VR '17: "A Study on the Use of an Immersive ..."
A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables
Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau (Ecole Centrale de Nantes, France; Audencia Business School, France) In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e., misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers concerns the ``level of abnormality'' of FaVs that consumers would still agree to buy. However, this question cannot be tackled using ``classical'' marketing techniques that run user studies within real shops, since fresh produce such as FaVs rots rapidly, preventing studies from being repeated or run over long periods. To overcome these limitations, we created a virtual grocery store with a fresh FaVs section in which 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented with either ``normal'', ``slightly misshaped'', ``misshaped'' or ``severely misshaped'' FaVs. Results show that participants tend to purchase a similar number of FaVs regardless of their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality. @InProceedings{VR17p55, author = {Adrien Verhulst and Jean-Marie Normand and Cindy Lombart and Guillaume Moreau}, title = {A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {55--63}, doi = {}, year = {2017}, } |
|
Vonach, Emanuel |
VR '17: "VRRobot: Robot Actuated Props ..."
VRRobot: Robot Actuated Props in an Infinite Virtual Environment
Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann (Vienna University of Technology, Austria) We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and is built from off-the-shelf components. A robotic arm moves physical props, dynamically matching the pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept and the hardware and software architecture in detail, and we establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants that indicates promising results, and discuss the potential of our system. @InProceedings{VR17p74, author = {Emanuel Vonach and Clemens Gatterer and Hannes Kaufmann}, title = {VRRobot: Robot Actuated Props in an Infinite Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {74--83}, doi = {}, year = {2017}, } Video |
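The geometry of the prop-matching loop reduces to a frame change plus, in the spirit of the paper's safety guidelines, some motion limiting; this sketch assumes 4x4 homogeneous transforms and is not the authors' actual controller:

import numpy as np

def end_effector_target(T_base_world, T_world_prop):
    # Pose of the virtual prop expressed in the robot's base frame,
    # given a tracking-space-to-robot calibration T_base_world.
    return T_base_world @ T_world_prop

def clamp_step(p_current, p_target, v_max, dt):
    # Illustrative safety rule: limit end-effector translation per frame.
    delta = p_target - p_current
    dist = np.linalg.norm(delta)
    if dist > v_max * dt:
        delta *= (v_max * dt) / dist
    return p_current + delta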
|
Wei, Jianguo |
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors, based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, } Video |
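The spatial (subspace) reduction step can be pictured generically: full-mesh displacements are reconstructed from a small vector of reduced coordinates. The random basis below is filler standing in for the paper's carefully built nonlinear, volume-preserving model:

import numpy as np

n_vertices, r = 5000, 30
B = np.random.default_rng(1).standard_normal((3 * n_vertices, r))  # reduced basis
q = np.zeros(r)                      # reduced coordinates advanced by the simulator
u = (B @ q).reshape(n_vertices, 3)   # per-vertex displacement field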
|
Welch, Gregory F. |
VR '17: "Exploring the Effect of Vibrotactile ..."
Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment
Myungho Lee, Gerd Bruder, and Gregory F. Welch (University of Central Florida, USA) We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions as follows: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; while participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space. @InProceedings{VR17p105, author = {Myungho Lee and Gerd Bruder and Gregory F. Welch}, title = {Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {105--111}, doi = {}, year = {2017}, } |
|
Wendling, Adam |
VR '17: "Repeat after Me: Using Mixed ..."
Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed-loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed-loop communication behavior. Our results showed that residents' closed-loop communication behaviors were influenced by MRHs. Additionally, we found a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries of how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. @InProceedings{VR17p148, author = {Andrew Cordar and Adam Wendling and Casey White and Samsun Lampotang and Benjamin Lok}, title = {Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {148--156}, doi = {}, year = {2017}, } Video |
|
White, Casey |
VR '17: "Repeat after Me: Using Mixed ..."
Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed-loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed-loop communication behavior. Our results showed that residents' closed-loop communication behaviors were influenced by MRHs. Additionally, we found a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries of how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. @InProceedings{VR17p148, author = {Andrew Cordar and Adam Wendling and Casey White and Samsun Lampotang and Benjamin Lok}, title = {Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {148--156}, doi = {}, year = {2017}, } Video |
|
Whiting, Eric |
VR '17: "Enhancements to VTK Enabling ..."
Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can benefit greatly from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications. @InProceedings{VR17p186, author = {Patrick O'Leary and Sankhesh Jhaveri and Aashish Chaudhary and William Sherman and Ken Martin and David Lonie and Eric Whiting and James Money and Sandy McKenzie}, title = {Enhancements to VTK Enabling Scientific Visualization in Immersive Environments}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } |
|
Xiao, Shuangjiu |
VR '17: "MagicToon: A 2D-to-3D Creative ..."
MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR
Lele Feng, Xubo Yang, and Shuangjiu Xiao (Shanghai Jiao Tong University, China) We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor for constructing more complicated AR scenes. The model creator automatically generates textured 3D cartoon models from 2D drawings and overlays them on the real world, bringing life to flat cartoon drawings. With our interactive model editor, the user can perform several optional operations on 3D models, such as copying and animating, in the AR context through the touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study show that our system is easier to use than traditional sketch-based modeling systems and allows more scope for children's creativity than AR coloring books. @InProceedings{VR17p195, author = {Lele Feng and Xubo Yang and Shuangjiu Xiao}, title = {MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {195--204}, doi = {}, year = {2017}, } Video Info |
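One common way to lift a closed 2D drawing into a rounded 3D surface, in the spirit of the automatic model creator above, is distance-transform inflation; this generic sketch is not necessarily the authors' reconstruction method:

import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate(mask, scale=1.0):
    # mask: 2D boolean array, True inside the drawn character's silhouette.
    d = distance_transform_edt(mask)   # distance of each pixel to the outline
    return scale * np.sqrt(d)          # balloon-like height profile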
|
Xu, Weiwei |
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors, based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, } Video |
|
Yang, Xubo |
VR '17: "MagicToon: A 2D-to-3D Creative ..."
MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR
Lele Feng, Xubo Yang, and Shuangjiu Xiao (Shanghai Jiao Tong University, China) We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor for constructing more complicated AR scenes. The model creator automatically generates textured 3D cartoon models from the 2D drawings and overlays them on the real world, bringing flat cartoon drawings to life. With the interactive model editor, the user can perform several optional operations on the 3D models, such as copying and animating them, through the touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study show that our system is easier to use than traditional sketch-based modeling systems and offers more room for children's creativity than AR coloring books. @InProceedings{VR17p195, author = {Lele Feng and Xubo Yang and Shuangjiu Xiao}, title = {MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {195--204}, doi = {}, year = {2017}, } Video Info |
|
Yang, Yin |
VR '17: "Acoustic VR in the Mouth: ..."
Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human speech (Chinese) into realistic 3D tongue animation sequences in real time. Directly capturing the 3D geometry of the tongue at a frame rate that matches its swift movement during language production is challenging. We handle this difficulty by using electromagnetic articulography (EMA) sensors as an intermediate medium linking the acoustic data to the simulated virtual reality. We train a deep neural network on 1,108 utterances to map the input acoustic signals to the positional information of pre-defined EMA sensors. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collisions between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulty on existing spectral model-reduction methods. Instead, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiments demonstrate that the proposed system delivers a realistic visual tongue animation corresponding to a user's speech signal. @InProceedings{VR17p112, author = {Ran Luo and Qiang Fang and Jianguo Wei and Wenhuan Lu and Weiwei Xu and Yin Yang}, title = {Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {112--121}, doi = {}, year = {2017}, } Video |
|
Yem, Vibol |
VR '17: "Wearable Tactile Device using ..."
Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World
Vibol Yem and Hiroyuki Kajimoto (University of Electro-Communications, Japan) We developed the “Finger Glove for Augmented Reality” (FinGAR), which combines electrical and mechanical stimulation to selectively stimulate skin mechanoreceptors and provide tactile feedback from virtual objects. A DC motor provides high-frequency vibration and shear deformation to the whole finger, while an array of electrodes provides pressure and low-frequency vibration with high spatial resolution. FinGAR devices are attached to the thumb, index finger, and middle finger. The device is lightweight, mechanically simple, easy to wear, and does not disturb the natural movement of the hand; all of these attributes are necessary for a general-purpose virtual reality system. A user study was conducted to evaluate its ability to reproduce sensations along four tactile dimensions: macro roughness, friction, fine roughness, and hardness. The results indicated that skin deformation and cathodic stimulation affect macro roughness and hardness, whereas high-frequency vibration and anodic stimulation affect friction and fine roughness. @InProceedings{VR17p99, author = {Vibol Yem and Hiroyuki Kajimoto}, title = {Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World}, booktitle = {Proc.\ VR}, publisher = {IEEE}, pages = {99--104}, doi = {}, year = {2017}, } Video |
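For a sense of how the reported associations might drive such hardware, here is a hypothetical control sketch in Python. The Motor and Electrode driver classes and the mapping from tactile dimensions to actuation channels are placeholders inferred from the abstract, not the authors' API or calibration.

```python
# Illustrative control sketch for a FinGAR-style glove, based only on the
# associations reported in the abstract. Driver classes are placeholders.
from dataclasses import dataclass

@dataclass
class TactileTarget:
    macro_roughness: float  # all values normalized to 0..1
    friction: float
    fine_roughness: float
    hardness: float

class Motor:        # DC motor: shear deformation + high-frequency vibration
    def drive(self, shear: float, vibration_hz: float) -> None:
        print(f"motor: shear={shear:.2f}, vibration={vibration_hz:.0f} Hz")

class Electrode:    # electrode array: polarity-dependent stimulation
    def drive(self, polarity: str, intensity: float) -> None:
        print(f"electrode: {polarity}, intensity={intensity:.2f}")

def render_touch(t: TactileTarget, motor: Motor, electrode: Electrode) -> None:
    # Abstract: shear deformation and cathodic stimulation drive macro
    # roughness/hardness; high-frequency vibration and anodic stimulation
    # drive friction/fine roughness. Gains here are arbitrary.
    motor.drive(shear=t.macro_roughness, vibration_hz=300.0 * t.friction)
    electrode.drive(polarity="cathodic", intensity=t.hardness)
    electrode.drive(polarity="anodic", intensity=t.fine_roughness)

render_touch(TactileTarget(0.7, 0.3, 0.5, 0.9), Motor(), Electrode())
```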
95 authors