VR 2017 – Author Index
Adcock, Matt
Theophilus Teo, Mitchell Norman, Matt Adcock, and Bruce H. Thomas (University of South Australia, Australia; CSIRO, Australia) This paper presents our new Virtual Reality (VR) interactive visualization techniques to assist users in querying large image sets. The VR system allows users to query a set of images using four different filters, such as location and keywords. The goal is to investigate whether a VR platform is preferred over a non-VR platform for viewing and querying large image sets. We employed an HTC Vive and a traditional desktop screen to represent the VR and non-VR platforms. We found users preferred the VR platform over the traditional desktop screen.
Adipradana, Yonathan Widya
Anthony Steed, Yonathan Widya Adipradana, and Sebastian Friston (University College London, UK) Video see-through augmented reality (VSAR) is an effective way of combining real and virtual scenes for head-mounted human computer interfaces. In this paper we present the AR-Rift 2 system, a cost-effective prototype VSAR system based around the Oculus Rift CV1 head-mounted display (HMD). Current consumer camera systems, however, typically have latencies far higher than the rendering pipeline of current consumer HMDs, and a lower update rate than the display. We thus measure the latency of the video and implement a simple image-warping method to ensure smooth movement of the video.
Allan-Blitz, Elijah
Dylan Southard, Elijah Allan-Blitz, Jordan Halsey, Christina Heller, and Artemis Joukowsky (VR Playhouse, USA; A-B Productions, USA; Farm Pond Pictures, USA) "Defying the Nazis VR" uses CGI, motion graphics, and archival documentary footage to re-create a heroic episode from World War II in VR, experimenting with the emotional power of virtual reality and with the medium as an educational tool.
Allison, Robert S.
Jingbo Zhao, Robert S. Allison, Margarita Vinnikov, and Sion Jennings (York University, Canada; National Research Council, Canada) We present a method for estimating the Motion-to-Photon (End-to-End) latency of head mounted displays (HMDs). The specific HMD evaluated in our study was the Oculus Rift DK2, but the procedure is general. We mounted the HMD on a pendulum to introduce damped sinusoidal motion to the HMD during the pendulum swing. The latency was estimated by calculating the phase shift between the captured signals of the physical motion of the HMD and a motion-dependent gradient stimulus rendered on the display. We used the proposed method to estimate both rotational and translational Motion-to-Photon latencies of the Oculus Rift DK2.
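A minimal sketch of the phase-shift estimate described in the abstract above (illustrative only; the pendulum swing frequency, sampling rate, and signal names are assumptions, not the authors' code):

```python
import numpy as np

def motion_to_photon_latency(motion, display, fs, freq):
    """Estimate latency (s) from the phase shift, at the pendulum swing
    frequency `freq` (Hz), between the tracked HMD motion signal and the
    gradient-stimulus signal captured off the display (both sampled at fs Hz)."""
    t = np.arange(len(motion)) / fs
    ref = np.exp(-2j * np.pi * freq * t)                 # complex reference at the swing frequency
    phase_motion = np.angle(np.sum((motion - np.mean(motion)) * ref))
    phase_display = np.angle(np.sum((display - np.mean(display)) * ref))
    dphi = (phase_motion - phase_display) % (2 * np.pi)  # display lags the physical motion
    return dphi / (2 * np.pi * freq)                     # phase lag converted to time lag
```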
Almeida, Marcio
Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This demo presents a fully immersive and interactive virtual environment (VE) – the ArcheoVR, which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow started with real-world data capture – laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed realistic 3D scene and interactive features that allow users to experience the virtual archeological site in real-time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.
Amano, Toshiyuki
Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Toshiyuki Amano, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan; Wakayama University, Japan) We present a method for focal distance estimation of a freely-orienting eye using Purkinje-Sanson (PS) images, which are reflections of light on the inner structures of the eye. Using an infrared camera with a rigidly-fixed LED, our method creates an estimation model based on 3D gaze and the distance between reflections in the PS images that occur on the corneal surface and anterior surface of the eye lens. The distance between these two reflections changes with focus, so we associate that information with the focal distance of a user. Unlike conventional methods that mainly rely on the 2D pupil size, which is sensitive to scene lighting, and on the fourth PS image, our method detects the third PS image, which is more representative of accommodation. Our feasibility study on a single user with a focal range from 15-45 cm shows that our method achieves mean and median absolute errors of 3.15 and 1.93 cm for a 10-degree viewing angle. The study shows that our method is also tolerant against environment lighting changes.
Isao Shimana and Toshiyuki Amano (Wakayama University, Japan) In the field of SAR, the projector-camera system has been well studied; its radiometric model can be easily described by a color mixing matrix. Many SAR applications have been proposed and created using this model. However, this model can be used for the reflectance component, but not for the fluorescence component. In this paper, we propose the RKS projector–camera response model for separating the color mixing matrix's reflectance and fluorescence components and describe how to decompose them.
Ananth, Manasa
Jessie Mann, Nicholas Polys, Rachel Diana, Manasa Ananth, Brad Herald, and Sweetuben Platel (Virginia Tech, USA) The design of Virginia Tech's (VT) Study Hall emerges from the current cognitive neuroscience understanding of memory as a spatially mediated encoding process. The driving questions are: Does the sense of spatial navigation generated by an immersive virtual experience aid in memory formation? Does virtual spatial navigation, when paired with learning cues, enhance information encoding relative to nonspatial and nonvirtual processes? A pilot study was executed comparing recall on non-navigational memorization processes to processes involving mental and virtual navigation, and we are currently running a full study to see if we can replicate these effects with a more demanding memory task and a refined study design.
Andersen, Thea
Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants to focus on. The other half of the participants was only exposed to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing in regards to the usefulness of VR-based meditation.
Anglin, Julia
Ryan Spicer, Julia Anglin, David M. Krum, and Sook-Lei Liew (University of Southern California, USA) There are few effective treatments for rehabilitation of severe motor impairment after stroke. We developed a novel closed-loop neurofeedback system called REINVENT to promote motor recovery in this population. REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) harnesses recent advances in neuroscience, wearable sensors, and virtual technology and integrates low-cost electroencephalography (EEG) and electromyography (EMG) sensors with feedback in a head-mounted virtual reality display (VR) to provide neurofeedback when an individual's neuromuscular signals indicate movement attempt, even in the absence of actual movement. Here we describe the REINVENT prototype and provide evidence of the feasibility and safety of using REINVENT with older adults.
Julia Anglin, David Saldana, Allie Schmiesing, and Sook-Lei Liew (University of Southern California, USA) Immersive, head-mounted virtual reality (HMD-VR) can be a potentially useful tool for motor rehabilitation. However, it is unclear whether the motor skills learned in HMD-VR transfer to the non-virtual world and vice-versa. Here we used a well-established test of skilled motor learning, the Sequential Visual Isometric Pinch Task (SVIPT), to train individuals in either an HMD-VR or conventional training (CT) environment. Participants were then tested in both environments. Our results show that participants who train in the CT environment have an improvement in motor performance when they transfer to the HMD-VR environment. In contrast, participants who train in the HMD-VR environment show a decrease in skill level when transferring to the CT environment. This has implications for how training in HMD-VR and CT may affect performance in different environments.
Anisimovaite, Gintare
Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants to focus on. The other half of the participants was only exposed to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing in regards to the usefulness of VR-based meditation.
Araujo, Astolfo
Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This demo presents a fully immersive and interactive virtual environment (VE) – the ArcheoVR, which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow started with real-world data capture – laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed realistic 3D scene and interactive features that allow users to experience the virtual archeological site in real-time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.
Ard, Tyler
Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga (University of Southern California, USA) Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest across a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data.
Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga (University of Southern California, USA; RareFaction Interactive, USA) The USC Stevens Neuroimaging and Informatics Institute in the Laboratory of Neuro Imaging (http://loni.usc.edu) has the largest collection/repository of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer, currently the best software for this purpose). This algorithm is imprecise, and users must tediously correct errors manually by using a mouse and keyboard to edit individual MRI slices one at a time. We demonstrate preliminary work to improve the efficiency of this task by translating it into 3 dimensions and utilizing virtual reality user interfaces to edit multiple slices of data simultaneously.
Ardulov, Victor
Victor Ardulov and Oleg Pariser (Jet Propulsion Laboratory, USA) The Multimission Instrument Processing Laboratory (MIPL) at Jet Propulsion Laboratory (JPL) processes and analyzes orbital and in-situ instrument data for both planetary and Earth science missions. Presenting 3D data in a meaningful and effective manner is of the utmost importance to furthering scientific research and conducting engineering operations. Visualizing data in an intuitive way by utilizing Virtual Reality (VR) allows users to immersively interact with their data in their respective environments. This paper examines several use-cases, across various missions, instruments, and environments, demonstrating the strengths and insights that VR has to offer scientists.
Azimi, Ehsan
Ehsan Azimi, Long Qian, Peter Kazanzides, and Nassir Navab (Johns Hopkins University, USA; TU Munich, Germany) Uncertainty in measurement of point correspondences negatively affects the accuracy and precision in the calibration of head-mounted displays (HMD). Such errors depend on the sensors and pose estimation for video see-through HMDs. For optical see-through systems, they additionally depend on the user's head motion and hand-eye coordination. Therefore, the distribution of alignment errors for optical see-through calibration is not isotropic, and one can estimate its process-specific or user-specific distribution based on the interaction requirements of a given calibration process and the user's measurable head motion and hand-eye coordination characteristics. Current calibration methods, however, mostly utilize the DLT method, which minimizes Euclidean distances for HMD projection matrix estimation, disregarding the anisotropicity in the alignment errors. We will show how to utilize the error covariance in order to take the anisotropic nature of the error distribution into account. The main hypothesis of this study is that using the Mahalanobis distance within the nonlinear optimization can improve the accuracy of the HMD calibration. To cover a wide range of possible realistic scenarios, several simulations were performed with variation in the extent of the anisotropicity in the input data along with other parameters. The simulation results indicate that our new method outperforms the standard DLT method both in accuracy and precision, and is more robust against user alignment errors. To the best of our knowledge, this is the first time that anisotropic noise has been accommodated in optical see-through HMD calibration.
Jianren Wang, Long Qian, Ehsan Azimi, and Peter Kazanzides (Shanghai Jiao Tong University, China; Johns Hopkins University, USA) An effective and simple method is proposed for multi-camera collaborative tracking, based on the prioritization of all tracking units, and then modeling the discrepancy between different tracking units as a locally static transformation error. Static error compensation is applied to the lower-priority tracking systems when high-priority trackers are not available. The method does not require high-end or carefully calibrated tracking units, and is able to effectively provide a comfortable augmented reality experience for users. A pilot study demonstrates the validity of the proposed method.
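A minimal sketch of the covariance-weighted (Mahalanobis) reprojection objective described in the calibration abstract above; the projection parametrization, solver choice, and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project homogeneous 3D points X (N,4) with a 3x4 matrix P to 2D."""
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

def calibrate_mahalanobis(P0, X, x_obs, covs):
    """Refine a 3x4 projection matrix so that reprojection residuals are weighted
    by the inverse of each 2D alignment-error covariance (Mahalanobis distance)
    instead of plain Euclidean distance. covs: (N,2,2) anisotropic covariances."""
    # Whitening matrices W_i with W_i^T W_i = covs_i^-1, so ||W_i r_i||^2 is the Mahalanobis distance.
    whiteners = np.array([np.linalg.cholesky(np.linalg.inv(C)).T for C in covs])

    def residuals(p):
        P = p.reshape(3, 4)
        r = project(P, X) - x_obs                       # (N,2) reprojection errors
        return np.einsum('nij,nj->ni', whiteners, r).ravel()

    sol = least_squares(residuals, P0.ravel(), method='lm')
    return sol.x.reshape(3, 4)
```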
Azmandian, Mahdi
Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg (University of Southern California, USA) As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-on-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.
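A very small sketch of the kind of check that underlies the forced-stopping component mentioned above, linearly extrapolating two users and flagging an imminent user-to-user collision; the horizon, radius, and names are illustrative assumptions, not the authors' algorithm:

```python
def collision_imminent(p1, v1, p2, v2, horizon=2.0, radius=0.6, dt=0.1):
    """Extrapolate both users' 2D positions (m) with constant velocities (m/s)
    over `horizon` seconds and return True if they come within `radius` meters."""
    steps = int(horizon / dt) + 1
    for i in range(steps):
        t = i * dt
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if (dx * dx + dy * dy) ** 0.5 < radius:
            return True   # trigger a stop or a steering intervention
    return False
```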
Azuma, Makiko
Takuya Handa, Kenji Murase, Makiko Azuma, Toshihiro Shimizu, Satoru Kondo, and Hiroyuki Shinoda (NHK, Japan; University of Tokyo, Japan) The main goal of our research is to develop a haptic display that makes it possible to convey shapes, hardness, and textures of objects displayed on 3D TV. Our evolved device has three 5 mm diameter actuating spheres arranged in triangular geometry on each of three fingertips (thumb, index finger, middle finger). In this paper, we describe an overview of a novel haptic device and the first experimental results, in which twelve subjects succeeded in recognizing the size of cylinders and the side geometry of a cuboid and a hexagonal prism.
Bacchetti, Rafael
Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved.
Ballaguer-Balester, Emili
Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd laboratory, Faceteq can enable new avenues for virtual reality research through the combination of high-performance patented dry sensor technologies, proprietary algorithms and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcasing applications we developed this year.
Ban, Yuki
Keigo Matsumoto, Takuji Narumi, Yuki Ban, Tomohiro Tanikawa, and Michitaka Hirose (University of Tokyo, Japan) Redirected walking allows users to explore a large virtual environment while there is a limitation on the room size. Previous works tried to present users a straight path in a virtual environment while they walked on a curved path in reality. We expand a previous technique to present users various curved paths in a virtual environment while they walk on a particular curved path or a straight path, with or without haptics. Furthermore, we propose a novel estimation methodology to quantify the walking path which a user thought he walked in reality. The data from our experiment show that users feel walking various curved paths in VR the same as in the one-to-one mapping condition.
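A minimal sketch of the kind of curvature manipulation described above, expressed as a per-step camera-yaw injection under a curvature gain; the formulation and names are illustrative assumptions, not the authors' implementation:

```python
import math

def curvature_gain_yaw(step_length, kappa_real, kappa_virtual):
    """Extra virtual-camera yaw (radians) to inject for a physical step of
    `step_length` meters, so that a real path of curvature kappa_real (1/m)
    is experienced as a virtual path of curvature kappa_virtual (1/m).
    Curvature is the change of heading per meter walked (d_theta / d_s)."""
    return step_length * (kappa_virtual - kappa_real)

# Example: walking a real 7.5 m-radius circle while the virtual path is straight.
yaw_per_step = curvature_gain_yaw(step_length=0.7, kappa_real=1 / 7.5, kappa_virtual=0.0)
print(math.degrees(yaw_per_step))  # about -5.35 degrees of compensating yaw per 0.7 m step
```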
Barbulescu, Adela
Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.
Barmaki, Roghayeh
Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab (TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada) Screen-based Augmented Reality (AR) systems can be built as a window into the real world, as often done in mobile AR applications, or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the user's enantiomorph, i.e. the mirror image, such that the system mimics a real-world physical mirror. However, the question arises whether one should design a traditional mirror, or instead display the true mirror image by means of a non-reversing mirror? We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching.
Bazin, Jean-Charles
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture.
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars.
Beck, Stephan
Stephan Beck and Bernd Froehlich (Bauhaus-Universität Weimar, Germany) The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth and the color image as well as the 3D world positions can be automatically established. In order to obtain temporally synchronized correspondences between an RGBD-sensor's data streams and the tracked target's positions we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table which is used during runtime to transform depth and color information into the application's world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel.
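A minimal sketch of the runtime step implied by the abstract above, i.e. evaluating a 3D look-up table that maps sensor-space points to world space by trilinear interpolation; building the table from the swept-checkerboard correspondences is assumed to have been done offline, and the array layout and names are assumptions, not the authors' implementation:

```python
import numpy as np

def lut_world_position(lut, origin, spacing, p_sensor):
    """Trilinearly interpolate a 3D look-up table `lut` of shape (X, Y, Z, 3)
    that stores a world-space position per voxel, at the sensor-space point
    `p_sensor`. `origin` and `spacing` define the voxel grid in sensor space."""
    g = (np.asarray(p_sensor, dtype=float) - origin) / spacing   # continuous grid coordinates
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(lut.shape[:3]) - 2)
    f = np.clip(g - i0, 0.0, 1.0)                                # fractional offset inside the cell
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1 - f[0]) * \
                    (f[1] if dy else 1 - f[1]) * \
                    (f[2] if dz else 1 - f[2])
                out += w * lut[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```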
Bednarz, Tomasz
June Kim and Tomasz Bednarz (UNSW, Australia; Queensland University of Technology, Australia; Data61 at CSIRO, Australia) Immersive technologies, and particularly Virtual Reality (VR), provide new exciting ways to see the world. Today, we introduce our research that successfully employed VR in the biodiversity conservation sciences. Jaguars are one of the endangered animals, and it is certainly critical and compelling to preserve the ecosystem for endangered animals. With this awareness, we endeavoured to establish a multidisciplinary VR project that implemented data from indigenous villagers (jaguar experts group A), the conventional knowledge of the field of the jaguar ecosystem (from jaguar experts group B), and mathematical and statistical models. Our fascination lies in these questions: can we effectively bring together VR and analytical capabilities? Can VR be used to make this world a better place for living beings? Please enjoy our 360-degree images of jaguar habitats taken in the Peruvian Amazon.
Begault, Antoine
Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.
Bente, Gary
Daniel Roth, Kristoffer Waldow, Marc Erich Latoschik, Arnulph Fuhrmann, and Gary Bente (University of Würzburg, Germany; University of Cologne, Germany; TH Köln, Germany; Michigan State University, USA) In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments. The proposed system is capable of tracking, transmitting, and representing body motion, facial expressions, and voice via virtual avatars, and thus supports the transmission of human behaviors that are available in real-life social interactions. Users are immersed using active stereoscopic rendering projected onto a life-size projection plane, utilizing the concept of “fish tank” virtual reality (VR). Our prototype connects two separate rooms and allows for socially immersive avatar-mediated communication in VR.
Berthelsen, Theis
Andreas Ryge, Lui Thomsen, Theis Berthelsen, Jonatan S. Hvass, Lars Koreska, Casper Vollmers, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we present a within-subjects study (n=26) comparing participants' experience of three kinds of haptic feedback (no haptic feedback, low fidelity haptic feedback and high fidelity haptic feedback) simulating the impact between a virtual baseball bat and ball. We noticed some minor effect of high fidelity versus low fidelity haptic feedback, but haptic feedback generally enhanced realism and quality of experience.
Beshay, Joseph D.
Afshin Taghavi Nasrabadi, Anahita Mahzari, Joseph D. Beshay, and Ravi Prakash (University of Texas at Dallas, USA) Virtual reality and 360-degree video streaming are growing rapidly; however, streaming 360-degree video is very challenging due to high bandwidth requirements. To address this problem, the video quality is adjusted according to the predicted user viewport. High-quality video is streamed only for the user viewport, reducing the overall bandwidth consumption. Existing solutions use shallow buffers limited by the accuracy of viewport prediction. Therefore, playback is prone to video freezes, which are very destructive for the Quality of Experience (QoE). We propose using layered encoding for 360-degree video to improve QoE by reducing the probability of video freezes and the latency of response to the user's head movements. Moreover, this scheme reduces the storage requirements significantly and improves in-network cache performance.
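A minimal sketch of a viewport-driven layer selection for one segment of a tiled, layered 360-degree stream: the base layer is always fetched so a wrong prediction never shows missing content, and enhancement layers go to the tiles most likely to be viewed. Tile costs, the probability map, and function names are illustrative assumptions, not the authors' scheme:

```python
def plan_segment(tile_ids, view_prob, base_cost, enh_cost, budget):
    """Return {tile_id: [layers to fetch]} for the next segment.
    view_prob maps tile_id -> predicted probability the tile is in the viewport;
    base_cost and enh_cost are per-tile sizes in bits; budget is the bits available."""
    plan = {t: ["base"] for t in tile_ids}            # base layer for every tile
    remaining = budget - base_cost * len(tile_ids)
    # Spend the leftover bits on the tiles the user is most likely to look at.
    for t in sorted(tile_ids, key=lambda t: view_prob.get(t, 0.0), reverse=True):
        if remaining < enh_cost:
            break
        if view_prob.get(t, 0.0) > 0.0:
            plan[t].append("enhancement")
            remaining -= enh_cost
    return plan
```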
Bhadsavle, Sarang S.
Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country's newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. We introduce Immerj, an open-source abstraction layer simplifying the Unity3D game engine's interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and journalists and designers from some of the top news organizations all over the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology.
Bhutra, Ghanshyam
Cullen Brown, Ghanshyam Bhutra, Mohamed Suhail, Qinghong Xu, and Eric D. Ragan (Texas A&M University, USA) Limited research has been performed attempting to handle multi-user storytelling environments in virtual reality. As such, a number of questions about handling story progression and maintaining user presence in a multi-user virtual environment have yet to be answered. We created a multi-user virtual reality story experience in which we intend to study a set of guided camera techniques and a set of gaze distractor techniques to determine how best to attract disparate users to the same story. Additionally, we describe our preliminary work and plans to study the effectiveness of these techniques, their effect on user presence, and generally how multiple users feel their actions affect the outcome of a story.
Bian, Zhiyi
Zhong Zhou, Zhiyi Bian, and Zheng Zhuo (Beihang University, China) A novel prototype of an MR (Mixed Reality) Sand Table is presented in this paper, which fuses multiple real-time video streams into a physically united view. The main processes include geometric calibration and alignment, image blending and the final projection. First, we propose a two-step MR alignment scheme which estimates the transform matrix between the input video stream and the sand table for coarse alignment, and deforms the input frame using moving least squares for accurate alignment. To overcome the video border distinction problem, we perform border-adaptive image stitching with brightness diffusion to merge the overlapping area. With the projection, the video area can be mixed into the sand table in real-time to provide a live physical mixed reality model. We build a prototype to demonstrate the effectiveness of the proposed method. This design could also be easily extended to larger sizes with the help of multiple projectors. The system proposed in this paper supports multi-user interaction in a broad area of applications such as surveillance, demonstration, action preview and discussion assistance.
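A minimal sketch of the moving-least-squares (affine) point deformation used for the fine-alignment step mentioned above, following the standard Schaefer-style MLS formulation; the coarse transform is assumed to have been applied already, and all names are illustrative, not the authors' code:

```python
import numpy as np

def mls_affine_warp(v, p, q, alpha=1.0, eps=1e-8):
    """Deform points v (M,2) given control points p -> q (each N,2) with
    moving-least-squares affine deformation. Each output point is computed
    from an affine map fitted with distance-based weights around it."""
    out = np.empty_like(v, dtype=float)
    for k, x in enumerate(v):
        d2 = np.sum((p - x) ** 2, axis=1)
        w = 1.0 / (d2 ** alpha + eps)                 # weights fall off with distance
        p_star = (w @ p) / w.sum()                    # weighted centroids
        q_star = (w @ q) / w.sum()
        ph, qh = p - p_star, q - q_star               # centered control points
        A = (ph * w[:, None]).T @ ph                  # 2x2 weighted moment matrix
        B = (ph * w[:, None]).T @ qh
        M = np.linalg.solve(A, B)                     # affine part of the local map
        out[k] = (x - p_star) @ M + q_star
    return out
```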
Biggs, Sierra J.
Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country's newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. We introduce Immerj, an open-source abstraction layer simplifying the Unity3D game engine's interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and journalists and designers from some of the top news organizations all over the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology.
Blix, Michael
Diego González-Zúñiga, Peter O'Shaughnessy, and Michael Blix (Samsung Research, UK; Samsung Research, USA) The tutorial focuses on Virtual Reality on the web and how researchers and developers can leverage its power to create content. The WebVR specification is presented, along with examples of how it works in a browser. Content creation is addressed by mentioning the available frameworks, accompanied by a hands-on session in A-Frame. Additionally, the concept of a Progressive Web App is explained, along with how it enables web experiences to work offline.
Bodenheimer, Bobby
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no-avatar, a first-person avatar arm and hand, or a first-person full body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism-exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration.
Richard A. Paris, Timothy P. McNamara, John J. Rieser, and Bobby Bodenheimer (Vanderbilt University, USA) Interesting virtual environments that permit free exploration are rarely small. A number of techniques have been developed to allow people to walk in larger virtual spaces than permitted by physical extent of the virtual reality hardware, and in this paper we compare three such methods in terms of how they affect presence and spatial awareness. In our first psychophysical study, we compared two methods of reorientation and one method of redirected walking on subjects' presence and spatial memory while navigating a pre-specified path. Our results suggested no difference between the two methods of reorientation but inferior performance of the redirected walking method. We further compared the two reorientation methods in a second psychophysical study involving free exploration and navigation in a large virtual environment. Our results provide criteria by which the choice of a locomotion method for navigating large virtual environments may be selected.
Boettcher, Brady
Ross Tredinnick, Brady Boettcher, Simon Smith, Samuel Solovy, and Kevin Ponto (University of Wisconsin-Madison, USA) Unity3D has become a popular, freely available 3D game engine for design and construction of virtual environments. Unfortunately, the few options that currently exist for adapting Unity3D to distributed immersive tiled or projection-based VR display systems rely on closed commercial products. Uni-CAVE aims to solve this problem by creating a freely-available and easy to use Unity3D extension package for cluster-based VR display systems. This extension provides support for head and device tracking, stereo rendering and display synchronization. Furthermore, Uni-CAVE enables configuration within the Unity environment, allowing researchers to get up and running quickly.
Boissieux, Laurence
Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.
Bolas, Mark
Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga (University of Southern California, USA) Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest across a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data.
Chih-Fan Chen, Mark Bolas, and Evan Suma Rosenberg (University of Southern California, USA) Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting the detailed 3D model from a real object can be time and labor intensive. An alternative way is to build a structured camera array such as a light-stage to reconstruct the model from a real object. However, these technologies are very expensive and not practical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from an RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
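A minimal sketch of the view selection implied by the view-dependent rendering abstract above, picking the captured images whose viewing directions best match the current HMD viewpoint; the selection criterion and names are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def select_views(hmd_pos, object_center, capture_positions, k=3):
    """Pick the k captured camera views whose viewing directions toward the
    object best match the current HMD viewing direction (candidates for
    blending view-dependent textures)."""
    view_dir = object_center - hmd_pos
    view_dir /= np.linalg.norm(view_dir)
    cap_dirs = object_center - capture_positions
    cap_dirs /= np.linalg.norm(cap_dirs, axis=1, keepdims=True)
    scores = cap_dirs @ view_dir                       # cosine similarity of viewing directions
    return np.argsort(-scores)[:k]                     # indices of the best-matching captured images
```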
Bönsch, Andrea
Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject's productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments.
Borba, Eduardo Zilles
Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This demo presents a fully immersive and interactive virtual environment (VE) – the ArcheoVR, which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow started with real-world data capture – laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed realistic 3D scene and interactive features that allow users to experience the virtual archeological site in real-time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.
Eduardo Zilles Borba, Andre Montes, Roseli de Deus Lopes, Marcelo Knorich Zuffo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This poster presents the conceptual process of developing Itapeva 3D, a Virtual Reality (VR) archeology experience. It describes the technical spectrum of the cyber-archeology process applied to the creation of a fully immersive and interactive virtual environment (VE), which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow starts with real-world data capture – laser scanners, drones and photogrammetry, continues with the transposition of the captured information into a 3D surface model capable of real-time rendering to head-mounted displays (HMDs), and ends with the design of interactive features allowing users to experience the virtual archeological site. The main objective of this VR model is to make it possible for the general public to feel what it means to explore an otherwise restricted and ephemeral place. Finally, we report on preliminary results from an initial user observation.
Eduardo Zilles Borba and Marcelo Knorich Zuffo (University of São Paulo, Brazil) This poster presents an initial study about people's experience with advertising messages in Virtual Reality (VR) that simulates urban space. Besides looking at the plastic and textual factors perceived by the users in the Virtual Environment (VE), this work also reflects on the effects of immersion provided by different technological devices and their possible influences on the advertising message reception process – a head-mounted display (Oculus Rift DK2), a cave automatic virtual environment (CAVE) and a desktop monitor (PC). To carry out this empirical experiment, a 3D scenario that simulates a real city urban space was created and several advertising image formats were inserted into its landscape. User navigation through the urban space was designed in a first-person perspective. In short, we intend to accomplish two objectives: a) to identify which factors lead people to pay attention to advertising in immersive VEs; b) to verify the immersion effects produced by different VR interfaces in the perception of advertising.
Bork, Felix
Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab (TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada) Screen-based Augmented Reality (AR) systems can be built as a window into the real world, as often done in mobile AR applications, or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the user's enantiomorph, i.e. the mirror image, such that the system mimics a real-world physical mirror. However, the question arises whether one should design a traditional mirror, or instead display the true mirror image by means of a non-reversing mirror? We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching.
Borst, Christoph W.
Jason W. Woodworth, Sam Ekong, and Christoph W. Borst (University of Louisiana at Lafayette, USA) This demo presents an approach to networked educational virtual reality for virtual field trips and guided exploration. It shows an asymmetric collaborative interface in which a remote teacher stands in front of a large display and depth camera (Kinect) while students are immersed with HMDs. The teacher's front-facing mesh is streamed into the environment to assist students and deliver instruction. Our project uses commodity virtual reality hardware and high-performance networks to provide students who are unable to visit a real facility with an alternative that offers similar educational benefits. Virtual facilities can further be augmented with educational content through interactables or small games. We discuss motivation, features, interface challenges, and ongoing testing.
Boulic, Ronan
Thibault Porssut, Henrique G. Debarba, Elisa Canzoneri, Bruno Herbelin, and Ronan Boulic (EPFL, Switzerland) This project investigates the impact of a virtual zero gravity experience on the human gravity model. In the planned experiment, subjects are immersed with an HMD and full body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e. body and objects floating in space). The study evaluates changes in the subjects' gravity model by observing changes in the motor planning of actions dependent on gravity. Our goal is to demonstrate that a virtual reality exposure can induce some modifications to the human internal gravity model, even if users remain under normal gravity conditions in reality.
Bouyer, Guillaume
Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed to match notably the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigations in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts.
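A minimal sketch of the mapping implied above, a clamped force command on the hand-held device proportional to the virtual vehicle's acceleration; gains, axes, and signs are illustrative assumptions, not the authors' tuning:

```python
def motion_force(accel_longitudinal, accel_lateral, gain=0.8, f_max=3.0):
    """Map virtual-vehicle acceleration (m/s^2) to a 2-axis force command (N)
    applied to the controller held by the driver: acceleration and braking map
    to longitudinal forces, turns map to lateral forces."""
    clamp = lambda f: max(-f_max, min(f_max, f))
    f_longitudinal = clamp(-gain * accel_longitudinal)   # accelerating pushes the hands back
    f_lateral = clamp(-gain * accel_lateral)             # turning pushes the hands sideways
    return f_longitudinal, f_lateral
```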
Brandão, William Losina
William Losina Brandão and Márcio Sarroglia Pinho (PUCRS, Brazil) Whether in the military, law enforcement or private security, dismounted operators tend to deal with a large amount of volatile information that may or may not be relevant according to a variety of factors. In this paper we draft some ideas on the building blocks of an augmented reality system aimed at improving the situational awareness of dismounted operators by filtering, organizing, and displaying this information in a way that reduces the strain on the operator.
Brayda, Luca
Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Fondazione Istituto Italiano di Tecnologia, Italy) Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication, attentional redirection, or to enhance the sense of presence in virtual environments. Thus, we aim to include the haptic component in the most popular wearable used in VR applications: the VR headset. After studying the acuity around the head for vibrating stimuli, and trying different parameters, actuators, and configurations, we developed a haptic guidance technique to be used in a vibrotactile Head-mounted Display (HMD). Our vibrotactile HMD was made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. In this demonstration, the participants will interact with different scenarios where the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues will provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining.
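A minimal sketch of one way to drive such a display: choose the head-mounted actuator closest to the target direction and encode the remaining angular error as vibration frequency. The actuator layout, frequency range, and mapping are illustrative assumptions, not the authors' design:

```python
import numpy as np

def vibro_cue(target_dir, actuator_dirs, f_min=60.0, f_max=250.0):
    """Given a unit vector to the target in head space and one unit vector per
    actuator around the head, return (actuator index, vibration frequency in Hz).
    The closest actuator encodes direction; frequency rises as the head aligns."""
    sims = actuator_dirs @ target_dir                    # cosine similarity per actuator
    idx = int(np.argmax(sims))
    err = np.arccos(np.clip(sims[idx], -1.0, 1.0))       # residual angle to the target
    freq = f_max - (f_max - f_min) * (err / np.pi)       # smaller error -> higher frequency
    return idx, freq

# Example: eight actuators evenly spaced around the head in the horizontal plane.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)
print(vibro_cue(np.array([0.0, 1.0, 0.0]), ring))        # actuator at 90 degrees, maximum frequency
```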
Brenner, Robert B.
Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country's newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. We introduce Immerj, an open-source abstraction layer simplifying the Unity3D game engine's interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and journalists and designers from some of the top news organizations all over the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology.
Broll, Wolfgang |
![]() Florian Weidner, Anne Hoesch, Sandra Poeschl, and Wolfgang Broll (TU Ilmenau, Germany) Up to now, most driving simulators use either small monitors or large immersive projection setups like 2D/3D screens or a CAVE. The recent improvements of VR-HMDs have led to their increased application in driving simulation. However, the influence and comparability of various VR and non-VR displays have hardly been investigated. We present results of a user study investigating the differing influence of non-VR (2D, stereoscopic 3D) and VR (HMD) displays on physiological responses, simulator sickness, and driving performance within a single driving simulator. In the study, 94 participants performed the Lane Change Task. Results indicate that a VR-HMD leads to similar data as stereoscopic 3D or 2D screens. We observed no significant difference regarding physiological responses or lane change performance. However, we measured significantly increased simulator sickness in the VR-HMD condition compared to stereoscopic 3D. ![]() |
|
Brooks Jr., Frederick P. |
![]() Richard Skarbez, Gregory F. Welch, Frederick P. Brooks Jr., and Mary C. Whitton (University of North Carolina at Chapel Hill, USA; University of Central Florida, USA) We discuss the design and results of an experiment investigating Plausibility Illusion in virtual human (VH) interactions, in particular, the coherence of conversation with a VH. This experiment was performed in combination with another experiment evaluating two display technologies. As that aspect of the study is not relevant to this poster, it will be mentioned only in the Materials section. Participants who interacted with a low-coherence VH looked around the room markedly more than participants interacting with a high-coherence VH, demonstrating that the level of coherence of VHs can have a detectable effect on user behavior and that head and gaze behavior can be used to evaluate the quality of a VH interaction. ![]() ![]() Richard Skarbez, Frederick P. Brooks Jr., and Mary C. Whitton (University of North Carolina at Chapel Hill, USA) We report on the design and results of an experiment investigating Slater's Place Illusion (PI) and Plausibility Illusion (Psi) in a virtual visual cliff environment. Existing presence questionnaires could not reliably distinguish the effects of PI from those of Psi. They did, however, indicate that high levels of PI-eliciting characteristics and Psi-eliciting characteristics together result in higher presence, compared to any of the other three conditions. Also, participants' heart rates responded markedly differently in the two Psi conditions; no such difference was observed across the PI conditions. ![]() |
|
Brown, Cullen |
![]() Cullen Brown, Ghanshyam Bhutra, Mohamed Suhail, Qinghong Xu, and Eric D. Ragan (Texas A&M University, USA) Limited research has been performed attempting to handle multi-user storytelling environments in virtual reality. As such, a number of questions about handling story progression and maintaining user presence in a multi-user virtual environment have yet to be answered. We created a multi-user virtual reality story experience in which we intend to study a set of guided camera techniques and a set of gaze distractor techniques to determine how best to attract disparate users to the same story. Additionally, we describe our preliminary work and plans to study the effectiveness of these techniques, their effect on user presence, and generally how multiple users feel their actions affect the outcome of a story. ![]() |
|
Bruder, Gerd |
![]() Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke (University of Hamburg, Germany; University of Central Florida, USA) Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and to apply RDW to room-scale VR, i.e., up to approximately 5m × 5m. This is done by using curved paths in the VE instead of straight paths, and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25m × 25m. ![]() ![]() Myungho Lee, Gerd Bruder, and Gregory F. Welch (University of Central Florida, USA) We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions as follows: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; while participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space. ![]() ![]() Omar Janeh, Eike Langbehn, Frank Steinicke, Gerd Bruder, Alessandro Gulberti, and Monika Poetter-Nerger (University of Hamburg, Germany; University of Central Florida, USA; University Medical Center Hamburg-Eppendorf, Germany) Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the virtual environment (VE) on the walking biomechanics of older adults. Three primary domains (pace, base of support and phase) of spatio-temporal and temporo-phasic parameters were used to evaluate gait performance. Our results show similar values in the pace and phasic domains when older adults walk in the VE under the isometric mapping condition, compared to the corresponding parameters in the real world. We found significant differences in base of support for our user group between walking in the VE and in the real world. For non-isometric mappings, we found an increased divergence of gait parameters in all domains correlating with the up- or down-scaled velocity of visual self-motion feedback. ![]() |
|
Bryan-Kinns, Nick |
![]() Liang Men, Nick Bryan-Kinns, Amelia Shivani Hassard, and Zixiang Ma (Queen Mary University of London, UK) In recent years, Virtual Reality (VR) applications have become widely available. An increase in popular interest raises questions about the use of the new medium for communication. While there is a wide variety of literature regarding scene transitions in films, novels and computer games, transitions in VR are not yet widely understood. Because VR is a medium that requires a high level of immersion, transitions are a desirable tool. This poster delineates an experiment studying the impact of transitions on the user experience of presence in VR. ![]() ![]() |
|
Campbell, Julia |
![]() Peter Khooshabeh, Igor Choromanski, Catherine Neubauer, David M. Krum, Ryan Spicer, and Julia Campbell (US Army Research Lab, USA; University of Southern California, USA) Here we describe the design and usability evaluation of a mixed reality prototype to simulate the role of a tank platoon leader, an individual who is not only a tank commander but also directs a platoon of three other tanks, each with its own tank commander. The domain of tank commander training has relied on physical simulators of the actual Abrams tank that encapsulate the whole crew. The TALK-ON system we describe here focuses on training the communication skills of the leader in a simulated tank crew. We report results from a usability evaluation and discuss how they will inform our future work for collective tank training. ![]() ![]() |
|
Cani, Marie-Paule |
![]() Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation. ![]() |
|
Canzoneri, Elisa |
![]() Thibault Porssut, Henrique G. Debarba, Elisa Canzoneri, Bruno Herbelin, and Ronan Boulic (EPFL, Switzerland) This project investigates the impact of a virtual zero gravity experience on the human gravity model. In the planned experiment, subjects are immersed with an HMD and full body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e. body and objects floating in space). The study evaluates changes in the subjects' gravity model by observing changes in the motor planning of actions dependent on gravity. Our goal is to demonstrate that a virtual reality exposure can induce some modifications to humans' internal gravity model, even if users remain under normal gravity conditions in reality. ![]() ![]() |
|
Cao, Lizhou |
![]() Chao Peng, Jeffrey T. Hansberger, Lizhou Cao, and Vaidyanath Areyur Shanthakumar (University of Alabama at Huntsville, USA; US Army Research Lab, USA) In a situation where a large and chaotic collection of digital images must be manually sorted or categorized, there are two challenges: (1) unnatural actions during a prolonged human-computer interaction and (2) limited display space for image browsing. An immersive 3D interface is prototyped, where a person sorts a large collection of digital images with his or her bare hands in a virtual environment, and performs hand motions matching characteristics of sorting gestures in the real world. The virtual reality environment provides extra levels of immersion for displaying images. ![]() |
|
Cardoso, Alexandre |
![]() Ígor Andrade Moraes, Alexandre Cardoso, Edgard Lamounier Jr., Milton Miranda Neto, and Isabela Cristina dos Santos Peres (Federal University of Uberlândia, Brazil) The benefits offered by Virtual Reality technology have drawn the attention of professionals from several scientific fields, including power systems, whether for training or maintenance. For this purpose, 3D modeling stands out as an imperative process in the conception of a Virtual Environment. Given the complexity of Hydroelectric Power Plants and Virtual Reality's contribution to the industrial area, planning the three-dimensional construction of the virtual environment becomes necessary. Thus, this paper presents modeling techniques applicable to several hydroelectric structures, aiming to optimize the 3D construction of the target complexes. ![]() |
|
Cascone, Marcos H. |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Ceylan, Duygu |
![]() Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base in demand of immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views under both rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. ![]() ![]() |
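The abstract does not detail the warping step itself; the sketch below shows only the generic geometric core of depth-based view synthesis for an equirectangular frame, assuming a per-pixel depth map is available from the reconstruction. It is a simplification for illustration, not the authors' algorithm.

```python
import numpy as np

def warp_equirect(rgb, depth, R, t):
    """Forward-warp an equirectangular RGB frame to a new 6-DOF viewpoint.

    rgb:  H x W x 3 image; depth: H x W per-pixel distance in meters (assumed known).
    R, t: rotation (3x3) and translation (3,) taking points from the original camera
          frame into the new headset frame (p_new = R @ p_old + t).
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi         # latitude in [-pi/2, pi/2]
    dirs = np.stack([np.cos(lat) * np.sin(lon),        # unit viewing directions
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    pts = dirs * depth[..., None]                      # back-project to 3D points
    pts = pts @ R.T + t                                # move into the new viewpoint's frame
    r = np.linalg.norm(pts, axis=-1) + 1e-9
    lon2 = np.arctan2(pts[..., 0], pts[..., 2])
    lat2 = np.arcsin(np.clip(pts[..., 1] / r, -1.0, 1.0))
    u2 = np.clip(((lon2 + np.pi) / (2.0 * np.pi) * W).astype(int), 0, W - 1)
    v2 = np.clip(((np.pi / 2.0 - lat2) / np.pi * H).astype(int), 0, H - 1)
    out = np.zeros_like(rgb)
    out[v2, u2] = rgb[v, u]    # forward splat; disocclusion holes are left black here
    return out
```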
|
Cha, Young-Woon |
![]() Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. ![]() ![]() ![]() |
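The combination of a fitness metric with a greedy placement stage described above can be sketched generically; the weighting scheme, candidate grid, and function names below are illustrative assumptions rather than the paper's actual formulation, and the visibility, resolution, and interference estimators are left as user-supplied callables.

```python
from functools import partial

def combined_fitness(config, scene_voxels, visibility, resolution, interference,
                     w_vis=1.0, w_res=0.5, w_intf=2.0):
    """Score a candidate sensor configuration: reward coverage (visibility) and
    sampling resolution of the dynamic scene, penalize likely sensor interference."""
    return (w_vis * visibility(config, scene_voxels)
            + w_res * resolution(config, scene_voxels)
            - w_intf * interference(config))

def greedy_sensor_placement(candidates, scene_voxels, fitness):
    """Add, one at a time, the candidate pose that most improves the fitness score;
    stop when no remaining candidate improves it (this also fixes the sensor count).
    A refinement stage, e.g. simulated annealing over the chosen poses, could follow."""
    config, best = [], float("-inf")
    remaining = list(candidates)
    while remaining:
        score, choice = max(((fitness(config + [c], scene_voxels), c) for c in remaining),
                            key=lambda sc: sc[0])
        if score <= best:
            break
        config.append(choice)
        remaining.remove(choice)
        best = score
    return config

# Usage sketch: bind the three estimators, then place sensors over candidate poses.
# fitness = partial(combined_fitness, visibility=vis_fn, resolution=res_fn,
#                   interference=intf_fn)
# placement = greedy_sensor_placement(candidate_poses, voxelized_scene, fitness)
```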
|
Chabra, Rohan |
![]() Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. ![]() ![]() ![]() |
|
Chan, Leith K. Y. |
![]() Adrian K. T. Ng, Leith K. Y. Chan, and Henry Y. K. Lau (University of Hong Kong, China) Perceived distance in an immersive virtual reality system is generally underestimated relative to the actual distance. Approaches have been found to provide users with better dimensional perception. One method used with head-mounted displays is to interact by walking with visual feedback, but this is not suitable for a CAVE-like system such as the imseCAVE, which has confined space for walking. A verbal corrective feedback mechanism is proposed. The result shows that estimation accuracy generally improves after eight feedback trials, although some estimations become overestimated. One possible explanation is the need for more verbal feedback trials. Further research on top-down approaches for improving depth perception is suggested. ![]() |
|
Chang, Benjamin |
![]() Rebecca Rouse, Benjamin Chang, and Silvia Ruzanka (Rensselaer Polytechnic Institute, USA) Stop feeling bad about not having a language of VR, and embrace the multiplicity! This full day tutorial explores ways of applying the vibrant creativity of early media to VR, AR, and MR work today, using a new cross-historical concept called media of attraction. Participants will be guided through a prototyping process focused not on best practices, but on restriction mining, bespoke solutions, and associative creative strategies inspired by fascinating historical examples and artistic methods. The session concludes with prototype creation, and the development of speculative design work envisioning next technologies for media of attraction of the future. ![]() |
|
Chang, Yun Suk |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and multi-user collaboration by annotating in the real world. ![]() ![]() |
|
Chardonnet, Jean-Rémy |
![]() Thibault Porssut and Jean-Rémy Chardonnet (EPFL, Switzerland; LE2I, France) We present a first study in which we combine two asymmetric virtual reality systems for telecollaboration purposes: a CAVE system and a head-mounted display (HMD), using a server-client type architecture. Experiments on a puzzle game under a time limit, performed alone and in collaboration, show that combining asymmetric systems reduces cognitive load. Moreover, the participants reported preferring to work in collaboration and were more efficient when collaborating. These results provide insights into combining several low-cost HMDs with a unique, expensive CAVE. ![]() ![]() |
|
Chaudhary, Aashish |
![]() Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. ![]() |
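As a rough illustration of the OpenVR path mentioned above, the sketch below wires an ordinary VTK pipeline to VTK's OpenVR render window classes. It assumes a VTK build with the OpenVR rendering module enabled and is not taken from the paper itself.

```python
# Minimal sketch: rendering a standard VTK pipeline through VTK's OpenVR classes.
# Assumes a VTK build (8.1 or later) compiled with the vtkRenderingOpenVR module.
import vtk

# Ordinary VTK visualization pipeline: a data source, a mapper, and an actor.
source = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

# Swap the usual renderer/window/interactor for their OpenVR counterparts.
renderer = vtk.vtkOpenVRRenderer()
window = vtk.vtkOpenVRRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(window)
renderer.SetActiveCamera(vtk.vtkOpenVRCamera())

renderer.AddActor(actor)
renderer.SetBackground(0.1, 0.1, 0.2)

window.Render()
interactor.Start()   # hands control to the HMD's render/interaction loop
```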
|
Chellali, Amine |
![]() Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed to match notably the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigations in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts. ![]() ![]() Aylen Ricca, Amine Chellali, and Samir Otmane (University of Évry Val d'Essonne, France) Virtual Reality simulators are increasingly used for training novice surgeons. However, there is currently a lack of guidelines for achieving interaction fidelity for these systems. In this paper, we present the design of two navigation techniques for a needle insertion trainer. The two techniques were analyzed using a state-of-the-art fidelity framework to determine their level of interaction fidelity. A user study comparing both techniques suggests that the higher fidelity technique is more suited as a navigation technique for the needle insertion virtual trainer. ![]() |
|
Chen, Chih-Fan |
![]() Chih-Fan Chen, Mark Bolas, and Evan Suma Rosenberg (University of Southern California, USA) Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting the detailed 3D model from a real object can be time and labor intensive. An alternative way is to build a structured camera array such as a light-stage to reconstruct the model from a real object. However, these technologies are very expensive and not practical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from a RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment. ![]() ![]() |
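One way to picture the view-dependent rendering step described above is selecting, for the current HMD pose, the captured images whose viewing directions best match the headset's. The sketch below is a generic nearest-view selection under that assumption, not the authors' renderer.

```python
import numpy as np

def select_views(hmd_dir, capture_dirs, k=3):
    """Rank captured camera viewing directions by angular closeness to the HMD's
    current viewing direction and return the indices of the k best matches,
    which would then be blended as view-dependent textures.

    hmd_dir:      unit 3-vector, current headset viewing direction.
    capture_dirs: N x 3 array of unit viewing directions of the offline captures.
    """
    cos_sim = capture_dirs @ hmd_dir           # cosine of the angle to each capture
    return np.argsort(-cos_sim)[:k]            # best (largest cosine) first

# Example: a headset looking down +Z prefers captures 0 and 2 over the sideways one.
dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 0.6, 0.8]])
print(select_views(np.array([0.0, 0.0, 1.0]), dirs, k=2))   # [0 2]
```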
|
Chen, Xiaoming |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways and on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert's motion and delivers the captured motion to students in multi-modal forms in immersive CAVE, HMD, and ordinary PC environments. The students' motions are captured too, both for quality assessment and to form a virtual collaborative learning atmosphere. We built a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance learning efficiency by up to 17.4% and learning quality by up to 32.3%. ![]() |
|
Chen, Zhibo |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways and on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert's motion and delivers the captured motion to students in multi-modal forms in immersive CAVE, HMD, and ordinary PC environments. The students' motions are captured too, both for quality assessment and to form a virtual collaborative learning atmosphere. We built a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance learning efficiency by up to 17.4% and learning quality by up to 32.3%. ![]() |
|
Chen, Zhili |
![]() Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base in demand of immersive, full 3D VR experiences. While monoscopic 360-videos are one of the most common types of content for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views under both rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. ![]() ![]() |
|
Cheng, Haonan |
![]() Kai Wang, Haonan Cheng, and Shiguang Liu (Tianjin University, China) This paper presents a novel framework to generate the sound of outdoor natural scenes, such as waterfalls, the ocean, etc. Our method first simulates the liquid with a grid-based method. Then, combined with the movement of the liquid, we generate seed particles which represent bubbles, foams or splashes. Next, we assign each seed particle a radius with a new radius distribution model. By calculating the bubbles' pressure waves we generate the sound. Experiments demonstrated that our novel framework can efficiently synthesize sounds for natural scenes. ![]() |
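The abstract does not state how a bubble radius maps to sound; a common choice in the bubble-sound synthesis literature is the Minnaert resonance, and the sketch below uses it purely as an illustrative stand-in for the paper's radius distribution and pressure-wave model.

```python
import numpy as np

def bubble_tone(radius_m, duration_s=0.05, sample_rate=44100,
                gamma=1.4, p0=101325.0, rho=1000.0):
    """Synthesize one decaying sine for a bubble of a given radius.

    The frequency follows the Minnaert resonance f0 = sqrt(3*gamma*p0/rho) / (2*pi*r),
    so smaller bubbles ring at a higher pitch; the exponential decay rate is a guess.
    """
    f0 = np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius_m)
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2.0 * np.pi * f0 * t) * np.exp(-t / 0.02)

def mix_bubbles(radii, sample_rate=44100):
    """Sum the tones of all seed-particle radii into one (unnormalized) waveform."""
    tones = [bubble_tone(r, sample_rate=sample_rate) for r in radii]
    return np.sum(tones, axis=0)

# Example: a 2 mm bubble rings near 1.6 kHz; a 0.5 mm bubble near 6.5 kHz.
print(len(mix_bubbles([0.002, 0.0005])))
```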
|
Cho, Hyunwoo |
![]() Hyunwoo Cho, Sung-Uk Jung, and Hyung-Keun Jee (ETRI, South Korea) For live television broadcasts such as children's educational programs conducted through viewer participation, the smooth integration of virtual content and the interaction between the cast and that content are quite important issues. Recently there have been many attempts to make aggressive use of interactive virtual content in live broadcasts, owing to advances in AR/VR technology and virtual studio technology. However, these previous works have many limitations: they do not support real-time 3D space recognition or immersive interaction. We therefore propose an augmented reality based real-time broadcasting system which perceives the indoor space using a broadcasting camera and an RGB-D camera. The system can also support real-time interaction between the augmented virtual content and the cast. The contribution of this work is the development of a new augmented reality based broadcasting system that not only enables filming with compatible interactive 3D content in live broadcasts but also drastically reduces production costs. For practical use, the proposed system was demonstrated in the actual broadcast program “Ding Dong Dang Kindergarten”, a representative children's educational program on the national broadcasting channel of Korea. ![]() |
|
Chok, Lionel |
![]() Lionel Chok (iMMERSiVELY, Singapore) Singapore: Inside Out 2015 is an international creative showcase featuring a collection of multisensorial experiences designed by the country's creative talents. After making its successful debut in Beijing, the travelling showcase stops at Brick Lane Yard in London from 24-28 June for its stint in the capital, before heading to New York in September and a homecoming finale in Singapore in November 2015 (Singapore Inside Out 2015). An energetic, cross-disciplinary showcase of contemporary creative disciplines featuring architecture, food, fashion, film, music, literature, design and the visual arts, this celebration of creativity and collaboration spans three continents and invites visitors to revisit existing preconceptions and discover new perspectives on Singapore and its creative landscape. Having captured seven 360 spherical videos at the London event, I set out to bring all of these 360 spherical videos together in one 360 Virtual Reality (VR) Android mobile app, complete with visual interactions via line of sight for directions, graphics, audio and transitions. This also demonstrates a diegetic way for these gaze interactions to work between and within each 360 video. The development process was as follows: 1. using Unity3D with the Google Cardboard SDK and C# scripting; 2. mapping the 360 videos inside Unity3D; 3. using scripting components within Unity3D to add gaze interactions for navigating between the different 360 videos and other forms of interaction. Launched as an Android APK on a mobile phone, the final “Singapore Inside Out 2015 (London)” interactive VR mobile app transports the viewer to Brick Lane Yard, where the original travelling showcase was held. Using line of sight, one can look all around each 360 video to relive the experience of being there, as well as find designated buttons that activate on gaze. In total, there are over a dozen interactive features to gaze at. From choosing between VR and Cardboard mode, transitioning between different 360 videos, and displaying credits and graphics to a simple back function, getting these gaze-activated functions right took more than half of the total development time. Part of this was choosing the dwell duration, the number of seconds the gaze must rest on a button before it takes effect. As each 360 video cannot be previewed from inside the spherical object, aligning these buttons in the right positions while working in Unity proved tedious and painstaking. In addition, the scripted components did not always work during testing, and also behaved differently on the PC and the Mac. But the most difficult challenge was the C# scripting itself. In spite of all these challenges, both creative and technical, the app was finally completed, and is now available for all to relive the festival experience in VR! ![]() |
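The dwell-to-activate behaviour described above, where a button fires only after the gaze rests on it for a set number of seconds, can be sketched in a framework-agnostic way; the class and parameter names below are illustrative and are not taken from the app's Unity/C# scripts.

```python
class GazeDwellButton:
    """Accumulates gaze time on a target and fires once the dwell threshold is met."""

    def __init__(self, name, dwell_seconds=2.0):
        self.name = name
        self.dwell_seconds = dwell_seconds
        self.elapsed = 0.0
        self.fired = False

    def update(self, gazed_at, dt):
        """Call once per frame with whether the button is under the gaze ray and the
        frame time dt (seconds). Returns True on the frame the button activates."""
        if not gazed_at:
            self.elapsed = 0.0          # looking away resets the dwell timer
            self.fired = False
            return False
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds and not self.fired:
            self.fired = True           # fire exactly once per continuous gaze
            return True
        return False

# Example: at 60 fps, staring at the "next video" button for ~2.5 s activates it once.
button = GazeDwellButton("next_video", dwell_seconds=2.0)
activated = any(button.update(gazed_at=True, dt=1 / 60) for _ in range(150))
print(activated)   # True: the dwell threshold was crossed within 150 frames
```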
|
Choromanski, Igor |
![]() Peter Khooshabeh, Igor Choromanski, Catherine Neubauer, David M. Krum, Ryan Spicer, and Julia Campbell (US Army Research Lab, USA; University of Southern California, USA) Here we describe the design and usability evaluation of a mixed reality prototype to simulate the role of a tank platoon leader, an individual who is not only a tank commander but also directs a platoon of three other tanks, each with its own tank commander. The domain of tank commander training has relied on physical simulators of the actual Abrams tank that encapsulate the whole crew. The TALK-ON system we describe here focuses on training the communication skills of the leader in a simulated tank crew. We report results from a usability evaluation and discuss how they will inform our future work for collective tank training. ![]() ![]() |
|
Christiansen, Anders |
![]() Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants to focus on. The other half of the participants were only exposed to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing with regard to the usefulness of VR-based meditation. ![]() |
|
Cleal, Andrew |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd laboratory, Faceteq can enable new avenues for virtual reality research through a combination of high-performance patented dry sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year. ![]() |
|
Collingwoode-Williams, Tara |
![]() Tara Collingwoode-Williams, Marco Gillies, Cade McCall, and Xueni Pan (University of London, UK; University of York, UK) We are interested in the effect of lip and arm synchronization on body ownership in VR (the illusion that users own a virtual body). Participants were invited to give a presentation in an HMD, while seeing in a virtual mirror a gender-matched avatar who copied their arm and lip movements in sync and async conditions. We measured participants' reactions with questionnaires administered verbally after their presentation while immersed in VR. The results suggested an interaction effect of arm and lip, showing reports of a higher level of embodiment in the congruent as compared to the incongruent conditions. Further study is needed to confirm whether the same interaction effect can be captured with objective measurements. ![]() ![]() |
|
Cook, Margaret |
![]() Jinsil Hwaryoung Seo, Brian Smith, Margaret Cook, Michelle Pine, Erica Malone, Steven Leal, and Jinkyo Suh (Texas A&M University, USA) We present Anatomy Builder VR that examines how a virtual reality (VR) system can support embodied learning in anatomy education. The backbone of the project is to pursue an alternative constructivist pedagogical model for learning canine anatomy. The main focus of the study was to identify and assemble bones in the live-animal orientation, using real thoracic limb bones in a bone box and digital pelvic limb bones in the Anatomy Builder VR. Eleven college students participated in the study. The pilot study showed that participants most enjoyed interacting with anatomical contents within the VR program. Participants spent less time assembling bones in the VR, and instead spent a longer time tuning the orientation of each VR bone in the 3D space. This study showed how a constructivist method could support anatomy education while using virtual reality technology in an active and experiential way. ![]() |
|
Cordar, Andrew |
![]() Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made in how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communication behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting. ![]() ![]() |
|
Cordeiro, Carlúcio S. |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Costa, Raphael |
![]() Raphael Costa, Rongkai Guo, and John Quarles (University of Texas at San Antonio, USA; Kennesaw State University, USA) The objective of this research is to compare the effectiveness of different tracking devices underwater. There have been few works in aquatic virtual reality (VR) - i.e., VR systems that can be used in a real underwater environment. Moreover, the works that have been done have noted limitations on tracking accuracy. Our initial test results suggest that inertial measurement units work well underwater for orientation tracking but a different approach is needed for position tracking. Towards this goal, we have waterproofed and evaluated several consumer tracking systems intended for gaming to determine the most effective approaches. First, we informally tested infrared systems and fiducial marker based systems, which demonstrated significant limitations of optical approaches. Next, we quantitatively compared inertial measurement units (IMU) and a magnetic tracking system both above water (as a baseline) and underwater. By comparing the devices’ rotation data, we have discovered that the magnetic tracking system implemented by the Razer Hydra is more accurate underwater as compared to a phone-based IMU. This suggests that magnetic tracking systems should be further explored for underwater VR applications. ![]() |
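A simple way to picture the rotation-data comparison mentioned above is an angular-error measure between two time-aligned orientation streams; the quaternion-based sketch below is a generic illustration under that assumption, not the study's actual analysis.

```python
import numpy as np

def quat_angle_deg(q1, q2):
    """Smallest rotation angle (degrees) between two unit quaternions (w, x, y, z)."""
    dot = abs(float(np.dot(q1, q2)))        # abs() because q and -q encode the same rotation
    return 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))

def mean_orientation_error(device_a, device_b):
    """Mean angular difference between two time-aligned orientation streams,
    e.g. an IMU versus a magnetic tracker recording the same motion."""
    return float(np.mean([quat_angle_deg(a, b) for a, b in zip(device_a, device_b)]))

# Example: identical streams give 0 degrees of error.
stream = [np.array([1.0, 0.0, 0.0, 0.0])] * 10
print(mean_orientation_error(stream, stream))   # 0.0
```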
|
Cox, Graeme |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd laboratory, Faceteq can enable new avenues for virtual reality research through a combination of high-performance patented dry sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year. ![]() |
|
Creem-Regehr, Sarah |
![]() Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson (Vanderbilt University, USA; University of Utah, USA) The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no-avatar, a first-person avatar arm and hand, or a first-person full body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism-exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration. ![]() |
|
Cruz, Marvis |
![]() Grace M. Rodriguez, Marvis Cruz, Andrew Solis, and Brian C. McCann (University of Puerto Rico, Puerto Rico; University of Florida, USA; University of Texas at Austin, USA) Through their experience with the ICERT REU program at the Texas Advanced Computing Center (TACC), two undergraduate students from the University of Puerto Rico and the University of Florida have initiated a collaboration between their home institutions and TACC exploring the possibility of using immersion to simulate perceptual disturbances. Perceptual disturbances are subjective in nature, and difficult to communicate verbally. Often caretakers or those closest to sufferers have difficulty understanding the nature of their suffering. Immersion provides an exciting opportunity to directly communicate percepts with clinicians and loved ones. Here, we present a prototype environment meant to simulate some of the perceptual disturbances associated with seizures and epilepsy. Following further validation of our approach, we hope to promote awareness and empathy for these often jarring phenomena. ![]() |
|
Daher, Salam |
![]() Salam Daher (University of Central Florida, USA) Currently healthcare practitioners use standardized patients, physical mannequins, and virtual patients as surrogates for real patients to provide a safe learning environment for students. Each of these simulators has different limitations that could be mitigated with various degrees of fidelity to represent medical cues. As we explore different ways to simulate a human patient and their effects on learning, we would like to compare the dynamic visuals of spatial augmented reality and optical see-through augmented reality, where a patient is rendered using the HoloLens, and how that affects depth perception, task completion, and social presence. ![]() |
|
Danieau, Fabien |
![]() Fabien Danieau, Antoine Guillo, and Renaud Doré (Technicolor R&I, France; ENSAM, France) Immersive videos allow users to freely explore 4π steradian scenes within head-mounted displays (HMD), leading to a strong feeling of immersion. However, users may miss important elements of the narrative if not facing them. Hence, we propose four visual effects to guide the user's attention. After an informal pilot study, two of the most efficient effects were evaluated through a user study. Results show that our approach has potential, but it remains challenging to implicitly drive the user's attention outside of the field of view. ![]() ![]() |
|
Dargahi, Javad |
![]() Ehsan Zahedi, Hadi Rahmat-Khah, Javad Dargahi, and Mehrdad Zadeh (Concordia University, Canada; Kettering University, USA) This paper presents the results of a two-fold study on the incorporation of upper limb movement into the measurement of user performance in a virtual reality (VR) based training simulation. VR simulators have been developed to assess and improve minimally invasive surgery (MIS) skills. While these simulators are currently being used, most skill evaluation methods are limited to measuring and computing performance metrics regarding the movement of the MIS tool tip. In this study, a VR simulator is developed to measure and analyze the movements of upper limb joints. The movement analysis from the first experiment suggests that the kinematic data of the upper limb can be used to discriminate an expert surgeon from a novice trainee. The results from the second experiment show that the motion of the non-dominant hand has a significant effect on the performance of the dominant hand. ![]() |
|
Debarba, Henrique G. |
![]() Thibault Porssut, Henrique G. Debarba, Elisa Canzoneri, Bruno Herbelin, and Ronan Boulic (EPFL, Switzerland) This project investigates the impact of a virtual zero gravity experience on the human gravity model. In the planned experiment, subjects are immersed with an HMD and full body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e. body and objects floating in space). The study evaluates changes in the subjects' gravity model by observing changes in the motor planning of actions dependent on gravity. Our goal is to demonstrate that a virtual reality exposure can induce some modifications to humans' internal gravity model, even if users remain under normal gravity conditions in reality. ![]() ![]() |
|
De Deus Lopes, Roseli |
![]() Eduardo Zilles Borba, Andre Montes, Roseli de Deus Lopes, Marcelo Knorich Zuffo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This poster presents the conceptual process of developing Itapeva 3D, a Virtual Reality (VR) archeology experience. It describes the technical spectrum of the cyber-archeology process applied to the creation of a fully immersive and interactive virtual environment (VE), which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow starts with real world data capture – laser scanners, drones and photogrammetry, continues with the transposition of the captured information into a 3D surface model capable of real-time rendering to head-mounted displays (HMDs), and ends with the design of interactive features allowing users to experience the virtual archeological site. The main objective of this VR model is to make it possible for the general public to feel what it means to explore an otherwise restricted and ephemeral place. Finally, we report on preliminary results from an initial user observation. ![]() |
|
Deguchi, Daisuke |
![]() Norimasa Kobori, Daisuke Deguchi, Ichiro Ide, and Hiroshi Murase (Toyota, Japan; Nagoya University, Japan) We propose a novel marker for robot grasping tasks which has the following three properties: (i) it is easy to find in a cluttered background, (ii) its posture can be calculated, and (iii) its size is compact. The proposed marker is composed of a random dot pattern, and uses keypoint detection and scale estimation by Spectral SIFT for dot detection and data decoding. The data is encoded in the scale of the dots, and the same dots in the marker serve both marker detection and data decoding. As a result, the proposed marker can be compact. We confirmed the effectiveness of the proposed marker through experiments. ![]() |
|
De Jesus Oliveira, Victor Adriel |
![]() Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Fondazione Istituto Italiano di Tecnologia, Italy) Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication, and attentional redirection, or to enhance the sense of presence in virtual environments. Thus, we aim to add a haptic component to the most popular wearable used in VR applications: the VR headset. After studying the acuity around the head for vibrating stimuli, and trying different parameters, actuators, and configurations, we developed a haptic guidance technique to be used in a vibrotactile Head-mounted Display (HMD). Our vibrotactile HMD was made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. In this demonstration, the participants will interact with different scenarios where the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues will provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining. ![]() ![]() ![]() ![]() Victor Adriel de Jesus Oliveira (Federal University of Rio Grande do Sul, Brazil) We looked to elements present in speech articulation to introduce proactive haptic articulation as a novel approach for intercommunication. The ability to use a haptic interface as a tool for implicit communication can supplement communication and support near and remote collaborative tasks in virtual and physical environments. In addition, proactive articulation can be applied during the design process, including the user in the construction of more dynamic and optimized vibrotactile vocabularies. In this proposal, we discuss the thesis of proactive haptic communication and our method to assess and implement it. Our goal is to understand the phenomena related to the proactive articulation of haptic signals and its use for communication and for the design of optimized tactile vocabularies. ![]() ![]() |
|
Denn, Grant |
![]() Ka Chun Yu, Kamran Sahami, Victoria Sahami, Larry Sessions, and Grant Denn (Denver Museum of Nature & Science, USA; Metropolitan State University of Denver, USA) Although fulldome video digital theaters evolved from traditional planetariums, they are more akin to virtual reality (VR) theaters that create large-scale, group immersive experiences. In order to help understand how immersion and wide fields-of-view (FOV) impact learning, we studied the use of visualizations on topics that do and do not require spatial understanding in astronomy classes. We find a significant difference between students who viewed visualizations in the dome versus those that saw non-immersive content in their classrooms, with the former showing the greatest retention. Our results suggest that immersive visuals help free up cognitive resources that can be used to build mental models requiring spatial understanding, and the physical display size combined with the wide FOV may result in greater attention. Although fulldome is a complementary medium to traditional VR, our results have implications for future head-mounted displays. ![]() |
|
Diana, Rachel |
![]() Jessie Mann, Nicholas Polys, Rachel Diana, Manasa Ananth, Brad Herald, and Sweetuben Platel (Virginia Tech, USA) The design of Virginia Tech’s (VT) Study Hall emerges from the current cognitive neuroscience understanding of memory as a spatially mediated encoding process. The driving questions are: Does the sense of spatial navigation generated by an immersive virtual experience aid in memory formation? Does virtual spatial navigation, when paired with learning cues, enhance information encoding relative to nonspatial and nonvirtual processes? A pilot study was executed comparing recall on non-navigational memorization processes to processes involving mental and virtual navigation and we are currently running a full study to see if we can replicate these effects with a more demanding memory task and refined study design. ![]() |
|
Domingues, Leonardo R. |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Doré, Renaud |
![]() Fabien Danieau, Antoine Guillo, and Renaud Doré (Technicolor R&I, France; ENSAM, France) Immersive videos allow users to freely explore 4π steradian scenes within head-mounted displays (HMDs), leading to a strong feeling of immersion. However, users may miss important elements of the narrative if they are not facing them. Hence, we propose four visual effects to guide the user’s attention. After an informal pilot study, two of the most efficient effects were evaluated through a user study. Results show that our approach has potential, but it remains challenging to implicitly drive the user’s attention outside of the field of view. ![]() ![]() |
|
Duh, Henry Been-Lirn |
![]() Jie Guo, Dongdong Weng, Henry Been-Lirn Duh, Yue Liu, and Yongtian Wang (Beijing Institute of Technology, China; La Trobe University, Australia) A few negative effects of virtual reality systems can make people uncomfortable. In this paper, we investigated the visual fatigue caused by wearing head-mounted displays (HMDs) and compared the results with those from smartphones. Forty subjects were recruited and divided into two different groups. A visual fatigue scale was used to assess the subjects’ performance. The results indicated that visual fatigue caused by the conflict between focal distance and vergence distance was less severe than visual fatigue caused by long-term focus without accommodation. ![]() |
|
Dulery, Romain |
![]() Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation. ![]() |
|
Duncan, Dominique |
![]() Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga (University of Southern California, USA) Visualization is a critical component of neuroimaging, and how best to view data that is naturally three-dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that, with the recent commercialization and popularization of VR, it can offer the next step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis for how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest in a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data. ![]() ![]() Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga (University of Southern California, USA; RareFaction Interactive, USA) The USC Stevens Neuroimaging and Informatics Institute in the Laboratory of Neuro Imaging (http://loni.usc.edu) has the largest collection/repository of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer, currently the best software for this purpose). This algorithm is imprecise, and users must tediously correct errors manually by using a mouse and keyboard to edit individual MRI slices one at a time. We demonstrate preliminary work to improve the efficiency of this task by translating it into three dimensions and utilizing virtual reality user interfaces to edit multiple slices of data simultaneously. ![]() ![]() |
|
Dunn, Charles |
![]() Charles Dunn and Brian Knott (YouVisit, USA) Spherical data compression methods for Virtual Reality (VR) currently leverage popular rectangular data encoding algorithms. Traditional compression algorithms have massive adoption and hardware support on computers and mobile devices. Efficiently utilizing these two-dimensional compression methods for spherical data necessitates a projection from the three-dimensional surface of a sphere to a two-dimensional rectangle. Any such projection affects the final resolution distribution of the data after decoding. Popular projections used for VR video benefit from mathematical or geometric simplicity, but result in suboptimal resolution distributions. We introduce a method for generating a projection to match a desired resolution function. This method allows for customized projections with smooth, continuous and optimal resolution functions. Compared to commonly used projections, our resolution-defined projections drastically improve compression ratios for any given quality. ![]() |
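To make the idea of a resolution-defined projection concrete, here is a minimal sketch (not the authors' method) of one way to turn a desired resolution function over latitude into a row remapping by inverting its normalized cumulative integral; the cosine-based example function is purely illustrative.

```python
# Hedged sketch: derive a latitude remapping whose vertical pixel density
# follows a desired resolution function r(phi), by inverting the normalized
# cumulative integral of r. Not the paper's implementation.
import numpy as np

def latitude_remap(resolution_fn, samples=1024):
    """Return a function mapping a normalized image row v in [0, 1] to a
    latitude phi in [-pi/2, pi/2], spending rows in proportion to
    resolution_fn(phi)."""
    phi = np.linspace(-np.pi / 2, np.pi / 2, samples)
    density = np.maximum(resolution_fn(phi), 1e-9)    # desired pixels per radian
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])         # normalize to [0, 1]
    return lambda v: np.interp(v, cdf, phi)           # invert the CDF

# Example: spend more rows near the equator than at the poles.
remap = latitude_remap(lambda phi: np.cos(phi) + 0.1)
print(remap(np.array([0.0, 0.25, 0.5, 0.75, 1.0])))
```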
|
Eck, Ulrich |
![]() Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab (TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada) Screen-based Augmented Reality (AR) systems can be built as a window into the real world, as is often done in mobile AR applications, or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the user’s enantiomorph, i.e., the mirror image, such that the system mimics a real-world physical mirror. However, the question arises of whether one should design a traditional mirror or instead display the true mirror image by means of a non-reversing mirror. We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching. ![]() |
|
Ekong, Sam |
![]() Jason W. Woodworth, Sam Ekong, and Christoph W. Borst (University of Louisiana at Lafayette, USA) This demo presents an approach to networked educational virtual reality for virtual field trips and guided exploration. It shows an asymmetric collaborative interface in which a remote teacher stands in front of a large display and depth camera (Kinect) while students are immersed with HMDs. The teacher’s front-facing mesh is streamed into the environment to assist students and deliver instruction. Our project uses commodity virtual reality hardware and high-performance networks to provide students who are unable to visit a real facility with an alternative that offers similar educational benefits. Virtual facilities can further be augmented with educational content through interactables or small games. We discuss motivation, features, interface challenges, and ongoing testing. ![]() ![]() |
|
Elvezio, Carmine |
![]() Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky (Columbia University, USA; Teachers College, USA) We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast with alternative approaches to virtual travel. ![]() ![]() |
|
Erfanian, Aida |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies have shown that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). However, little effort has focused on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE’s suitability for integrating these cues. Within a VE, human users undertook a 3D interaction task: navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, and their combinations in collocated and dislocated settings. The users’ task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agreed with the applicability of tactile cues for sensing 3D surfaces, herein setting a baseline for using MLE. The task performance under the collocated setting indicated a degree of combining the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs. ![]() |
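For readers unfamiliar with the MLE cue-integration model the abstract tests, here is a minimal sketch of the standard inverse-variance combination rule (textbook material, not the authors' code); the cue values in the example are invented.

```python
# Hedged sketch of standard MLE (inverse-variance) cue integration.
def mle_combine(estimate_a, var_a, estimate_b, var_b):
    """Optimally combine two noisy cue estimates under MLE assumptions."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined = w_a * estimate_a + w_b * estimate_b
    combined_var = (var_a * var_b) / (var_a + var_b)   # always <= min(var_a, var_b)
    return combined, combined_var

# Example: a noisier "force" cue and a more reliable "vibrotactile" cue.
print(mle_combine(10.0, 4.0, 12.0, 1.0))   # -> estimate closer to 12, variance 0.8
```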
|
Essex, Ryan |
![]() Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga (University of Southern California, USA) Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next-step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR), is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest through a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data. ![]() ![]() Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga (University of Southern California, USA; RareFaction Interactive, USA) The USC Stevens Neuroimaging and Informatics Institute in the Laboratory of Neuro Imaging (http://loni.usc.edu) has the largest collection/repository of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer, currently the best software for this purpose). This algorithm is imprecise, and users must tediously correct errors manually by using a mouse and keyboard to edit individual MRI slices at a time. We demonstrate preliminary work to improve efficiency of this task by translating it into 3 dimensions and utilizing virtual reality user interfaces to edit multiple slices of data simultaneously. ![]() ![]() |
|
Fallavolita, Pascal |
![]() Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab (TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada) Screen-based Augmented Reality (AR) systems can be built as a window into the real world as often done in mobile AR applications or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the users enantiomorph, i.e. the mirror image, such that the system mimics a real-world physical mirror. However, the question arises whether one should design a traditional mirror, or instead display the true mirror image by means of a non-reversing mirror? We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching. ![]() |
|
Fang, Qiang |
![]() Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during the language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike the existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation could be highly localized which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes with real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. ![]() ![]() |
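As a rough illustration of the acoustic-to-articulatory mapping stage described above, here is a minimal PyTorch sketch of a network that maps an acoustic feature window to EMA sensor positions; the feature dimensions, layer sizes, and sensor count are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a regression network from acoustic features to EMA sensor
# positions; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class Acoustic2EMA(nn.Module):
    def __init__(self, n_acoustic=39 * 11, n_sensors=6):
        super().__init__()
        # e.g. 11 stacked frames of 39-dim acoustic features in,
        # 3D positions of 6 EMA sensors out (18 values)
        self.net = nn.Sequential(
            nn.Linear(n_acoustic, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_sensors * 3),
        )

    def forward(self, x):
        return self.net(x)

model = Acoustic2EMA()
dummy_frames = torch.randn(8, 39 * 11)   # a batch of feature windows
print(model(dummy_frames).shape)          # -> torch.Size([8, 18])
```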
|
Fatoorechi, Mohsen |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by Emteq Ltd laboratory, Faceteq can enable new avenues for virtual reality research through combination of high performance patented dry sensor technologies, proprietary algorithms and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim to provide a human-centered additional tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcasing applications we developed this year. ![]() |
|
Feiner, Steven |
![]() Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky (Columbia University, USA; Teachers College, USA) We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast with alternative approaches to virtual travel. ![]() ![]() |
|
Fels, Sidney |
![]() Qian Zhou, Kai Wu, Gregor Miller, Ian Stavness, and Sidney Fels (University of British Columbia, Canada; University of Saskatchewan, Canada) We describe an auto-calibrated 3D perspective-corrected spherical display that uses multiple rear projected pico-projectors. The display system is auto-calibrated via 3D reconstruction of each projected pixel on the display using a single inexpensive camera. With the automatic calibration, the multiple-projector system supports a seamless blended imagery on the spherical screen. Furthermore, we incorporate head tracking with the display to present 3D content with motion parallax by rendering perspective-corrected images based on the viewpoint. To show the effectiveness of this design, we implemented a view-dependent application that allows walk-around visualization from all angles for a single head-tracked user. We also implemented a view-independent application that supports a wall-papered rendering for multi-user viewing. Thus, both view-dependent 3D VR content and spherical 2D content, such as a globe, can be easily experienced with this display. ![]() ![]() ![]() |
|
Feng, Lele |
![]() Lele Feng, Xubo Yang, and Shuangjiu Xiao (Shanghai Jiao Tong University, China) We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor to construct more complicated AR scenes. The model creator can generate textured 3D cartoon models according to 2D drawings automatically and overlay them on the real world, bringing life to flat cartoon drawings. With our interactive model editor, the user can perform several optional operations on 3D models such as copying and animating in AR context through a touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study have shown that our system is easier to use compared with traditional sketch-based modeling systems and can give more play to children's innovations compared with AR coloring books. ![]() ![]() ![]() |
|
Feng, Yengzhou |
![]() Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country’s newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. We introduce Immerj, an open-source abstraction layer that simplifies the Unity3D game engine’s interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and journalists and designers from some of the top news organizations across the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology. ![]() |
|
Ferdous, Sharif Mohammad Shahnewaz |
![]() Sharif Mohammad Shahnewaz Ferdous (University of Texas at San Antonio, USA) Most people experience some imbalance in a fully immersive Virtual Environment (VE), i.e., when wearing a Head Mounted Display (HMD) that blocks the user’s view of the real world. However, this imbalance is significantly worse in People with Balance Impairments (PwBIs), and minimal research has been done to improve this. In addition to the imbalance problem, a lack of proper visual cues can lead to different accessibility problems for PwBIs (e.g., reduced reach from the fear of imbalance, decreased gait performance, etc.). We plan to explore the effects of different visual cues on people’s balance, reach, gait, etc. Based on our primary study, we propose to incorporate additional visual cues in VEs, which proved to significantly improve the balance of PwBIs while they are standing and playing in a VE. We plan to further investigate whether additional visual cues have similar effects in augmented reality. We are also developing studies to research reach and gait in VR as future work. ![]() |
|
Fernando, Charith Lasantha |
![]() Yasuyuki Inoue, Fumihiro Kato, MHD Yamen Saraiji, Charith Lasantha Fernando, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) In this paper, we analyze the subjective feelings about the body of the operator of a telexistence system. We investigate whether a mirror reflection and self-touch affect body ownership and agency for a surrogate robot avatar in a virtual reality experiment. Results showed that the presence of tactile sensations synchronized with the view of self-touch events enhanced mirror self-recognition. ![]() ![]() Fumihiro Kato, Charith Lasantha Fernando, Yasuyuki Inoue, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) We have developed a classification method of tactile feeling using a stacked autoencoder-based neural network on haptic primary colors. The haptic primary colors principle is a concept of decomposing the human sensation of tactile feeling into force, vibration, and temperature. Images were obtained from variation in the frequency of the time series of the tactile feeling obtained when tracing a surface of an object, features were extracted by employing a stacked autoencoder using a neural network with two hidden layers, and supervised learning was conducted. We confirmed that the tactile feeling for three different surface materials can be classified with an accuracy of 82.0[%]. ![]() |
|
Fisher, Joshua A. |
![]() Joshua A. Fisher, Amit Garg, Karan Pratap Singh, and Wesley Wang (Georgia Institute of Technology, USA) Natural movement and locomotion in Virtual Environments (VEs) are constrained by the user’s immediate physical space. To overcome this obstacle, researchers have established the use of impossible spaces. This work illustrates how impossible spaces can be utilized to enhance the aesthetics of, and presence within, an interactive narrative. This is done by creating impossible spaces with a narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, the benefits of using intentional impossible spaces from a narrative design perspective are presented; third, a VR narrative called Ares is put forth as a prototype; and fourth, a user study is explored. Impossible spaces with a narrative intent intertwine narratology with the world’s aesthetics to enhance dramatic agency. ![]() ![]() |
|
Fohl, Wolfgang |
![]() Malte Nogalski and Wolfgang Fohl (University of Applied Sciences Hamburg, Germany) This paper summarizes the detailed paths of participants in redirected walking (RDW) curvature gain experiments. The experiments were carried out in a wave field synthesis (WFS) system of 5x6 meters. Some users were blindfolded and had to control their walking by acoustical cues only, others wore an Oculus Rift DK2 which presented them a virtual scenery in addition. A marker at the participant’s head allowed us to record the paths with our high-precision tracking system. The naive assumption of RDW with curvature gains would be that the test persons walk on the circumference of a circle, but the observed walking patterns were much more complex. Test persons showed very individual walking patterns while exploring the virtual environment. Many of these patterns may be explained as a sequence: 1. walk a few steps toward the assumed target position, 2. check for deviations, 3. adjust path to new assumed target position, which results in different patterns of various path curvature. The consequences for the application of RDW techniques are: Curvature gain tries to guide the users on a circular arc: the ”ideal path”, whereas the real paths are mostly outside of the circle of the ideal path. The deviations in the audio-only case are much larger than in the audio-visual case. The measured curvature gain thresholds systematically under-estimate the required walking space, as they do not account for the required extra space for walking outside the circular path. ![]() |
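For context, a minimal sketch of the curvature-gain relation underlying such experiments: walking a distance d while an arc of radius R is enforced corresponds to injecting d/R radians of unnoticed rotation. The radius in the example is a commonly cited detection-threshold estimate from the redirected-walking literature, not a value measured in this paper.

```python
# Hedged illustration (not the study's software) of the basic curvature-gain
# relation used in redirected walking.
import math

def injected_rotation(step_length_m, curvature_radius_m):
    """Yaw (radians) added to the virtual camera for one physical step so a
    virtually straight walk bends onto a circle of the given radius."""
    return step_length_m / curvature_radius_m

# Example: an often-cited threshold estimate of R = 22 m and 0.7 m steps.
rot = injected_rotation(0.7, 22.0)
print(math.degrees(rot))   # ~1.8 degrees of unnoticed rotation per step
```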
|
Freiberg, Jacob |
![]() Jacob Freiberg, Alexandra Kitson, and Bernhard E. Riecke (Simon Fraser University, Canada) With affordable high performance VR displays becoming commonplace, users are becoming increasingly aware of the need for well-designed locomotion interfaces that support these displays. After considering the needs of users, we quantitatively evaluated an embodied locomotion interface called the Navichair according to usability needs and fulfillment of system requirements. Specifically, we investigated influences of locomotion interfaces (joystick vs. an embodied motion cueing chair) and display type (HMD vs. projection screen) on a spatial updating pointing task. Our findings indicate that our embodied VR locomotion interface provided users with an immersive experience of a space without requiring a significant investment of set up time. Design lessons and future design goals of our interface are discussed. ![]() |
|
Freitag, Sebastian |
![]() Sebastian Freitag, Clemens Löbbert, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Viewpoint quality estimation methods allow the determination of the most informative position in a scene. However, a single position usually cannot represent an entire scene, requiring instead a set of several viewpoints. Measuring the quality of such a set of views, however, is not trivial, and the computation of an optimal set of views is an NP-hard problem. Therefore, in this work, we propose three methods to estimate the quality of a set of views. Furthermore, we evaluate three approaches for computing an approximation to the optimal set (two of them new) regarding effectiveness and efficiency. ![]() ![]() Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) The manual adjustment of travel speed to cover medium or large distances in virtual environments may increase cognitive load, and manual travel at high speeds can lead to cybersickness due to inaccurate steering. In this work, we present an approach to quickly pass regions where the environment does not change much, using automated suggestions based on the computation of common visibility. In a user study, we show that our method can reduce cybersickness when compared with manual speed control. ![]() |
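To illustrate the flavor of greedy approximation that such set-of-views problems admit (the paper's actual quality metrics differ), here is a small sketch that repeatedly picks the candidate viewpoint adding the most not-yet-covered scene cells; the toy visibility sets are invented.

```python
# Hedged sketch of a greedy viewpoint-set heuristic; a stand-in for the
# paper's quality measures, not their algorithm.
def greedy_view_set(candidate_visibility, target_coverage=0.9):
    """candidate_visibility: dict view_id -> set of visible scene-cell ids.
    Greedily add the view that newly covers the most cells until the target
    fraction of all cells is covered or no view helps anymore."""
    all_cells = set().union(*candidate_visibility.values())
    covered, chosen = set(), []
    while len(covered) < target_coverage * len(all_cells):
        view, vis = max(candidate_visibility.items(),
                        key=lambda kv: len(kv[1] - covered))
        if not vis - covered:
            break                      # no remaining view adds anything new
        chosen.append(view)
        covered |= vis
    return chosen

views = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5, 6, 7, 8}}
print(greedy_view_set(views))          # -> ['C', 'A', 'B']
```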
|
Friston, Sebastian |
![]() Anthony Steed, Yonathan Widya Adipradana, and Sebastian Friston (University College London, UK) Video see-through augmented reality (VSAR) is an effective way of combining real and virtual scenes for head-mounted human-computer interfaces. In this paper we present the AR-Rift 2 system, a cost-effective prototype VSAR system based around the Oculus Rift CV1 head-mounted display (HMD). However, current consumer camera systems typically have latencies far higher than the rendering pipeline of current consumer HMDs. They also have a lower update rate than the display. We thus measure the latency of the video and implement a simple image-warping method to ensure smooth movement of the video. ![]() ![]() |
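As a hedged sketch of the idea behind rotational image warping for late camera frames (not the AR-Rift 2 implementation), the following computes the head-rotation delta accumulated since a frame was captured; applying it to the video quad keeps the stale image approximately registered with the world.

```python
# Hedged sketch: relative head rotation between frame capture and display,
# expressed as a quaternion, for rotational re-projection of a stale frame.
import numpy as np

def warp_rotation(q_head_at_capture, q_head_now):
    """Relative rotation (quaternion, w,x,y,z) from the capture-time head
    orientation to the current one."""
    w1, x1, y1, z1 = q_head_at_capture
    conj = np.array([w1, -x1, -y1, -z1])       # inverse of a unit quaternion
    w2, x2, y2, z2 = q_head_now
    return np.array([                           # conj * q_head_now
        conj[0]*w2 - conj[1]*x2 - conj[2]*y2 - conj[3]*z2,
        conj[0]*x2 + conj[1]*w2 + conj[2]*z2 - conj[3]*y2,
        conj[0]*y2 - conj[1]*z2 + conj[2]*w2 + conj[3]*x2,
        conj[0]*z2 + conj[1]*y2 - conj[2]*x2 + conj[3]*w2,
    ])

print(warp_rotation((1, 0, 0, 0), (0.966, 0, 0.259, 0)))  # ~30 deg yaw delta
```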
|
Froehlich, Bernd |
![]() Stephan Beck and Bernd Froehlich (Bauhaus-Universität Weimar, Germany) The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth and the color image as well as the 3D world positions can be automatically established. In order to obtain temporally synchronized correspondences between an RGBD-sensor’s data streams and the tracked target’s positions we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table which is used during runtime to transform depth and color information into the application’s world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel. ![]() ![]() ![]() |
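The correspondence-based registration described above ultimately rests on least-squares rigid alignment; for reference, here is a minimal sketch of the classic Kabsch/Umeyama solution (the paper's full pipeline, with temporal synchronization, the coplanarity constraint, and the per-pixel look-up table, is not reproduced).

```python
# Hedged sketch: least-squares rigid transform from 3D point correspondences.
import numpy as np

def rigid_transform(sensor_pts, world_pts):
    """Return R (3x3) and t (3,) minimizing ||R @ sensor + t - world||."""
    cs, cw = sensor_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (world_pts - cw)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cs

# Tiny self-check with a known rotation about Z and a translation.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
pts = np.random.rand(50, 3)
R, t = rigid_transform(pts, pts @ Rz.T + np.array([0.1, 0.2, 0.3]))
print(np.allclose(R, Rz), np.round(t, 3))
```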
|
Froeschl, Mario |
![]() Annette Mossel, Mario Froeschl, Christian Schoenauer, Andreas Peer, Johannes Goellner, and Hannes Kaufmann (Vienna University of Technology, Austria; M2DMasterMind Development, Austria) We present the VROnSite platform that enables immersive training of first responder on-site squad leaders. Our training platform is fully immersive, entirely untethered to ease use and provides two means of navigation - abstract and natural walking - to simulate stress and exhaustion, two important factors for decision making. With the platform's capabilities, we close a gap in prior art for first responder training. Our research is closely interlocked with stakeholders from fire brigades and paramedics to gather early feedback in an iterative design process. In this paper, we present our first research results, which are the system's design rationale, the single user training prototype and results from a preliminary user study. ![]() |
|
Fuchs, Henry |
![]() Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. ![]() ![]() ![]() |
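To show the shape of the simulated-annealing stage mentioned above, a generic sketch follows; the toy one-dimensional fitness stands in for the paper's visibility/resolution/interference metric and is purely illustrative.

```python
# Hedged sketch of a generic simulated-annealing loop for placement
# optimization; not the paper's fitness metric or parameters.
import math, random

def simulated_annealing(initial, fitness, perturb,
                        t_start=1.0, t_end=1e-3, steps=5000):
    state, best = initial, initial
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling
        cand = perturb(state)
        delta = fitness(cand) - fitness(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = cand                                  # accept move
        if fitness(state) > fitness(best):
            best = state
    return best

# Toy example: spread two "sensors" apart on a line while staying in [0, 10].
fit = lambda s: abs(s[0] - s[1]) - 100 * any(x < 0 or x > 10 for x in s)
move = lambda s: [x + random.uniform(-0.5, 0.5) for x in s]
print(simulated_annealing([5.0, 5.0], fit, move))   # -> close to maximal spread
```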
|
Fuerst, Bernhard |
![]() Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab (TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada) Screen-based Augmented Reality (AR) systems can be built as a window into the real world, as is often done in mobile AR applications, or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the user’s enantiomorph, i.e., the mirror image, such that the system mimics a real-world physical mirror. However, the question arises of whether one should design a traditional mirror or instead display the true mirror image by means of a non-reversing mirror. We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching. ![]() |
|
Fuhrmann, Arnulph |
![]() Daniel Roth, Kristoffer Waldow, Marc Erich Latoschik, Arnulph Fuhrmann, and Gary Bente (University of Würzburg, Germany; University of Cologne, Germany; TH Köln, Germany; Michigan State University, USA) In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments. The proposed system is capable of tracking, transmitting, representing body motion, facial expressions, and voice via virtual avatars and inherits the transmission of human behaviors that are available in real-life social interactions. Users are immersed using active stereoscopic rendering projected onto a life-size projection plane, utilizing the concept of “fish tank” virtual reality (VR). Our prototype connects two separate rooms and allows for socially immersive avatar-mediated communication in VR. ![]() |
|
Gadbem, Edgar V. |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to maintain its normal operation in order to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks, or even an ordinary keyboard, mouse, and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Gallo, Guilherme Alcarde |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to maintain its normal operation in order to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks, or even an ordinary keyboard, mouse, and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Garcia, Maxime |
![]() Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation. ![]() |
|
Garcia Estrada, Jose |
![]() Jose Garcia Estrada and Adalberto L. Simeone (University of Portsmouth, UK) This poster introduces the development of a recommender system to guide users in adapting the Virtual Environment to match objects in the physical world. Emphasis is placed on avoiding the cognitive overload that results from offering substitution options without considering the number of physical objects present. This is the first step towards a comprehensive recommender system for user-driven adaptation of Virtual Environments through immersive Virtual Reality systems. ![]() |
|
Garg, Amit |
![]() Joshua A. Fisher, Amit Garg, Karan Pratap Singh, and Wesley Wang (Georgia Institute of Technology, USA) Natural movement and locomotion in Virtual Environments (VEs) are constrained by the user’s immediate physical space. To overcome this obstacle, researchers have established the use of impossible spaces. This work illustrates how impossible spaces can be utilized to enhance the aesthetics of, and presence within, an interactive narrative. This is done by creating impossible spaces with a narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, the benefits of using intentional impossible spaces from a narrative design perspective are presented; third, a VR narrative called Ares is put forth as a prototype; and fourth, a user study is explored. Impossible spaces with a narrative intent intertwine narratology with the world’s aesthetics to enhance dramatic agency. ![]() ![]() |
|
Gatterer, Clemens |
![]() Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann (Vienna University of Technology, Austria) We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and based on off-the-shelf components. A robotic arm moves physical props, dynamically matching pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept, the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants indicating promising results, and discuss the potential of our system. ![]() ![]() |
|
Georgiev, Georgi V. |
![]() Georgi V. Georgiev, Kaori Yamada, Toshiharu Taura, Vassilis Kostakos, Matti Pouke, Sylvia Tzvetanova Yung, and Timo Ojala (University of Oulu, Finland; Kobe University, Japan; University of Bedfordshire, UK) Here we propose an interactive system to augment creative design thinking using networks of concepts in a virtual reality environment. We discuss how to augment the human capacity to be creative through dynamic suggestions providing new and original ideas, based on specific semantic network characteristics. We outline directions to explore the structures of the concept network and their connection to creative concept generation. It is expected that augmented creative thinking will allow the user to have more original ideas and thus be more innovative. ![]() |
|
Gillies, Marco |
![]() Tara Collingwoode-Williams, Marco Gillies, Cade McCall, and Xueni Pan (University of London, UK; University of York, UK) We are interested in the effect of lip and arm synchronization on body ownership in VR (the illusion that users own a virtual body). Participants were invited to give a presentation in an HMD while seeing, in a virtual mirror, a gender-matched avatar who copied their arm and lip movements in synchronous and asynchronous conditions. We measured participants’ reactions with questionnaires administered verbally after their presentation while immersed in VR. The results suggested an interaction effect of arm and lip, with reports of a higher level of embodiment in the congruent compared to the incongruent conditions. Further study is needed to confirm whether the same interaction effect can be captured with objective measurements. ![]() ![]() |
|
Giraldi Jr., Olavo |
![]() Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone (Eldorado Research Institute, Brazil) This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to maintain its normal operation in order to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks, or even an ordinary keyboard, mouse, and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved. ![]() ![]() |
|
Goellner, Johannes |
![]() Annette Mossel, Mario Froeschl, Christian Schoenauer, Andreas Peer, Johannes Goellner, and Hannes Kaufmann (Vienna University of Technology, Austria; M2DMasterMind Development, Austria) We present the VROnSite platform that enables immersive training of first responder on-site squad leaders. Our training platform is fully immersive, entirely untethered to ease use and provides two means of navigation - abstract and natural walking - to simulate stress and exhaustion, two important factors for decision making. With the platform's capabilities, we close a gap in prior art for first responder training. Our research is closely interlocked with stakeholders from fire brigades and paramedics to gather early feedback in an iterative design process. In this paper, we present our first research results, which are the system's design rationale, the single user training prototype and results from a preliminary user study. ![]() |
|
González-Zúñiga, Diego |
![]() Diego González-Zúñiga, Peter O'Shaughnessy, and Michael Blix (Samsung Research, UK; Samsung Research, USA) The tutorial focuses on Virtual Reality on the web and how researchers and developers can leverage its power to create content. The WebVR specification is presented, along with examples of how it works in a browser. Content creation is addressed by mentioning the available frameworks accompanied by a hands-on session in A-Frame. Additionally, the concept of Progressive Web App is explained and how it enables web experiences to work offline. ![]() |
|
Grandi, Jerônimo G. |
![]() Jerônimo G. Grandi (Federal University of Rio Grande do Sul, Brazil) We explore design approaches for cooperative work in virtual manipulation tasks. We seek to understand the fundamental aspects of the human cooperation and design interfaces and manipulation actions to enhance the group's ability to solve complex manipulation tasks in various immersion scenarios. ![]() |
|
Grani, Francesco |
![]() Rasmus B. Lind, Victor Milesen, Dina M. Smed, Simone P. Vinkel, Francesco Grani, Niels C. Nilsson, Lars Reng, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we propose an experiment that evaluates the influence of audience noise on the feeling of presence and the perceived quality of a virtual reality concert experience delivered using Wave Field Synthesis. A 360-degree video of a live rock concert by a local band was recorded. Single sound sources from the stage and the PA system were recorded, as well as the audience noise and impulse responses of the concert venue. The audience noise was implemented in the production phase, and a comparative study contrasted an experience with and without it. In a between-subjects experiment with 30 participants, we found that audience noise does not have a significant impact on presence. However, qualitative evaluations show that the naturalness of the sonic experience delivered through wave field synthesis had a positive impact on the participants. ![]() |
|
Grechkin, Timofey |
![]() Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg (University of Southern California, USA) As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-on-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space. ![]() |
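One building block any such collision-prevention scheme needs is a predictor of imminent user-to-user conflicts; a minimal sketch under a constant-velocity assumption is shown below (this is not the paper's algorithm, and the safety radius and horizon are invented values).

```python
# Hedged sketch: predict whether two tracked users will come within a safety
# radius in the near future, so a stop or re-orientation can be triggered.
import numpy as np

def time_to_conflict(p1, v1, p2, v2, safety_radius=1.0, horizon=5.0):
    """Earliest time (s) within `horizon` at which the users' predicted
    distance drops below `safety_radius`, or None if no conflict is found."""
    p = np.asarray(p1, float) - np.asarray(p2, float)   # relative position
    v = np.asarray(v1, float) - np.asarray(v2, float)   # relative velocity
    for t in np.linspace(0.0, horizon, 251):
        if np.linalg.norm(p + t * v) < safety_radius:
            return t
    return None

print(time_to_conflict([0, 0], [1, 0], [4, 0.2], [-1, 0]))  # ~1.5 s warning
```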
|
Grubert, Jens |
![]() Jens Grubert and Matthias Kranz (Coburg University, Germany; University of Passau, Germany) While we witness significant changes in display technologies, to date the majority of display form factors remain flat. The research community has investigated other geometric display configurations, giving rise to cubic displays that create the illusion of a 3D virtual scene within the cube. We present a self-contained mobile perspective cubic display (mpCubee) assembled from multiple smartphones. We achieve perspective-correct projection of 3D content through head tracking using the smartphones’ built-in cameras. Furthermore, our prototype allows users to spatially manipulate 3D objects along individual axes thanks to the orthogonal configuration of the touch displays. ![]() ![]() Jens Grubert and Matthias Kranz (Coburg University, Germany; University of Passau, Germany) We present a demonstration of HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as an external reference frame for the registration of multiple mobile devices into a common coordinate system. Our approach allows for dynamic repositioning of devices during runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. This way, HeadPhones enables spatially aware multi-display applications in mobile contexts. ![]() ![]() ![]() |
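Head-coupled, perspective-correct rendering of the kind described above is commonly built on the generalized off-axis projection (e.g., Kooima's formulation); a simplified sketch for a screen lying in the XY plane follows, with illustrative screen corners and head position that are not taken from mpCubee.

```python
# Hedged sketch of an off-axis (generalized perspective) projection matrix for
# a head-coupled screen; the extra rotation/translation needed for arbitrarily
# oriented screens is omitted here for brevity.
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near=0.05, far=100.0):
    """Projection for a screen with lower-left pa, lower-right pb, upper-left
    pc (world coords) viewed from `eye`."""
    pa, pb, pc, eye = map(np.asarray, (pa, pb, pc, eye))
    vr = (pb - pa) / np.linalg.norm(pb - pa)          # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -va @ vn                                      # eye-to-screen distance
    l, r = (vr @ va) * near / d, (vr @ vb) * near / d
    b, t = (vu @ va) * near / d, (vu @ vc) * near / d
    return np.array([
        [2*near/(r-l), 0, (r+l)/(r-l), 0],
        [0, 2*near/(t-b), (t+b)/(t-b), 0],
        [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0, 0, -1, 0]])

# 0.1 m-wide screen centered at the origin in the XY plane, head 0.3 m away.
print(off_axis_projection([-0.05, -0.05, 0], [0.05, -0.05, 0],
                          [-0.05, 0.05, 0], [0.02, 0.0, 0.3]).round(3))
```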
|
Grund, Christian |
![]() Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments. ![]() |
|
Guillo, Antoine |
![]() Fabien Danieau, Antoine Guillo, and Renaud Doré (Technicolor R&I, France; ENSAM, France) Immersive videos allow users to freely explore 4π steradian scenes within head-mounted displays (HMDs), leading to a strong feeling of immersion. However, users may miss important elements of the narrative if they are not facing them. Hence, we propose four visual effects to guide the user’s attention. After an informal pilot study, two of the most efficient effects were evaluated through a user study. Results show that our approach has potential, but it remains challenging to implicitly drive the user’s attention outside of the field of view. ![]() ![]() |
|
Gulberti, Alessandro |
![]() Omar Janeh, Eike Langbehn, Frank Steinicke, Gerd Bruder, Alessandro Gulberti, and Monika Poetter-Nerger (University of Hamburg, Germany; University of Central Florida, USA; University Medical Center Hamburg-Eppendorf, Germany) Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the virtual environment (VE) on the walking biomechanics of older adults. Three primary domains (pace, base of support, and phase) of spatio-temporal and temporo-phasic parameters were used to evaluate gait performance. Our results show that pace and phasic parameters are similar when older adults walk in the VE under the isometric mapping condition and when they walk in the real world. We found significant differences in base of support for our user group between walking in the VE and in the real world. For non-isometric mappings, we found an increased divergence of gait parameters in all domains, correlating with the up- or down-scaled velocity of visual self-motion feedback. ![]() |
|
Gunkel, Simon |
![]() Simon Gunkel, Martin Prins, Hans Stokking, and Omar Niamut (TNO, Netherlands) Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016, new 360-degree cameras and VR headsets entered the consumer market, distribution platforms were established, and new production studios emerged. VR is increasingly a hot topic in research and industry, and many new and exciting interactive VR experiences are emerging. The biggest gap we see in these experiences is the social and shared aspect of VR. In this demo we present our ongoing efforts towards social and shared VR by developing a modular web-based VR framework that extends current video conferencing capabilities with new Virtual and Mixed Reality functionalities. It allows us to connect two people for mediated audio-visual interaction while they engage with interactive content. Our framework allows us to run extensive technological and user-based trials in order to evaluate VR experiences and to build immersive multi-user interaction spaces. Our first results indicate that a high level of engagement and interaction between users is possible in our 360-degree VR set-up utilizing current web technologies. ![]() ![]() |
|
Guo, Jie |
![]() Jie Guo, Dongdong Weng, Henry Been-Lirn Duh, Yue Liu, and Yongtian Wang (Beijing Institute of Technology, China; La Trobe University, Australia) A few negative effects of virtual reality systems can make people uncomfortable. In this paper, we investigated the visual fatigue caused by wearing head-mounted displays (HMDs) and compared the results with those from smartphones. Forty subjects were recruited and divided into two different groups. A visual fatigue scale was used to assess the subjects’ performance. The results indicated that visual fatigue caused by the conflict between focal distance and vergence distance was less severe than visual fatigue caused by long-term focus without accommodation. ![]() |
|
Guo, Rongkai |
![]() Raphael Costa, Rongkai Guo, and John Quarles (University of Texas at San Antonio, USA; Kennesaw State University, USA) The objective of this research is to compare the effectiveness of different tracking devices underwater. There have been few works in aquatic virtual reality (VR) - i.e., VR systems that can be used in a real underwater environment. Moreover, the works that have been done have noted limitations on tracking accuracy. Our initial test results suggest that inertial measurement units work well underwater for orientation tracking but a different approach is needed for position tracking. Towards this goal, we have waterproofed and evaluated several consumer tracking systems intended for gaming to determine the most effective approaches. First, we informally tested infrared systems and fiducial marker based systems, which demonstrated significant limitations of optical approaches. Next, we quantitatively compared inertial measurement units (IMU) and a magnetic tracking system both above water (as a baseline) and underwater. By comparing the devices’ rotation data, we have discovered that the magnetic tracking system implemented by the Razer Hydra is more accurate underwater as compared to a phone-based IMU. This suggests that magnetic tracking systems should be further explored for underwater VR applications. ![]() |
|
Gürerk, Özgür |
![]() Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as a peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments. ![]() |
|
Gutenko, Ievgeniia |
![]() Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, obviating the need for any navigational input from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, and virtual angioscopy, as well as graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed no negative impact from automatic navigation, and users performed as well as they did with manual navigation. ![]() |
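The head-orientation-driven speed adjustment described above can be illustrated with a small sketch: camera speed along the pre-computed path is scaled by how well the gaze direction aligns with the direction of travel, so off-axis examination slows the fly-through. This is not the authors' implementation; the scaling function and the minimum speed factor are illustrative assumptions.

import numpy as np

def adjusted_speed(base_speed, path_dir, gaze_dir, min_factor=0.1):
    # Scale travel speed by how well gaze aligns with the direction of travel.
    path_dir = path_dir / np.linalg.norm(path_dir)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    alignment = max(0.0, float(np.dot(path_dir, gaze_dir)))  # 1 = looking straight ahead
    return base_speed * (min_factor + (1.0 - min_factor) * alignment)

# Looking down the path gives full speed; looking sideways slows to the minimum.
print(adjusted_speed(1.0, np.array([0, 0, 1.0]), np.array([0, 0, 1.0])))   # 1.0
print(adjusted_speed(1.0, np.array([0, 0, 1.0]), np.array([1.0, 0, 0])))   # 0.1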
|
Ha, Gyutae |
![]() Hojun Lee, Gyutae Ha, Sangho Lee, and Shiho Kim (Yonsei University, South Korea) We have implemented a mixed reality telepresence platform providing a user experience (UX) of exchanging emotional expressions as well as information among a group of participants. The implemented system delivers an immersive live scene through a Head-Mounted Display (HMD), together with sensory information, to a VR HMD user at a remote place. Moreover, the user at a remote place can share and exchange emotional expressions with other users at another remote location by using 360° cameras, environmental sensors compliant with MPEG-V, and a game cloud server combined with a holographic display technique. We demonstrated that the emotional expressions of an HMD-wearing participant were shared with a group of other participants in the remote place while watching a sports game on a big-screen TV. ![]() |
|
Hagita, Norihiro |
![]() Taishi Sawabe, Masayuki Kanbara, and Norihiro Hagita (NAIST, Japan; ATR, Japan) This paper presents an approach for reducing motion sickness while riding an autonomous vehicle. It proposes a Diminished Reality (DR) method that reduces the acceleration stimulus responsible for motion sickness in the autonomous vehicle. One of the main causes of motion sickness is repeated acceleration. In order to diminish the acceleration stimulus in the autonomous vehicle, a vection illusion is used to induce the user to make a preliminary movement against the real acceleration. A Wii Balance Board is used to measure the movement of each participant's center of gravity to verify the effectiveness of the vection-based method. The experimental results from 9 participants show that the proposed vection-based method can reduce the perceived acceleration stimulus compared with the conventional method. ![]() |
|
Hakulinen, Jaakko |
![]() Toni Pakkanen, Jaakko Hakulinen, Tero Jokela, Ismo Rakkolainen, Jari Kangas, Petri Piippo, Roope Raisamo, and Marja Salmimaa (University of Tampere, Finland; Nokia, Finland) Immersive 360° video needs new ways of interaction. We compared three different interaction methods to find out which one of them is the most applicable for controlling 360° video playback. The compared methods were: remote control, pointing with head orientation, and hand gestures. A WebVR-based 360° video player was built for the experiment. ![]() |
|
Halsey, Jordan |
![]() Dylan Southard, Elijah Allan-Blitz, Jordan Halsey, Christina Heller, and Artemis Joukowsky (VR Playhouse, USA; A-B Productions, USA; Farm Pond Pictures, USA) "Defying the Nazis VR" uses CGI, motion graphics, and archival documentary footage to re-create a heroic episode from World War II in VR, experimenting with the emotional power of virtual reality and with the medium as an educational tool. ![]() ![]() |
|
Hamada, Takeo |
![]() Takeo Hamada, Michio Okada, and Michiteru Kitazaki (Toyohashi University of Technology, Japan) We present a novel assistive method for pacing casual joggers by showing a virtual runner on a see-through head-mounted display that they wear. The runner moves at a constant pace specified in advance, and its motion is synchronized with the user's. Users can visually check the pace at any time by looking at the runner, which serves as a personal pacemaker. They are also motivated to keep running by regarding it as a jogging companion. Moreover, the proposed method addresses the safety problem of AR applications: most of the runner's body parts are rendered transparent so that it does not obstruct the user's view. This study may thus contribute to augmenting the daily jogging experience. ![]() |
|
Hamasaki, Takumi |
![]() Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan) We propose a hybrid SAR concept combining a projector and Optical See-Through Head-Mounted Displays (OST-HMD). Our proposed hybrid SAR system utilizes the OST-HMD as an extra rendering layer that renders view-dependent properties according to the viewer's viewpoint. Combined with view-independent components created by a static projector, the viewer can see richer material content. Unlike conventional SAR systems, our system theoretically allows an unlimited number of viewers to see enhanced content in the same space while keeping the existing SAR experiences. Furthermore, the system enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With a proof-of-concept system that consists of a projector and an OST-HMD, we qualitatively demonstrate that our system successfully creates hybrid rendering on a hemisphere object from five horizontal viewpoints. Our quantitative evaluation also shows that our system increases the dynamic range by 2.1 times and the maximum intensity by 1.9 times compared to an ordinary SAR system. ![]() |
|
Hamedi, Mahyar |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed in the Emteq Ltd laboratory, Faceteq can enable new avenues for virtual reality research through a combination of high-performance patented dry-sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year. ![]() |
|
Han, Dustin T. |
![]() Dustin T. Han, Shyam Prathish Sargunam, and Eric D. Ragan (Texas A&M University, USA) The use of self avatars in virtual reality (VR) can bring users a stronger sense of presence and produce a more compelling experience by providing additional visual feedback during interactions. Avatars also become increasingly more relevant in VR as they provide a user with an identity for social interactions in multi-user settings. However, with current consumer VR setups that include only a head-mounted display and hand controllers, implementations of self avatars are generally limited in their ability to mimic actions performed in the real world. Our work explores the idea of simulating a wide range of upper-body motions using motion and positional data from only the head and hands. We present a method that differentiates head and hip motions using information from the captured motion data and applies corresponding changes to a virtual avatar. We discuss our approach and initial results. ![]() |
|
Handa, Takuya |
![]() Takuya Handa, Kenji Murase, Makiko Azuma, Toshihiro Shimizu, Satoru Kondo, and Hiroyuki Shinoda (NHK, Japan; University of Tokyo, Japan) The main goal of our research is to develop a haptic display that makes it possible to convey shapes, hardness, and textures of objects displayed on 3D TV. Our device has three 5 mm diameter actuating spheres arranged in a triangular geometry on each of three fingertips (thumb, index finger, middle finger). In this paper, we give an overview of this novel haptic device and present first experimental results in which twelve subjects successfully recognized the size of cylinders and the side geometry of a cuboid and a hexagonal prism. ![]() ![]() |
|
Hansberger, Jeffrey T. |
![]() Chao Peng, Jeffrey T. Hansberger, Lizhou Cao, and Vaidyanath Areyur Shanthakumar (University of Alabama at Huntsville, USA; US Army Research Lab, USA) In a situation where a large and chaotic collection of digital images must be manually sorted or categorized, there are two challenges: (1) unnatural actions during a prolonged human-computer interaction and (2) limited display space for image browsing. An immersive 3D interface is prototyped, where a person sorts a large collection of digital images with his or her bare hands in a virtual environment, and performs hand motions matching characteristics of sorting gestures in the real world. The virtual reality environment provides extra levels of immersion for displaying images. ![]() |
|
Harbring, Christine |
![]() Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as a peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments. ![]() |
|
Hashemian, Abraham M. |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) We describe here a pilot user study comparing five different locomotion interfaces for virtual reality (VR) locomotion. We compared a standard non-motion cueing interface, Joystick, with four leaning-based seated motion-cueing interfaces: NaviChair, MuvMan, Head-Directed and Swivel Chair. The aim of this mixed methods study was to investigate the usability and user experience of each interface, in order to better understand relevant factors and guide the design of future ground-based VR locomotion interfaces. We asked participants to give talk-aloud feedback and simultaneously recorded their responses while they were performing a search task in VR. Afterwards, participants completed an online questionnaire. Although the Joystick was rated as more comfortable and precise than the other interfaces, the leaning-based interfaces showed a trend to provide more enjoyment and a greater sense of self-motion. There were also potential issues of using velocity-control for rotations in leaning-based interfaces when using HMDs instead of stationary displays. Developers need to focus on improving the controllability and perceived safety of these seated motion cueing interfaces. ![]() |
|
Hassard, Amelia Shivani |
![]() Liang Men, Nick Bryan-Kinns, Amelia Shivani Hassard, and Zixiang Ma (Queen Mary University of London, UK) In recent years, Virtual Reality (VR) applications have become widely available. An increase in popular interest raises questions about the use of the new medium for communication. While there is a wide variety of literature regarding scene transitions in films, novels and computer games, transitions in VR are not yet widely understood. As a medium that requires a high level of immersion, transitions are a desirable tool. This poster delineates an experiment studying the impact of transitions on user experience of presence in VR. ![]() ![]() |
|
He, Tianyu |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert’s motion and delivers the captured motion to students in multi-modal forms in immersive CAVE, HMD and ordinary PC environments. The students’ motions are also captured for quality assessment and utilized to form a virtual collaborative learning atmosphere. We built up a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance the learning efficiency by up to 17.4% and the learning quality by up to 32.3%. ![]() |
|
He, Ying |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert’s motion and delivers the captured motion to students in multi-modal forms in immersive CAVE, HMD and ordinary PC environments. The students’ motions are also captured for quality assessment and utilized to form a virtual collaborative learning atmosphere. We built up a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance the learning efficiency by up to 17.4% and the learning quality by up to 32.3%. ![]() |
|
Heinish, Pierre |
![]() Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz (Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France) Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation. ![]() |
|
Heller, Christina |
![]() Dylan Southard, Elijah Allan-Blitz, Jordan Halsey, Christina Heller, and Artemis Joukowsky (VR Playhouse, USA; A-B Productions, USA; Farm Pond Pictures, USA) "Defying the Nazis VR" uses CGI, motion graphics, and archival documentary footage to re-create a heroic episode from World War II in VR, experimenting with the emotional power of virtual reality and with the medium as an educational tool. ![]() ![]() |
|
Hentschel, Bernd |
![]() Tom Vierjahn, Daniel Zielasko, Kees van Kooten, Peter Messmer, Bernd Hentschel, Torsten W. Kuhlen, and Benjamin Weyers (RWTH Aachen University, Germany; JARA-HPC, Germany; NVIDIA, Germany; NVIDIA, Switzerland) Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for an actual, seamless IV-integration can be derived. We validate the design space with three workflows investigated in our research projects. ![]() |
|
Herald, Brad |
![]() Jessie Mann, Nicholas Polys, Rachel Diana, Manasa Ananth, Brad Herald, and Sweetuben Platel (Virginia Tech, USA) The design of Virginia Tech’s (VT) Study Hall emerges from the current cognitive neuroscience understanding of memory as a spatially mediated encoding process. The driving questions are: Does the sense of spatial navigation generated by an immersive virtual experience aid in memory formation? Does virtual spatial navigation, when paired with learning cues, enhance information encoding relative to nonspatial and nonvirtual processes? A pilot study was executed comparing recall in non-navigational memorization processes to processes involving mental and virtual navigation, and we are currently running a full study to see whether we can replicate these effects with a more demanding memory task and a refined study design. ![]() |
|
Herbelin, Bruno |
![]() Thibault Porssut, Henrique G. Debarba, Elisa Canzoneri, Bruno Herbelin, and Ronan Boulic (EPFL, Switzerland) This project investigates the impact of a virtual zero gravity experience on the human gravity model. In the planned experiment, subjects are immersed with HMD and full body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e. body and objects floating in space). The study evaluates changes in the subjects’ gravity model by observing changes in the motor planning of actions dependent on gravity. Our goal is to demonstrate that virtual reality exposure can induce modifications to the human internal gravity model, even if users remain under normal gravity conditions in reality. ![]() ![]() |
|
Hiroi, Yuichi |
![]() Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan) We propose a hybrid SAR concept combining a projector and Optical See-Through Head-Mounted Displays (OST-HMD). Our proposed hybrid SAR system utilizes the OST-HMD as an extra rendering layer that renders view-dependent properties according to the viewer's viewpoint. Combined with view-independent components created by a static projector, the viewer can see richer material content. Unlike conventional SAR systems, our system theoretically allows an unlimited number of viewers to see enhanced content in the same space while keeping the existing SAR experiences. Furthermore, the system enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With a proof-of-concept system that consists of a projector and an OST-HMD, we qualitatively demonstrate that our system successfully creates hybrid rendering on a hemisphere object from five horizontal viewpoints. Our quantitative evaluation also shows that our system increases the dynamic range by 2.1 times and the maximum intensity by 1.9 times compared to an ordinary SAR system. ![]() |
|
Hirose, Michitaka |
![]() Keigo Matsumoto, Takuji Narumi, Yuki Ban, Tomohiro Tanikawa, and Michitaka Hirose (University of Tokyo, Japan) Redirected walking allows users to explore a large virtual environment despite a limited room size. Previous works presented users with a straight path in the virtual environment while they walked on a curved path in reality. We expand a previous technique to present users with various curved paths in the virtual environment while they walk on a particular curved or straight path, with or without haptics. Furthermore, we propose a novel estimation methodology to quantify the walking path that the user believes he or she walked in reality. The data from our experiment show that users perceive walking various curved paths in VR the same as in the one-to-one mapping condition. ![]() |
|
Hirota, Koichi |
![]() Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki (Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan) The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of cutaneous sensation evoked by airflow during the real and virtual walk was measured. The airflow stimulus was added to the participant with passive vestibular motion and visual presentation. The result suggests that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the sitting participant than during a real walk, for both the single and the combined stimuli. The equivalent airflow speed for the sitting participant was lower than the airflow speed in the real walk. ![]() ![]() Shiho Saito, Koichi Hirota, and Takuya Nojima (University of Electro-Communications, Japan) The Kitchen Knife Safety Educator (KKse) is a safety education system designed to teach children how to correctly use cooking knives. Cooking is important for children to learn about what they eat, and it is also important for daily communication between children and their parents. However, it is dangerous for young children to handle cooking knives. Because of this danger, parents often try to keep their young children away from the kitchen. Our proposed system will contribute not only to improving children’s cooking skills, but also to improving communication between parents and children. The system is composed of a virtual knife with a haptic feedback function, touch/force-sensitive virtual food, and a two-dimensional force-sensitive cutting board. This system was developed to teach a fundamental cutting method, the “thrusting cut”. This paper describes the details of the system. ![]() |
|
Høeg, Emil R. |
![]() Emil R. Høeg, Kevin V. Ruder, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a within-groups study (n=17) comparing participants' experience of three different input conditions for instigating virtual teleportation (button clicking, physical jumping, and fist clenching). The results indicated that teleportation by clicking a button generally required less explicit attention and was perceived as more enjoyable, less disorienting, and less physically demanding. ![]() |
|
Hoesch, Anne |
![]() Florian Weidner, Anne Hoesch, Sandra Poeschl, and Wolfgang Broll (TU Ilmenau, Germany) Up to now, most driving simulators use either small monitors or large immersive projection setups like 2D/3D screens or a CAVE. Recent improvements of VR-HMDs have led to their increased application in driving simulation. However, the influence and comparability of various VR and non-VR displays have hardly been investigated. We present results of a user study investigating the differing influence of non-VR (2D, stereoscopic 3D) and VR (HMD) displays on physiological responses, simulator sickness, and driving performance within a single driving simulator. In the study, 94 participants performed the Lane Change Task. Results indicate that a VR-HMD leads to similar data as stereoscopic 3D or 2D screens. We observed no significant difference regarding physiological responses or lane change performance. However, we measured significantly increased simulator sickness in the VR-HMD condition compared to stereoscopic 3D. ![]() |
|
Hogan, Brendan J. |
![]() Quba Michalski, Brendan J. Hogan, and Jamie Hunsdale (QubaVR, USA; Impossible Acoustic, USA) In a secret science facility, gravity has been conquered. “Down” is no longer a direction, but a choice. Step into the center of modified chambers and witness the laws of nature be broken in this five-experiment series. VR has all but torn down the barriers between the imagination of the creator and the experience of the viewer. A concept like The Pull simply does not translate into traditional 2D. We can suggest concepts through TVs and monitors, but we can’t truly experience them — and breaking the very laws of nature is something that can only be experienced. While flat media limits us to hinting and coaxing at an experience, by creating in VR, I can more faithfully share my vision with you, the viewer. For a few minutes – for five chambers – I can truly invite you into my world. ![]() |
|
Höllerer, Tobias |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and multi-user collaboration by annotating in the real world. ![]() ![]() ![]() Tobias Höllerer (University of California at Santa Barbara, USA) VR and AR hold enormous promises as paradigm-shifting ubiquitous technologies. The investment in these technologies by leading IT companies, as well as the buy-in and general excitement from outside investors, technologists, and content producers has never been more palpable. There are good reasons to be excited about the field. The real question will be if the technologies can add sufficient value to people’s lives to establish themselves as more than just niche products. My path in this presentation will lead from a personal estimation of what matters for adoption of new technologies to important innovations we have witnessed on the road to anywhere/anytime use of immersive technologies. In recent years, one track of research in my lab has been concerned with the simulation of possible future capabilities in AR. With the goal to conduct controlled user studies evaluating technologies that are just not possible yet (such as a truly wide-field-of-view augmented reality display), we turn to high-end VR to simulate, predict, and assess these possible futures. In the far future, when technological hurdles, such as real-time reconstruction of photorealistic environment models, are removed, VR and AR naturally converge. Until then, we have a very interesting playing field full of technological constraints to have fun with. ![]() ![]() Dieter Schmalstieg and Tobias Höllerer (Graz University of Technology, Austria; University of California at Santa Barbara, USA) This tutorial will provide a detailed introduction to Augmented Reality (AR). AR is a key user-interface technology for personalized, situated information delivery, navigation, on-demand instruction and games. The widespread availability and rapid evolution of smartphones and new devices such as Hololens enables software-only solutions for AR, where it was previously necessary to assemble custom hardware solutions. However, ergonomic and technical limitations of existing devices make this a challenging endeavor. In particular, it is necessary to design novel efficient real-time computer vision and computer graphics algorithms, and create new lightweight forms of interaction with the environment through small form-factor devices. This tutorial will present selected technical achievements in this field and highlight some examples of successful application prototypes. ![]() ![]() |
|
Hosseini, Mohammad |
![]() Mohammad Hosseini (University of Illinois at Urbana-Champaign, USA) We have proposed an adaptive view-aware bandwidth-efficient 360 VR video streaming framework based on the tiling features of MPEG-DASH SRD. We extend MPEG-DASH SRD to the 3D space of 360 VR videos, and showcase a dynamic view-aware adaptation technique to tackle the high bandwidth demands of streaming 360 VR videos to wireless VR headsets. As a part of our contributions, we spatially partition the underlying 3D mesh into multiple 3D sub-meshes, and construct an efficient 3D geometry mesh called "hexaface sphere" to optimally represent tiled 360 VR videos in the 3D space. We then spatially divide the 360 videos into multiple tiles while encoding and packaging, use MPEG-DASH SRD to describe the spatial relationship of tiles in the 3D space, and prioritize the tiles in the Field of View (FoV) for view-aware adaptation. Initial evaluations show that we can save up to 72% of the required bandwidth for 360 VR video streaming with minor negative quality impacts compared to the baseline scenario in which no adaptation is applied. ![]() |
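A minimal sketch of the view-aware prioritization idea, assuming a simple six-tile partition: tiles whose centers fall within half the field of view of the current viewport direction are requested at high quality, the rest at low quality. The tile layout, FoV threshold, and quality labels are illustrative assumptions, not the paper's hexaface-sphere construction or its DASH signaling.

import numpy as np

def direction(yaw_deg, pitch_deg):
    # Unit view vector from yaw/pitch angles in degrees.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.cos(pitch) * np.sin(yaw), np.sin(pitch), np.cos(pitch) * np.cos(yaw)])

def select_qualities(view_yaw, view_pitch, tile_centers, fov_deg=110.0):
    # 'high' for tiles within half the FoV of the view direction, else 'low'.
    v = direction(view_yaw, view_pitch)
    out = {}
    for name, (yaw, pitch) in tile_centers.items():
        angle = np.degrees(np.arccos(np.clip(np.dot(v, direction(yaw, pitch)), -1, 1)))
        out[name] = "high" if angle <= fov_deg / 2 else "low"
    return out

# Six faces of a hypothetical cube-like partition (centers given as yaw/pitch).
tiles = {"front": (0, 0), "right": (90, 0), "back": (180, 0),
         "left": (-90, 0), "top": (0, 90), "bottom": (0, -90)}
print(select_qualities(view_yaw=20, view_pitch=0, tile_centers=tiles))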
|
Hou, Junhui |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert’s motion and delivers the captured motion to students in multi-modal forms in immersive CAVE, HMD and ordinary PC environments. The students’ motions are also captured for quality assessment and utilized to form a virtual collaborative learning atmosphere. We built up a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance the learning efficiency by up to 17.4% and the learning quality by up to 32.3%. ![]() |
|
Hu, Yaoping |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies showed that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). Little effort has focused, however, on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE’s suitability for integrating these cues. Within a VE, human users undertook the 3D interaction task of navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, and their combinations in collocated and dislocated settings. The users’ task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agreed with the applicability of tactile cues for sensing 3D surfaces, herein setting a baseline for using MLE. The task performance under the collocated setting indicated a degree of combining the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs. ![]() |
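For reference, the standard MLE cue-combination rule against which such results are usually checked weights each cue estimate by its reliability (inverse variance); the subscripts f (force) and v (vibrotactile) below are chosen here only for illustration:

\hat{s}_{fv} = w_f \hat{s}_f + w_v \hat{s}_v, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_f^2 + 1/\sigma_v^2}, \qquad \sigma_{fv}^2 = \frac{\sigma_f^2 \, \sigma_v^2}{\sigma_f^2 + \sigma_v^2}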
|
Huang, Jingwei |
![]() Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base in demand of immersive, full 3D VR experiences. While monoscopic 360-videos are among the most common content for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views with both rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. ![]() ![]() |
|
Huang, Ping |
![]() Hengheng Zhao, Ping Huang, and Junfeng Yao (Xiamen University, China) Coloring books can inspire children's imagination and creativity. However, with the rapid development of digital devices and the internet, traditional coloring books tend to no longer be attractive to children. Thus, we propose applying augmented reality technology to the traditional coloring book. After children finish coloring characters in the printed coloring book, they can inspect their work using a mobile device. The drawing is detected and tracked so that the video stream is augmented with a 3D character textured according to their coloring. This is possible thanks to several novel technical contributions. We present a texturing process that generates a texture map for the 3D augmented reality character from the 2D colored drawing using a lookup map. Considering the movement of the mobile device and the drawing, we give an efficient method to track the drawing surface. ![]() |
|
Huerta, Ivan |
![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture. ![]() ![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. ![]() |
|
Hulin, Thomas |
![]() Mikel Sagardia and Thomas Hulin (German Aerospace Center, Germany) This work presents an evaluation study in which the effects of a penalty-based and a constraint-based haptic rendering algorithm on user performance and perception are analyzed. In a within-subjects design, a total of N = 24 participants performed three variations of peg-in-hole tasks in a virtual environment after trials in an identically replicated real scenario as a reference. In addition to the two mentioned haptic rendering paradigms, two haptic devices were used, the HUG and a Sigma.7, and the force stiffness was also varied between the maximum and half-maximum values possible for each device. Both objective measures (time and trajectory, collision performance, and muscular effort) and subjective ratings (contact perception, ergonomy, and workload) were recorded and statistically analyzed. The results show that the constraint-based haptic rendering algorithm with a lower stiffness than the maximum possible yields the most realistic contact perception, while keeping the visual inter-penetration between the objects roughly at around 15% of that caused by the penalty-based algorithm (i.e., not perceptible in many cases). This result is even more evident with the HUG, the haptic device with the highest force display capabilities, although user ratings point to the Sigma.7 as the device with the highest usability and lowest workload indicators. Altogether, the paper provides qualitative and quantitative guidelines for mapping properties of haptic algorithms and devices to user performance and perception. ![]() |
|
Hunsdale, Jamie |
![]() Quba Michalski, Brendan J. Hogan, and Jamie Hunsdale (QubaVR, USA; Impossible Acoustic, USA) In a secret science facility, gravity has been conquered. “Down” is no longer a direction, but a choice. Step into the center of modified chambers and witness the laws of nature be broken in this five-experiment series. VR has all but torn down the barriers between the imagination of the creator and the experience of the viewer. A concept like The Pull simply does not translate into traditional 2D. We can suggest concepts through TVs and monitors, but we can’t truly experience them — and breaking the very laws of nature is something that can only be experienced. While flat media limits us to hinting and coaxing at an experience, by creating in VR, I can more faithfully share my vision with you, the viewer. For a few minutes – for five chambers – I can truly invite you into my world. ![]() |
|
Hussein, Mohamed |
![]() Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants to focus on. The other half of the participants was only exposed to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing in regards to the usefulness of VR-based meditation. ![]() |
|
Hvass, Jonatan S. |
![]() Jonatan S. Hvass, Oliver Larsen, Kasper B. Vendelbo, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) Previous research on visual realism and presence has not involved scenarios, graphics, and hardware representative of commercially available VR games. This poster details a between-subjects study (n=50) exploring if polygon count and texture resolution influence presence during exposure to a VR game. The results suggest that a higher polygon count and texture resolution increased presence as assessed by means of self-reports and physiological measures. ![]() ![]() Andreas Ryge, Lui Thomsen, Theis Berthelsen, Jonatan S. Hvass, Lars Koreska, Casper Vollmers, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we present a within-subjects study (n=26) comparing participants’ experience of three kinds of haptic feedback (no haptic feedback, low-fidelity haptic feedback, and high-fidelity haptic feedback) simulating the impact between a virtual baseball bat and ball. We noticed some minor effects of high-fidelity versus low-fidelity haptic feedback, but haptic feedback generally enhanced realism and quality of experience. ![]() |
|
Ide, Ichiro |
![]() Norimasa Kobori, Daisuke Deguchi, Ichiro Ide, and Hiroshi Murase (Toyota, Japan; Nagoya University, Japan) We propose a novel marker for robot grasping tasks with the following three properties: (i) it is easy to find against a cluttered background, (ii) its posture can be calculated, and (iii) its size is compact. The proposed marker is composed of a random dot pattern, and uses keypoint detection and scale estimation by Spectral SIFT for dot detection and data decoding. The data are encoded by the scale of the dots, and the same dots in the marker serve both marker detection and data decoding. As a result, the marker can be compact. We confirmed the effectiveness of the proposed marker through experiments. ![]() |
|
Ienaga, Naoto |
![]() Shohei Mori, Momoko Maezawa, Naoto Ienaga, and Hideo Saito (Keio University, Japan) Live instructor’s perspective videos are useful to present intuitive visual instructions for trainees in medical and industrial settings. In such videos, the instructor’s hands often hide the work area. In this demo, we present a diminished hand for visualizing the work area hidden by hands by capturing the work area with multiple cameras. To achieve the diminished reality, we use a light field rendering technique, in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from the multiple viewpoint images. ![]() ![]() |
|
Ikei, Yasushi |
![]() Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki (Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan) The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of cutaneous sensation evoked by airflow during the real and virtual walk was measured. The airflow stimulus was added to the participant with passive vestibular motion and visual presentation. The result suggests that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the sitting participant than during a real walk, for both the single and the combined stimuli. The equivalent airflow speed for the sitting participant was lower than the airflow speed in the real walk. ![]() |
|
Ilie, Adrian |
![]() Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs (University of North Carolina at Chapel Hill, USA) Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement. ![]() ![]() ![]() |
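The greedy-plus-annealing recipe in the abstract can be sketched as follows. The toy 2D scene, sensor range, and interference penalty below are stand-ins for the paper's visibility/resolution/interference fitness metric, so this is only a structural illustration of the simulated-annealing stage.

import math, random

TARGETS = [(x, y) for x in range(0, 10) for y in range(0, 10)]   # scene points to cover
SENSOR_RANGE, INTERFERENCE_DIST = 4.0, 2.0

def fitness(sensors):
    # Stand-in metric: reward covered targets, penalize sensors placed too close together.
    covered = sum(any(math.dist(t, s) <= SENSOR_RANGE for s in sensors) for t in TARGETS)
    interference = sum(math.dist(a, b) < INTERFERENCE_DIST
                       for i, a in enumerate(sensors) for b in sensors[i + 1:])
    return covered - 20.0 * interference

def perturb(sensors):
    # Move one randomly chosen sensor by a small random offset.
    i = random.randrange(len(sensors))
    x, y = sensors[i]
    moved = (x + random.uniform(-1, 1), y + random.uniform(-1, 1))
    return sensors[:i] + [moved] + sensors[i + 1:]

def anneal(sensors, steps=5000, temp=5.0, cooling=0.999):
    best, f_best = sensors, fitness(sensors)
    current, f_cur = sensors, f_best
    for _ in range(steps):
        cand = perturb(current)
        f_cand = fitness(cand)
        # Always accept improvements; accept worse placements with a probability
        # that shrinks as the temperature cools.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / temp):
            current, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
        temp *= cooling
    return best, f_best

random.seed(1)
placement, score = anneal([(5.0, 5.0)] * 3)
print("optimized fitness:", score)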
|
Inoue, Yasuyuki |
![]() Yasuyuki Inoue, Fumihiro Kato, MHD Yamen Saraiji, Charith Lasantha Fernando, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) In this paper, we analyze the subjective feelings about the body of the operator of a telexistence system. We investigate whether a mirror reflection and self-touch affect body ownership and agency for a surrogate robot avatar in a virtual reality experiment. Results showed that the presence of tactile sensations synchronized with the view of self-touch events enhanced mirror self-recognition. ![]() ![]() Fumihiro Kato, Charith Lasantha Fernando, Yasuyuki Inoue, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) We have developed a method for classifying tactile feeling using a stacked-autoencoder-based neural network operating on haptic primary colors. The haptic primary colors principle is a concept of decomposing the human sensation of tactile feeling into force, vibration, and temperature. Images were obtained from the frequency variation of the tactile time series recorded when tracing the surface of an object; features were extracted by a stacked autoencoder with two hidden layers, and supervised learning was then conducted. We confirmed that the tactile feeling of three different surface materials can be classified with an accuracy of 82.0%. ![]() |
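As a rough illustration of the classifier architecture described (two hidden layers feeding a three-class output), here is a minimal PyTorch sketch. The input dimensionality, layer sizes, and random training data are assumptions, and the greedy layer-wise autoencoder pretraining that the paper implies is only indicated in comments.

import torch
import torch.nn as nn

INPUT_DIM, NUM_CLASSES = 64, 3      # e.g., a flattened time-frequency "image" per sample

encoder = nn.Sequential(
    nn.Linear(INPUT_DIM, 32), nn.ReLU(),   # hidden layer 1 (would be pretrained as autoencoder 1)
    nn.Linear(32, 16), nn.ReLU(),          # hidden layer 2 (would be pretrained as autoencoder 2)
)
classifier = nn.Sequential(encoder, nn.Linear(16, NUM_CLASSES))

# Supervised fine-tuning on synthetic labelled tactile samples.
x = torch.randn(300, INPUT_DIM)
y = torch.randint(0, NUM_CLASSES, (300,))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(classifier(x), y)
    loss.backward()
    optimizer.step()

accuracy = (classifier(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy.item():.2f}")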
|
Ito, Ken |
![]() Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki (Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan) The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of cutaneous sensation evoked by airflow during the real and virtual walk was measured. The airflow stimulus was added to the participant with passive vestibular motion and visual presentation. The result suggests that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the sitting participant than during a real walk, for both the single and the combined stimuli. The equivalent airflow speed for the sitting participant was lower than the airflow speed in the real walk. ![]() |
|
Itoh, Yuta |
![]() Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience. A virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used for training the neural network to estimate the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes. ![]() ![]() Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan) We propose a hybrid SAR concept combining a projector and Optical See-Through Head-Mounted Displays (OST-HMD). Our proposed hybrid SAR system utilizes the OST-HMD as an extra rendering layer that renders view-dependent properties according to the viewer's viewpoint. Combined with view-independent components created by a static projector, the viewer can see richer material content. Unlike conventional SAR systems, our system theoretically allows an unlimited number of viewers to see enhanced content in the same space while keeping the existing SAR experiences. Furthermore, the system enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With a proof-of-concept system that consists of a projector and an OST-HMD, we qualitatively demonstrate that our system successfully creates hybrid rendering on a hemisphere object from five horizontal viewpoints. Our quantitative evaluation also shows that our system increases the dynamic range by 2.1 times and the maximum intensity by 1.9 times compared to an ordinary SAR system. ![]() ![]() Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Toshiyuki Amano, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan; Wakayama University, Japan) We present a method for focal distance estimation of a freely-orienting eye using Purkinje-Sanson (PS) images, which are reflections of light on the inner structures of the eye. Using an infrared camera with a rigidly-fixed LED, our method creates an estimation model based on 3D gaze and the distance between reflections in the PS images that occur on the corneal surface and the anterior surface of the eye lens. The distance between these two reflections changes with focus, so we associate that information with the focal distance of the user. Unlike conventional methods that mainly rely on 2D pupil size, which is sensitive to scene lighting, and on the fourth PS image, our method detects the third PS image, which is more representative of accommodation.
Our feasibility study on a single user with a focal range from 15 to 45 cm shows that our method achieves mean and median absolute errors of 3.15 and 1.93 cm for a 10-degree viewing angle. The study also shows that our method is tolerant to environmental lighting changes. ![]() |
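The calibration step implied by this abstract, mapping the measured separation between Purkinje-Sanson reflections to a focal distance, can be sketched as a simple regression. The calibration pairs below are made up, and a faithful model would also condition on the 3D gaze direction as the paper describes.

import numpy as np

# (reflection separation in pixels, known focal distance in cm) from a hypothetical calibration
separation_px = np.array([12.0, 14.5, 17.0, 19.0, 21.5, 24.0])
focal_cm      = np.array([45.0, 38.0, 31.0, 26.0, 20.0, 15.0])

coeffs = np.polyfit(separation_px, focal_cm, deg=2)      # quadratic calibration curve
estimate = np.polyval(coeffs, 18.0)                      # estimate for a new measurement
print(f"estimated focal distance: {estimate:.1f} cm")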
|
Iwai, Daisuke |
![]() Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan) We propose a hybrid SAR concept combining a projector and Optical See-Through Head-Mounted Displays (OST-HMD). Our proposed hybrid SAR system utilizes the OST-HMD as an extra rendering layer that renders view-dependent properties according to the viewer's viewpoint. Combined with view-independent components created by a static projector, the viewer can see richer material content. Unlike conventional SAR systems, our system theoretically allows an unlimited number of viewers to see enhanced content in the same space while keeping the existing SAR experiences. Furthermore, the system enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With a proof-of-concept system that consists of a projector and an OST-HMD, we qualitatively demonstrate that our system successfully creates hybrid rendering on a hemisphere object from five horizontal viewpoints. Our quantitative evaluation also shows that our system increases the dynamic range by 2.1 times and the maximum intensity by 1.9 times compared to an ordinary SAR system. ![]() |
|
Jaisimha, Rahul |
![]() Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country’s newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. Introducing Immerj - an open source abstraction layer simplifying the Unity3D game engine’s interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and journalists and designers from some of the top news organizations all over the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology. ![]() |
|
Janeh, Omar |
![]() Omar Janeh, Eike Langbehn, Frank Steinicke, Gerd Bruder, Alessandro Gulberti, and Monika Poetter-Nerger (University of Hamburg, Germany; University of Central Florida, USA; University Medical Center Hamburg-Eppendorf, Germany) Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the virtual environment (VE) on the walking biomechanics of older adults. Three primary domains (pace, base of support and phase) of spatio-temporal and temporo-phasic parameters were used to evaluate gait performance. In the isometric mapping condition, the pace and phasic parameters of older adults walking in the VE were comparable to the corresponding parameters in the real world. We found significant differences in base of support for our user group between walking in the VE and in the real world. For non-isometric mappings we found an increased divergence of gait parameters in all domains, correlating with the up- or down-scaled velocity of visual self-motion feedback. ![]() |
|
Jee, Hyung-Keun |
![]() Hyunwoo Cho, Sung-Uk Jung, and Hyung-Keun Jee (ETRI, South Korea) For live television broadcasts such as children's educational programs conducted through viewer participation, the smooth integration of virtual content and the interaction between the cast and that content are important issues. Recently there have been many attempts to make extensive use of interactive virtual content in live broadcasts, driven by advances in AR/VR and virtual studio technology. These previous works, however, have significant limitations: they do not support real-time 3D space recognition or immersive interaction. We therefore propose an augmented reality based real-time broadcasting system which perceives the indoor space using a broadcasting camera and an RGB-D camera. The system also supports real-time interaction between the augmented virtual content and the cast. The contribution of this work is the development of a new augmented reality based broadcasting system that not only enables filming with compatible interactive 3D content in live broadcasts but also drastically reduces production costs. For practical use, the proposed system was demonstrated in the actual broadcast program “Ding Dong Dang Kindergarten”, a representative children's educational program on the national broadcasting channel of Korea. ![]() |
|
Jennings, Sion |
![]() Jingbo Zhao, Robert S. Allison, Margarita Vinnikov, and Sion Jennings (York University, Canada; National Research Council, Canada) We present a method for estimating the Motion-to-Photon (End-to-End) latency of head mounted displays (HMDs). The specific HMD evaluated in our study was the Oculus Rift DK2, but the procedure is general. We mounted the HMD on a pendulum to introduce damped sinusoidal motion to the HMD during the pendulum swing. The latency was estimated by calculating the phase shift between the captured signals of the physical motion of the HMD and a motion-dependent gradient stimulus rendered on the display. We used the proposed method to estimate both rotational and translational Motion-to-Photon latencies of the Oculus Rift DK2. ![]() |
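The latency estimation above reduces to measuring the phase shift between two (near-)sinusoidal signals: the physical pendulum motion of the HMD and the motion-dependent gradient stimulus captured off the display. The sketch below is a minimal, hedged illustration of that idea using cross-correlation; it assumes both signals have already been extracted and synchronously sampled (e.g., from high-speed camera footage), and it is not the authors' exact procedure.

```python
import numpy as np

def estimate_latency(physical, displayed, fs):
    """Estimate motion-to-photon latency (seconds) from two synchronously
    sampled signals: the physical HMD motion and the motion-dependent
    stimulus captured off the display. Both are assumed (near-)sinusoidal,
    as produced by a damped pendulum swing."""
    # Remove the DC offset so the cross-correlation is dominated by the oscillation.
    a = physical - np.mean(physical)
    b = displayed - np.mean(displayed)
    # Full cross-correlation; the index of its peak gives the lag in samples.
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    lag = lags[np.argmax(xcorr)]
    return lag / fs  # positive lag: the display trails the physical motion

# Hypothetical usage with simulated signals sampled at 1 kHz and a 40 ms delay.
if __name__ == "__main__":
    fs, f_pend, latency_true = 1000.0, 0.5, 0.040
    t = np.arange(0.0, 5.0, 1.0 / fs)
    phys = np.exp(-0.05 * t) * np.sin(2 * np.pi * f_pend * t)
    disp = np.exp(-0.05 * (t - latency_true)) * np.sin(2 * np.pi * f_pend * (t - latency_true))
    print(estimate_latency(phys, disp, fs))  # approximately 0.04
```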
|
Jerald, Jason |
![]() Jason Jerald (NextGen Interactions, USA) VR has the potential to provide experiences and deliver results that cannot be otherwise achieved. However, interacting with immersive applications is not always straightforward and it is not just about an interface for the user to reach their goals. It is also about users working in an intuitive manner that is a pleasurable experience and devoid of frustration. Although VR systems and applications are incredibly complex, it is up to designers to take on the challenge of having the VR application intuitively communicate to users how the virtual world and its tools work so that those users can achieve their goals in an elegant and comfortable manner. ![]() |
|
Jhaveri, Sankhesh |
![]() Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. ![]() |
|
Jin, Hailin |
![]() Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin (Stanford University, USA; Adobe Research, USA) Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base that demands immersive, full 3D VR experiences. While monoscopic 360-videos are among the most common content for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views under both rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content. ![]() ![]() |
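A building block of any such 360-video view synthesis is the mapping between equirectangular pixels and 3D viewing rays, which a warping step uses to re-project content for a new head pose. The sketch below shows one common convention for that mapping; the axis conventions and the assumption of an equirectangular input are ours, not necessarily those of the paper.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map equirectangular pixel coordinates to unit viewing rays.
    Convention (an assumption of this sketch): u in [0, width) spans
    longitude [-pi, pi), v in [0, height) spans latitude [pi/2, -pi/2]."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

def ray_to_equirect(d, width, height):
    """Inverse mapping: unit direction -> equirectangular pixel coordinates."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    lon = np.arctan2(x, z)
    lat = np.arcsin(np.clip(y, -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```

Given per-pixel depth, a translated viewpoint can then be handled by scaling each ray by its depth, subtracting the head translation, renormalizing, and mapping the result back through `ray_to_equirect`.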
|
Jokela, Tero |
![]() Toni Pakkanen, Jaakko Hakulinen, Tero Jokela, Ismo Rakkolainen, Jari Kangas, Petri Piippo, Roope Raisamo, and Marja Salmimaa (University of Tampere, Finland; Nokia, Finland) Immersive 360° video needs new ways of interaction. We compared three different interaction methods to find out which one of them is the most applicable for controlling 360° video playback. The compared methods were: remote control, pointing with head orientation, and hand gestures. A WebVR-based 360° video player was built for the experiment. ![]() |
|
Joukowsky, Artemis |
![]() Dylan Southard, Elijah Allan-Blitz, Jordan Halsey, Christina Heller, and Artemis Joukowsky (VR Playhouse, USA; A-B Productions, USA; Farm Pond Pictures, USA) "Defying the Nazis VR" uses CGI, motion graphics, and archival documentary footage to re-create a heroic episode from World War II in VR, experimenting with the emotional power of virtual reality and with the medium as an educational tool. ![]() ![]() |
|
Jung, Byunghoo |
![]() Mohit Singh and Byunghoo Jung (Purdue University, USA) This paper presents an AC magnetic field based High-Definition Personal Area Tracking (PAT) system. A low-power transmitter antenna acts as a reference for three tracker modules. One module, attached to the Head Mount Display (HMD), tracks the position and orientation of user's head and the other two hand-held modules act as an interface device (like virtual hands) in Virtual Reality. This precise, low power, low latency, non-line-of-sight system provides an easy-to-use human-computer interface. The system achieves a precision of 1 mm in position with 0.1 degree in orientation and an accuracy of 20 cm in position at a distance of 2 m from the antenna. The transmitter and the receiver consume 5 W and 0.4 W of power, respectively, providing 140 updates/sec with 11 ms of latency. ![]() |
|
Jung, Jinwoong |
![]() Jinwoong Jung, Joon-Young Lee, Byungmoon Kim, and Seungyong Lee (POSTECH, South Korea; Adobe Research, USA) With the recent advent of 360 cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, alignment of the scene orientation to the image axes is important for providing comfortable and pleasant viewing experiences using VR headsets and traditional displays. This paper presents an automatic framework for upright adjustment of 360 spherical panorama images without any prior information, such as depths and Gyro sensor data. We take the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second. ![]() |
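Conceptually, upright adjustment amounts to finding the small rotation that best aligns detected vertical line directions with the global up vector while keeping horizontal line directions perpendicular to it. The sketch below is a minimal stand-in for such a cost function and its minimization; it assumes line directions have already been estimated from the panorama, and it is not the paper's Atlanta-world formulation.

```python
import numpy as np
from scipy.optimize import minimize

def rot_xz(pitch, roll):
    """Rotation about the x axis (pitch) followed by the z axis (roll);
    this parameterization is an assumption of the sketch."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return rz @ rx

def upright_cost(params, vert_dirs, horiz_dirs):
    """Low when rotated vertical-line directions align with 'up' and rotated
    horizontal-line directions stay perpendicular to it."""
    up = np.array([0.0, 1.0, 0.0])
    r = rot_xz(*params)
    cost_v = np.sum(1.0 - np.abs(vert_dirs @ r.T @ up))
    cost_h = np.sum((horiz_dirs @ r.T @ up) ** 2)
    return cost_v + cost_h

def upright_adjust(vert_dirs, horiz_dirs):
    """vert_dirs / horiz_dirs: (N, 3) arrays of unit line directions estimated
    from the panorama (assumed given). Returns the correcting rotation."""
    res = minimize(upright_cost, x0=[0.0, 0.0], args=(vert_dirs, horiz_dirs))
    return rot_xz(*res.x)  # rotation to apply when resampling the panorama
```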
|
Jung, Sung-Uk |
![]() Hyunwoo Cho, Sung-Uk Jung, and Hyung-Keun Jee (ETRI, South Korea) For live television broadcasts such as children's educational programs conducted through viewer participation, the smooth integration of virtual content and the interaction between the cast and that content are important issues. Recently there have been many attempts to make extensive use of interactive virtual content in live broadcasts, driven by advances in AR/VR and virtual studio technology. These previous works, however, have significant limitations: they do not support real-time 3D space recognition or immersive interaction. We therefore propose an augmented reality based real-time broadcasting system which perceives the indoor space using a broadcasting camera and an RGB-D camera. The system also supports real-time interaction between the augmented virtual content and the cast. The contribution of this work is the development of a new augmented reality based broadcasting system that not only enables filming with compatible interactive 3D content in live broadcasts but also drastically reduces production costs. For practical use, the proposed system was demonstrated in the actual broadcast program “Ding Dong Dang Kindergarten”, a representative children's educational program on the national broadcasting channel of Korea. ![]() |
|
Kai, Toshihiro |
![]() Itsuo Kumazawa, Toshihiro Kai, Yoshikazu Onuki, and Shunsuke Ono (Tokyo Institute of Technology, Japan) The frame rate of existing stereo cameras is not sufficient to track quick hand or finger actions, and finding correspondences between stereo images to compute distance incurs a high computational cost. Recently commercialized 3D position sensors such as TOF cameras or the Leap Motion need strong illumination to ensure sufficient optical energy for high-frame-rate sensing. To overcome these problems, this paper proposes to use a pair of optical-mouse sensors as a stereo image sensor to measure 3D velocity and use it to extrapolate the 3D position measured by a low-frame-rate stereo camera. It is shown that quick hand actions are tracked under ordinary indoor lighting conditions. As the 2D velocities are computed inside the optical-mouse sensors, computation and communication costs are drastically reduced. ![]() |
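The core idea is to fuse a low-rate but absolute 3D position (from the stereo camera) with high-rate 3D velocity derived from the two optical-mouse-style sensors. The sketch below illustrates one way such fusion could look under a simple pinhole/disparity model; the formulas, the neglected depth-change term in the lateral back-projection, and the simple forward integration are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def velocity_3d(v_left, v_right, disparity, d_disparity_dt, f, baseline):
    """Rough 3D velocity from a stereo pair of 2D velocity sensors under an
    assumed pinhole model. v_left, v_right: image-plane velocities (px/s) of
    the tracked hand; disparity: current horizontal disparity (px)."""
    z = f * baseline / disparity                        # depth from disparity
    vz = -f * baseline / disparity ** 2 * d_disparity_dt
    v_img = 0.5 * (np.asarray(v_left) + np.asarray(v_right))
    # Back-project image velocity, neglecting the depth-change term for brevity.
    vx = v_img[0] * z / f
    vy = v_img[1] * z / f
    return np.array([vx, vy, vz])

def extrapolate(p_last_fix, t_last_fix, velocity_samples):
    """Extrapolate the low-frame-rate stereo position fix using high-rate
    velocity samples given as (timestamp, 3D velocity) pairs after the fix."""
    p = np.array(p_last_fix, dtype=float)   # copy so the caller's fix is untouched
    t = t_last_fix
    for ts, v in velocity_samples:
        p += np.asarray(v) * (ts - t)       # simple forward integration
        t = ts
    return p
```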
|
Kajimoto, Hiroyuki |
![]() Vibol Yem and Hiroyuki Kajimoto (University of Electro-Communications, Japan) We developed “Finger Glove for Augmented Reality” (FinGAR), which combines electrical and mechanical stimulation to selectively stimulate skin sensory mechanoreceptors and provide tactile feedback of virtual objects. A DC motor provides high-frequency vibration and shear deformation to the whole finger, and an array of electrodes provides pressure and low-frequency vibration with high spatial resolution. FinGAR devices are attached to the thumb, index finger, and middle finger. The device is lightweight, simple in mechanism, easy to wear, and does not disturb the natural movements of the hand. All of these attributes are necessary for a general-purpose virtual reality system. A user study was conducted to evaluate its ability to reproduce sensations in four tactile dimensions: macro roughness, friction, fine roughness, and hardness. Results indicated that skin deformation and cathodic stimulation affect macro roughness and hardness, whereas high-frequency vibration and anodic stimulation affect friction and fine roughness. ![]() ![]() |
|
Kanbara, Masayuki |
![]() Taishi Sawabe, Masayuki Kanbara, and Norihiro Hagita (NAIST, Japan; ATR, Japan) This paper presents an approach for motion sickness reduction while riding an autonomous vehicle. It proposes a Diminished Reality (DR) method for the acceleration stimulus to reduce motion sickness in the autonomous vehicle. One of the main causes of motion sickness is repeated acceleration. In order to diminish the acceleration stimulus in the autonomous vehicle, the vection illusion is used to induce the user to make a preliminary movement against the real acceleration. The Wii Balance Board is used to measure each participant's movement of the center of gravity to verify the effectiveness of the method with vection. Experimental results from 9 participants show that the proposed method using vection could reduce the acceleration stimulus compared with the conventional method. ![]() |
|
Kang, Hyo Jeong |
![]() Hyo Jeong Kang (University of Wisconsin-Madison, USA) The medium of virtual reality enables new opportunities for experiencing products and shopping environments that may combine the best features of both physical and digital marketplaces. As little is known about how best to create a virtual reality marketplace, the current research aims to explore the required features of a VR market user interface and its impact on shopping behavior. As a first step toward this endeavor, we will empirically test three different user interfaces: a 2D interface style, a 3D skeuomorphic interface style, and an interface that combines features of both 2D and 3D interaction techniques. ![]() |
|
Kang, Sin-Hwa |
![]() David M. Krum, Thai Phan, and Sin-Hwa Kang (University of Southern California, USA) As interaction techniques involving scaling of motor space in virtual reality are becoming more prevalent, it is important to understand how individuals adapt to such scalings and how they re-adapt back to non-scaled norms. This preliminary work examines how individuals, performing a targeted ball throwing task, adapted to addition and removal of a translational scaling of the ball's forward flight. This was examined under various conditions: flight of the ball shown with no delay, hidden flight of the ball with no delay, and hidden flight with a 2 second delay. Hiding the ball’s flight, as well as the delay, created disruptions in the ability of the participants to perform the task and adapt to new scaling conditions. ![]() |
|
Kangas, Jari |
![]() Toni Pakkanen, Jaakko Hakulinen, Tero Jokela, Ismo Rakkolainen, Jari Kangas, Petri Piippo, Roope Raisamo, and Marja Salmimaa (University of Tampere, Finland; Nokia, Finland) Immersive 360° video needs new ways of interaction. We compared three different interaction methods to find out which one of them is the most applicable for controlling 360° video playback. The compared methods were: remote control, pointing with head orientation, and hand gestures. A WebVR-based 360° video player was built for the experiment. ![]() |
|
Kato, Fumihiro |
![]() Yasuyuki Inoue, Fumihiro Kato, MHD Yamen Saraiji, Charith Lasantha Fernando, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) In this paper, we analyze the operator's subjective feelings about their body in a telexistence system. We investigate whether a mirror reflection and self-touch affect body ownership and agency for a surrogate robot avatar in a virtual reality experiment. Results showed that the presence of tactile sensations synchronized with the view of self-touch events enhanced mirror self-recognition. ![]() ![]() Fumihiro Kato, Charith Lasantha Fernando, Yasuyuki Inoue, and Susumu Tachi (University of Tokyo, Japan; Keio University, Japan) We have developed a method for classifying tactile feeling using a stacked-autoencoder-based neural network built on haptic primary colors. The haptic primary colors principle is a concept of decomposing the human sensation of tactile feeling into force, vibration, and temperature. Images were obtained from the frequency variation of the time-series tactile signal recorded when tracing the surface of an object; features were extracted by a stacked autoencoder with two hidden layers, and supervised learning was conducted. We confirmed that the tactile feeling for three different surface materials can be classified with an accuracy of 82.0%. ![]() |
|
Kaufman, Arie E. |
![]() Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, abrogating the need for any navigational inputs from the user. Our techniques are applicable for any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observe the effect of automatic speed adjustment compared to traditional methods. We observed no negative impact from automatic navigation, and the users performed as well as with the manual navigation. ![]() |
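The automatic speed adjustment described above can be thought of as modulating the fly-through velocity by how far the user's head orientation deviates from the path direction: looking off-axis slows the camera for examination, while looking ahead restores full speed. The following sketch shows one plausible mapping; the linear form and the bounds are assumptions rather than the paper's exact heuristic.

```python
import numpy as np

def adjusted_speed(gaze_dir, path_dir, v_min=0.1, v_max=1.0):
    """Scale camera speed along a pre-computed path by how closely the user's
    gaze (head orientation) aligns with the direction of travel. Looking off
    to the side slows the fly-through for examination; looking ahead speeds
    it up. The linear mapping and the v_min/v_max bounds are assumptions."""
    g = np.asarray(gaze_dir, dtype=float)
    p = np.asarray(path_dir, dtype=float)
    alignment = np.dot(g / np.linalg.norm(g), p / np.linalg.norm(p))
    alignment = np.clip(alignment, 0.0, 1.0)   # looking backwards -> slowest
    return v_min + (v_max - v_min) * alignment
```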
|
Kaufmann, Hannes |
![]() Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann (Vienna University of Technology, Austria) We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and based on off-the-shelf components. A robotic arm moves physical props, dynamically matching pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept, the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants indicating promising results, and discuss the potential of our system. ![]() ![]() ![]() Annette Mossel, Mario Froeschl, Christian Schoenauer, Andreas Peer, Johannes Goellner, and Hannes Kaufmann (Vienna University of Technology, Austria; M2DMasterMind Development, Austria) We present the VROnSite platform that enables immersive training of first responder on-site squad leaders. Our training platform is fully immersive, entirely untethered to ease use and provides two means of navigation - abstract and natural walking - to simulate stress and exhaustion, two important factors for decision making. With the platform's capabilities, we close a gap in prior art for first responder training. Our research is closely interlocked with stakeholders from fire brigades and paramedics to gather early feedback in an iterative design process. In this paper, we present our first research results, which are the system's design rationale, the single user training prototype and results from a preliminary user study. ![]() |
|
Kazanzides, Peter |
![]() Ehsan Azimi, Long Qian, Peter Kazanzides, and Nassir Navab (Johns Hopkins University, USA; TU Munich, Germany) Uncertainty in the measurement of point correspondences negatively affects the accuracy and precision of head-mounted display (HMD) calibration. Such errors depend on the sensors and pose estimation for video see-through HMDs. For optical see-through systems, they additionally depend on the user's head motion and hand-eye coordination. Therefore, the distribution of alignment errors for optical see-through calibration is not isotropic, and one can estimate its process-specific or user-specific distribution based on the interaction requirements of a given calibration process and the user's measurable head motion and hand-eye coordination characteristics. Current calibration methods, however, mostly utilize the DLT method, which minimizes Euclidean distances for HMD projection matrix estimation, disregarding the anisotropy of the alignment errors. We show how to utilize the error covariance in order to take the anisotropic nature of the error distribution into account. The main hypothesis of this study is that using the Mahalanobis distance within the nonlinear optimization can improve the accuracy of the HMD calibration. To cover a wide range of possible realistic scenarios, several simulations were performed with variation in the extent of the anisotropy in the input data along with other parameters. The simulation results indicate that our new method outperforms the standard DLT method both in accuracy and precision, and is more robust against user alignment errors. To the best of our knowledge, this is the first time that anisotropic noise has been accommodated in optical see-through HMD calibration. ![]() ![]() Jianren Wang, Long Qian, Ehsan Azimi, and Peter Kazanzides (Shanghai Jiao Tong University, China; Johns Hopkins University, USA) An effective and simple method is proposed for multi-camera collaborative tracking, based on the prioritization of all tracking units and on modeling the discrepancy between different tracking units as a locally static transformation error. Static error compensation is applied to the lower-priority tracking systems when high-priority trackers are not available. The method does not require high-end or carefully calibrated tracking units, and is able to effectively provide a comfortable augmented reality experience for users. A pilot study demonstrates the validity of the proposed method. ![]() |
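For the calibration abstract above, the key change relative to a plain least-squares (DLT-style) refinement is that each 2D alignment error is weighted by the inverse of its covariance, i.e., a Mahalanobis rather than Euclidean distance is minimized. The sketch below illustrates that idea as a whitened residual fed to a generic nonlinear least-squares solver; the parameterization and solver choice are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def mahalanobis_residuals(p_flat, world_pts, image_pts, covariances):
    """Whitened (Mahalanobis) reprojection residuals for refining a 3x4
    projection matrix. 'covariances' holds one 2x2 alignment-error covariance
    per correspondence; its inverse square root whitens the 2D error so the
    optimizer accounts for anisotropic alignment noise."""
    P = p_flat.reshape(3, 4)
    res = []
    for X, x, S in zip(world_pts, image_pts, covariances):
        Xh = np.append(X, 1.0)
        proj = P @ Xh
        e = proj[:2] / proj[2] - x                 # 2D reprojection error
        # Whitening: W = S^(-1/2); then ||W e||^2 is the Mahalanobis distance.
        w, V = np.linalg.eigh(S)
        W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        res.extend(W @ e)
    return np.asarray(res)

def refine_projection(P0, world_pts, image_pts, covariances):
    """Start from a DLT estimate P0 and minimize the Mahalanobis error.
    Note: P is only defined up to scale; a full implementation would also fix
    a normalization (e.g., unit Frobenius norm)."""
    sol = least_squares(mahalanobis_residuals, P0.ravel(),
                        args=(world_pts, image_pts, covariances))
    return sol.x.reshape(3, 4)
```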
|
Khooshabeh, Peter |
![]() Peter Khooshabeh, Igor Choromanski, Catherine Neubauer, David M. Krum, Ryan Spicer, and Julia Campbell (US Army Research Lab, USA; University of Southern California, USA) Here we describe the design and usability evaluation of a mixed reality prototype to simulate the role of a tank platoon leader, who is an individual who not only is a tank commander, but also directs a platoon of three other tanks with their own respective tank commanders. The domain of tank commander training has relied on physical simulators of the actual Abrams tank and encapsulates the whole crew. The TALK-ON system we describe here focuses on training communication skills of the leader in a simulated tank crew. We report results from a usability evaluation and discuss how they will inform our future work for collective tank training. ![]() ![]() |
|
Kim, Byungmoon |
![]() Jinwoong Jung, Joon-Young Lee, Byungmoon Kim, and Seungyong Lee (POSTECH, South Korea; Adobe Research, USA) With the recent advent of 360 cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, alignment of the scene orientation to the image axes is important for providing comfortable and pleasant viewing experiences using VR headsets and traditional displays. This paper presents an automatic framework for upright adjustment of 360 spherical panorama images without any prior information, such as depths and Gyro sensor data. We take the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second. ![]() |
|
Kim, June |
![]() June Kim and Tomasz Bednarz (UNSW, Australia; Queensland University of Technology, Australia; Data61 at CSIRO, Australia) Immersive technologies, and particularly Virtual Reality (VR), provide new and exciting ways to see the world. Here we introduce our research that successfully applied VR to biodiversity conservation science. Jaguars are an endangered species, and preserving the ecosystem of such endangered animals is both critical and compelling. With this in mind, we established a multidisciplinary VR project that combined data from indigenous villagers (jaguar expert group A), conventional knowledge of the jaguar ecosystem (from jaguar expert group B), and mathematical and statistical models. Our fascination lies in these questions: can we effectively bring together VR and analytical capabilities? Can VR be used to make this world a better place for living beings? Please enjoy our 360-degree images of jaguar habitats taken in the Peruvian Amazon. ![]() ![]() |
|
Kim, Shiho |
![]() Hojun Lee, Gyutae Ha, Sangho Lee, and Shiho Kim (Yonsei University, South Korea) We have implemented a mixed reality telepresence platform providing a user experience (UX) of exchanging emotional expressions as well as information among a group of participants. The implemented system provides a platform for experiencing an immersive live scene through a Head-Mounted Display (HMD), along with sensory information, to a VR HMD user at a remote place. Moreover, the user at a remote place can share and exchange emotional expressions with other users at another remote location by using 360° cameras, environmental sensors compliant with MPEG-V, and a game cloud server combined with a holographic display technique. We demonstrated that the emotional expressions of an HMD-wearing participant were shared with a group of other participants in the remote place while they watched a sports game on a big-screen TV. ![]() |
|
Kitazaki, Michiteru |
![]() Takeo Hamada, Michio Okada, and Michiteru Kitazaki (Toyohashi University of Technology, Japan) We present a novel assistive method for leading casual joggers by showing a virtual runner on a see-through head-mounted display that they wear. The runner moves at a constant pace specified in advance by the user, and its motion is synchronized with the user's own. Users can always visually check their pace by looking at it as a personal pacemaker, and they are motivated to keep running by regarding it as a jogging companion. Moreover, the proposed method addresses the safety problem of AR applications: most of the runner's body parts are transparent, so it does not obstruct the user's view. This study may thus contribute to augmenting the daily jogging experience. ![]() ![]() Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki (Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan) The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of the cutaneous sensation evoked by airflow during the real and virtual walk was measured. The airflow stimulus was presented to the participant together with passive vestibular motion and a visual presentation. The results suggest that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the seated participant than during a real walk, for both the single and the combined stimuli. The equivalent airflow speed for the seated participant was lower than the airflow speed during the real walk. ![]() |
|
Kitson, Alexandra |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) We describe here a pilot user study comparing five different locomotion interfaces for virtual reality (VR) locomotion. We compared a standard non-motion cueing interface, Joystick, with four leaning-based seated motion-cueing interfaces: NaviChair, MuvMan, Head-Directed and Swivel Chair. The aim of this mixed methods study was to investigate the usability and user experience of each interface, in order to better understand relevant factors and guide the design of future ground-based VR locomotion interfaces. We asked participants to give talk-aloud feedback and simultaneously recorded their responses while they were performing a search task in VR. Afterwards, participants completed an online questionnaire. Although the Joystick was rated as more comfortable and precise than the other interfaces, the leaning-based interfaces showed a trend to provide more enjoyment and a greater sense of self-motion. There were also potential issues of using velocity-control for rotations in leaning-based interfaces when using HMDs instead of stationary displays. Developers need to focus on improving the controllability and perceived safety of these seated motion cueing interfaces. ![]() ![]() Jacob Freiberg, Alexandra Kitson, and Bernhard E. Riecke (Simon Fraser University, Canada) With affordable high performance VR displays becoming commonplace, users are becoming increasingly aware of the need for well-designed locomotion interfaces that support these displays. After considering the needs of users, we quantitatively evaluated an embodied locomotion interface called the Navichair according to usability needs and fulfillment of system requirements. Specifically, we investigated influences of locomotion interfaces (joystick vs. an embodied motion cueing chair) and display type (HMD vs. projection screen) on a spatial updating pointing task. Our findings indicate that our embodied VR locomotion interface provided users with an immersive experience of a space without requiring a significant investment of set up time. Design lessons and future design goals of our interface are discussed. ![]() |
|
Kittsteiner, Thomas |
![]() Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments. ![]() |
|
Kiyokawa, Kiyoshi |
![]() Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Toshiyuki Amano, and Maki Sugimoto (Keio University, Japan; Osaka University, Japan; Wakayama University, Japan) We present a method for focal distance estimation of a freely-orienting eye using Purkinje-Sanson (PS) images, which are reflections of light on the inner structures of the eye. Using an infrared camera with a rigidly-fixed LED, our method creates an estimation model based on 3D gaze and the distance between reflections in the PS images that occur on the corneal surface and the anterior surface of the eye lens. The distance between these two reflections changes with focus, so we associate that information with the user's focal distance. Unlike conventional methods that mainly rely on 2D pupil size, which is sensitive to scene lighting, and on the fourth PS image, our method detects the third PS image, which is more representative of accommodation. Our feasibility study on a single user with a focal range of 15-45 cm shows that our method achieves mean and median absolute errors of 3.15 and 1.93 cm for a 10-degree viewing angle. The study also shows that our method is tolerant of environmental lighting changes. ![]() |
|
Klaudiny, Martin |
![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture. ![]() ![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. ![]() |
|
Knott, Brian |
![]() Charles Dunn and Brian Knott (YouVisit, USA) Spherical data compression methods for Virtual Reality (VR) currently leverage popular rectangular data encoding algorithms. Traditional compression algorithms have massive adoption and hardware support on computers and mobile devices. Efficiently utilizing these two-dimensional compression methods for spherical data necessitates a projection from the three-dimensional surface of a sphere to a two-dimensional rectangle. Any such projection affects the final resolution distribution of the data after decoding. Popular projections used for VR video benefit from mathematical or geometric simplicity, but result in suboptimal resolution distributions. We introduce a method for generating a projection to match a desired resolution function. This method allows for customized projections with smooth, continuous and optimal resolution functions. Compared to commonly used projections, our resolution-defined projections drastically improve compression ratios for any given quality. ![]() |
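The core of a resolution-defined projection is that the mapping from angle to pixel coordinate is obtained by integrating the desired resolution density, so pixels are allocated where resolution should be high. The sketch below illustrates this in one dimension (latitude to image row); the full spherical construction in the paper is more involved, and the example resolution function is purely hypothetical.

```python
import numpy as np

def latitude_mapping(resolution_fn, height, n_samples=1024):
    """Build a 1D projection (latitude -> pixel row) whose local resolution is
    proportional to a desired resolution function. The mapping is the
    normalized cumulative integral of the resolution density, so rows are
    allocated where resolution_fn is large. A 1D sketch of the idea only."""
    lat = np.linspace(-np.pi / 2, np.pi / 2, n_samples)
    density = np.maximum(resolution_fn(lat), 1e-9)
    # Cumulative trapezoidal integral of the density, normalized to [0, 1].
    cdf = np.concatenate(
        [[0.0], np.cumsum(0.5 * (density[1:] + density[:-1]) * np.diff(lat))])
    cdf /= cdf[-1]
    rows = cdf * (height - 1)
    fwd = lambda theta: np.interp(theta, lat, rows)  # latitude -> row
    inv = lambda v: np.interp(v, rows, lat)          # row -> latitude
    return fwd, inv

# Hypothetical usage: concentrate resolution near the equator of the sphere.
fwd, inv = latitude_mapping(lambda t: np.cos(t) ** 2 + 0.1, height=1024)
```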
|
Kobori, Norimasa |
![]() Norimasa Kobori, Daisuke Deguchi, Ichiro Ide, and Hiroshi Murase (Toyota, Japan; Nagoya University, Japan) We propose a novel marker for robotic grasping tasks which has the following three properties: (i) it is easy to find against a cluttered background, (ii) its posture can be calculated, and (iii) its size is compact. The proposed marker is composed of a random dot pattern and uses keypoint detection and scale estimation by Spectral SIFT for dot detection and data decoding. The data are encoded in the scale of the dots, and the same dots serve both marker detection and data decoding. As a result, the marker can be compact. We confirmed the effectiveness of the proposed marker through experiments. ![]() |
|
Kondo, Satoru |
![]() Takuya Handa, Kenji Murase, Makiko Azuma, Toshihiro Shimizu, Satoru Kondo, and Hiroyuki Shinoda (NHK, Japan; University of Tokyo, Japan) The main goal of our research is to develop a haptic display that makes it possible to convey the shapes, hardness, and textures of objects displayed on 3D TV. Our evolved device has three 5 mm diameter actuating spheres arranged in a triangular geometry on each of three fingertips (thumb, index finger, middle finger). In this paper, we give an overview of the novel haptic device and report first experimental results in which twelve subjects successfully recognized the sizes of cylinders and the side geometry of a cuboid and a hexagonal prism. ![]() ![]() |
|
Kooten, Kees van |
![]() Tom Vierjahn, Daniel Zielasko, Kees van Kooten, Peter Messmer, Bernd Hentschel, Torsten W. Kuhlen, and Benjamin Weyers (RWTH Aachen University, Germany; JARA-HPC, Germany; NVIDIA, Germany; NVIDIA, Switzerland) Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for an actual, seamless IV-integration can be derived. We validate the design space with three workflows investigated in our research projects. ![]() |
|
Kopper, Regis |
![]() Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This demo presents a fully immersive and interactive virtual environment (VE) – the ArcheoVR, which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow started with real-world data capture – laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed realistic 3D scene and interactive features that allow users to experience the virtual archeological site in real time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration. ![]() ![]() Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized equivalent design attributes of brightness, color and texture in order to assess to what extent the emotional response in a simulated environment is affected by the same parameters affecting real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support new, minimalist lifestyles of occupants, defined as the neo-nomads, aligned with their work experience in the digital domain through the generation of emotional experiences of spaces. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. ![]() ![]() ![]() David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) We propose Specimen Box, an interaction technique that allows users of world-fixed displays (such as CAVEs) to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that performance was significantly faster with Specimen Box.
Further, performance of the control technique was positively affected by experience with Specimen Box. ![]() ![]() Eduardo Zilles Borba, Andre Montes, Roseli de Deus Lopes, Marcelo Knorich Zuffo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This poster presents the conceptual process of developing Itapeva 3D, a Virtual Reality (VR) archeology experience. It describes the technical spectrum of the cyber-archeology process applied to the creation of a fully immersive and interactive virtual environment (VE), which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow starts with real-world data capture (laser scanners, drones and photogrammetry), continues with the transposition of the captured information into a 3D surface model capable of real-time rendering on head-mounted displays (HMDs), and ends with the design of interactive features allowing users to experience the virtual archeological site. The main objective of this VR model is to make it possible for the general public to feel what it means to explore an otherwise restricted and ephemeral place. Finally, we report preliminary results from an initial user observation. ![]() |
|
Koreska, Lars |
![]() Andreas Ryge, Lui Thomsen, Theis Berthelsen, Jonatan S. Hvass, Lars Koreska, Casper Vollmers, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we present a within-subjects study (n=26) comparing participants' experience of three kinds of haptic feedback (no haptic feedback, low-fidelity haptic feedback, and high-fidelity haptic feedback) simulating the impact between a virtual baseball bat and ball. We observed only a minor effect of high-fidelity versus low-fidelity haptic feedback, but haptic feedback generally enhanced realism and quality of experience. ![]() |
|
Kosek, Maggie |
![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture. ![]() ![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. ![]() |
|
Kostakos, Vassilis |
![]() Georgi V. Georgiev, Kaori Yamada, Toshiharu Taura, Vassilis Kostakos, Matti Pouke, Sylvia Tzvetanova Yung, and Timo Ojala (University of Oulu, Finland; Kobe University, Japan; University of Bedfordshire, UK) Here we propose an interactive system to augment creative design thinking using networks of concepts in a virtual reality environment. We discuss how to augment the human capacity to be creative through dynamic suggestions providing new and original ideas, based on specific semantic network characteristics. We outline directions to explore the structures of the concept network and their connection to creative concept generation. It is expected that augmented creative thinking will allow the user to have more original ideas and thus be more innovative. ![]() |
|
Kranz, Matthias |
![]() Jens Grubert and Matthias Kranz (Coburg University, Germany; University of Passau, Germany) While we witness significant changes in display technologies, to date the majority of display form factors remain flat. The research community has investigated other geometric display configurations, giving rise to cubic displays that create the illusion of a 3D virtual scene within the cube. We present a self-contained mobile perspective cubic display (mpCubee) assembled from multiple smartphones. We achieve perspective-correct projection of 3D content through head tracking using the smartphones' built-in cameras. Furthermore, our prototype allows users to spatially manipulate 3D objects along individual axes thanks to the orthogonal configuration of the touch displays. ![]() ![]() Jens Grubert and Matthias Kranz (Coburg University, Germany; University of Passau, Germany) We present a demonstration of HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as an external reference frame for the registration of multiple mobile devices into a common coordinate system. Our approach allows for dynamic repositioning of devices during runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. This way, HeadPhones enables spatially-aware multi-display applications in mobile contexts. ![]() ![]() ![]() |
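For the mpCubee entry above, perspective-correct rendering on each panel follows from an off-axis (head-coupled) frustum computed from the tracked eye position relative to that panel's corners. The sketch below shows the standard generalized-projection construction of such a frustum; whether the prototype computes it exactly this way is an assumption.

```python
import numpy as np

def off_axis_view_projection(eye, lower_left, lower_right, upper_left,
                             near=0.01, far=100.0):
    """Head-coupled (off-axis) projection for one display panel, following the
    standard generalized perspective projection. Inputs are the tracked eye
    position and the panel corners in one common coordinate frame (metres).
    Returns (projection, view) 4x4 matrices in OpenGL conventions."""
    eye, lower_left, lower_right, upper_left = (
        np.asarray(v, dtype=float) for v in (eye, lower_left, lower_right, upper_left))
    vr = lower_right - lower_left; vr = vr / np.linalg.norm(vr)   # screen right
    vu = upper_left - lower_left;  vu = vu / np.linalg.norm(vu)   # screen up
    vn = np.cross(vr, vu);         vn = vn / np.linalg.norm(vn)   # screen normal
    va, vb, vc = lower_left - eye, lower_right - eye, upper_left - eye
    d = -np.dot(va, vn)                      # distance from eye to screen plane
    l = np.dot(vr, va) * near / d            # frustum extents on the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    proj = np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0]])
    # View matrix: rotate the world into the screen's basis, then move the eye
    # to the origin; the renderer multiplies proj @ view before drawing.
    rot = np.eye(4)
    rot[0, :3], rot[1, :3], rot[2, :3] = vr, vu, vn
    trans = np.eye(4)
    trans[:3, 3] = -eye
    return proj, rot @ trans
```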
|
Kruijff, Ernst |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) We describe here a pilot user study comparing five different locomotion interfaces for virtual reality (VR) locomotion. We compared a standard non-motion cueing interface, Joystick, with four leaning-based seated motion-cueing interfaces: NaviChair, MuvMan, Head-Directed and Swivel Chair. The aim of this mixed methods study was to investigate the usability and user experience of each interface, in order to better understand relevant factors and guide the design of future ground-based VR locomotion interfaces. We asked participants to give talk-aloud feedback and simultaneously recorded their responses while they were performing a search task in VR. Afterwards, participants completed an online questionnaire. Although the Joystick was rated as more comfortable and precise than the other interfaces, the leaning-based interfaces showed a trend to provide more enjoyment and a greater sense of self-motion. There were also potential issues of using velocity-control for rotations in leaning-based interfaces when using HMDs instead of stationary displays. Developers need to focus on improving the controllability and perceived safety of these seated motion cueing interfaces. ![]() ![]() Ernst Kruijff and Bernhard E. Riecke (Bonn-Rhein-Sieg University of Applied Sciences, Germany; Simon Fraser University, Canada) In this course, we will take a detailed look at various breeds of spatial navigation interfaces that allow for locomotion in digital 3D environments such as games, virtual environments or even the exploration of abstract data sets. We will closely look into the basics of navigation, unraveling the psychophysics (including wayfinding) and actual navigation (travel) aspects. The theoretical foundations form the basis for the practical skill set we will develop, by providing an in-depth discussion of navigation devices and techniques, and a step-by-step discussion of multiple real-world case studies. Doing so, we will cover the full range of navigation techniques from handheld to full-body, highly engaging and partly unconventional methods and tackle spatial navigation with hands-on-experience and tips for design and validation of novel interfaces. In particular, we will be looking at affordable setups, rapid prototyping methods and ways to “trick” out users to enable a realistic feeling of self-motion in the explored environments. As such, the course unites the theory and practice of spatial navigation, serving as entry point to understand and improve upon currently existing methods for the application domain at hand. ![]() |
|
Krum, David M. |
![]() Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga (University of Southern California, USA) Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next-step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR), is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest through a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data. ![]() ![]() David M. Krum, Thai Phan, and Sin-Hwa Kang (University of Southern California, USA) As interaction techniques involving scaling of motor space in virtual reality are becoming more prevalent, it is important to understand how individuals adapt to such scalings and how they re-adapt back to non-scaled norms. This preliminary work examines how individuals, performing a targeted ball throwing task, adapted to addition and removal of a translational scaling of the ball's forward flight. This was examined under various conditions: flight of the ball shown with no delay, hidden flight of the ball with no delay, and hidden flight with a 2 second delay. Hiding the ball’s flight, as well as the delay, created disruptions in the ability of the participants to perform the task and adapt to new scaling conditions. ![]() ![]() Peter Khooshabeh, Igor Choromanski, Catherine Neubauer, David M. Krum, Ryan Spicer, and Julia Campbell (US Army Research Lab, USA; University of Southern California, USA) Here we describe the design and usability evaluation of a mixed reality prototype to simulate the role of a tank platoon leader, who is an individual who not only is a tank commander, but also directs a platoon of three other tanks with their own respective tank commanders. The domain of tank commander training has relied on physical simulators of the actual Abrams tank and encapsulates the whole crew. The TALK-ON system we describe here focuses on training communication skills of the leader in a simulated tank crew. 
We report results from a usability evaluation and discuss how they will inform our future work for collective tank training. ![]() ![]() ![]() Ryan Spicer, Julia Anglin, David M. Krum, and Sook-Lei Liew (University of Southern California, USA) There are few effective treatments for rehabilitation of severe motor impairment after stroke. We developed a novel closed-loop neurofeedback system called REINVENT to promote motor recovery in this population. REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) harnesses recent advances in neuroscience, wearable sensors, and virtual technology and integrates low-cost electroencephalography (EEG) and electromyography (EMG) sensors with feedback in a head-mounted virtual reality display (VR) to provide neurofeedback when an individual’s neuromuscular signals indicate movement attempt, even in the absence of actual movement. Here we describe the REINVENT prototype and provide evidence of the feasibility and safety of using REINVENT with older adults. ![]() |
|
Kuhlen, Torsten W. |
![]() Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments. ![]() ![]() Sebastian Freitag, Clemens Löbbert, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Viewpoint quality estimation methods allow the determination of the most informative position in a scene. However, a single position usually cannot represent an entire scene, requiring instead a set of several viewpoints. Measuring the quality of such a set of views, however, is not trivial, and the computation of an optimal set of views is an NP-hard problem. Therefore, in this work, we propose three methods to estimate the quality of a set of views. Furthermore, we evaluate three approaches for computing an approximation to the optimal set (two of them new) regarding effectiveness and efficiency. ![]() ![]() Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) The use of non-verbal vocal input (NVVI) as a hand-free trigger approach has proven to be valuable in previous work [Zielasko2015]. Nevertheless, BlowClick's original detection method is vulnerable to false positives and, thus, is limited in its potential use, e.g., together with acoustic feedback for the trigger. Therefore, we extend the existing approach by adding common machine learning methods. We found that a support vector machine (SVM) with Gaussian kernel performs best for detecting blowing with at least the same latency and more precision as before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence. To evaluate the advanced trigger technique, we conducted a user study (n=33). The results confirm that it is a reliable trigger; alone and as part of a hands-free point-and-click interface. ![]() ![]() Tom Vierjahn, Daniel Zielasko, Kees van Kooten, Peter Messmer, Bernd Hentschel, Torsten W. Kuhlen, and Benjamin Weyers (RWTH Aachen University, Germany; JARA-HPC, Germany; NVIDIA, Germany; NVIDIA, Switzerland) Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for an actual, seamless IV-integration can be derived. We validate the design space with three workflows investigated in our research projects. ![]() ![]() Sebastian Freitag, Benjamin Weyers, and Torsten W. 
Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) The manual adjustment of travel speed to cover medium or large distances in virtual environments may increase cognitive load, and manual travel at high speeds can lead to cybersickness due to inaccurate steering. In this work, we present an approach to quickly pass regions where the environment does not change much, using automated suggestions based on the computation of common visibility. In a user study, we show that our method can reduce cybersickness when compared with manual speed control. ![]() ![]() Sebastian Pick, Andrew S. Puika, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany; Shinta VR, Indonesia) Choosing an adequate system control technique is crucial to support complex interaction scenarios in virtual reality applications. In this work, we compare an existing hierarchical pie-menu-based approach with a speech-recognition-based one in terms of task performance and user experience in a formal user study. As testbed, we use a factory planning application featuring a large set of system control options. ![]() |
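The BlowClick follow-up in the entry above reports that a support vector machine with a Gaussian kernel detects blowing most reliably from non-verbal vocal input. The sketch below shows that setup with scikit-learn on synthetic stand-in frames; the acoustic features (RMS energy, zero-crossing rate, spectral centroid) and all parameter values are assumptions, since the abstract does not specify the feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_features(frame, fs=16000):
    """Simple per-frame audio features (assumed; the paper's feature set is
    not given): RMS energy, zero-crossing rate, spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid])

# Gaussian (RBF) kernel SVM, as reported to work best in the paper.
detector = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Synthetic stand-in data: label 1 = blowing frames, 0 = other sounds.
rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 512))
frames[:100] *= 4.0                      # pretend blowing frames are louder
X = np.vstack([frame_features(f) for f in frames])
y = np.array([1] * 100 + [0] * 100)
detector.fit(X, y)
print(detector.predict(X[:3]))
```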
|
Kumazawa, Itsuo |
![]() Itsuo Kumazawa, Toshihiro Kai, Yoshikazu Onuki, and Shunsuke Ono (Tokyo Institute of Technology, Japan) The frame rate of existing stereo cameras is not high enough to track quick hand or finger actions, and finding correspondence between stereo images to compute distance is computationally expensive. Recently commercialized 3D position sensors such as TOF cameras or the Leap Motion need strong illumination to ensure sufficient optical energy for high-frame-rate sensing. To overcome these problems, this paper proposes using a pair of optical mouse sensors as a stereo image sensor to measure 3D velocity and uses it to extrapolate the 3D position measured by a low-frame-rate stereo camera. It is shown that quick hand actions are tracked under ordinary indoor lighting conditions. As 2D velocities are computed inside the optical mouse sensors, computation and communication costs are drastically reduced. ![]() ![]() Itsuo Kumazawa, Souma Suzuki, Yoshikazu Onuki, and Shunsuke Ono (Tokyo Institute of Technology, Japan) This paper presents a simple but effective way of enhancing tactile stimulus by a mechanism with springs that preserves elastic energy charged in a prior energy-charging phase and discharges it to enhance the force that hits a finger in the stimulating phase. With this mechanism, a small and light stimulator attached to the fingertip is developed and demonstrated to generate tactile feedback strong enough to make people feel as if their fingers collide with a virtual object. It is also shown that the durations of the two phases can be as short as a few milliseconds so that the latency in tactile feedback can be negligible. The performance of the mechanism and the effectiveness of its tactile feedback are evaluated for in-air key-press and swipe operations. ![]() ![]() Yoshikazu Onuki, Shunsuke Ono, and Itsuo Kumazawa (Tokyo Institute of Technology, Japan) Simulator sickness is an issue in virtual reality environments. In a virtual world, sensory conflict between visual sensation and self-motion perception occurs readily. Contradiction between visual and vestibular sensation is a dominant cause of motion sickness. Vection is a visually evoked illusion of self-motion. Vection occurs when a stationary human experiences locomotor stimulation over a wide area of the field of view, and senses motion when in fact there is none. Strong vection has been associated with simulator sickness. In this poster, the authors present results of a pilot study based on the hypothesis that simulator sickness can be mitigated by passively responding to body sway. Commercially available air cushions were applied in VR environments. Measurable mitigation of simulator sickness was achieved by physically responding to vection. Allowing body sway encourages moderating the sensory conflict between visual sensation and self-motion perception. Also, the shapes of air cushions on seat backs were found to be an important variable. ![]() |
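The first entry above extrapolates the 3D position from a low-frame-rate stereo camera using the 3D velocity measured by the optical-mouse stereo sensor. A minimal sketch of such an extrapolation, assuming simple piecewise-constant velocity integration between camera updates, follows; the sampling rates and integration scheme are illustrative, not taken from the paper.

```python
import numpy as np

def extrapolate_position(p_cam, t_cam, velocity_samples, t_query):
    """Extrapolate the 3D position measured by a low-frame-rate stereo camera
    at time `t_cam` using high-rate 3D velocity samples (t, vx, vy, vz) from
    the optical-mouse stereo sensor up to `t_query`. Piecewise-constant
    velocity integration is an assumption; the paper does not detail the
    integration scheme."""
    p = np.asarray(p_cam, dtype=float)
    t_prev = t_cam
    for t, vx, vy, vz in velocity_samples:
        if t <= t_cam:
            continue
        if t > t_query:
            break
        p += np.array([vx, vy, vz]) * (t - t_prev)
        t_prev = t
    return p

# Example: camera fix at t = 0.00 s, velocity samples every 2 ms.
samples = [(0.002 * k, 0.1, 0.0, -0.05) for k in range(1, 11)]
print(extrapolate_position([0.0, 0.0, 0.5], 0.0, samples, 0.02))
```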
|
Kurosawa, Masato |
![]() Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki (Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan) The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of the cutaneous sensation evoked by airflow during a real and a virtual walk was measured. The airflow stimulus was presented to the participant together with passive vestibular motion and visual presentation. The results suggest that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the sitting participant than during a real walk, for both the single and the combined stimuli. The equivalent airflow speed for the sitting participant was lower than the airflow speed in the real walk. ![]() |
|
Lamounier Jr., Edgard |
![]() Ígor Andrade Moraes, Alexandre Cardoso, Edgard Lamounier Jr., Milton Miranda Neto, and Isabela Cristina dos Santos Peres (Federal University of Uberlândia, Brazil) The benefits and advantages offered by Virtual Reality technology have drawn the attention of professionals from several scientific fields, including power systems, for both training and maintenance. For this purpose, 3D modeling stands out as an essential step in the conception of a Virtual Environment. Given the complexity of Hydroelectric Power Plants and Virtual Reality's contribution to the industrial area, planning the three-dimensional construction of the virtual environment becomes necessary. Thus, this paper presents modeling techniques applicable to several hydroelectric structures, aiming to optimize the 3D construction of the target complexes. ![]() |
|
Lampotang, Samsun |
![]() Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communications behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting ![]() ![]() |
|
Langbehn, Eike |
![]() Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke (University of Hamburg, Germany; University of Central Florida, USA) Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and to apply RDW to room-scale VR, i. e., up to approximately 5m × 5m. This is done by using curved paths in the VE instead of straight paths, and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25m × 25m. ![]() ![]() Omar Janeh, Eike Langbehn, Frank Steinicke, Gerd Bruder, Alessandro Gulberti, and Monika Poetter-Nerger (University of Hamburg, Germany; University of Central Florida, USA; University Medical Center Hamburg-Eppendorf, Germany) Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the virtual environment (VE) on walking biomechanics of older adults. Three primary domains (pace, base of support and phase) of spatio-temporal and temporo-phasic parameters were used to evaluate gait performance. Our results show similar results in pace and phasic domains when older adults walk in the VE in the isometric mapping condition compared to the corresponding parameters in the real world. We found significant differences in base of support for our user group between walking in the VE and real world. For non-isometric mappings we found an increased divergence of gait parameters in all domains correlating with the up- or down-scaled velocity of visual self-motion feedback. ![]() |
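The room-scale redirected walking demo above relies on mapping curved virtual paths onto tighter physical arcs. The sketch below uses the common redirected-walking relation between virtual radius, curvature (bending) gain, and the physical radius that must fit the tracking space; the particular gain value and room size are illustrative, and the usable gain ultimately depends on the detection thresholds studied in the paper.

```python
def real_radius(virtual_radius, curvature_gain):
    """Radius of the physical arc a user walks so that a virtual arc of
    `virtual_radius` is perceived, with the gain taken as the ratio of real
    to virtual path curvature (a bending-gain-style formulation; values are
    illustrative, not the paper's)."""
    real_curvature = curvature_gain * (1.0 / virtual_radius)
    return 1.0 / real_curvature

def fits_tracking_space(radius, room_side):
    """True if a circle of this radius fits a square room of side
    `room_side` metres (ignoring any safety margin)."""
    return 2.0 * radius <= room_side

# Example: mapping a gently curved 10 m virtual arc onto room-scale VR.
r_real = real_radius(virtual_radius=10.0, curvature_gain=4.0)
print(round(r_real, 2), fits_tracking_space(r_real, room_side=5.0))
```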
|
Larsen, Oliver |
![]() Jonatan S. Hvass, Oliver Larsen, Kasper B. Vendelbo, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) Previous research on visual realism and presence has not involved scenarios, graphics, and hardware representative of commercially available VR games. This poster details a between-subjects study (n=50) exploring if polygon count and texture resolution influence presence during exposure to a VR game. The results suggest that a higher polygon count and texture resolution increased presence as assessed by means of self-reports and physiological measures. ![]() |
|
Latoschik, Marc Erich |
![]() Daniel Roth, Kristoffer Waldow, Marc Erich Latoschik, Arnulph Fuhrmann, and Gary Bente (University of Würzburg, Germany; University of Cologne, Germany; TH Köln, Germany; Michigan State University, USA) In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments. The proposed system is capable of tracking, transmitting, and representing body motion, facial expressions, and voice via virtual avatars, and thus supports the transmission of human behaviors that are available in real-life social interactions. Users are immersed using active stereoscopic rendering projected onto a life-size projection plane, utilizing the concept of “fish tank” virtual reality (VR). Our prototype connects two separate rooms and allows for socially immersive avatar-mediated communication in VR. ![]() |
|
Lau, Henry Y. K. |
![]() Adrian K. T. Ng, Leith K. Y. Chan, and Henry Y. K. Lau (University of Hong Kong, China) Perceived distance in an immersive virtual reality system is generally underestimated relative to the actual distance. Approaches have been found to provide users with better dimensional perception. One method used in head-mounted displays is to interact by walking with visual feedback, but it is not suitable for a CAVE-like system, such as the imseCAVE, with confined space for walking. A verbal corrective feedback mechanism is proposed. The results show that estimation accuracy generally improves after eight feedback trials, although some estimates become overestimates. One possible explanation is the need for more verbal feedback trials. Further research on a top-down approach to improving depth perception is suggested. ![]() |
|
Leal, Steven |
![]() Jinsil Hwaryoung Seo, Brian Smith, Margaret Cook, Michelle Pine, Erica Malone, Steven Leal, and Jinkyo Suh (Texas A&M University, USA) We present Anatomy Builder VR that examines how a virtual reality (VR) system can support embodied learning in anatomy education. The backbone of the project is to pursue an alternative constructivist pedagogical model for learning canine anatomy. The main focus of the study was to identify and assemble bones in the live-animal orientation, using real thoracic limb bones in a bone box and digital pelvic limb bones in the Anatomy Builder VR. Eleven college students participated in the study. The pilot study showed that participants most enjoyed interacting with anatomical contents within the VR program. Participants spent less time assembling bones in the VR, and instead spent a longer time tuning the orientation of each VR bone in the 3D space. This study showed how a constructivist method could support anatomy education while using virtual reality technology in an active and experiential way. ![]() |
|
Lécuyer, Anatole |
![]() Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer (University of Évry Val d'Essonne, France; Inria, France) Producing sensations of motion in driving simulators often requires using cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists in applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached at the extremity of a standard 3DOF haptic display. Haptic effects were designed to match notably the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigations in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts. ![]() |
|
Lee, Hojun |
![]() Hojun Lee, Gyutae Ha, Sangho Lee, and Shiho Kim (Yonsei University, South Korea) We have implemented a mixed reality telepresence platform providing a user experience (UX) of exchanging emotional expressions as well as information among a group of participants. The implemented system provides a platform to experience an immersive live scene through a Head-Mounted Display (HMD) and sensory information to a VR HMD user at a remote place. Moreover, the user at a remote place can share and exchange emotional expressions with other users at another remote location by using 360° cameras, environmental sensors compliant with MPEG-V, and a game cloud server combined with a technique of holographic display. We demonstrated that emotional expressions of an HMD worn participant were shared with a group of other participants in the remote place while watching a sports game on a big screen TV. ![]() |
|
Lee, Joon-Young |
![]() Jinwoong Jung, Joon-Young Lee, Byungmoon Kim, and Seungyong Lee (POSTECH, South Korea; Adobe Research, USA) With the recent advent of 360 cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, alignment of the scene orientation to the image axes is important for providing comfortable and pleasant viewing experiences using VR headsets and traditional displays. This paper presents an automatic framework for upright adjustment of 360 spherical panorama images without any prior information, such as depths and Gyro sensor data. We take the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second. ![]() |
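The upright-adjustment paper above formulates a cost over detected horizontal and vertical lines under the Atlanta world assumption, but the abstract does not give the cost itself. Below is a minimal sketch of one such formulation: after a pitch/roll correction, vertical line directions should align with the global up axis and horizontal directions should be orthogonal to it. The parameterization, cost terms, and optimizer are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def upright_cost(angles, vertical_dirs, horizontal_dirs):
    """One possible upright-adjustment cost (assumed, not the paper's exact
    formulation): after the pitch/roll rotation, vertical line directions
    should be parallel to the world up axis and horizontal directions
    orthogonal to it."""
    R = Rotation.from_euler("xz", angles).as_matrix()   # pitch, roll (radians)
    up = np.array([0.0, 1.0, 0.0])
    cost = 0.0
    for d in vertical_dirs:
        cost += 1.0 - np.dot(R @ d, up) ** 2
    for d in horizontal_dirs:
        cost += np.dot(R @ d, up) ** 2
    return cost

# Example: line directions from a panorama tilted ~10 degrees about x.
tilt = Rotation.from_euler("x", 10, degrees=True).as_matrix()
verticals = [tilt @ np.array([0.0, 1.0, 0.0])] * 5
horizontals = [tilt @ np.array([1.0, 0.0, 0.0]), tilt @ np.array([0.0, 0.0, 1.0])]
res = minimize(upright_cost, x0=[0.0, 0.0], args=(verticals, horizontals))
print(np.degrees(res.x))   # recovered correction angles (degrees)
```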
|
Lee, Myungho |
![]() Myungho Lee, Gerd Bruder, and Gregory F. Welch (University of Central Florida, USA) We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment where a virtual human (VH) walked toward the participants and paced back and forth within their social space. We compared three conditions as follows: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; while participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space. ![]() |
|
Lee, Sangho |
![]() Hojun Lee, Gyutae Ha, Sangho Lee, and Shiho Kim (Yonsei University, South Korea) We have implemented a mixed reality telepresence platform providing a user experience (UX) of exchanging emotional expressions as well as information among a group of participants. The implemented system provides a platform to experience an immersive live scene through a Head-Mounted Display (HMD) and sensory information to a VR HMD user at a remote place. Moreover, the user at a remote place can share and exchange emotional expressions with other users at another remote location by using 360° cameras, environmental sensors compliant with MPEG-V, and a game cloud server combined with a technique of holographic display. We demonstrated that emotional expressions of an HMD worn participant were shared with a group of other participants in the remote place while watching a sports game on a big screen TV. ![]() |
|
Lee, Seungyong |
![]() Jinwoong Jung, Joon-Young Lee, Byungmoon Kim, and Seungyong Lee (POSTECH, South Korea; Adobe Research, USA) With the recent advent of 360 cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, alignment of the scene orientation to the image axes is important for providing comfortable and pleasant viewing experiences using VR headsets and traditional displays. This paper presents an automatic framework for upright adjustment of 360 spherical panorama images without any prior information, such as depths and Gyro sensor data. We take the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second. ![]() |
|
Li, Gang |
![]() Gang Li, Yue Liu, and Yongtian Wang (Beijing Institute of Technology, China) View management techniques are commonly used for labelling objects in augmented reality environments. Combined with image analysis, search space, and adaptive representations, they can be utilized to achieve the desired labelling tasks. However, the evaluation of different search space methods for labelling is still an open problem. In this paper, we propose an image-analysis-based view management method, which first adopts image processing to superimpose 2D labels onto the specific object. We then apply three search space methods to an augmented reality scenario. Without the requirement of setting rules and constraints for occlusion among the labels, the results of the three search space methods are evaluated using objective analysis of the related parameters. The evaluation results indicate that different search space methods can generate different time costs and occlusion, thereby affecting the final labelling effects. ![]() |
|
Li, Ye |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient since it is difficult to deliver “motion” information in traditional ways and in the ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert’s motion and delivers to students the captured motion in multi-modal forms in immersive CAVE, HMD as well as ordinary PC environments. The students’ motions are captured too for quality assessment and utilized to form a virtual collaborative learning atmosphere. We built up a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance the learning efficiency by up to 17.4% and the learning quality by up to 32.3%. ![]() |
|
Liew, Sook-Lei |
![]() Ryan Spicer, Julia Anglin, David M. Krum, and Sook-Lei Liew (University of Southern California, USA) There are few effective treatments for rehabilitation of severe motor impairment after stroke. We developed a novel closed-loop neurofeedback system called REINVENT to promote motor recovery in this population. REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) harnesses recent advances in neuroscience, wearable sensors, and virtual technology and integrates low-cost electroencephalography (EEG) and electromyography (EMG) sensors with feedback in a head-mounted virtual reality display (VR) to provide neurofeedback when an individual’s neuromuscular signals indicate movement attempt, even in the absence of actual movement. Here we describe the REINVENT prototype and provide evidence of the feasibility and safety of using REINVENT with older adults. ![]() ![]() Julia Anglin, David Saldana, Allie Schmiesing, and Sook-Lei Liew (University of Southern California, USA) Immersive, head-mounted virtual reality (HMD-VR) can be a potentially useful tool for motor rehabilitation. However, it is unclear whether the motor skills learned in HMD-VR transfer to the non-virtual world and vice-versa. Here we used a well-established test of skilled motor learning, the Sequential Visual Isometric Pinch Task (SVIPT), to train individuals in either an HMD-VR or conventional training (CT) environment. Participants were then tested in both environments. Our results show that participants who train in the CT environment have an improvement in motor performance when they transfer to the HMD-VR environment. In contrast, participants who train in the HMD-VR environment show a decrease in skill level when transferring to the CT environment. This has implications for how training in HMD-VR and CT may affect performance in different environments. ![]() |
|
Lind, Rasmus B. |
![]() Rasmus B. Lind, Victor Milesen, Dina M. Smed, Simone P. Vinkel, Francesco Grani, Niels C. Nilsson, Lars Reng, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we propose an experiment that evaluates the influence of audience noise on the feeling of presence and the perceived quality in a virtual reality concert experience delivered using Wave Field Synthesis. A 360 degree video of a live rock concert from a local band was recorded. Single sound sources from the stage and the PA system were recorded, as well as the audience noise, and impulse responses of the concert venue. The audience noise was implemented in the production phase. A comparative study compared an experience with and without audience noise. In a between-subject experiment with 30 participants we found that audience noise does not have a significant impact on presence. However, qualitative evaluations show that the naturalness of the sonic experience delivered through wavefield synthesis had a positive impact on the participants. ![]() |
|
Lindemann, Patrick |
![]() Patrick Lindemann and Gerhard Rigoll (TU Munich, Germany) We anticipate advancements in mixed reality device technology which might benefit driver-car interaction scenarios and present a simulated diminished reality interface for car drivers. It runs in a custom driving simulation and allows drivers to perceive otherwise occluded objects of the environment through the car body. We expect to obtain insights that will be relevant to future real-world applications. We conducted a pre-study with participants performing a driving task with the prototype in a CAVE-like virtual environment. Users preferred large-sized see-through areas over small ones but had differing opinions on the level of transparency to use. In future work, we plan additional evaluations of the driving performance and will further extend the simulation. ![]() |
|
Liu, Sen |
![]() Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He (University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore) Learning “motion” online or from video tutorials is usually inefficient since it is difficult to deliver “motion” information in traditional ways and in the ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures the Taichi expert’s motion and delivers to students the captured motion in multi-modal forms in immersive CAVE, HMD as well as ordinary PC environments. The students’ motions are captured too for quality assessment and utilized to form a virtual collaborative learning atmosphere. We built up a Taichi motion dataset with 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance the learning efficiency by up to 17.4% and the learning quality by up to 32.3%. ![]() |
|
Liu, Shiguang |
![]() Kai Wang, Haonan Cheng, and Shiguang Liu (Tianjin University, China) This paper presents a novel framework to generate the sound of outdoor natural scenes, such as waterfalls and oceans. Our method first simulates the liquid with a grid-based method. Then, based on the movement of the liquid, we generate seed particles that represent bubbles, foams, or splashes. Next, we assign each seed particle a radius with a new radius distribution model. By calculating the bubbles' pressure waves we generate the sound. Experiments demonstrate that our framework can efficiently synthesize the sounds of natural scenes. ![]() |
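The abstract above synthesizes sound from the pressure waves of bubbles whose radii come from a new distribution model. The sketch below illustrates the generic acoustic-bubble approach often used in this setting: each bubble radius maps to a Minnaert resonance frequency rendered as a damped sinusoid. The damping constant, radius range, and mixing scheme are assumptions and do not reproduce the paper's model.

```python
import numpy as np

def bubble_sound(radius, fs=44100, duration=0.05,
                 rho=1000.0, p0=101325.0, gamma=1.4):
    """Damped sinusoid for one bubble of the given radius (metres), using the
    Minnaert resonance frequency. A generic acoustic-bubble model used only
    for illustration."""
    f0 = (1.0 / (2.0 * np.pi * radius)) * np.sqrt(3.0 * gamma * p0 / rho)
    t = np.arange(int(fs * duration)) / fs
    decay = 0.01 * f0          # crude damping constant (assumption)
    return np.sin(2.0 * np.pi * f0 * t) * np.exp(-decay * t)

def scene_sound(radii, fs=44100, length=0.5):
    """Mix randomly timed bubble impulses into one normalized buffer."""
    rng = np.random.default_rng(0)
    out = np.zeros(int(fs * length))
    for r in radii:
        s = bubble_sound(r, fs)
        start = rng.integers(0, len(out) - len(s))
        out[start:start + len(s)] += s
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: 200 bubbles with radii between 0.5 mm and 5 mm.
audio = scene_sound(np.random.default_rng(1).uniform(5e-4, 5e-3, size=200))
print(audio.shape)
```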
|
Liu, Yue |
![]() Zhenliang Zhang, Dongdong Weng, Yue Liu, Yongtian Wang, and Xinjun Zhao (Beijing Institute of Technology, China; Science and Technology on Complex Land Systems Simulation Laboratory, China) The most commonly used single point active alignment method (SPAAM) is based on a static pinhole camera model, in which it is assumed that both the eye and the HMD are fixed. This limits the calibration precision. In this work, we propose a dynamic pinhole camera model accounting for the fact that the human eye undergoes a noticeable displacement over the course of the calibration process. Based on this camera model, we propose a new calibration data acquisition method called region-induced data enhancement (RIDE) to revise the calibration data. The experimental results show that the proposed dynamic model performs better than the traditional static model in actual calibration. ![]() ![]() Jie Guo, Dongdong Weng, Henry Been-Lirn Duh, Yue Liu, and Yongtian Wang (Beijing Institute of Technology, China; La Trobe University, Australia) Virtual reality systems have a few negative effects that make people uncomfortable. In this paper, we investigated visual fatigue when wearing head-mounted displays (HMDs) and compared the results with those from smartphones. Forty subjects were recruited and divided into two different groups. A visual fatigue scale was used to assess the subjects' performance. The results indicated that visual fatigue caused by the conflict between focal distance and vergence distance was less severe than visual fatigue caused by long-term focus without accommodation. ![]() ![]() Gang Li, Yue Liu, and Yongtian Wang (Beijing Institute of Technology, China) View management techniques are commonly used for labelling objects in augmented reality environments. Combined with image analysis, search space, and adaptive representations, they can be utilized to achieve the desired labelling tasks. However, the evaluation of different search space methods for labelling is still an open problem. In this paper, we propose an image-analysis-based view management method, which first adopts image processing to superimpose 2D labels onto the specific object. We then apply three search space methods to an augmented reality scenario. Without the requirement of setting rules and constraints for occlusion among the labels, the results of the three search space methods are evaluated using objective analysis of the related parameters. The evaluation results indicate that different search space methods can generate different time costs and occlusion, thereby affecting the final labelling effects. ![]() |
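The SPAAM-based calibration work above builds on the classic single point active alignment method, which recovers a 3x4 eye-display projection from user-aligned 2D/3D correspondences via a direct linear transform. The sketch below shows only that static baseline; the dynamic pinhole model and the RIDE data revision proposed in the paper are not reproduced.

```python
import numpy as np

def spaam_projection(points_3d, points_2d):
    """Classic (static) SPAAM: estimate the 3x4 eye-display projection G from
    tracked 3D points and the 2D screen positions aligned by the user, via
    the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)    # right singular vector of smallest singular value

def project(G, point_3d):
    h = G @ np.append(point_3d, 1.0)
    return h[:2] / h[2]

# Example with a synthetic ground-truth projection and noiseless alignments.
G_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, -5.0],
                   [0.0, 0.0, 1.0, 0.0]])
pts3d = np.random.default_rng(2).uniform(-1, 1, size=(12, 3)) + [0, 0, 3]
pts2d = [project(G_true, p) for p in pts3d]
G_est = spaam_projection(pts3d, pts2d)
print(np.allclose(project(G_est, pts3d[0]), pts2d[0], atol=1e-6))
```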
|
Löbbert, Clemens |
![]() Sebastian Freitag, Clemens Löbbert, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Viewpoint quality estimation methods allow the determination of the most informative position in a scene. However, a single position usually cannot represent an entire scene, requiring instead a set of several viewpoints. Measuring the quality of such a set of views, however, is not trivial, and the computation of an optimal set of views is an NP-hard problem. Therefore, in this work, we propose three methods to estimate the quality of a set of views. Furthermore, we evaluate three approaches for computing an approximation to the optimal set (two of them new) regarding effectiveness and efficiency. ![]() |
|
Lok, Benjamin |
![]() Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok (University of Florida, USA; University of Virginia, USA) In the past few years, advances have been made on how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine if the MRHs could influence the residents' closed loop communication behavior. Our results showed that residents' closed loop communications behaviors were influenced by MRHs. Additionally, we found there was a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries for how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting ![]() ![]() |
|
Lombart, Cindy |
![]() Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau (Ecole Centrale de Nantes, France; Audencia Business School, France) In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e. misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' "level of abnormality" that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that perform user studies within real shops, since fresh produce such as FaVs tends to rot rapidly, preventing studies from being repeated or run for a long time. In order to overcome those limitations, we created a virtual grocery store with a fresh FaVs section where 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented either "normal", "slightly misshaped", "misshaped" or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs whatever their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality. ![]() |
|
Lonie, David |
![]() Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications. ![]() |
|
Lopes, Roseli |
![]() Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) This demo presents a fully immersive and interactive virtual environment (VE) – the ArcheoVR, which represents Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. Our workflow started with real-world data capture – laser scanners, drones and photogrammetry. The captured information was transformed into a carefully designed, realistic 3D scene with interactive features that allow users to experience the virtual archeological site in real time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration. ![]() |
|
Lu, Wenhuan |
![]() Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during the language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike the existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation could be highly localized which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes with real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. ![]() ![]() |
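The acoustic-VR abstract above maps acoustic input to EMA sensor positions with deep neural networks trained on 1,108 utterances. A minimal stand-in of that mapping stage is sketched below with a small fully connected regressor on synthetic data; the feature dimension, sensor count, network size, and library choice are assumptions, as the abstract does not specify them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Shapes are assumptions for illustration: 13 acoustic coefficients per frame
# (e.g., MFCC-like features) mapped to the 3D positions of 6 EMA sensors.
N_FRAMES, N_ACOUSTIC, N_SENSORS = 2000, 13, 6

rng = np.random.default_rng(0)
X = rng.normal(size=(N_FRAMES, N_ACOUSTIC))             # stand-in acoustic features
W = rng.normal(size=(N_ACOUSTIC, N_SENSORS * 3))
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(N_FRAMES, N_SENSORS * 3))

# A small fully connected network; the paper's actual architecture is not
# specified in the abstract.
model = MLPRegressor(hidden_layer_sizes=(256, 256), activation="relu",
                     max_iter=200, random_state=0)
model.fit(X, Y)

# At runtime, each incoming acoustic frame yields EMA positions that would
# drive the reduced physics-based tongue model.
ema = model.predict(X[:1]).reshape(N_SENSORS, 3)
print(ema.shape)
```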
|
Luan, Bo |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and multi-user collaboration by annotating in the real world. ![]() ![]() |
|
Lubos, Paul |
![]() Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke (University of Hamburg, Germany; University of Central Florida, USA) Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and to apply RDW to room-scale VR, i. e., up to approximately 5m × 5m. This is done by using curved paths in the VE instead of straight paths, and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25m × 25m. ![]() |
|
Lund, Carol |
![]() Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants to focus on. The other half of the participants was only exposed to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing in regard to the usefulness of VR-based meditation. ![]() |
|
Luo, Ran |
![]() Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang (University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China) We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. It is known that directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during the language production is challenging. This difficulty is handled by utilizing the electromagnetic articulography (EMA) sensor as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage Deep Neural Networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike the existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation could be highly localized which imposes extra difficulties for existing spectral model reduction methods. Alternatively, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes with real-world shapes acquired by MRI/CT. Our experiment demonstrates that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal. ![]() ![]() |
|
Ma, Zixiang |
![]() Liang Men, Nick Bryan-Kinns, Amelia Shivani Hassard, and Zixiang Ma (Queen Mary University of London, UK) In recent years, Virtual Reality (VR) applications have become widely available. An increase in popular interest raises questions about the use of the new medium for communication. While there is a wide variety of literature regarding scene transitions in films, novels and computer games, transitions in VR are not yet widely understood. As a medium that requires a high level of immersion, transitions are a desirable tool. This poster delineates an experiment studying the impact of transitions on user experience of presence in VR. ![]() ![]() |
|
Maciel, Anderson |
![]() Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Fondazione Istituto Italiano di Tecnologia, Italy) Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication, attentional redirection, or to enhance the sense of presence in virtual environments. Thus, we aim to include the haptic component to the most popular wearable used in VR applications: the VR headset. After studying the acuity around the head for vibrating stimuli, and trying different parameters, actuators, and configurations, we developed a haptic guidance technique to be used in a vibrotactile Head-mounted Display (HMD). Our vibrotactile HMD was made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. In this demonstration, the participants will interact with different scenarios where the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues will provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining. ![]() ![]() ![]() |
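The vibrotactile HMD demo above encodes object positions by varying both stimulus locus and vibration frequency. The sketch below shows one plausible encoding: the horizontal direction of the target selects the nearest of several actuators around the head, and its distance scales the frequency. Actuator count, frequency range, and the distance mapping are placeholders, not the demo's actual configuration.

```python
import numpy as np

def haptic_cue(target_dir, distance, n_actuators=8,
               f_min=50.0, f_max=250.0, max_distance=5.0):
    """Map a target's horizontal direction (unit vector in head space) to the
    nearest of `n_actuators` evenly spaced around the head, and its distance
    to a vibration frequency. All parameter values are placeholders."""
    azimuth = np.arctan2(target_dir[0], target_dir[2])    # 0 rad = straight ahead
    idx = int(np.round(azimuth / (2 * np.pi / n_actuators))) % n_actuators
    closeness = 1.0 - min(distance, max_distance) / max_distance
    frequency = f_min + closeness * (f_max - f_min)
    return idx, frequency

# Example: a target to the front-right, 2 m away.
print(haptic_cue(np.array([0.8, 0.0, 0.6]), distance=2.0))
```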
|
MacQuarrie, Andrew |
![]() Andrew MacQuarrie and Anthony Steed (University College London, UK) The proliferation of head-mounted displays (HMD) in the market means that cinematic virtual reality (CVR) is an increasingly popular format. We explore several metrics that may indicate advantages and disadvantages of CVR compared to traditional viewing formats such as TV. We explored the consumption of panoramic videos in three different display systems: a HMD, a SurroundVideo+ (SV+), and a standard 16:9 TV. The SV+ display features a TV with projected peripheral content. A between-groups experiment of 63 participants was conducted, in which participants watched panoramic videos in one of these three display conditions. Aspects examined in the experiment were spatial awareness, narrative engagement, enjoyment, memory, fear, attention, and a viewer’s concern about missing something. Our results indicated that the HMD offered a significant benefit in terms of enjoyment and spatial awareness, and our SV+ display offered a significant improvement in enjoyment over traditional TV. We were unable to confirm the work of a previous study that showed incidental memory may be lower in a HMD over a TV. Drawing attention and a viewer’s concern about missing something were also not significantly different between display conditions. It is clear that passive media viewing consists of a complex interplay of factors, such as the media itself, the characteristics of the display, as well as human aspects including perception and attention. While passive media viewing presents many challenges for evaluation, identifying a number of broadly applicable metrics will aid our understanding of these experiences, and allow the creation of better, more engaging CVR content and displays. ![]() |
|
Maezawa, Momoko |
![]() Shohei Mori, Momoko Maezawa, Naoto Ienaga, and Hideo Saito (Keio University, Japan) Live instructor’s perspective videos are useful to present intuitive visual instructions for trainees in medical and industrial settings. In such videos, the instructor’s hands often hide the work area. In this demo, we present a diminished hand for visualizing the work area hidden by hands by capturing the work area with multiple cameras. To achieve the diminished reality, we use a light field rendering technique, in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from the multiple viewpoint images. ![]() ![]() |
|
Mahzari, Anahita |
![]() Afshin Taghavi Nasrabadi, Anahita Mahzari, Joseph D. Beshay, and Ravi Prakash (University of Texas at Dallas, USA) Virtual reality and 360-degree video streaming are growing rapidly; however, streaming 360-degree video is very challenging due to high bandwidth requirements. To address this problem, the video quality is adjusted according to the user viewport prediction. High quality video is only streamed for the user viewport, reducing the overall bandwidth consumption. Existing solutions use shallow buffers limited by the accuracy of viewport prediction. Therefore, playback is prone to video freezes which are very destructive for the Quality of Experience (QoE). We propose using layered encoding for 360-degree video to improve QoE by reducing the probability of video freezes and the latency of response to the user head movements. Moreover, this scheme reduces the storage requirements significantly and improves in-network cache performance. ![]() |
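The layered-encoding proposal above keeps a base quality available everywhere while spending remaining bandwidth on the predicted viewport. A minimal allocation sketch along those lines follows; tile counts, layer bitrates, and the budget are placeholder values, and the paper's actual rate adaptation logic is not reproduced.

```python
def allocate_layers(tiles, viewport_tiles, budget_kbps,
                    base_kbps=300, enh_kbps=900):
    """Sketch of layered, viewport-adaptive allocation: every tile receives
    the base layer (so a prediction miss never shows missing content), and
    enhancement layers are added to predicted-viewport tiles while the
    bandwidth budget allows. Rates and counts are placeholders."""
    allocation = {t: ["base"] for t in tiles}
    spent = base_kbps * len(tiles)
    for t in viewport_tiles:
        if spent + enh_kbps <= budget_kbps:
            allocation[t].append("enhancement")
            spent += enh_kbps
    return allocation, spent

tiles = [f"tile{i}" for i in range(12)]
viewport = ["tile4", "tile5", "tile6"]
alloc, used = allocate_layers(tiles, viewport, budget_kbps=6000)
print(used, alloc["tile5"])
```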
|
Malleson, Charles |
![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture. ![]() ![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. ![]() |
|
Malone, Erica |
![]() Jinsil Hwaryoung Seo, Brian Smith, Margaret Cook, Michelle Pine, Erica Malone, Steven Leal, and Jinkyo Suh (Texas A&M University, USA) We present Anatomy Builder VR that examines how a virtual reality (VR) system can support embodied learning in anatomy education. The backbone of the project is to pursue an alternative constructivist pedagogical model for learning canine anatomy. The main focus of the study was to identify and assemble bones in the live-animal orientation, using real thoracic limb bones in a bone box and digital pelvic limb bones in the Anatomy Builder VR. Eleven college students participated in the study. The pilot study showed that participants most enjoyed interacting with anatomical contents within the VR program. Participants spent less time assembling bones in the VR, and instead spent a longer time tuning the orientation of each VR bone in the 3D space. This study showed how a constructivist method could support anatomy education while using virtual reality technology in an active and experiential way. ![]() |
|
Mann, Jessie |
![]() Jessie Mann, Nicholas Polys, Rachel Diana, Manasa Ananth, Brad Herald, and Sweetuben Platel (Virginia Tech, USA) The design of Virginia Tech’s (VT) Study Hall emerges from the current cognitive neuroscience understanding of memory as a spatially mediated encoding process. The driving questions are: Does the sense of spatial navigation generated by an immersive virtual experience aid in memory formation? Does virtual spatial navigation, when paired with learning cues, enhance information encoding relative to nonspatial and nonvirtual processes? A pilot study was executed comparing recall on non-navigational memorization processes to processes involving mental and virtual navigation and we are currently running a full study to see if we can replicate these effects with a more demanding memory task and refined study design. ![]() |
|
Marino, Joseph |
![]() Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, abrogating the need for any navigational inputs from the user. Our techniques are applicable for any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observe the effect of automatic speed adjustment compared to traditional methods. We observed no negative impact from automatic navigation, and the users performed as well as with the manual navigation. ![]() |
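The constrained-navigation paper above adjusts camera speed from the user's head orientation so that off-axis examination slows the fly-through. The sketch below shows one simple mapping from gaze/path alignment to speed; the speed range and the specific formula are assumptions, since the abstract does not state them.

```python
import numpy as np

def auto_speed(gaze_dir, path_dir, v_min=0.2, v_max=2.0):
    """Scale camera speed along a pre-computed path by how closely the user's
    gaze follows the path direction: looking along the path keeps full speed,
    looking off-axis (examining the scene) slows the fly-through. The mapping
    and speed range are assumptions."""
    g = np.asarray(gaze_dir, dtype=float)
    p = np.asarray(path_dir, dtype=float)
    alignment = np.clip(np.dot(g, p) / (np.linalg.norm(g) * np.linalg.norm(p)),
                        -1.0, 1.0)
    w = (alignment + 1.0) / 2.0          # 1 when looking forward, 0 when backward
    return v_min + w * (v_max - v_min)

print(auto_speed([0, 0, 1], [0, 0, 1]))   # looking along the path -> full speed
print(auto_speed([1, 0, 0], [0, 0, 1]))   # looking sideways       -> slower
```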
|
Martin, Ken |
![]() Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches that simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications. ![]() |
|
Masai, Katsutoshi |
![]() Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto (Keio University, Japan) We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience. A virtual avatar can be a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used for training a neural network to estimate the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes. ![]() |
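The abstract above also mentions reproducing continuous expression changes on an avatar via regression from the photo-reflective sensor values. A minimal sketch of such a regression stage on synthetic data follows; the sensor count, blendshape dimension, and the choice of ridge regression are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in dimensions: readings from photo-reflective sensors inside the HMD
# mapped to avatar blendshape weights. Sensor count, blendshape count, and the
# regressor are assumptions; the paper reports a neural network for the five
# expression classes and regression for continuous reproduction.
N_SAMPLES, N_SENSORS, N_BLENDSHAPES = 500, 16, 10

rng = np.random.default_rng(3)
sensor_readings = rng.uniform(0.0, 1.0, size=(N_SAMPLES, N_SENSORS))
true_map = rng.normal(size=(N_SENSORS, N_BLENDSHAPES))
blend_weights = np.clip(sensor_readings @ true_map * 0.1 + 0.5, 0.0, 1.0)

model = Ridge(alpha=1.0).fit(sensor_readings, blend_weights)

# Runtime: a new sensor frame produces blendshape weights for the avatar face.
frame = rng.uniform(0.0, 1.0, size=(1, N_SENSORS))
weights = np.clip(model.predict(frame), 0.0, 1.0)
print(weights.round(2))
```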
|
Matsumoto, Keigo |
![]() Keigo Matsumoto, Takuji Narumi, Yuki Ban, Tomohiro Tanikawa, and Michitaka Hirose (University of Tokyo, Japan) Redirected walking allows users to explore a large virtual environment even when the physical room size is limited. Previous works tried to present users with a straight path in a virtual environment while they walked on a curved path in reality. We extend a previous technique to present users with various curved paths in a virtual environment while they walk on a particular curved path or a straight path, with or without haptics. Furthermore, we propose a novel estimation methodology to quantify the walking path that the user believes he or she has walked in reality. The data from our experiment show that users perceive walking various curved paths in VR the same as in the one-to-one mapping condition. ![]() |
|
Mavridou, Ifigeneia |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd laboratory, Faceteq can open new avenues for virtual reality research through a combination of high-performance patented dry sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year. ![]() |
|
McCall, Cade |
![]() Tara Collingwoode-Williams, Marco Gillies, Cade McCall, and Xueni Pan (University of London, UK; University of York, UK) We are interested in the effect of lip and arm synchronization on body ownership in VR (the illusion that users own a virtual body). Participants were invited to give a presentation in an HMD while seeing, in a virtual mirror, a gender-matched avatar that copied their arm and lip movements in synchronous and asynchronous conditions. We measured participants' reactions with questionnaires administered verbally after their presentation while still immersed in VR. The results suggested an interaction effect of arm and lip synchronization, with reports of a higher level of embodiment in the congruent compared to the incongruent conditions. Further study is needed to confirm whether the same interaction effect can be captured with objective measurements. ![]() ![]() |
|
McCann, Brian C. |
![]() Grace M. Rodriguez, Marvis Cruz, Andrew Solis, and Brian C. McCann (University of Puerto Rico, Puerto Rico; University of Florida, USA; University of Texas at Austin, USA) Through their experience with the ICERT REU program at the Texas Advanced Computing Center (TACC), two undergraduate students from the University of Puerto Rico and the University of Florida have initiated a collaboration between their home institutions and TACC exploring the possibility of using immersion to simulate perceptual disturbances. Perceptual disturbances are subjective in nature and difficult to communicate verbally. Often caretakers or those closest to sufferers have difficulty understanding the nature of their suffering. Immersion provides an exciting opportunity to directly communicate percepts to clinicians and loved ones. Here, we present a prototype environment meant to simulate some of the perceptual disturbances associated with seizures and epilepsy. Following further validation of our approach, we hope to promote awareness and empathy for these often jarring phenomena. ![]() ![]() Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann (University of Texas at Austin, USA) Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country's newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. Introducing Immerj, an open source abstraction layer that simplifies the Unity3D game engine's interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos with our team and with journalists and designers from some of the top news organizations across the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of technology. ![]() |
|
McGhee, James T. |
![]() Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka (Bournemouth University, UK; Sussex Innovation Centre, UK) Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd laboratory, Faceteq can open new avenues for virtual reality research through a combination of high-performance patented dry sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year. ![]() |
|
McKenzie, Sandy |
![]() Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie (Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA) Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications. ![]() |
|
McMahan, Ryan P. |
![]() Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin (University of Texas at Dallas, USA; Duke University, USA) The emotional response a person has to a living space is predominantly affected by light, color and texture as space-making elements. In order to verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized the equivalent design attributes of brightness, color and texture, in order to assess to what extent the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon the context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results from the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support new, minimalist lifestyles of occupants, defined as the neo-nomads, aligned with their work experience in the digital domain through the generation of emotional experiences of spaces. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces could be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities. ![]() ![]() |
|
McNamara, Timothy P. |
![]() Richard A. Paris, Timothy P. McNamara, John J. Rieser, and Bobby Bodenheimer (Vanderbilt University, USA) Interesting virtual environments that permit free exploration are rarely small. A number of techniques have been developed to allow people to walk in virtual spaces larger than the physical extent of the virtual reality hardware permits, and in this paper we compare three such methods in terms of how they affect presence and spatial awareness. In our first psychophysical study, we compared two methods of reorientation and one method of redirected walking on subjects' presence and spatial memory while navigating a pre-specified path. Our results suggested no difference between the two methods of reorientation but inferior performance of the redirected walking method. We further compared the two reorientation methods in a second psychophysical study involving free exploration and navigation in a large virtual environment. Our results provide criteria by which a locomotion method for navigating large virtual environments may be selected. ![]() |
|
Mehra, Ravish |
![]() Carl Schissler, Peter Stirling, and Ravish Mehra (Oculus, USA; Facebook, USA) An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observed a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications. ![]() |
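The paper's per-partition order selection is perceptually driven; as a simplified stand-in for that idea, the sketch below picks the lowest spherical-harmonic order per RIR partition whose truncated energy stays within a tolerance of the full-order energy. The tolerance, coefficient layout, and example data are assumptions for illustration, not the authors' criterion.

```python
# Sketch: choosing the lowest SH order per impulse-response partition such that the
# energy discarded by truncation stays below a tolerance. A crude energy criterion
# standing in for the paper's perceptually-driven metric.
import numpy as np

def sh_band_energies(coeffs):
    """Energy per SH order, for coefficients ordered l = 0..L with m = -l..l."""
    max_order = int(np.sqrt(len(coeffs))) - 1
    energies, idx = [], 0
    for l in range(max_order + 1):
        n = 2 * l + 1
        energies.append(float(np.sum(np.abs(coeffs[idx:idx + n]) ** 2)))
        idx += n
    return np.array(energies)

def lowest_sufficient_order(coeffs, tol=0.01):
    """Smallest order whose cumulative energy is within `tol` of the total."""
    band = sh_band_energies(coeffs)
    total = band.sum()
    cumulative = np.cumsum(band)
    for order, kept in enumerate(cumulative):
        if total - kept <= tol * total:
            return order
    return len(band) - 1

# Example: a 4th-order partition whose energy is dominated by low orders.
rng = np.random.default_rng(1)
coeffs = rng.normal(size=25) * np.repeat(1.0 / (1 + np.arange(5)) ** 2, 2 * np.arange(5) + 1)
print(lowest_sufficient_order(coeffs))
```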
|
Men, Liang |
![]() Liang Men, Nick Bryan-Kinns, Amelia Shivani Hassard, and Zixiang Ma (Queen Mary University of London, UK) In recent years, Virtual Reality (VR) applications have become widely available. An increase in popular interest raises questions about the use of the new medium for communication. While there is a wide variety of literature regarding scene transitions in films, novels and computer games, transitions in VR are not yet well understood. Because VR is a medium that requires a high level of immersion, transitions are a desirable tool. This poster delineates an experiment studying the impact of transitions on user experience of presence in VR. ![]() ![]() |
|
Merienne, Frédéric |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies have shown that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). Little effort, however, has focused on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE's suitability for integrating these cues. Within a VE, human users undertook the 3D interaction task of navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, and their combinations in collocated and dislocated settings. The users' task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agreed with the applicability of tactile cues for sensing 3D surfaces, herein setting a baseline for using MLE. The task performance under the collocated setting indicated a degree of combining the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs. ![]() |
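The MLE model referenced here is the standard reliability-weighted cue combination, in which each cue is weighted by its inverse variance. The small numeric sketch below illustrates that rule; the variances are arbitrary examples, not measured values from the study.

```python
# Sketch: maximum likelihood (reliability-weighted) integration of two cues.
# Each cue's weight is its inverse variance normalised over all cues; the combined
# estimate has lower variance than either cue alone. Example numbers are arbitrary.
def mle_integrate(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = sum(w * s for w, s in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance

# Example: a force cue and a vibrotactile cue estimating the same quantity.
force_cue = (10.0, 4.0)   # (estimate, variance)
vibro_cue = (12.0, 1.0)
print(mle_integrate([force_cue[0], vibro_cue[0]], [force_cue[1], vibro_cue[1]]))
# -> (11.6, 0.8): pulled toward the more reliable vibrotactile cue, with reduced variance
```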
|
Messmer, Peter |
![]() Tom Vierjahn, Daniel Zielasko, Kees van Kooten, Peter Messmer, Bernd Hentschel, Torsten W. Kuhlen, and Benjamin Weyers (RWTH Aachen University, Germany; JARA-HPC, Germany; NVIDIA, Germany; NVIDIA, Switzerland) Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for an actual, seamless IV-integration can be derived. We validate the design space with three workflows investigated in our research projects. ![]() |
|
Michalski, Quba |
![]() Quba Michalski, Brendan J. Hogan, and Jamie Hunsdale (QubaVR, USA; Impossible Acoustic, USA) In a secret science facility, gravity has been conquered. “Down” is no longer a direction, but a choice. Step into the center of modified chambers and witness the laws of nature be broken in this five-experiment series. VR has all but torn down the barriers between the imagination of the creator and the experience of the viewer. A concept like The Pull simply does not translate into traditional 2D. We can suggest concepts through TVs and monitors, but we can’t truly experience them — and breaking the very laws of nature is something that can only be experienced. While flat media limits us to hinting and coaxing at an experience, by creating in VR, I can more faithfully share my vision with you, the viewer. For a few minutes – for five chambers – I can truly invite you into my world. ![]() |
|
Milesen, Victor |
![]() Rasmus B. Lind, Victor Milesen, Dina M. Smed, Simone P. Vinkel, Francesco Grani, Niels C. Nilsson, Lars Reng, Rolf Nordahl, and Stefania Serafin (Aalborg University at Copenhagen, Denmark) In this paper we propose an experiment that evaluates the influence of audience noise on the feeling of presence and the perceived quality in a virtual reality concert experience delivered using Wave Field Synthesis. A 360 degree video of a live rock concert by a local band was recorded. Single sound sources from the stage and the PA system were recorded, as well as the audience noise, and impulse responses of the concert venue. The audience noise was implemented in the production phase. The study compared an experience with and without audience noise. In a between-subject experiment with 30 participants we found that audience noise does not have a significant impact on presence. However, qualitative evaluations show that the naturalness of the sonic experience delivered through Wave Field Synthesis had a positive impact on the participants. ![]() |
|
Miller, Gregor |
![]() Qian Zhou, Kai Wu, Gregor Miller, Ian Stavness, and Sidney Fels (University of British Columbia, Canada; University of Saskatchewan, Canada) We describe an auto-calibrated 3D perspective-corrected spherical display that uses multiple rear-projected pico-projectors. The display system is auto-calibrated via 3D reconstruction of each projected pixel on the display using a single inexpensive camera. With the automatic calibration, the multiple-projector system supports seamless blended imagery on the spherical screen. Furthermore, we incorporate head tracking with the display to present 3D content with motion parallax by rendering perspective-corrected images based on the viewpoint. To show the effectiveness of this design, we implemented a view-dependent application that allows walk-around visualization from all angles for a single head-tracked user. We also implemented a view-independent application that supports a wall-papered rendering for multi-user viewing. Thus, both view-dependent 3D VR content and spherical 2D content, such as a globe, can be easily experienced with this display. ![]() ![]() ![]() |
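The abstract only names the ingredients (per-pixel 3D reconstruction plus head tracking); the core of view-dependent, perspective-corrected rendering is that each calibrated screen point is shaded with whatever the tracked eye would see along the ray from the eye through that point. The tiny sketch below illustrates just that ray computation, with made-up coordinates; it is not the authors' calibration or rendering pipeline.

```python
# Sketch: view-dependent rendering on a spherical screen. Each physical screen
# point P (known from calibration) is shaded with the scene content visible from
# the tracked eye E along the ray E -> P. Coordinates below are illustrative only.
import numpy as np

def view_ray(eye_pos, screen_point):
    """Unit ray from the tracked eye through a calibrated point on the sphere surface."""
    d = np.asarray(screen_point, dtype=float) - np.asarray(eye_pos, dtype=float)
    return d / np.linalg.norm(d)

eye = [0.0, 0.1, 0.6]              # head-tracked eye position (metres, sphere frame)
pixel_on_sphere = [0.0, 0.0, 0.3]  # reconstructed 3D position of one projected pixel
print(view_ray(eye, pixel_on_sphere))
```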
|
Mine, Mark |
![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture. ![]() ![]() Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell (Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA) We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars. ![]() |
|
Mirhosseini, Seyedkoosha |
![]() Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman (Stony Brook University, USA) For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use constant camera speed, allow for fully user-controlled manual speed adjustment, or use automatic speed adjustment based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user's head during scene exploration. Utilizing head tracking to obtain the user's area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, obviating the need for any navigational inputs from the user. Our techniques are applicable for any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, and graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed the effect of automatic speed adjustment compared to traditional methods: automatic navigation had no negative impact, and users performed as well as with manual navigation. ![]() |
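The abstract describes speed modulation from head orientation without giving the mapping; one plausible minimal form is to slow the camera as the user's gaze deviates from the path tangent, as sketched below. The mapping function and speed constants are assumptions for illustration, not the authors' heuristic.

```python
# Sketch: slowing fly-through speed as the user's gaze turns away from the path
# direction, so off-axis examination gets more time. The linear mapping and the
# speed limits are assumptions, not the method from the paper.
import numpy as np

def camera_speed(gaze_dir, path_tangent, v_max=1.0, v_min=0.1):
    """Scale speed by the alignment of gaze with the path tangent (both unit vectors)."""
    gaze = np.asarray(gaze_dir, dtype=float)
    tangent = np.asarray(path_tangent, dtype=float)
    alignment = np.clip(np.dot(gaze, tangent), 0.0, 1.0)   # 1 = looking along the path
    return v_min + (v_max - v_min) * alignment

# Looking straight down the path -> full speed; looking at the wall -> slow down.
print(camera_speed([0, 0, 1], [0, 0, 1]))   # 1.0
print(camera_speed([1, 0, 0], [0, 0, 1]))   # 0.1
```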
|
Mitchell, Kenny |
![]() |