
2017 IEEE Virtual Reality (VR), March 18-22, 2017, Los Angeles, CA, USA

VR 2017 – Proceedings


Frontmatter

Title Page


Message from the General Chairs
It is our great pleasure to welcome you to IEEE Virtual Reality (VR) 2017, the 19th annual meeting of the premier international conference focused on research in virtual reality! It is a tremendous honor to host the conference on its first return to Los Angeles since 2003. This year, we selected a location in Manhattan Beach, taking advantage of its proximity to the emerging “Silicon Beach” technology hub in the Los Angeles Westside region.

Message from the Program Chairs
We are pleased to present the technical papers for the 2017 IEEE Virtual Reality Conference (IEEE VR 2017), held March 18–22, 2017 in Los Angeles, California.

Committees
Committees


Keynotes

Notes on Virtual and Augmented Reality (Keynote)
Tobias Höllerer
(University of California at Santa Barbara, USA)
VR and AR hold enormous promise as paradigm-shifting ubiquitous technologies. The investment in these technologies by leading IT companies, as well as the buy-in and general excitement from outside investors, technologists, and content producers, has never been more palpable. There are good reasons to be excited about the field. The real question will be whether the technologies can add sufficient value to people’s lives to establish themselves as more than just niche products. My path in this presentation will lead from a personal estimation of what matters for the adoption of new technologies to important innovations we have witnessed on the road to anywhere/anytime use of immersive technologies. In recent years, one track of research in my lab has been concerned with the simulation of possible future capabilities in AR. With the goal of conducting controlled user studies evaluating technologies that are just not possible yet (such as a truly wide-field-of-view augmented reality display), we turn to high-end VR to simulate, predict, and assess these possible futures. In the far future, when technological hurdles, such as real-time reconstruction of photorealistic environment models, are removed, VR and AR naturally converge. Until then, we have a very interesting playing field full of technological constraints to have fun with.

A Perspective from the Long View: 35 Years in VR (Keynote)
David A. Smith
(Wearality, USA)
I have been working in VR and interactive 3D for a long time. I have had the pleasure of knowing and working with many of the people that are directly responsible for creating the magic in the world that we live in today. These are the people that started with a virtual blank page and created their own reality. Their vision defined a vector into their future that we have had the privilege of extending into ours. Knowing where this vector started gives us an incredible perspective on where it is today and where it is going. I will describe my personal journey along this vector over the last 35 years, demonstrate a few things I am working on today, and speculate about where this vector into the future may take us.


Conference Papers

Plausibility, Emotions, and Ethics
Mon, Mar 20, 10:30 - 12:00, Ballroom A/B (Chair: Eric Hodgson)

Emotional Qualities of VR Space
Asma Naz, Regis Kopper, Ryan P. McMahan, and Mihai Nadin
(University of Texas at Dallas, USA; Duke University, USA)
The emotional response a person has to a living space is predominantly affected by light, color, and texture as space-making elements. To verify whether this phenomenon can be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that utilized the equivalent design attributes of brightness, color, and texture, assessing the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends upon context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support new, minimalist lifestyles of occupants, defined as neo-nomads, aligned with their work experience in the digital domain through the generation of emotional experiences of spaces. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces can be successfully generated through simulation of design attributes in the virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempt to conceive of further applications of virtual spaces for well-defined activities.

Asking Ethical Questions in Research using Immersive Virtual and Augmented Reality Technologies with Children and Youth
Erica Southgate, Shamus P. Smith, and Jill Scevak
(University of Newcastle, Australia)
The increasing availability of intensely immersive virtual, augmented, and mixed reality experiences using head-mounted displays (HMDs) has prompted deliberations about the ethical implications of using such technology to resolve technical issues and explore the complex cognitive, behavioral, and social dynamics of human "virtuality". However, little is known about the impact such immersive experiences will have on children (aged 0-18 years). This paper outlines perspectives on child development to present conceptual and practical frameworks for conducting ethical research with children using immersive HMD technologies. The paper addresses not only procedural ethics (gaining institutional approval) but also ethics-in-practice (ongoing ethical decision-making).

Travel and Navigation
Mon, Mar 20, 13:30 - 14:40, Ballroom A/B (Chair: Tabitha Peck)

Guided Head Rotation and Amplified Head Rotation: Evaluating Semi-natural Travel and Viewing Techniques in Virtual Reality
Shyam Prathish Sargunam, Kasra Rahimi Moghadam, Mohamed Suhail, and Eric D. Ragan
(Texas A&M University, USA)
Traditionally in virtual reality systems, head tracking is used in head-mounted displays (HMDs) to allow users to control viewing using 360-degree head and body rotations. Our research explores interaction considerations that enable semi-natural methods of view control that will work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We investigate the use of amplified head rotations so physically turning in a comfortable range can allow viewing of a 360-degree virtual range. Additionally, to avoid situations where the user's neck is turned in an uncomfortable position for an extended period, we also use redirection during virtual movement to gradually realign the user's head position back to the neutral, straight-ahead position. We ran a controlled experiment to evaluate guided head rotation and amplified head rotation without realignment during movement, and we compared both to traditional one-to-one head-tracked viewing as a baseline for reference. After a navigation task, overall errors on spatial orientation tasks were relatively low with all techniques, but orientation effects, sickness, and preferences varied depending on participants' 3D gaming habits. Using the guided rotation technique, participants who played 3D games performed better, reported higher preference scores, and demonstrated significantly lower sickness results compared to non-gamers.
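
As a rough illustration of the amplification described above (a minimal sketch, not the authors' implementation; the comfortable physical range, gain value, and realignment behavior are assumptions), the mapping from tracked yaw to virtual yaw can be as simple as a constant gain plus an offset that guided rotation accumulates during virtual movement:

    # Illustrative sketch only; the paper's actual gains and realignment rates differ.
    PHYSICAL_RANGE_DEG = 90.0   # assumed comfortable one-sided physical yaw when seated
    VIRTUAL_RANGE_DEG = 180.0   # one-sided virtual yaw needed to cover 360 degrees
    GAIN = VIRTUAL_RANGE_DEG / PHYSICAL_RANGE_DEG   # amplification factor (2.0 here)

    def virtual_yaw(physical_yaw_deg, realignment_offset_deg=0.0):
        """Map tracked physical yaw to virtual yaw. Guided rotation gradually
        adjusts the offset during virtual movement so the virtual view stays
        stable while the user's head drifts back toward the neutral pose."""
        return GAIN * physical_yaw_deg + realignment_offset_deg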

Automatic Speed and Direction Control along Constrained Navigation Paths
Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Sushant Ojal, Joseph Marino, and Arie E. Kaufman
(Stony Brook University, USA)
For many virtual reality applications, a pre-calculated fly-through path is the de facto standard navigation method. Such a path is convenient for users and ensures coverage of critical areas throughout the scene. Traditional applications use a constant camera speed, allow fully user-controlled manual speed adjustment, or adjust speed automatically based on heuristics from the scene. We introduce two novel methods for constrained path navigation and exploration in virtual environments which rely on the natural orientation of the user’s head during scene exploration. Utilizing head tracking to obtain the user’s area of focus, we perform automatic camera speed adjustment to allow for natural off-axis scene examination. We expand this to include automatic camera navigation along the pre-computed path, obviating the need for any navigational input from the user. Our techniques are applicable to any scene with a pre-computed navigation path, including medical applications such as virtual colonoscopy, coronary fly-through, or virtual angioscopy, as well as graph navigation. We compare the traditional methods (constant speed and manual speed adjustment) and our two methods (automatic speed adjustment and automatic speed/direction control) to determine the effect of speed adjustment on system usability, mental load, performance, and user accuracy. Through this evaluation we observed no negative impact from automatic navigation, and users performed as well as with manual navigation.
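
As a sketch of the speed-adjustment idea (an assumed heuristic, not the speed function used in the paper), the camera can be slowed whenever the user's gaze deviates from the direction of travel along the pre-computed path:

    import numpy as np

    def camera_speed(view_dir, path_tangent, v_min=0.1, v_max=1.0):
        """Illustrative heuristic: full speed when the user looks along the path,
        reduced speed during off-axis examination of the scene."""
        view_dir = np.asarray(view_dir, dtype=float)
        path_tangent = np.asarray(path_tangent, dtype=float)
        view_dir = view_dir / np.linalg.norm(view_dir)
        path_tangent = path_tangent / np.linalg.norm(path_tangent)
        alignment = max(0.0, float(np.dot(view_dir, path_tangent)))  # 1 = looking straight ahead
        return v_min + (v_max - v_min) * alignment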

360° Video Cinematic Experience
Tue, Mar 21, 08:30 - 10:00, Ballroom A (Chair: Gerd Bruder)

6-DOF VR Videos with a Single 360-Camera
Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin
(Stanford University, USA; Adobe Research, USA)
Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base in demand of immersive, full 3D VR experiences. While monoscopic 360-videos are among the most common content for VR headsets, they lack 3D information and thus cannot be viewed with full 6 degrees of freedom (DOF). We present an approach that addresses this limitation via a novel warping algorithm that can synthesize new views with both rotational and translational motion of the viewpoint. This enables playback of an input monoscopic 360-video in a VR headset where the new viewpoints are determined by the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (> 120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content.

Cinematic Virtual Reality: Evaluating the Effect of Display Type on the Viewing Experience for Panoramic Video
Andrew MacQuarrie and Anthony Steed
(University College London, UK)
The proliferation of head-mounted displays (HMDs) in the market means that cinematic virtual reality (CVR) is an increasingly popular format. We explore several metrics that may indicate advantages and disadvantages of CVR compared to traditional viewing formats such as TV. We explored the consumption of panoramic videos in three different display systems: an HMD, a SurroundVideo+ (SV+), and a standard 16:9 TV. The SV+ display features a TV with projected peripheral content. A between-groups experiment with 63 participants was conducted, in which participants watched panoramic videos in one of these three display conditions. Aspects examined in the experiment were spatial awareness, narrative engagement, enjoyment, memory, fear, attention, and a viewer’s concern about missing something. Our results indicated that the HMD offered a significant benefit in terms of enjoyment and spatial awareness, and our SV+ display offered a significant improvement in enjoyment over traditional TV. We were unable to confirm the finding of a previous study that incidental memory may be lower in an HMD than on a TV. Drawing attention and a viewer’s concern about missing something were also not significantly different between display conditions. It is clear that passive media viewing consists of a complex interplay of factors, such as the media itself, the characteristics of the display, and human aspects including perception and attention. While passive media viewing presents many challenges for evaluation, identifying a number of broadly applicable metrics will aid our understanding of these experiences and allow the creation of better, more engaging CVR content and displays.

Extraordinary Environments and Abnormal Objects
Tue, Mar 21, 08:30 - 10:00, Ballroom B (Chair: Xubo Yang)

A Study on the Use of an Immersive Virtual Reality Store to Investigate Consumer Perceptions and Purchase Behavior toward Non-standard Fruits and Vegetables
Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, and Guillaume Moreau
(Ecole Centrale de Nantes, France; Audencia Business School, France)
In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e., misshaped) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the "level of abnormality" of FaVs that consumers would agree to buy. However, this question cannot be tackled using "classical" marketing techniques that run user studies within real shops, since fresh produce such as FaVs tends to rot rapidly, preventing studies from being repeated or run for a long time. To overcome those limitations, we created a virtual grocery store with a fresh FaVs section in which 142 participants were immersed using an Oculus Rift DK2 HMD. Participants were presented with either "normal", "slightly misshaped", "misshaped", or "severely misshaped" FaVs. Results show that participants tend to purchase a similar number of FaVs whatever their deformity. Nevertheless, participants' perceptions of the quality of the FaVs depend on the level of abnormality.

Haptics
Tue, Mar 21, 10:30 - 12:00, Ballroom A (Chair: Robert Lindeman)

Evaluation of a Penalty and a Constraint-Based Haptic Rendering Algorithm with Different Haptic Interfaces and Stiffness Values
Mikel Sagardia and Thomas Hulin
(German Aerospace Center, Germany)
This work presents an evaluation study in which the effects of a penalty-based and a constraint-based haptic rendering algorithm on user performance and perception are analyzed. A total of N = 24 participants performed, in a within-subjects design, three variations of peg-in-hole tasks in a virtual environment after trials in an identically replicated real scenario as a reference. In addition to the two haptic rendering paradigms, two haptic devices were used, the HUG and a Sigma.7, and the force stiffness was also varied between the maximum and half the maximum value possible for each device. Both objective measures (time and trajectory, collision performance, and muscular effort) and subjective ratings (contact perception, ergonomics, and workload) were recorded and statistically analyzed. The results show that the constraint-based haptic rendering algorithm with a lower-than-maximum stiffness yields the most realistic contact perception, while keeping the visual inter-penetration between the objects at roughly 15% of that caused by the penalty-based algorithm (i.e., not perceptible in many cases). This result is even more evident with the HUG, the haptic device with the highest force display capabilities, although user ratings point to the Sigma.7 as the device with the highest usability and lowest workload indicators. Altogether, the paper provides qualitative and quantitative guidelines for mapping properties of haptic algorithms and devices to user performance and perception.
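
For readers unfamiliar with the two paradigms compared here, their textbook forms can be summarized as follows (generic formulations with stiffness k, not the specific algorithms evaluated in the paper): a penalty-based method pushes back proportionally to the penetration depth d along the contact normal, whereas a constraint-based method couples the device to a proxy (god-object) that is kept on the object surface,

    F_{\text{penalty}} = k \, d \, \hat{n}, \qquad
    F_{\text{constraint}} = k \left( \mathbf{x}_{\text{proxy}} - \mathbf{x}_{\text{device}} \right).

Because the proxy never penetrates the object, constraint-based rendering avoids visible inter-penetration, which is consistent with the reduced penetration reported above.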

VRRobot: Robot Actuated Props in an Infinite Virtual Environment
Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann
(Vienna University of Technology, Austria)
We present the design and development of a fully immersive virtual reality (VR) system that can provide prop-based haptic feedback in an infinite virtual environment. It is conceived as a research tool for studying topics related to haptics in VR and based on off-the-shelf components. A robotic arm moves physical props, dynamically matching pose and location of an object in the virtual world. When the user reaches for the virtual object, his or her hands also encounter it in the real physical space. The interaction is not limited to specific body parts and does not rely on an external structure like an exoskeleton. In combination with a locomotion platform for close-to-natural walking, this allows unrestricted haptic interaction in a natural way in virtual environments of unlimited size. We describe the concept, the hardware and software architecture in detail. We establish safety design guidelines for human-robot interaction in VR. Our technical evaluation shows good response times and accuracy. We report on a user study conducted with 34 participants indicating promising results, and discuss the potential of our system.

Inducing Self-Motion Sensations in Driving Simulators using Force-Feedback and Haptic Motion
Guillaume Bouyer, Amine Chellali, and Anatole Lécuyer
(University of Évry Val d'Essonne, France; Inria, France)
Producing sensations of motion in driving simulators often requires cumbersome and expensive motion platforms. In this article we present a novel and alternative approach for producing self-motion sensations in driving simulations by relying on haptic feedback. The method consists of applying a force-feedback proportional to the acceleration of the virtual vehicle directly in the hands of the driver, by means of a haptic device attached to the manipulated controller (or a steering wheel). We designed a proof-of-concept based on a standard gamepad physically attached to the extremity of a standard 3-DOF haptic display. Haptic effects were designed to match notably the acceleration/braking (longitudinal forces) and left/right turns (lateral forces) of the virtual vehicle. A preliminary study conducted with 23 participants, engaged in gamepad-based active VR navigation in a straight line, showed that haptic motion effects globally improved the involvement and realism of motion sensation for participants with prior experience with haptic devices. Taken together, our results suggest that our approach could be further tested and used in driving simulators in entertainment and/or professional contexts.

Walking Alone and Together
Tue, Mar 21, 10:30 - 12:00, Ballroom B (Chair: Niels Christian Nilsson)

An Evaluation of Strategies for Two-User Redirected Walking in Shared Physical Spaces
Mahdi Azmandian, Timofey Grechkin, and Evan Suma Rosenberg
(University of Southern California, USA)
As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-on-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.

Touch and Vibrotactile Feedback
Tue, Mar 21, 13:30 - 14:40, Ballroom A/B (Chair: Anatole Lécuyer)

Wearable Tactile Device using Mechanical and Electrical Stimulation for Fingertip Interaction with Virtual World
Vibol Yem and Hiroyuki Kajimoto
(University of Electro-Communications, Japan)
We developed the “Finger Glove for Augmented Reality” (FinGAR), which combines electrical and mechanical stimulation to selectively stimulate skin sensory mechanoreceptors and provide tactile feedback of virtual objects. A DC motor provides high-frequency vibration and shear deformation to the whole finger, and an array of electrodes provides pressure and low-frequency vibration with high spatial resolution. FinGAR devices are attached to the thumb, index finger, and middle finger. The device is lightweight, simple in mechanism, easy to wear, and does not disturb the natural movements of the hand, all attributes that are necessary for a general-purpose virtual reality system. A user study was conducted to evaluate its ability to reproduce sensations along four tactile dimensions: macro roughness, friction, fine roughness, and hardness. Results indicated that skin deformation and cathodic stimulation affect macro roughness and hardness, whereas high-frequency vibration and anodic stimulation affect friction and fine roughness.

Exploring the Effect of Vibrotactile Feedback through the Floor on Social Presence in an Immersive Virtual Environment
Myungho Lee, Gerd Bruder, and Gregory F. Welch
(University of Central Florida, USA)
We investigate the effect of vibrotactile feedback delivered to one's feet in an immersive virtual environment (IVE). In our study, participants observed a virtual environment in which a virtual human (VH) walked toward them and paced back and forth within their social space. We compared three conditions: participants in the Sound condition heard the footsteps of the VH; participants in the Vibration condition experienced the vibration of the footsteps along with the sounds; and participants in the Mute condition were exposed to neither sound nor vibrotactile feedback. We found that the participants in the Vibration condition felt a higher social presence with the VH compared to those who did not feel the vibration. The participants in the Vibration condition also exhibited greater avoidance behavior while facing the VH and when the VH invaded their personal space.

Acoustics and Auditory Displays
Tue, Mar 21, 16:00 - 17:15, Ballroom A/B (Chair: Stefania Serafin)

Acoustic VR in the Mouth: A Real-Time Speech-Driven Visual Tongue System
Ran Luo, Qiang Fang, Jianguo Wei, Wenhuan Lu, Weiwei Xu, and Yin Yang
(University of New Mexico, USA; Chinese Academy of Social Sciences, China; Tianjin University, China; Zhejiang University, China)
We propose an acoustic-VR system that converts acoustic signals of human language (Chinese) to realistic 3D tongue animation sequences in real time. Directly capturing the 3D geometry of the tongue at a frame rate that matches the tongue's swift movement during language production is challenging. This difficulty is handled by utilizing electromagnetic articulography (EMA) sensors as the intermediate medium linking the acoustic data to the simulated virtual reality. We leverage deep neural networks to train a model that maps the input acoustic signals to the positional information of pre-defined EMA sensors based on 1,108 utterances. Afterwards, we develop a novel reduced physics-based dynamics model for simulating the tongue's motion. Unlike existing methods, our deformable model is nonlinear, volume-preserving, and accommodates collision between the tongue and the oral cavity (mostly with the jaw). The tongue's deformation can be highly localized, which imposes extra difficulties for existing spectral model reduction methods. Instead, we adopt a spatial reduction method that allows an expressive subspace representation of the tongue's deformation. We systematically evaluate the simulated tongue shapes against real-world shapes acquired by MRI/CT. Our experiments demonstrate that the proposed system is able to deliver a realistic visual tongue animation corresponding to a user's speech signal.

Efficient Construction of the Spatial Room Impulse Response
Carl Schissler, Peter Stirling, and Ravish Mehra
(Oculus, USA; Facebook, USA)
An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually-driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of this new method to the previous one and observe a strong scene-dependent preference for our technique. As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the audio latency requirements of interactive virtual reality applications.
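
The per-partition order selection can be sketched as follows (a minimal sketch; the energy threshold below stands in for the paper's perceptually-driven metric, which is more involved):

    import numpy as np

    def lowest_sufficient_order(sh_coeffs, max_order, rel_error_threshold):
        """sh_coeffs: spherical-harmonic coefficients of one fixed-length RIR
        partition, ordered so that orders 0..n occupy the first (n+1)^2 entries.
        Returns the smallest SH order whose truncation keeps the relative energy
        error below the threshold, so the HRTF convolution for that partition
        can run at reduced order."""
        sh_coeffs = np.asarray(sh_coeffs, dtype=float)
        total_energy = np.sum(sh_coeffs[: (max_order + 1) ** 2] ** 2)
        if total_energy == 0.0:
            return 0
        for order in range(max_order + 1):
            kept = np.sum(sh_coeffs[: (order + 1) ** 2] ** 2)
            if (total_energy - kept) / total_energy < rel_error_threshold:
                return order
        return max_order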

Avatars and Virtual Humans
Wed, Mar 22, 08:30 - 10:00, Ballroom A/B (Chair: Eric Ragan)

Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell
(Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA)
We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and face shape at the fitting stage, while animation blendshapes allow the face to be animated. The subject assumes a T-pose and a single snapshot is captured using a stereo RGB plus depth sensor rig. Our system automatically aligns a photo texture and fits the 3D shape of the face. The body shape is stylized according to body dimensions estimated from segmented depth. The face identity blendweights are optimised according to image-based facial landmarks, while a custom texture map for the face is generated by warping the input images to a reference texture according to the facial landmarks. The total capture and processing time is under 10 seconds and the output is a light-weight, game-engine-ready avatar which is recognizable as the subject. We demonstrate our system in a VR environment in which each user sees the other users' animated avatars through a VR headset with real-time audio-based facial animation and live body motion tracking, affording an enhanced level of presence and social engagement compared to generic avatars.

Prism Aftereffects for Throwing with a Self-Avatar in an Immersive Virtual Environment
Bobby Bodenheimer, Sarah Creem-Regehr, Jeanine Stefanucci, Elena Shemetova, and William B. Thompson
(Vanderbilt University, USA; University of Utah, USA)
The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no-avatar, a first-person avatar arm and hand, or a first-person full body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism-exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration.

Repeat after Me: Using Mixed Reality Humans to Influence Best Communication Practices
Andrew Cordar, Adam Wendling, Casey White, Samsun Lampotang, and Benjamin Lok
(University of Florida, USA; University of Virginia, USA)
In the past few years, advances have been made in how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We created a mixed reality medical team training exercise designed to impact communication behaviors that are critical for patient safety. We recruited anesthesia residents to go through an operating room training exercise with MRHs to assess and influence residents' closed-loop communication behaviors during medication administration. We manipulated the behavior of the MRHs to determine whether the MRHs could influence the residents' closed-loop communication behavior. Our results showed that residents' closed-loop communication behaviors were influenced by MRHs. Additionally, we found a statistically significant difference between groups based on which MRH behavior residents observed. Because the MRHs significantly impacted how residents communicated in simulation, this work expands the boundaries of how VR can be used and demonstrates that MRHs could be used as tools to address complex communication dynamics in a team setting.

Motion Tracking and Capturing
Wed, Mar 22, 10:30 - 12:00, Ballroom A/B (Chair: Regis Kopper)

Optimizing Placement of Commodity Depth Cameras for Known 3D Dynamic Scene Capture
Rohan Chabra, Adrian Ilie, Nicholas Rewkowski, Young-Woon Cha, and Henry Fuchs
(University of North Carolina at Chapel Hill, USA)
Commodity depth sensors, such as the Microsoft Kinect®, have been widely used for the capture and reconstruction of the 3D structure of a room-sized dynamic scene. Camera placement and coverage during capture significantly impact the quality of the resulting reconstruction. In particular, dynamic occlusions and sensor interference have been shown to result in poor resolution and holes in the reconstruction results. This paper presents a novel algorithmic framework and an off-line optimization of depth sensor placements for a given 3D dynamic scene, simulated using virtual 3D models. We derive a fitness metric for a particular configuration of sensors by combining factors such as visibility and resolution of the entire dynamic scene along with probabilities of interference between sensors. We employ this fitness metric both in a greedy algorithm that determines the number of depth sensors needed to cover the scene, and in a simulated annealing algorithm that optimizes the placements of those sensors. We compare our algorithm’s optimized placements with manual sensor placements for a real dynamic scene. We present quantitative assessments using our fitness metric, as well as qualitative assessments to demonstrate that our algorithm not only enhances the resolution and total coverage of the reconstruction but also fills in voids by avoiding occlusions and sensor interference when compared with the reconstruction of the same scene using a manual sensor placement.
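
A minimal sketch of the greedy stage, assuming a scalar fitness function over sensor configurations (the actual metric combines visibility, resolution, and interference probabilities, and the resulting placements are then refined by simulated annealing):

    def greedy_placement(candidate_poses, fitness, target_fitness):
        """Repeatedly add the candidate depth-sensor pose that most improves the
        scene fitness until the target is reached or no candidate helps; the size
        of the returned set fixes the number of sensors handed to the annealing
        optimizer."""
        chosen = []
        remaining = list(candidate_poses)
        while fitness(chosen) < target_fitness and remaining:
            best = max(remaining, key=lambda pose: fitness(chosen + [pose]))
            if fitness(chosen + [best]) <= fitness(chosen):
                break  # no remaining candidate improves the configuration
            chosen.append(best)
            remaining.remove(best)
        return chosen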

Sweeping-Based Volumetric Calibration and Registration of Multiple RGBD-Sensors for 3D Capturing Systems
Stephan Beck and Bernd Froehlich
(Bauhaus-Universität Weimar, Germany)
The accurate calibration and registration of a set of color and depth (RGBD) sensors into a shared coordinate system is an essential requirement for 3D surround capturing systems. We present a method to calibrate multiple unsynchronized RGBD-sensors with high accuracy in a matter of minutes by sweeping a tracked checkerboard through the desired capturing space in front of each sensor. Through the sweeping process, a large number of robust correspondences between the depth and the color image as well as the 3D world positions can be automatically established. In order to obtain temporally synchronized correspondences between an RGBD-sensor’s data streams and the tracked target’s positions we apply an off-line optimization process based on error minimization and a coplanarity constraint. The correspondences are entered into a 3D look-up table which is used during runtime to transform depth and color information into the application’s world coordinate system. Our proposed method requires a manual effort of less than one minute per RGBD-sensor and achieves a high calibration accuracy with an average 3D error below 3.5 mm and an average texture reprojection error smaller than 1 pixel.

Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto
(Keio University, Japan)
We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can act as a representative of the user in the virtual environment. However, the synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time through an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes.

Systems and Applications
Wed, Mar 22, 13:30 - 15:00, Ballroom A/B (Chair: Pablo Figueroa)

Enhancements to VTK Enabling Scientific Visualization in Immersive Environments
Patrick O'Leary, Sankhesh Jhaveri, Aashish Chaudhary, William Sherman, Ken Martin, David Lonie, Eric Whiting, James Money, and Sandy McKenzie
(Kitware, USA; Indiana University, USA; Idaho National Laboratory, USA)
Modern scientific, engineering, and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis is one tool that provides insight into this data, it is scientific visualization that is tactically important for scientific discovery, product design, and data analysis. These benefits are impeded, however, when the scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can therefore greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications.

MagicToon: A 2D-to-3D Creative Cartoon Modeling System with Mobile AR
Lele Feng, Xubo Yang, and Shuangjiu Xiao
(Shanghai Jiao Tong University, China)
We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor for constructing more complicated AR scenes. The model creator automatically generates textured 3D cartoon models from 2D drawings and overlays them on the real world, bringing flat cartoon drawings to life. With our interactive model editor, the user can perform several optional operations on 3D models, such as copying and animating, in the AR context through the touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study show that our system is easier to use than traditional sketch-based modeling systems and gives more room for children's creativity than AR coloring books.

Posters

Poster Session A
Mon, Mar 20, 14:40 - 15:30, Ballroom A/B

Attention Guidance for Immersive Video Content in Head-Mounted Displays
Fabien Danieau, Antoine Guillo, and Renaud Doré
(Technicolor R&I, France; ENSAM, France)
Immersive videos allow users to freely explore 4π steradian scenes within head-mounted displays (HMDs), leading to a strong feeling of immersion. However, users may miss important elements of the narrative if they are not facing them. Hence, we propose four visual effects to guide the user’s attention. After an informal pilot study, the two most efficient effects were evaluated through a user study. Results show that our approach has potential, but it remains challenging to implicitly drive the user’s attention outside of the field of view.

A System for Creating Virtual Reality Content from Make-Believe Games
Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery, Pierre Heinish, Remi Ronfard, and Dominique Vaufreydaz
(Inria, France; LJK Grenoble, France; University of Grenoble, France; LIG, France; Grenoble INP, France; CNRS, France)
Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.

High-Definition Wireless Personal Area Tracking using AC Magnetic Field for Virtual Reality
Mohit Singh and Byunghoo Jung
(Purdue University, USA)
This paper presents an AC magnetic field based High-Definition Personal Area Tracking (PAT) system. A low-power transmitter antenna acts as a reference for three tracker modules. One module, attached to the Head Mount Display (HMD), tracks the position and orientation of user's head and the other two hand-held modules act as an interface device (like virtual hands) in Virtual Reality. This precise, low power, low latency, non-line-of-sight system provides an easy-to-use human-computer interface. The system achieves a precision of 1 mm in position with 0.1 degree in orientation and an accuracy of 20 cm in position at a distance of 2 m from the antenna. The transmitter and the receiver consume 5 W and 0.4 W of power, respectively, providing 140 updates/sec with 11 ms of latency.

HySAR: Hybrid Material Rendering by an Optical See-Through Head-Mounted Display with Spatial Augmented Reality Projection
Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, and Maki Sugimoto
(Keio University, Japan; Osaka University, Japan)
We propose a hybrid SAR concept combining a projector and an optical see-through head-mounted display (OST-HMD). Our hybrid SAR system utilizes the OST-HMD as an extra rendering layer that renders view-dependent material properties according to the viewer's viewpoint. Combined with the view-independent components created by a static projector, the viewer can see richer material content. Unlike conventional SAR systems, our system theoretically allows an unlimited number of viewers to see enhanced content in the same space while keeping the existing SAR experience. Furthermore, the system enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With a proof-of-concept system that consists of a projector and an OST-HMD, we qualitatively demonstrate that our system successfully creates hybrid rendering on a hemispherical object from five horizontal viewpoints. Our quantitative evaluation also shows that our system increases the dynamic range by 2.1 times and the maximum intensity by 1.9 times compared to an ordinary SAR system.
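
Conceptually, the split can be written as a decomposition of the radiance leaving a surface point x toward a particular viewing direction (an illustrative formulation, not the paper's exact model):

    L(\mathbf{x}, \omega_o) \approx \underbrace{L_{\text{proj}}(\mathbf{x})}_{\text{view-independent (e.g. diffuse), projected once for all viewers}} + \underbrace{L_{\text{HMD}}(\mathbf{x}, \omega_o)}_{\text{view-dependent (e.g. specular), rendered per viewer on the OST-HMD}}.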

Monocular Focus Estimation Method for a Freely-Orienting Eye using Purkinje-Sanson Images
Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Toshiyuki Amano, and Maki Sugimoto
(Keio University, Japan; Osaka University, Japan; Wakayama University, Japan)
We present a method for focal distance estimation of a freely-orienting eye using Purkinje-Sanson (PS) images, which are reflections of light on the inner structures of the eye. Using an infrared camera with a rigidly-fixed LED, our method creates an estimation model based on the 3D gaze and the distance between the reflections in the PS images that occur on the corneal surface and the anterior surface of the eye lens. The distance between these two reflections changes with focus, so we associate that information with the focal distance of a user. Unlike conventional methods that mainly rely on the 2D pupil size, which is sensitive to scene lighting, and on the fourth PS image, our method detects the third PS image, which is more representative of accommodation. Our feasibility study on a single user with a focal range from 15-45 cm shows that our method achieves mean and median absolute errors of 3.15 and 1.93 cm for a 10-degree viewing angle. The study also shows that our method is tolerant against environmental lighting changes.

Lean into It: Exploring Leaning-Based Motion Cueing Interfaces for Virtual Reality Movement
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke
(Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany)
We describe here a pilot user study comparing five different locomotion interfaces for virtual reality (VR) locomotion. We compared a standard non-motion cueing interface, Joystick, with four leaning-based seated motion-cueing interfaces: NaviChair, MuvMan, Head-Directed and Swivel Chair. The aim of this mixed methods study was to investigate the usability and user experience of each interface, in order to better understand relevant factors and guide the design of future ground-based VR locomotion interfaces. We asked participants to give talk-aloud feedback and simultaneously recorded their responses while they were performing a search task in VR. Afterwards, participants completed an online questionnaire. Although the Joystick was rated as more comfortable and precise than the other interfaces, the leaning-based interfaces showed a trend to provide more enjoyment and a greater sense of self-motion. There were also potential issues of using velocity-control for rotations in leaning-based interfaces when using HMDs instead of stationary displays. Developers need to focus on improving the controllability and perceived safety of these seated motion cueing interfaces.

Biomechanical Analysis of (Non-)Isometric Virtual Walking of Older Adults
Omar Janeh, Eike Langbehn, Frank Steinicke, Gerd Bruder, Alessandro Gulberti, and Monika Poetter-Nerger
(University of Hamburg, Germany; University of Central Florida, USA; University Medical Center Hamburg-Eppendorf, Germany)
Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the virtual environment (VE) on the walking biomechanics of older adults. Three primary domains (pace, base of support, and phase) of spatio-temporal and temporo-phasic parameters were used to evaluate gait performance. Older adults walking in the VE with the isometric mapping showed pace and phasic parameters similar to the corresponding parameters in the real world, but we found significant differences in base of support between walking in the VE and in the real world. For non-isometric mappings we found an increased divergence of gait parameters in all domains, correlating with the up- or down-scaled velocity of visual self-motion feedback.

Robust Optical See-Through Head-Mounted Display Calibration: Taking Anisotropic Nature of User Interaction Errors into Account
Ehsan Azimi, Long Qian, Peter Kazanzides, and Nassir Navab
(Johns Hopkins University, USA; TU Munich, Germany)
Uncertainty in the measurement of point correspondences negatively affects the accuracy and precision of head-mounted display (HMD) calibration. For video see-through HMDs, such errors depend on the sensors and pose estimation; for optical see-through systems, they additionally depend on the user's head motion and hand-eye coordination. Therefore, the distribution of alignment errors for optical see-through calibration is not isotropic, and one can estimate its process-specific or user-specific distribution based on the interaction requirements of a given calibration process and the user's measurable head motion and hand-eye coordination characteristics. Current calibration methods, however, mostly utilize the DLT method, which minimizes Euclidean distances for HMD projection matrix estimation, disregarding the anisotropy of the alignment errors. We show how to utilize the error covariance in order to take the anisotropic nature of the error distribution into account. The main hypothesis of this study is that using the Mahalanobis distance within the nonlinear optimization can improve the accuracy of the HMD calibration. To cover a wide range of possible realistic scenarios, several simulations were performed with variation in the extent of the anisotropy in the input data along with other parameters. The simulation results indicate that our new method outperforms the standard DLT method both in accuracy and precision, and is more robust against user alignment errors. To the best of our knowledge, this is the first time that anisotropic noise has been accommodated in optical see-through HMD calibration.
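
In simplified form (notation assumed here for illustration), the difference between the two objectives is which norm weighs the 2D reprojection residuals r_i = x_i - \pi(P X_i) of the user's alignments:

    \text{DLT (isotropic):} \quad \min_{P} \sum_i \lVert r_i \rVert^2
    \qquad
    \text{proposed (anisotropic):} \quad \min_{P} \sum_i r_i^{\top} \Sigma_i^{-1} r_i,

where \Sigma_i is the covariance of the alignment error for the i-th correspondence, estimated from the calibration procedure and the user's head-motion and hand-eye coordination characteristics.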

6 Degrees-of-Freedom Manipulation with a Transparent, Tangible Object in World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper
(Duke University, USA)
We propose Specimen Box, an interaction technique that allows users of world-fixed displays (such as CAVEs) to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it existed inside the clear physical box. We conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that performance was significantly faster with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box.

Proposal of a Spectral Random Dots Marker using Local Feature for Posture Estimation
Norimasa Kobori, Daisuke Deguchi, Ichiro Ide, and Hiroshi Murase
(Toyota, Japan; Nagoya University, Japan)
We propose a novel marker for robot grasping tasks with the following three properties: (i) it is easy to find in a cluttered background, (ii) its posture can be computed, and (iii) its size is compact. The proposed marker is composed of a random dot pattern and uses keypoint detection and scale estimation by Spectral SIFT for dot detection and data decoding. The data is encoded in the scale of the dots, and the same dots in the marker serve both marker detection and data decoding. As a result, the proposed marker can be compact. We confirmed the effectiveness of the proposed marker through experiments.

All Are Welcome: Using VR Ethnography to Explore Harassment Behavior in Immersive Social Virtual Reality
Ketaki Shriram and Raz Schwartz
(Stanford University, USA; Oculus, USA)
The growing ubiquity of VR headsets has given rise to questions around harassment in social virtual reality. This paper presents two studies. In the first, a pilot ethnographic study, users were interviewed in immersive social virtual reality about their experiences and behaviors in these spaces. Harassment was occasional, and those in female avatars reported more harassment than those in male avatars. In Study Two, a quantitative survey was conducted to validate ethnographic results. A large percentage of users witness harassment in virtual reality. These studies provide mixed methods insight of user demographics and behaviors in the relatively new social VR space.

Study of Interaction Fidelity for Two Viewpoint Changing Techniques in a Virtual Biopsy Trainer
Aylen Ricca, Amine Chellali, and Samir Otmane
(University of Évry Val d'Essonne, France)
Virtual Reality simulators are increasingly used for training novice surgeons. However, there is currently a lack of guidelines for achieving interaction fidelity for these systems. In this paper, we present the design of two navigation techniques for a needle insertion trainer. The two techniques were analyzed using a state-of-the-art fidelity framework to determine their level of interaction fidelity. A user study comparing both techniques suggests that the higher fidelity technique is more suited as a navigation technique for the needle insertion virtual trainer.

Conditions Influencing Perception of Wind Direction by the Head
Takuya Nakano and Yasuyuki Yanagida
(Meijo University, Japan)
Recently, several virtual reality (VR) systems using wind have been built to enhance the user’s sense of presence. If wind is used, users need not use additional devices, and some studies have concluded that presenting wind and video simultaneously improves this sense of presence. In these studies, however, wind sources were sparsely arranged; with such an arrangement, it is unclear whether an accurate environment was reproduced. A number of variables, including gender, age, and the facial hit rate, may affect the perception of wind direction. In this study, we examine the effect of these variables on the perception of wind direction by the head.

The AR-Rift 2 Prototype
Anthony Steed, Yonathan Widya Adipradana, and Sebastian Friston
(University College London, UK)
Video see-through augmented reality (VSAR) is an effective way of combining real and virtual scenes for head-mounted human-computer interfaces. In this paper we present the AR-Rift 2 system, a cost-effective prototype VSAR system based around the Oculus Rift CV1 head-mounted display (HMD). Current consumer camera systems, however, typically have latencies far higher than the rendering pipeline of current consumer HMDs, as well as a lower update rate than the display. We therefore measure the latency of the video and implement a simple image-warping method to ensure smooth movement of the video.

Motor Adaptation in Response to Scaling and Diminished Feedback in Virtual Reality
David M. Krum, Thai Phan, and Sin-Hwa Kang
(University of Southern California, USA)
As interaction techniques involving scaling of motor space in virtual reality are becoming more prevalent, it is important to understand how individuals adapt to such scalings and how they re-adapt back to non-scaled norms. This preliminary work examines how individuals, performing a targeted ball throwing task, adapted to addition and removal of a translational scaling of the ball's forward flight. This was examined under various conditions: flight of the ball shown with no delay, hidden flight of the ball with no delay, and hidden flight with a 2 second delay. Hiding the ball’s flight, as well as the delay, created disruptions in the ability of the participants to perform the task and adapt to new scaling conditions.

Separation of Reflective and Fluorescent Components using the Color Mixing Matrix
Isao Shimana and Toshiyuki Amano
(Wakayama University, Japan)
In the field of SAR, projector-camera systems have been well studied; their radiometric model can be easily described by a color mixing matrix, and many SAR applications have been proposed and created using this model. However, this model accounts for the reflectance component but not for the fluorescence component. In this paper, we propose the RKS projector-camera response model for separating the reflectance and fluorescence components of the color mixing matrix, and describe how to decompose them.
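
For context, the standard projector-camera radiometric model referred to above relates the projected color to the captured color through a per-pixel color mixing matrix (a common textbook formulation; the RKS extension that separates reflectance from fluorescence is not reproduced here):

    \mathbf{c} = V \mathbf{p} + \mathbf{o},

where \mathbf{p} is the projector RGB input, \mathbf{c} the camera RGB measurement, V the 3x3 color mixing matrix encoding the surface reflectance and device color coupling, and \mathbf{o} the offset due to ambient light and the projector's black level.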

Group Immersive Education with Digital Fulldome Planetariums
Ka Chun Yu, Kamran Sahami, Victoria Sahami, Larry Sessions, and Grant Denn
(Denver Museum of Nature & Science, USA; Metropolitan State University of Denver, USA)
Although fulldome video digital theaters evolved from traditional planetariums, they are more akin to virtual reality (VR) theaters that create large-scale, group immersive experiences. In order to help understand how immersion and wide fields-of-view (FOV) impact learning, we studied the use of visualizations on topics that do and do not require spatial understanding in astronomy classes. We find a significant difference between students who viewed visualizations in the dome versus those that saw non-immersive content in their classrooms, with the former showing the greatest retention. Our results suggest that immersive visuals help free up cognitive resources that can be used to build mental models requiring spatial understanding, and the physical display size combined with the wide FOV may result in greater attention. Although fulldome is a complementary medium to traditional VR, our results have implications for future head-mounted displays.

MR Sand Table: Mixing Real-Time Video Streaming in Physical Models
Zhong Zhou, Zhiyi Bian, and Zheng Zhuo
(Beihang University, China)
A novel prototype MR (Mixed Reality) Sand Table is presented in this paper, which fuses multiple real-time video streams into a physically unified view. The main processing steps are geometric calibration and alignment, image blending, and the final projection. First, we propose a two-step MR alignment scheme which estimates the transform matrix between the input video stream and the sand table for coarse alignment, and deforms the input frame using moving least squares for accurate alignment. To overcome the video border distinction problem, we apply border-adaptive image stitching with brightness diffusion to merge the overlapping area. With the projection, the video area can be mixed into the sand table in real time to provide a live physical mixed reality model. We built a prototype to demonstrate the effectiveness of the proposed method. This design could also easily be extended to a large size with the help of multiple projectors. The system proposed in this paper supports multi-user interaction in a broad range of applications such as surveillance, demonstration, action preview, and discussion assistance.

Mobile Collaborative Mixed Reality for Supporting Scientific Inquiry and Visualization of Earth Science Data
Suya You and Charles K. Thompson
(University of Southern California, USA; Jet Propulsion Laboratory, USA)
This work seeks to apply emerging virtual and mixed reality techniques to the visual exploration and visualization of earth science data. A novel system is developed to facilitate collaborative mixed reality visualization, enabling both in-situ and off-site users to simultaneously interact with and visualize science data within a mixed reality realm. We implement the prototype system in the context of visualizing earth terrain data, and report on our current prototype effort and preliminary results.

Augmenting Creative Design Thinking using Networks of Concepts
Georgi V. Georgiev, Kaori Yamada, Toshiharu Taura, Vassilis Kostakos, Matti Pouke, Sylvia Tzvetanova Yung, and Timo Ojala
(University of Oulu, Finland; Kobe University, Japan; University of Bedfordshire, UK)
Here we propose an interactive system to augment creative design thinking using networks of concepts in a virtual reality environment. We discuss how to augment the human capacity to be creative through dynamic suggestions providing new and original ideas, based on specific semantic network characteristics. We outline directions to explore the structures of the concept network and their connection to creative concept generation. It is expected that augmented creative thinking will allow the user to have more original ideas and thus be more innovative.

RIDE: Region-Induced Data Enhancement Method for Dynamic Calibration of Optical See-Through Head-Mounted Displays
Zhenliang Zhang, Dongdong Weng, Yue Liu, Yongtian Wang, and Xinjun Zhao
(Beijing Institute of Technology, China; Science and Technology on Complex Land Systems Simulation Laboratory, China)
The most commonly used single point active alignment method (SPAAM) is based on a static pinhole camera model, which assumes that both the eye and the HMD are fixed. This assumption limits calibration precision. In this work, we propose a dynamic pinhole camera model that accounts for the fact that the human eye undergoes noticeable displacement over the course of the calibration process. Based on this camera model, we propose a new calibration data acquisition method, called region-induced data enhancement (RIDE), to revise the calibration data. Experimental results show that the proposed dynamic model performs better than the traditional static model in actual calibration.

Turn Physically Curved Paths into Virtual Curved Paths
Keigo Matsumoto, Takuji Narumi, Yuki Ban, Tomohiro Tanikawa, and Michitaka Hirose
(University of Tokyo, Japan)
Redirected walking allows users to explore a large virtual environment despite a limited room size. Previous work presented users with a straight path in the virtual environment while they walked on a curved path in reality. We extend a previous technique to present users with various curved paths in the virtual environment while they walk on a particular curved path or a straight path, with or without haptics. Furthermore, we propose a novel estimation methodology to quantify the path that users believe they walked in reality. The data from our experiment show that users perceive walking along various curved paths in VR the same as in the one-to-one mapping condition.

Effects of Using HMDs on Visual Fatigue in Virtual Environments
Jie Guo, Dongdong Weng, Henry Been-Lirn Duh, Yue Liu, and Yongtian Wang
(Beijing Institute of Technology, China; La Trobe University, Australia)
Several negative effects can cause discomfort when using virtual reality systems. In this paper, we investigated visual fatigue when wearing head-mounted displays (HMDs) and compared the results with those from smartphones. Forty subjects were recruited and divided into two groups. A visual fatigue scale was used to assess the subjects' performance. The results indicated that visual fatigue caused by the conflict between focal distance and vergence distance was less severe than visual fatigue caused by long-term focus without accommodation.

Upright Adjustment of 360 Spherical Panoramas
Jinwoong Jung, Joon-Young Lee, Byungmoon Kim, and Seungyong Lee
(POSTECH, South Korea; Adobe Research, USA)
With the recent advent of 360° cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, aligning the scene orientation to the image axes is important for providing a comfortable and pleasant viewing experience on VR headsets and traditional displays. This paper presents an automatic framework for upright adjustment of 360° spherical panorama images without any prior information, such as depth or gyroscope data. We adopt the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second.

The Effect of Lip and Arm Synchronization on Embodiment: A Pilot Study
Tara Collingwoode-Williams, Marco Gillies, Cade McCall, and Xueni Pan
(University of London, UK; University of York, UK)
We are interested in the effect of lip and arm synchronization on body ownership in VR (the illusion that users own a virtual body). Participants were invited to give a presentation in an HMD while seeing, in a virtual mirror, a gender-matched avatar that copied their arm and lip movements in synchronous and asynchronous conditions. We measured participants' reactions with questionnaires administered verbally after their presentation while still immersed in VR. The results suggested an interaction effect of arm and lip synchronization, with higher reported levels of embodiment in the congruent compared to the incongruent conditions. Further study is needed to confirm whether the same interaction effect can be captured with objective measurements.

Virtual Reality Based Training: Evaluation of User Performance by Capturing Upper Limb Motion
Ehsan Zahedi, Hadi Rahmat-Khah, Javad Dargahi, and Mehrdad Zadeh
(Concordia University, Canada; Kettering University, USA)
This paper presents the results of a two-fold study on incorporating upper limb movement into the measurement of user performance in a virtual reality (VR) based training simulation. VR simulators have been developed to assess and improve minimally invasive surgery (MIS) skills. While these simulators are currently in use, most skill evaluation methods are limited to measuring and computing performance metrics based on the movement of the MIS tool tip. In this study, a VR simulator is developed to measure and analyze the movements of upper limb joints. The movement analysis from the first experiment suggests that kinematic data of the upper limb can be used to discriminate an expert surgeon from a novice trainee. The results from the second experiment show that the motion of the non-dominant hand has a significant effect on the performance of the dominant hand.

Mechanism of Integrating Force and Vibrotactile Cues for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne
(University of Calgary, Canada; LE2I, France)
Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies have shown that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). However, little effort has focused on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE's suitability for integrating these cues. Within a VE, human users performed the 3D interaction task of navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, as well as their combinations in collocated and dislocated settings. Task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the applicability of tactile cues for sensing 3D surfaces and sets a baseline for using MLE. Task performance under the collocated setting indicated a degree of combination of the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs.
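For reference, MLE cue integration weights each cue by its reliability (the inverse of its variance); the numbers in the following sketch are invented and only illustrate the computation the study tests against.

import numpy as np

def mle_integrate(estimates, variances):
    """Combine independent cue estimates by inverse-variance weighting (MLE)."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    combined = np.dot(w, estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Illustrative example: a force cue and a vibrotactile cue estimating the
# same quantity (e.g., perceived distance to the transmission line).
force_est, force_var = 1.20, 0.09
vibro_est, vibro_var = 1.05, 0.04
est, var = mle_integrate([force_est, vibro_est], [force_var, vibro_var])
print(f"integrated estimate: {est:.3f}, variance: {var:.3f}")
# The integrated variance is smaller than either cue's variance, which is
# the signature behavior predicted by MLE integration.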

Socially Immersive Avatar-Based Communication
Daniel Roth, Kristoffer Waldow, Marc Erich Latoschik, Arnulph Fuhrmann, and Gary Bente
(University of Würzburg, Germany; University of Cologne, Germany; TH Köln, Germany; Michigan State University, USA)
In this paper, we present SIAM-C, an avatar-mediated communication platform for studying socially immersive interaction in virtual environments. The proposed system is capable of tracking, transmitting, and representing body motion, facial expressions, and voice via virtual avatars, thereby preserving the transmission of human behaviors available in real-life social interactions. Users are immersed using active stereoscopic rendering projected onto a life-size projection plane, following the concept of “fish tank” virtual reality (VR). Our prototype connects two separate rooms and allows for socially immersive avatar-mediated communication in VR.

A Comparison of Methods for Navigation and Wayfinding in Large Virtual Environments using Walking
Richard A. Paris, Timothy P. McNamara, John J. Rieser, and Bobby Bodenheimer
(Vanderbilt University, USA)
Interesting virtual environments that permit free exploration are rarely small. A number of techniques have been developed to allow people to walk in virtual spaces larger than permitted by the physical extent of the virtual reality hardware, and in this paper we compare three such methods in terms of how they affect presence and spatial awareness. In our first psychophysical study, we compared two methods of reorientation and one method of redirected walking with respect to subjects' presence and spatial memory while navigating a pre-specified path. Our results suggested no difference between the two methods of reorientation but inferior performance of the redirected walking method. We further compared the two reorientation methods in a second psychophysical study involving free exploration and navigation in a large virtual environment. Our results provide criteria by which a locomotion method for navigating large virtual environments may be selected.

Immersive Data Interaction for Planetary and Earth Sciences
Victor Ardulov and Oleg Pariser
(Jet Propulsion Laboratory, USA)
The Multimission Instrument Processing Laboratory (MIPL) at the Jet Propulsion Laboratory (JPL) processes and analyzes orbital and in-situ instrument data for both planetary and Earth science missions. Presenting 3D data in a meaningful and effective manner is of the utmost importance to furthering scientific research and conducting engineering operations. Visualizing data in an intuitive way using Virtual Reality (VR) allows users to immersively interact with their data in the respective environments. This paper examines several use cases, across various missions, instruments, and environments, demonstrating the strengths and insights that VR has to offer scientists.

Bodiless Embodiment: A Descriptive Survey of Avatar Bodily Coherence in First-Wave Consumer VR Applications
Dooley Murphy
(University of Copenhagen, Denmark)
This preliminary study surveys whether/which avatar body parts are visible in first-wave consumer virtual reality (VR) applications for the HTC Vive (n = 200). A simple coding schema for assessing avatar bodily coherence (ABC) is piloted and evaluated. Results provide a snapshot of ABC in popular high-end VR applications in Q3 2016. It is reported that 86.5% of sampled items feature fully invisible avatars, 9% depict hands only, and 4.5% feature a head, torso, or legs, but with some degree of bodily incoherence. Findings suggest that users may experience a sense of ownership and/or agency over their virtual actions even in the absence of visible avatar body parts. This informs research questions and hypotheses for future experimental enquiry into how bodily representation may interplay with user cognition, perceived virtual embodiment (body ownership illusion and sense of agency; body schema–image relations), and spatial telepresence. For instance: To what extent/under what conditions do the users of consumer VR systems demonstrate a sense of bodily vulnerability (a drive for bodily preservation) when no virtual body is present/visible?

Curvature Gains in Redirected Walking: A Closer Look
Malte Nogalski and Wolfgang Fohl
(University of Applied Sciences Hamburg, Germany)
This paper summarizes the detailed paths of participants in redirected walking (RDW) curvature gain experiments. The experiments were carried out in a wave field synthesis (WFS) system of 5x6 meters. Some users were blindfolded and had to control their walking by acoustic cues only; others wore an Oculus Rift DK2, which additionally presented a virtual scenery. A marker at the participant's head allowed us to record the paths with our high-precision tracking system. The naive assumption of RDW with curvature gains would be that test persons walk on the circumference of a circle, but the observed walking patterns were much more complex. Test persons showed very individual walking patterns while exploring the virtual environment. Many of these patterns may be explained as a sequence: 1. walk a few steps toward the assumed target position, 2. check for deviations, 3. adjust the path toward the new assumed target position, which results in different patterns of varying path curvature. The consequences for the application of RDW techniques are: curvature gain tries to guide users on a circular arc, the "ideal path", whereas the real paths lie mostly outside the circle of the ideal path. The deviations in the audio-only case are much larger than in the audio-visual case. Measured curvature gain thresholds therefore systematically underestimate the required walking space, as they do not account for the extra space needed for walking outside the circular path.

Catching a Real Ball in Virtual Reality
Matthew K. X. J. Pan and Günter Niemeyer
(Disney Research, USA)
We present a system enabling users to accurately catch a real ball while immersed in a virtual reality environment. We examine three visualizations: rendering a matching virtual ball, the predicted trajectory of the ball, and a target catching point lying on the predicted trajectory. In our demonstration system, we track the projectile motion of a ball as it is tossed between users. Using Unscented Kalman Filtering, we generate predictive estimates of the ball's motion as it approaches the catcher. The predictive assistance visualizations effectively extend the user's senses but can also alter the user's catching strategy.
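A heavily simplified sketch of the predictive idea, assuming plain ballistic motion and ignoring the Unscented Kalman Filter and measurement noise, is to extrapolate the tracked ball state and report where it crosses a catch plane; all values below are made up.

import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, y is up

def predict_catch_point(position, velocity, catch_height, dt=0.005, horizon=2.0):
    """Extrapolate a ballistic trajectory and return the first point at catch_height.

    A constant-acceleration step-through stands in for the filter prediction;
    position and velocity would come from the tracking system.
    """
    p, v = np.asarray(position, float), np.asarray(velocity, float)
    t = 0.0
    while t < horizon:
        p_next = p + v * dt + 0.5 * GRAVITY * dt * dt
        v = v + GRAVITY * dt
        if p[1] >= catch_height > p_next[1]:   # crossed the catch plane going down
            return p_next, t + dt
        p, t = p_next, t + dt
    return None, None

# Illustrative toss toward the catcher.
point, time_to_catch = predict_catch_point([0.0, 1.5, 0.0], [2.0, 3.0, 0.5], catch_height=1.2)
print(point, time_to_catch)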

Towards Usable Underwater Virtual Reality Systems
Raphael Costa, Rongkai Guo, and John Quarles
(University of Texas at San Antonio, USA; Kennesaw State University, USA)
The objective of this research is to compare the effectiveness of different tracking devices underwater. There have been few works in aquatic virtual reality (VR) - i.e., VR systems that can be used in a real underwater environment. Moreover, the works that have been done have noted limitations on tracking accuracy. Our initial test results suggest that inertial measurement units work well underwater for orientation tracking but a different approach is needed for position tracking. Towards this goal, we have waterproofed and evaluated several consumer tracking systems intended for gaming to determine the most effective approaches. First, we informally tested infrared systems and fiducial marker based systems, which demonstrated significant limitations of optical approaches. Next, we quantitatively compared inertial measurement units (IMU) and a magnetic tracking system both above water (as a baseline) and underwater. By comparing the devices’ rotation data, we have discovered that the magnetic tracking system implemented by the Razer Hydra is more accurate underwater as compared to a phone-based IMU. This suggests that magnetic tracking systems should be further explored for underwater VR applications.

Development and Evaluation of a Hands-Free Motion Cueing Interface for Ground-Based Navigation
Jacob Freiberg, Alexandra Kitson, and Bernhard E. Riecke
(Simon Fraser University, Canada)
With affordable high performance VR displays becoming commonplace, users are becoming increasingly aware of the need for well-designed locomotion interfaces that support these displays. After considering the needs of users, we quantitatively evaluated an embodied locomotion interface called the Navichair according to usability needs and fulfillment of system requirements. Specifically, we investigated influences of locomotion interfaces (joystick vs. an embodied motion cueing chair) and display type (HMD vs. projection screen) on a spatial updating pointing task. Our findings indicate that our embodied VR locomotion interface provided users with an immersive experience of a space without requiring a significant investment of set up time. Design lessons and future design goals of our interface are discussed.

Preliminary Exploration: Perceived Egocentric Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays
Alex Peer and Kevin Ponto
(University of Wisconsin-Madison, USA)
Distance misperception (sometimes, distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions: Can misperceptions be measured within the small tracking volumes of consumer-grade technology? Are measures practical within this space directly comparable, or are some preferable to others? Do contemporary displays even induce distance misperceptions? This work explores these questions.

Diminished Reality for Acceleration Stimulus: Motion Sickness Reduction with Vection for Autonomous Driving
Taishi Sawabe, Masayuki Kanbara, and Norihiro Hagita
(NAIST, Japan; ATR, Japan)
This paper presents an approach for reducing motion sickness while riding in an autonomous vehicle. It proposes a Diminished Reality (DR) method for the acceleration stimulus to reduce motion sickness in the autonomous vehicle. One of the main causes of motion sickness is repeated acceleration. In order to diminish the acceleration stimulus in the autonomous vehicle, a vection illusion is used to induce the user to make a preliminary movement against the real acceleration. A Wii Balance Board is used to measure participants' center-of-gravity movement and verify the effectiveness of the vection-based method. The experimental results from 9 participants show that the proposed method using vection can reduce the acceleration stimulus compared with the conventional method.

Interaction with WebVR 360° Video Player: Comparing Three Interaction Paradigms
Toni Pakkanen, Jaakko Hakulinen, Tero Jokela, Ismo Rakkolainen, Jari Kangas, Petri Piippo, Roope Raisamo, and Marja Salmimaa
(University of Tampere, Finland; Nokia, Finland)
Immersive 360° video needs new ways of interaction. We compared three different interaction methods to find out which one of them is the most applicable for controlling 360° video playback. The compared methods were: remote control, pointing with head orientation, and hand gestures. A WebVR-based 360° video player was built for the experiment.

Comparing VR and Non-VR Driving Simulations: An Experimental User Study
Florian Weidner, Anne Hoesch, Sandra Poeschl, and Wolfgang Broll
(TU Ilmenau, Germany)
Up to now, most driving simulators have used either small monitors or large immersive projection setups such as 2D/3D screens or a CAVE. Recent improvements in VR HMDs have led to their increased use in driving simulation. However, the influence and comparability of the various VR and non-VR displays have hardly been investigated.
We present results of a user study investigating the differing influence of non-VR (2D, stereoscopic 3D) and VR (HMD) displays on physiological responses, simulator sickness, and driving performance within a single driving simulator. In the study, 94 participants performed the Lane Change Task.
Results indicate that a VR HMD leads to data similar to stereoscopic 3D or 2D screens. We observed no significant difference regarding physiological responses or lane change performance. However, we measured significantly increased simulator sickness in the VR HMD condition compared to stereoscopic 3D.

Evaluation of Airflow Effect on a VR Walk
Masato Kurosawa, Ken Ito, Yasushi Ikei, Koichi Hirota, and Michiteru Kitazaki
(Tokyo Metropolitan University, Japan; University of Electro-Communications, Japan; Toyohashi University of Technology, Japan)
The present study investigates the augmentation effect of airflow on the sensation of a virtual reality walk. The intensity of the cutaneous sensation evoked by airflow during a real and a virtual walk was measured. The airflow stimulus was presented to seated participants together with passive vestibular motion and visual presentation. The results suggest that the sensation of walking was strongly increased by adding the airflow stimulus to the vestibular and optic presentations. The cutaneous sensation of airflow was perceived as stronger by the seated participant than during a real walk, for both single and combined stimuli. Accordingly, the airflow speed judged equivalent to a real walk was lower for the seated participant than the actual airflow speed during the real walk.

The Impact of Transitions on User Experience in Virtual Reality
Liang Men, Nick Bryan-Kinns, Amelia Shivani Hassard, and Zixiang Ma
(Queen Mary University of London, UK)
In recent years, Virtual Reality (VR) applications have become widely available. An increase in popular interest raises questions about the use of the new medium for communication. While there is a wide variety of literature regarding scene transitions in films, novels and computer games, transitions in VR are not yet widely understood. As a medium that requires a high level of immersion, transitions are a desirable tool. This poster delineates an experiment studying the impact of transitions on user experience of presence in VR.

Coherence Changes Gaze Behavior in Virtual Human Interactions
Richard Skarbez, Gregory F. Welch, Frederick P. Brooks Jr., and Mary C. Whitton
(University of North Carolina at Chapel Hill, USA; University of Central Florida, USA)
We discuss the design and results of an experiment investigating Plausibility Illusion in virtual human (VH) interactions, in particular, the coherence of conversation with a VH. This experiment was performed in combination with another experiment evaluating two display technologies. As that aspect of the study is not relevant to this poster, it will be mentioned only in the Materials section. Participants who interacted with a low-coherence VH looked around the room markedly more than participants interacting with a high-coherence VH, demonstrating that the level of coherence of VHs can have a detectable effect on user behavior and that head and gaze behavior can be used to evaluate the quality of a VH interaction.

Asymmetric Telecollaboration in Virtual Reality
Thibault Porssut and Jean-Rémy Chardonnet
(EPFL, Switzerland; LE2I, France)
We present a first study in which we combine two asymmetric virtual reality systems for telecollaboration purposes: a CAVE system and a head-mounted display (HMD), using a server-client architecture. Experiments on a time-limited puzzle game, performed alone and in collaboration, show that combining asymmetric systems reduces cognitive load. Moreover, participants reported preferring to work in collaboration and were more efficient when collaborating. These results provide insights into combining several low-cost HMDs with a single expensive CAVE.

An Immersive Approach to Visualizing Perceptual Disturbances
Grace M. Rodriguez, Marvis Cruz, Andrew Solis, and Brian C. McCann
(University of Puerto Rico, Puerto Rico; University of Florida, USA; University of Texas at Austin, USA)
Through their experience with the ICERT REU program at the Texas Advanced Computing Center (TACC), two undergraduate students from the University of Puerto Rico and the University of Florida have initiated a collaboration between their home institutions and TACC exploring the possibility of using immersion to simulate perceptual disturbances. Perceptual disturbances are subjective in nature, and difficult to communicate verbally. Often caretakers or those closest to sufferers have difficulty understanding the nature of their suffering. Immersion provides an exciting opportunity to directly communicate percepts with clinicians and loved ones. Here, we present a prototype environment meant to simulate some of the perceptual disturbances associated with seizures and epilepsy. Following further validation of our approach, we hope to promote awareness and empathy for these often jarring phenomena.

Corrective Feedback for Depth Perception in CAVE-Like Systems
Adrian K. T. Ng, Leith K. Y. Chan, and Henry Y. K. Lau
(University of Hong Kong, China)
Perceived distance in an immersive virtual reality system is generally underestimated relative to the actual distance. Approaches have been found to provide users with better dimensional perception. One method used with head-mounted displays is interaction by walking with visual feedback, but this is not suitable for a CAVE-like system such as the imseCAVE, which has confined space for walking. A verbal corrective feedback mechanism is therefore proposed. The results show that estimation accuracy generally improves after eight feedback trials, although some estimates become overestimations. One possible explanation is the need for more verbal feedback trials. Further research on top-down approaches for improving depth perception is suggested.

Measurement of 3D-Velocity by High-Frame-Rate Optical Mouse Sensors to Extrapolate 3D Position Captured by a Low-Frame-Rate Stereo Camera
Itsuo Kumazawa, Toshihiro Kai, Yoshikazu Onuki, and Shunsuke Ono
(Tokyo Institute of Technology, Japan)
The frame rate of existing stereo cameras is not sufficient to track quick hand or finger actions, and finding correspondences between stereo images to compute distance is computationally expensive. Recently commercialized 3D position sensors, such as TOF cameras or the Leap Motion, need strong illumination to ensure sufficient optical energy for high-frame-rate sensing. To overcome these problems, this paper proposes using a pair of optical mouse sensors as a stereo image sensor to measure 3D velocity and using that velocity to extrapolate the 3D position measured by a low-frame-rate stereo camera. It is shown that quick hand actions can be tracked under ordinary indoor lighting conditions. As the 2D velocities are computed inside the optical mouse sensors, computation and communication costs are drastically reduced.
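The extrapolation idea itself is simple and can be sketched as follows, with assumed sensor rates and placeholder values: hold the last low-rate stereo position fix and advance it with the high-rate velocity estimate until the next fix arrives.

import numpy as np

class PositionExtrapolator:
    """Fuse low-rate 3D position fixes with high-rate 3D velocity estimates."""

    def __init__(self):
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)

    def on_stereo_fix(self, position):
        # Low-frame-rate stereo camera (e.g., ~30 Hz): reset the absolute position.
        self.position = np.asarray(position, dtype=float)

    def on_velocity_sample(self, velocity, dt):
        # High-frame-rate optical-mouse-sensor pair (e.g., ~1 kHz):
        # integrate velocity to extrapolate between stereo fixes.
        self.velocity = np.asarray(velocity, dtype=float)
        self.position = self.position + self.velocity * dt
        return self.position

tracker = PositionExtrapolator()
tracker.on_stereo_fix([0.10, 0.20, 0.50])
print(tracker.on_velocity_sample([0.5, 0.0, -0.2], dt=0.001))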

Using Augmented Reality to Improve Dismounted Operators' Situation Awareness
William Losina Brandão and Márcio Sarroglia Pinho
(PUCRS, Brazil)
Whether in the military, law enforcement, or private security, dismounted operators deal with a large amount of volatile information that may or may not be relevant depending on a variety of factors. In this paper we draft some ideas on the building blocks of an augmented reality system aimed at improving the situational awareness of dismounted operators by filtering, organizing, and displaying this information in a way that reduces the strain on the operator.

Gauntlet: Travel Technique for Immersive Environments using Non-dominant Hand
Mathew Tomberlin, Liudmila Tahai, and Krzysztof Pietroszek
(California State University at Monterey Bay, USA; University of Waterloo, Canada)
We present Gauntlet, a travel technique for immersive environments that uses non-dominant hand tracking and a fist gesture to translate and rotate the viewport. The technique allows for simultaneous use of the dominant hand for other spatial input tasks. Applications of Gauntlet include FPS games, and other application domains where navigation should be performed together with other tasks. We release the technique along with an example application, a VR horror game, as an open source project.

Peers at Work: Economic Real-Effort Experiments in the Presence of Virtual Co-workers
Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, and Torsten W. Kuhlen
(RWTH Aachen University, Germany; JARA-HPC, Germany)
Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom? To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments.

Efficient Sound Synthesis for Natural Scenes
Kai Wang, Haonan Cheng, and Shiguang Liu
(Tianjin University, China)
This paper presents a novel framework to generate the sound of outdoor natural scenes, such as waterfalls and oceans. Our method first simulates the liquid with a grid-based method. Then, based on the movement of the liquid, we generate seed particles that represent bubbles, foam, or splashes. Next, we assign each seed particle a radius using a new radius distribution model. By calculating the bubbles' pressure waves, we generate the sound. Experiments demonstrate that our framework can efficiently synthesize sounds for natural scenes.
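As a rough illustration of the radius-to-sound step, a common simplification in bubble-based sound synthesis (not necessarily the authors' exact formulation) renders each bubble as a damped sinusoid at its Minnaert resonance frequency, which depends on the assigned radius; the radius distribution and damping below are placeholders.

import numpy as np

def bubble_sound(radius_m, duration=0.08, sample_rate=44100, damping=120.0):
    """Synthesize one bubble as a damped sine at its Minnaert resonance frequency."""
    gamma, p0, rho = 1.4, 101325.0, 998.0            # air, ambient pressure, water
    f0 = np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius_m)
    t = np.arange(int(duration * sample_rate)) / sample_rate
    return np.exp(-damping * t) * np.sin(2.0 * np.pi * f0 * t)

# Mix a handful of seed particles with radii drawn from a simple distribution
# (a stand-in for the paper's own radius distribution model).
rng = np.random.default_rng(0)
radii = rng.uniform(0.5e-3, 5e-3, size=50)           # 0.5 mm to 5 mm bubbles
onsets = rng.uniform(0.0, 1.0, size=50)              # onset times in seconds
signal = np.zeros(44100)                             # one second of output
for r, onset in zip(radii, onsets):
    s = bubble_sound(r)
    start = int(onset * 44100)
    end = min(start + len(s), len(signal))
    signal[start:end] += s[: end - start]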

Poster Session B
Tue, Mar 21, 14:40 - 15:30, Ballroom A/B

A Diminished Reality Simulation for Driver-Car Interaction with Transparent Cockpits
Patrick Lindemann and Gerhard Rigoll
(TU Munich, Germany)
We anticipate advancements in mixed reality device technology which might benefit driver-car interaction scenarios and present a simulated diminished reality interface for car drivers. It runs in a custom driving simulation and allows drivers to perceive otherwise occluded objects of the environment through the car body. We expect to obtain insights that will be relevant to future real-world applications. We conducted a pre-study with participants performing a driving task with the prototype in a CAVE-like virtual environment. Users preferred large-sized see-through areas over small ones but had differing opinions on the level of transparency to use. In future work, we plan additional evaluations of the driving performance and will further extend the simulation.

Immersive and Collaborative Taichi Motion Learning in Various VR Environments
Tianyu He, Xiaoming Chen, Zhibo Chen, Ye Li, Sen Liu, Junhui Hou, and Ying He
(University of Science and Technology of China, China; City University of Hong Kong, China; Nanyang Technological University, Singapore)
Learning “motion” online or from video tutorials is usually inefficient, since it is difficult to deliver “motion” information in traditional ways and on an ordinary PC platform. This paper presents ImmerTai, a system that can efficiently teach motion, in particular Chinese Taichi motion, in various immersive environments. ImmerTai captures a Taichi expert's motion and delivers the captured motion to students in multi-modal form in immersive CAVE, HMD, and ordinary PC environments. The students' motions are also captured for quality assessment and are used to form a virtual collaborative learning atmosphere. We built a Taichi motion dataset of 150 fundamental Taichi motions captured from 30 students, on which we evaluated the learning effectiveness and user experience of ImmerTai. The results show that ImmerTai can enhance learning efficiency by up to 17.4% and learning quality by up to 32.3%.

Virtual Zero Gravity Impact on Internal Gravity Model
Thibault Porssut, Henrique G. Debarba, Elisa Canzoneri, Bruno Herbelin, and Ronan Boulic
(EPFL, Switzerland)
This project investigates the impact of a virtual zero-gravity experience on the human gravity model. In the planned experiment, subjects are immersed with an HMD and full-body motion capture in a virtual world exhibiting either normal gravity or the apparent absence of gravity (i.e., body and objects floating in space). The study evaluates changes in the subjects' gravity model by observing changes in the motor planning of actions that depend on gravity. Our goal is to demonstrate that virtual reality exposure can induce modifications to the human internal gravity model, even when users remain under normal gravity conditions in reality.

Approximating Optimal Sets of Views in Virtual Scenes
Sebastian Freitag, Clemens Löbbert, Benjamin Weyers, and Torsten W. Kuhlen
(RWTH Aachen University, Germany; JARA-HPC, Germany)
Viewpoint quality estimation methods allow the determination of the most informative position in a scene. However, a single position usually cannot represent an entire scene, requiring instead a set of several viewpoints. Measuring the quality of such a set of views, however, is not trivial, and the computation of an optimal set of views is an NP-hard problem. Therefore, in this work, we propose three methods to estimate the quality of a set of views. Furthermore, we evaluate three approaches for computing an approximation to the optimal set (two of them new) regarding effectiveness and efficiency.
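Because the exact problem is NP-hard, greedy selection is a standard approximation strategy; the sketch below uses a generic coverage function as a stand-in for a viewpoint quality measure and is not necessarily one of the three approaches evaluated in the paper.

def greedy_view_set(candidate_views, scene_cells, covers, k):
    """Greedily pick up to k views, each time adding the view that covers
    the most scene cells not yet covered.

    covers(view) returns the set of scene cells visible from that view;
    this is a generic stand-in for a viewpoint quality measure.
    """
    selected, covered = [], set()
    while len(selected) < k and covered != scene_cells:
        best_view = max(candidate_views, key=lambda v: len(covers(v) - covered))
        if len(covers(best_view) - covered) == 0:   # nothing left to gain
            break
        selected.append(best_view)
        covered |= covers(best_view)
    return selected, covered

# Toy example: views "cover" subsets of ten scene cells.
visibility = {"a": {0, 1, 2, 3}, "b": {3, 4, 5}, "c": {6, 7, 8, 9}, "d": {1, 2}}
views, covered = greedy_view_set(list(visibility), set(range(10)),
                                 lambda v: visibility[v], k=3)
print(views, covered)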

Estimating the Motion-to-Photon Latency in Head Mounted Displays
Jingbo Zhao, Robert S. Allison, Margarita Vinnikov, and Sion Jennings
(York University, Canada; National Research Council, Canada)
We present a method for estimating the Motion-to-Photon (End-to-End) latency of head mounted displays (HMDs). The specific HMD evaluated in our study was the Oculus Rift DK2, but the procedure is general. We mounted the HMD on a pendulum to introduce damped sinusoidal motion to the HMD during the pendulum swing. The latency was estimated by calculating the phase shift between the captured signals of the physical motion of the HMD and a motion-dependent gradient stimulus rendered on the display. We used the proposed method to estimate both rotational and translational Motion-to-Photon latencies of the Oculus Rift DK2.
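The phase-shift estimate can be illustrated with synthetic signals; the sampling rate, pendulum frequency, and latency below are invented, and the lag of the cross-correlation peak stands in for the phase comparison described above.

import numpy as np

fs = 1000.0                                    # capture rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
f_pendulum = 0.8                               # damped pendulum frequency (assumed)
envelope = np.exp(-0.1 * t)

motion = envelope * np.sin(2 * np.pi * f_pendulum * t)                       # physical HMD motion
true_latency_s = 0.045
display = envelope * np.sin(2 * np.pi * f_pendulum * (t - true_latency_s))   # rendered stimulus

# Cross-correlate the zero-mean signals and convert the peak lag to milliseconds.
m = motion - motion.mean()
d = display - display.mean()
corr = np.correlate(d, m, mode="full")
lag = np.argmax(corr) - (len(m) - 1)
print(f"estimated motion-to-photon latency: {lag / fs * 1000:.1f} ms")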

Object Location Memory Error in Virtual and Real Environments
Mengxin Xu, María Murcia-López, and Anthony Steed
(University College London, UK)
We aim to further explore the transfer of spatial knowledge from virtual to real spaces. Based on previous research on spatial memory in immersive virtual reality (VR) we ran a study that looked at the effect of three locomotion techniques (joystick, pointing-and-teleporting and walking-in-place) on object location learning and recall. Participants were asked to learn the location of a virtual object in a virtual environment (VE). After a short period of time they were asked to recall the location by placing a real version of the object in the real-world equivalent environment. Results indicate that the average placement error, or distance between original and recalled object location, is approximately 20cm for all locomotion technique conditions. This result is similar to the outcome of a previous study on spatial memory in VEs that used real walking. We report this unexpected finding and suggest further work on spatial memory in VR by recommending the replication of this study in different environments and using objects with a wider diversity of properties, including varying sizes and shapes.

Tactile Feedback Enhanced with Discharged Elastic Energy and Its Effectiveness for In-Air Key-Press and Swipe Operations
Itsuo Kumazawa, Souma Suzuki, Yoshikazu Onuki, and Shunsuke Ono
(Tokyo Institute of Technology, Japan)
This paper presents a simple but effective way of enhancing tactile stimulus by a mechanism with springs to preserve elastic energies charged in a prior energy-charging phase and discharge them to enhance the force to hit a finger in the stimulating phase. With this mechanism, a small and light stimulator attached to the fingertip is developed and demonstrated to generate the tactile feedback strong enough to make people feel as if their fingers collide with a virtual object. It is also shown that the durations of the two phases can be as short as a few milliseconds so that the latency in tactile feedback can be negligible. The performance of the mechanism and the effectiveness of its tactile feedback are evaluated for in-air key-press and swipe operations.

BlowClick 2.0: A Trigger Based on Non-verbal Vocal Input
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen
(RWTH Aachen University, Germany; JARA-HPC, Germany)
The use of non-verbal vocal input (NVVI) as a hands-free trigger approach has proven valuable in previous work [Zielasko2015]. Nevertheless, BlowClick's original detection method is vulnerable to false positives and is thus limited in its potential use, e.g., together with acoustic feedback for the trigger. Therefore, we extend the existing approach with common machine learning methods. We found that a support vector machine (SVM) with a Gaussian kernel performs best for detecting blowing, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence. To evaluate the enhanced trigger technique, we conducted a user study (n=33). The results confirm that it is a reliable trigger, both alone and as part of a hands-free point-and-click interface.
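A minimal sketch of the classification step, using synthetic stand-in features rather than BlowClick's actual audio features, trains an RBF-kernel ("Gaussian") SVM to separate blow frames from other vocal input.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-frame audio features (e.g., energy, spectral
# centroid, zero-crossing rate); label 1 = blowing, 0 = other sounds.
rng = np.random.default_rng(42)
blow = rng.normal(loc=[0.8, 0.3, 0.7], scale=0.10, size=(200, 3))
other = rng.normal(loc=[0.3, 0.6, 0.2], scale=0.15, size=(200, 3))
X = np.vstack([blow, other])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF ("Gaussian") kernel SVM, standing in for the improved blow detection.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))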

KKse: Safety Education System of the Child in the Kitchen Knife Cooking
Shiho Saito, Koichi Hirota, and Takuya Nojima
(University of Electro-Communications, Japan)
The Kitchen Knife Safety Educator (KKse) is a safety education system designed to teach children how to use cooking knives correctly. Cooking is important for children to learn about what they eat, and it is also important for daily communication between children and their parents. However, it is dangerous for young children to handle cooking knives, and because of this danger, parents often try to keep young children away from the kitchen. Our proposed system aims to improve not only children's cooking skills but also communication between parents and children. The system is composed of a virtual knife with a haptic feedback function, a touch- and force-sensitive virtual food, and a two-dimensional force-sensitive cutting board. The system was developed to teach a fundamental cutting method, the “thrusting cut”. This paper describes the details of the system.

Air Cushion: A Pilot Study of the Passive Technique to Mitigate Simulator Sickness by Responding to Vection
Yoshikazu Onuki, Shunsuke Ono, and Itsuo Kumazawa
(Tokyo Institute of Technology, Japan)
Simulator sickness is an issue in virtual reality environments. In a virtual world, sensory conflict between visual sensation and self-motion perception occurs readily, and the contradiction between visual and vestibular sensations is a dominant cause of motion sickness. Vection is a visually evoked illusion of self-motion: it occurs when a stationary person experiences locomotor stimulation over a wide area of the field of view and senses motion when in fact there is none. Strong vection has been associated with simulator sickness. In this poster, the authors present results of a pilot study based on the hypothesis that simulator sickness can be mitigated by passively responding to body sway. Commercially available air cushions were used in VR environments, and measurable mitigation of simulator sickness was achieved by physically responding to vection. Allowing body sway helps moderate the sensory conflict between visual sensation and self-motion perception. The shape of the air cushions on the seat backs was also found to be an important variable.

A Haptic Three-Dimensional Shape Display with Three Fingers Grasping
Takuya Handa, Kenji Murase, Makiko Azuma, Toshihiro Shimizu, Satoru Kondo, and Hiroyuki Shinoda
(NHK, Japan; University of Tokyo, Japan)
The main goal of our research is to develop a haptic display that can convey the shapes, hardness, and textures of objects displayed on 3D TV. Our current device has three 5 mm diameter actuating spheres arranged in a triangular geometry on each of three fingertips (thumb, index finger, middle finger). In this paper, we give an overview of the novel haptic device and present first experimental results in which twelve subjects successfully recognized the size of cylinders and the side geometry of a cuboid and a hexagonal prism.

Data Fragment: Virtual Reality for Viewing and Querying Large Image Sets
Theophilus Teo, Mitchell Norman, Matt Adcock, and Bruce H. Thomas
(University of South Australia, Australia; CSIRO, Australia)
This paper presents our new Virtual Reality (VR) interactive visualization techniques to assist users in querying large image sets. The VR system allows users to query a set of images using four different filters, such as location and keywords. The goal is to investigate whether a VR platform is preferred over a non-VR platform for viewing and querying large image sets. We employed an HTC Vive and a traditional desktop screen to represent the VR and non-VR platforms. We found that users preferred the VR platform over the traditional desktop screen.

Towards a Design Space Characterizing Workflows That Take Advantage of Immersive Visualization
Tom Vierjahn, Daniel Zielasko, Kees van Kooten, Peter Messmer, Bernd Hentschel, Torsten W. Kuhlen, and Benjamin Weyers
(RWTH Aachen University, Germany; JARA-HPC, Germany; NVIDIA, Germany; NVIDIA, Switzerland)
Immersive visualization (IV) fosters the creation of mental images of a data set, a scene, a procedure, etc. We devise an initial version of a design space for categorizing workflows that take advantage of IV. From this categorization, specific requirements for an actual, seamless IV-integration can be derived. We validate the design space with three workflows investigated in our research projects.

Hand Gesture Controls for Image Categorization in Immersive Virtual Environments
Chao Peng, Jeffrey T. Hansberger, Lizhou Cao, and Vaidyanath Areyur Shanthakumar
(University of Alabama at Huntsville, USA; US Army Research Lab, USA)
In a situation where a large and chaotic collection of digital images must be manually sorted or categorized, there are two challenges: (1) unnatural actions during a prolonged human-computer interaction and (2) limited display space for image browsing. An immersive 3D interface is prototyped, where a person sorts a large collection of digital images with his or her bare hands in a virtual environment, and performs hand motions matching characteristics of sorting gestures in the real world. The virtual reality environment provides extra levels of immersion for displaying images.

Mixed Reality Training for Tank Platoon Leader Communication Skills
Peter Khooshabeh, Igor Choromanski, Catherine Neubauer, David M. Krum, Ryan Spicer, and Julia Campbell
(US Army Research Lab, USA; University of Southern California, USA)
Here we describe the design and usability evaluation of a mixed reality prototype that simulates the role of a tank platoon leader, an individual who is not only a tank commander but also directs a platoon of three other tanks, each with its own tank commander. Tank commander training has traditionally relied on physical simulators of the actual Abrams tank that encapsulate the whole crew. The TALK-ON system described here focuses on training the communication skills of the leader in a simulated tank crew. We report results from a usability evaluation and discuss how they will inform our future work on collective tank training.

Prioritization and Static Error Compensation for Multi-camera Collaborative Tracking in Augmented Reality
Jianren Wang, Long Qian, Ehsan Azimi, and Peter Kazanzides
(Shanghai Jiao Tong University, China; Johns Hopkins University, USA)
An effective and simple method is proposed for multi-camera collaborative tracking, based on the prioritization of all tracking units, and then modeling the discrepancy between different tracking units as a locally static transformation error. Static error compensation is applied to the lower-priority tracking systems when high-priority trackers are not available. The method does not require high-end or carefully calibrated tracking units, and is able to effectively provide a comfortable augmented reality experience for users. A pilot study demonstrates the validity of the proposed method.
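The fallback idea can be sketched as follows, with illustrative pose matrices: while both trackers observe the target, cache the rigid offset between their poses; when the high-priority tracker drops out, apply the cached offset to the low-priority pose.

import numpy as np

class CollaborativeTracker:
    """Prefer the high-priority tracker; otherwise correct the low-priority
    pose with the last observed static offset between the two."""

    def __init__(self):
        self.offset = np.eye(4)  # cached high_pose @ inv(low_pose)

    def update(self, high_pose=None, low_pose=None):
        # Poses are 4x4 homogeneous transforms of the tracked target.
        if high_pose is not None and low_pose is not None:
            # Both available: refresh the locally static error model.
            self.offset = high_pose @ np.linalg.inv(low_pose)
            return high_pose
        if high_pose is not None:
            return high_pose
        if low_pose is not None:
            # High-priority tracker lost: compensate the low-priority pose.
            return self.offset @ low_pose
        return None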

Resolution-Defined Projections for Virtual Reality Video Compression
Charles Dunn and Brian Knott
(YouVisit, USA)
Spherical data compression methods for Virtual Reality (VR) currently leverage popular rectangular data encoding algorithms. Traditional compression algorithms have massive adoption and hardware support on computers and mobile devices. Efficiently utilizing these two-dimensional compression methods for spherical data necessitates a projection from the three-dimensional surface of a sphere to a two-dimensional rectangle. Any such projection affects the final resolution distribution of the data after decoding. Popular projections used for VR video benefit from mathematical or geometric simplicity, but result in suboptimal resolution distributions. We introduce a method for generating a projection to match a desired resolution function. This method allows for customized projections with smooth, continuous and optimal resolution functions. Compared to commonly used projections, our resolution-defined projections drastically improve compression ratios for any given quality.

The Effect of Geometric Realism on Presence in a Virtual Reality Game
Jonatan S. Hvass, Oliver Larsen, Kasper B. Vendelbo, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin
(Aalborg University at Copenhagen, Denmark)
Previous research on visual realism and presence has not involved scenarios, graphics, and hardware representative of commercially available VR games. This poster details a between-subjects study (n=50) exploring if polygon count and texture resolution influence presence during exposure to a VR game. The results suggest that a higher polygon count and texture resolution increased presence as assessed by means of self-reports and physiological measures.

An Exploration of Input Conditions for Virtual Teleportation
Emil R. Høeg, Kevin V. Ruder, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin
(Aalborg University at Copenhagen, Denmark)
This poster describes a within-groups study (n=17) comparing participants' experience of three different input conditions for instigating virtual teleportation (button clicking, physical jumping, and fist clenching). The results indicated that teleportation by clicking a button generally required less explicit attention and was perceived as more enjoyable, less disorienting, and less physically demanding.

A Preliminary Study of Users' Experiences of Meditation in Virtual Reality
Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin
(Aalborg University at Copenhagen, Denmark)
This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice with a visual environment that included virtual objects for the participants to focus on. The other half were exposed only to the auditory guide. The participants' experience of the sessions was assessed using self-reported measures of perceived concentration, temporal duration, stress reduction, and comfort. Interestingly, no statistically significant differences were found between the two conditions. This finding may be revealing with regard to the usefulness of VR-based meditation.

Observation of Mirror Reflection and Voluntary Self-Touch Enhance Self-Recognition for a Telexistence Robot
Yasuyuki Inoue, Fumihiro Kato, MHD Yamen Saraiji, Charith Lasantha Fernando, and Susumu Tachi
(University of Tokyo, Japan; Keio University, Japan)
In this paper, we analyze the subjective feelings about the body of the operator of a telexistence system. We investigate whether a mirror reflection and self-touch affect body ownership and agency for a surrogate robot avatar in a virtual reality experiment. Results showed that the presence of tactile sensations synchronized with the view of self-touch events enhanced mirror self-recognition.

Adaptive 360-Degree Video Streaming using Layered Video Coding
Afshin Taghavi Nasrabadi, Anahita Mahzari, Joseph D. Beshay, and Ravi Prakash
(University of Texas at Dallas, USA)
Virtual reality and 360-degree video streaming are growing rapidly; however, streaming 360-degree video is very challenging due to high bandwidth requirements. To address this problem, video quality is adjusted according to a prediction of the user's viewport: high-quality video is streamed only for the predicted viewport, reducing overall bandwidth consumption. Existing solutions use shallow buffers because they are limited by the accuracy of viewport prediction, so playback is prone to video freezes, which are very damaging to the Quality of Experience (QoE). We propose using layered encoding for 360-degree video to improve QoE by reducing the probability of video freezes and the latency of response to the user's head movements. Moreover, this scheme significantly reduces storage requirements and improves in-network cache performance.
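One way to picture a layered, viewport-driven allocation (the tile layout, bitrates, and probabilities below are invented, not the paper's scheme) is to always fetch a base layer for every tile and spend the remaining bandwidth on enhancement layers for the tiles most likely to be viewed.

def allocate_layers(tiles, viewport_prob, base_kbps, enh_kbps, budget_kbps):
    """Assign each tile a number of enhancement layers under a bandwidth budget.

    tiles: list of tile ids; viewport_prob: tile -> probability the tile is in
    the predicted viewport; enh_kbps: cost of one enhancement layer per tile.
    """
    layers = {t: 0 for t in tiles}                     # base layer only, by default
    remaining = budget_kbps - base_kbps * len(tiles)   # base layers are mandatory
    # Spend the rest on the most likely visible tiles first.
    for t in sorted(tiles, key=lambda t: viewport_prob.get(t, 0.0), reverse=True):
        while remaining >= enh_kbps and layers[t] < 2:
            layers[t] += 1
            remaining -= enh_kbps
    return layers

tiles = ["front", "left", "right", "back", "top", "bottom"]
prob = {"front": 0.9, "left": 0.4, "right": 0.35, "back": 0.05, "top": 0.1, "bottom": 0.1}
print(allocate_layers(tiles, prob, base_kbps=500, enh_kbps=1500, budget_kbps=8000))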

A Mixed Reality Tele-presence Platform to Exchange Emotion and Sensory Information Based on MPEG-V Standard
Hojun Lee, Gyutae Ha, Sangho Lee, and Shiho Kim
(Yonsei University, South Korea)
We have implemented a mixed reality telepresence platform providing a user experience (UX) of exchanging emotional expressions as well as information among a group of participants. The implemented system provides a platform to experience an immersive live scene through a Head-Mounted Display (HMD) and sensory information to a VR HMD user at a remote place. Moreover, the user at a remote place can share and exchange emotional expressions with other users at another remote location by using 360° cameras, environmental sensors compliant with MPEG-V, and a game cloud server combined with a technique of holographic display. We demonstrated that emotional expressions of an HMD worn participant were shared with a group of other participants in the remote place while watching a sports game on a big screen TV.

Evaluation of Labelling Layout Methods in Augmented Reality
Gang Li, Yue Liu, and Yongtian Wang
(Beijing Institute of Technology, China)
View management techniques are commonly used for labelling objects in augmented reality environments. Combined with image analysis, search space methods, and adaptive representations, they can be used to achieve the desired labelling tasks. However, evaluating the effect of different search space methods on labelling is still an open problem. In this paper, we propose an image-analysis-based view management method, which first uses image processing to superimpose 2D labels on a specific object. We then apply three search space methods to an augmented reality scenario. Without requiring rules or constraints for occlusion among the labels, the results of the three search space methods are evaluated using an objective analysis of the related parameters. The evaluation results indicate that different search space methods yield different time costs and occlusion, thereby affecting the final labelling results.

Real-Time Interactive AR System for Broadcasting
Hyunwoo Cho, Sung-Uk Jung, and Hyung-Keun Jee
(ETRI, South Korea)
For live television broadcasts, such as children's educational programs with viewer participation, the smooth integration of virtual content and the interaction between the cast and that content are important issues. Recently, there have been many attempts to make aggressive use of interactive virtual content in live broadcasts, driven by advances in AR/VR and virtual studio technology. However, these previous works have significant limitations: they do not support real-time 3D space recognition or immersive interaction. We therefore propose an augmented-reality-based real-time broadcasting system that perceives the indoor space using a broadcast camera and an RGB-D camera. The system also supports real-time interaction between the augmented virtual content and the cast. The contribution of this work is a new augmented-reality-based broadcasting system that not only enables filming with compatible interactive 3D content in live broadcasts but also drastically reduces production costs. For practical validation, the proposed system was demonstrated in the broadcast program “Ding Dong Dang Kindergarten”, a representative children's educational program on the national broadcasting channel of Korea.

Texturing of Augmented Reality Character Based on Colored Drawing
Hengheng Zhao, Ping Huang, and Junfeng Yao
(Xiamen University, China)
Coloring books can inspire children's imagination and creativity. However, with the rapid development of digital devices and the internet, traditional coloring books tend to be less attractive to children. We therefore propose applying augmented reality technology to the traditional coloring book. After children finish coloring characters in the printed coloring book, they can inspect their work using a mobile device: the drawing is detected and tracked, and the video stream is augmented with a 3D character textured according to their coloring. This is made possible by several technical contributions. We present a texturing process that generates a texture map for the 3D augmented reality character from the 2D colored drawing using a lookup map. Taking the movement of the mobile device and the drawing into account, we also provide an efficient method to track the drawing surface.
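A stripped-down version of the lookup-map idea, with placeholder array names and sizes, maps each texel of the character's texture map to a pixel of the rectified drawing and copies the color.

import numpy as np

# lookup_map[v, u] holds the (row, col) in the rectified drawing image that
# should color texel (u, v) of the 3D character's texture map; in practice it
# would be precomputed offline from the character's UV layout.
drawing = np.zeros((512, 512, 3), dtype=np.uint8)        # rectified colored drawing
lookup_map = np.zeros((256, 256, 2), dtype=np.int32)     # placeholder lookup map

def build_texture(drawing, lookup_map):
    rows = lookup_map[..., 0].clip(0, drawing.shape[0] - 1)
    cols = lookup_map[..., 1].clip(0, drawing.shape[1] - 1)
    return drawing[rows, cols]       # fancy indexing gathers one color per texel

texture = build_texture(drawing, lookup_map)
print(texture.shape)                 # (256, 256, 3)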

VROnSite: Towards Immersive Training of First Responder Squad Leaders in Untethered Virtual Reality
Annette Mossel, Mario Froeschl, Christian Schoenauer, Andreas Peer, Johannes Goellner, and Hannes Kaufmann
(Vienna University of Technology, Austria; M2DMasterMind Development, Austria)
We present the VROnSite platform, which enables immersive training of first-responder on-site squad leaders. Our training platform is fully immersive and entirely untethered for ease of use, and it provides two means of navigation, abstract and natural walking, to simulate stress and exhaustion, two important factors for decision making. With the platform's capabilities, we close a gap in prior art for first responder training. Our research is closely interlocked with stakeholders from fire brigades and paramedic services to gather early feedback in an iterative design process. In this paper, we present our first research results: the system's design rationale, the single-user training prototype, and results from a preliminary user study.

Recommender System for Physical Object Substitution in VR
Jose Garcia Estrada and Adalberto L. Simeone
(University of Portsmouth, UK)
This poster introduces the development of a recommender system to guide users in adapting the Virtual Environment into matching objects in the physical world. Emphasis is placed on avoiding cognitive overload resulting from providing options for substitution without considering the number of physical objects present. This is the first step towards a comprehensive recommender system for user-driven adaptation of Virtual Environments through immersive Virtual Reality systems.

Itapeva 3D: Being Indiana Jones in Virtual Reality
Eduardo Zilles Borba, Andre Montes, Roseli de Deus Lopes, Marcelo Knorich Zuffo, and Regis Kopper
(University of São Paulo, Brazil; Duke University, USA)
This poster presents the conceptual process of developing Itapeva 3D, a Virtual Reality (VR) archeology experience. It describes the technical spectrum of the cyber-archeology process applied to the creation of a fully immersive and interactive virtual environment (VE) representing the Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow starts with real-world data capture using laser scanners, drones, and photogrammetry; continues with the transformation of the captured information into a 3D surface model capable of real-time rendering on head-mounted displays (HMDs); and ends with the design of interactive features allowing users to experience the virtual archeological site. The main objective of this VR model is to make it possible for the general public to feel what it means to explore an otherwise restricted and ephemeral place. Finally, we report preliminary results from an initial user observation.

Sound Design in Virtual Reality Concert Experiences using a Wave Field Synthesis Approach
Rasmus B. Lind, Victor Milesen, Dina M. Smed, Simone P. Vinkel, Francesco Grani, Niels C. Nilsson, Lars Reng, Rolf Nordahl, and Stefania Serafin
(Aalborg University at Copenhagen, Denmark)
In this paper we propose an experiment that evaluates the influence of audience noise on the feeling of presence and the perceived quality in a virtual reality concert experience delivered using Wave Field Synthesis. A 360-degree video of a live rock concert by a local band was recorded, along with single sound sources from the stage and the PA system, the audience noise, and impulse responses of the concert venue. The audience noise was implemented in the production phase, and the experience was compared with and without it. In a between-subjects experiment with 30 participants we found that audience noise does not have a significant impact on presence. However, qualitative evaluations show that the naturalness of the sonic experience delivered through wave field synthesis had a positive impact on the participants.

Effect of High versus Low Fidelity Haptic Feedback in a Virtual Reality Baseball Simulation
Andreas Ryge, Lui Thomsen, Theis Berthelsen, Jonatan S. Hvass, Lars Koreska, Casper Vollmers, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin
(Aalborg University at Copenhagen, Denmark)
In this paper we present a within-subjects study (n=26) comparing participants’ experience of three kinds of haptic feedback (no haptic feedback, low-fidelity haptic feedback, and high-fidelity haptic feedback) simulating the impact between a virtual baseball bat and ball. We observed only a minor effect of high-fidelity versus low-fidelity haptic feedback, but haptic feedback generally enhanced realism and the quality of the experience.

Immerj: A Novel System for Democratizing Immersive Storytelling
Sarang S. Bhadsavle, Xie Hunt Shannon Yap, Justin Segler, Rahul Jaisimha, Nishant Raman, Yengzhou Feng, Sierra J. Biggs, Micah Peoples, Robert B. Brenner, and Brian C. McCann
(University of Texas at Austin, USA)
Immersive technologies such as 360° cameras and head-mounted displays (HMDs) have become affordable to the average consumer, opening up new audiences for storytellers. However, existing immersive post-production software often requires too great a technical or financial investment for smaller creative shops, including most of the country’s newspapers and local broadcast journalism organizations. Game engines, for example, are unnecessarily complicated for simple 360° video projects. We introduce Immerj, an open-source abstraction layer that simplifies the Unity3D game engine’s interface for immersive content creators. Our primary collaborator, Professor R.B. Brenner, director of the School of Journalism at the University of Texas at Austin, organized hands-on demos between our team and journalists and designers from top news organizations across the country in order to follow a human-centered design process. In just over one year, a small team of undergraduate researchers at the Texas Advanced Computing Center (TACC) has created a potentially disruptive democratization of this technology.

Assisted Travel Based on Common Visibility and Navigation Meshes
Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen
(RWTH Aachen University, Germany; JARA-HPC, Germany)
The manual adjustment of travel speed to cover medium or large distances in virtual environments may increase cognitive load, and manual travel at high speeds can lead to cybersickness due to inaccurate steering. In this work, we present an approach to quickly pass regions where the environment does not change much, using automated suggestions based on the computation of common visibility. In a user study, we show that our method can reduce cybersickness when compared with manual speed control.

Advertising Perception with Immersive Virtual Reality Devices
Eduardo Zilles Borba and Marcelo Knorich Zuffo
(University of São Paulo, Brazil)
This poster presents an initial study of people’s experience of advertising messages in a Virtual Reality (VR) simulation of urban space. Besides looking at the plastic and textual factors perceived by users in the Virtual Environment (VE), this work also reflects on the effects of immersion provided by different technological devices and their possible influence on the reception of the advertising message: a head-mounted display (Oculus Rift DK2), a cave automatic virtual environment (CAVE), and a desktop monitor (PC). To carry out this empirical experiment, a 3D scenario simulating a real city’s urban space was created and several advertising image formats were inserted into its landscape. User navigation through the urban space was designed in a first-person perspective. In short, we intend to accomplish two objectives: a) to identify which factors lead people to pay attention to advertising in immersive VEs; b) to verify the immersion effects produced by different VR interfaces on the perception of advertising.

Exploring Non-reversing Magic Mirrors for Screen-Based Augmented Reality Systems
Felix Bork, Roghayeh Barmaki, Ulrich Eck, Pascal Fallavolita, Bernhard Fuerst, and Nassir Navab
(TU Munich, Germany; Johns Hopkins University, USA; University of Ottawa, Canada)
Screen-based Augmented Reality (AR) systems can be built as a window into the real world, as often done in mobile AR applications, or using the Magic Mirror metaphor, where users can see themselves with augmented graphics on a large display. The term Magic Mirror implies that the display shows the user's enantiomorph, i.e., the mirror image, such that the system mimics a real-world physical mirror. However, the question arises whether one should design a traditional mirror or instead display the true mirror image by means of a non-reversing mirror. We discuss the perceptual differences between these two mirror visualization concepts and present a first comparative study in the context of Magic Mirror anatomy teaching.

Towards Understanding Scene Transition Techniques in Immersive 360 Movies and Cinematic Experiences
Kasra Rahimi Moghadam and Eric D. Ragan
(Texas A&M University, USA)
Many researchers have studied methods of effective travel in virtual environments, but little work has considered scene transitions, which may be important for virtual reality experiences like immersive 360 degree movies. In this research, we designed and evaluated three different scene transition techniques in two environments, conducted a pilot study, and collected metrics related to sickness, spatial orientation, and preference. Our preliminary results indicate that faster techniques are generally preferred by gamers and more gradual transitions are preferred by participants with less experience with 3D gaming and virtual reality.

Coordinating Attention and Cooperation in Multi-user Virtual Reality Narratives
Cullen Brown, Ghanshyam Bhutra, Mohamed Suhail, Qinghong Xu, and Eric D. Ragan
(Texas A&M University, USA)
Limited research has been performed attempting to handle multi-user storytelling environments in virtual reality. As such, a number of questions about handling story progression and maintaining user presence in a multi-user virtual environment have yet to be answered. We created a multi-user virtual reality story experience in which we intend to study a set of guided camera techniques and a set of gaze distractor techniques to determine how best to attract disparate users to the same story. Additionally, we describe our preliminary work and plans to study the effectiveness of these techniques, their effect on user presence, and generally how multiple users feel their actions affect the outcome of a story.

Designing Intentional Impossible Spaces in Virtual Reality Narratives: A Case Study
Joshua A. Fisher, Amit Garg, Karan Pratap Singh, and Wesley Wang
(Georgia Institute of Technology, USA)
Natural movement and locomotion in Virtual Environments (VEs) is constrained by the user’s immediate physical space. To overcome this obstacle, researchers have established the use of impossible spaces. This work illustrates how impossible spaces can be utilized to enhance the aesthetics of, and presence within, an interactive narrative by creating impossible spaces with narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, the benefits of using intentional impossible spaces from a narrative design perspective are presented; third, a VR narrative called Ares is put forth as a prototype; and fourth, a user study is explored. Impossible spaces designed with narrative intent intertwine narratology with the world’s aesthetics to enhance dramatic agency.

Video
Comparison of a Speech-Based and a Pie-Menu-Based Interaction Metaphor for Application Control
Sebastian Pick, Andrew S. Puika, and Torsten W. Kuhlen
(RWTH Aachen University, Germany; JARA-HPC, Germany; Shinta VR, Indonesia)
Choosing an adequate system control technique is crucial to support complex interaction scenarios in virtual reality applications. In this work, we compare an existing hierarchical pie-menu-based approach with a speech-recognition-based one in terms of task performance and user experience in a formal user study. As a testbed, we use a factory planning application featuring a large set of system control options.

Virginia Tech's Study Hall: A Virtual Method of Loci Mnemotechnic Study using a Neurologically-Based, Mechanism-Driven, Approach to Immersive Learning Research
Jessie Mann, Nicholas Polys, Rachel Diana, Manasa Ananth, Brad Herald, and Sweetuben Platel
(Virginia Tech, USA)
The design of Virginia Tech’s (VT) Study Hall emerges from the current cognitive neuroscience understanding of memory as a spatially mediated encoding process. The driving questions are: Does the sense of spatial navigation generated by an immersive virtual experience aid in memory formation? Does virtual spatial navigation, when paired with learning cues, enhance information encoding relative to non-spatial and non-virtual processes? A pilot study compared recall for non-navigational memorization processes with processes involving mental and virtual navigation, and we are currently running a full study to see whether we can replicate these effects with a more demanding memory task and a refined study design.

REINVENT: A Low-Cost, Virtual Reality Brain-Computer Interface for Severe Stroke Upper Limb Motor Recovery
Ryan Spicer, Julia Anglin, David M. Krum, and Sook-Lei Liew
(University of Southern California, USA)
There are few effective treatments for rehabilitation of severe motor impairment after stroke. We developed a novel closed-loop neurofeedback system called REINVENT to promote motor recovery in this population. REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) harnesses recent advances in neuroscience, wearable sensors, and virtual reality technology, integrating low-cost electroencephalography (EEG) and electromyography (EMG) sensors with feedback in a head-mounted virtual reality (VR) display to provide neurofeedback when an individual’s neuromuscular signals indicate a movement attempt, even in the absence of actual movement. Here we describe the REINVENT prototype and provide evidence of the feasibility and safety of using REINVENT with older adults.

Simulating Anthropomorphic Upper Body Actions in Virtual Reality using Head and Hand Motion Data
Dustin T. Han, Shyam Prathish Sargunam, and Eric D. Ragan
(Texas A&M University, USA)
The use of self-avatars in virtual reality (VR) can give users a stronger sense of presence and produce a more compelling experience by providing additional visual feedback during interactions. Avatars also become increasingly relevant in VR as they provide a user with an identity for social interactions in multi-user settings. However, with current consumer VR setups that include only a head-mounted display and hand controllers, implementations of self-avatars are generally limited in their ability to mimic actions performed in the real world. Our work explores the idea of simulating a wide range of upper-body motions using only motion and position data from the head and hands. We present a method to differentiate head and hip motions using information from the captured motion data and apply corresponding changes to a virtual avatar. We discuss our approach and initial results.
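How head and hip motion might be disambiguated from head and hand data alone is not specified in the abstract; the following toy heuristic is purely an assumption for illustration: if the head moves down and both hands move down with it by a similar amount, treat the motion as a hip bend rather than head-only motion.

def classify_torso_motion(head_dy, left_hand_dy, right_hand_dy, thresh=0.05):
    """Crude illustrative heuristic (assumption, not the paper's method).
    Inputs are vertical displacements in meters over a short time window:
    if the head drops and both hands follow it closely, label a hip bend."""
    hands_follow = (abs(head_dy - left_hand_dy) < thresh and
                    abs(head_dy - right_hand_dy) < thresh)
    if head_dy < -thresh and hands_follow:
        return "hip_bend"
    return "head_motion"

# Example: head and hands all drop ~20 cm together -> hip bend.
print(classify_torso_motion(-0.20, -0.18, -0.21))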

Contextualizing Construction Accident Reports in Virtual Environments for Safety Education
Alyssa M. Peña and Eric D. Ragan
(Texas A&M University, USA)
Safety education is important in the construction industry. While research has been done on virtual environments for construction safety education, there is no set method for effectively contextualizing safety information and engaging students. In this research, we study the design of virtual environments to represent construction accident reports provided by the Occupational Health and Safety Administration (OSHA). We looked at different designs to contextualize the report data through space, visuals, and text. Users can explore the environment and interact through immersive virtual reality to learn more about a particular accident.

Video
Classification Method of Tactile Feeling using Stacked Autoencoder Based on Haptic Primary Colors
Fumihiro Kato, Charith Lasantha Fernando, Yasuyuki Inoue, and Susumu Tachi
(University of Tokyo, Japan; Keio University, Japan)
We have developed a classification method for tactile feeling using a stacked-autoencoder neural network based on haptic primary colors. The haptic primary colors principle decomposes the human sensation of tactile feeling into force, vibration, and temperature. Images were obtained from the frequency variation of the time series of tactile signals recorded when tracing the surface of an object; features were extracted with a stacked autoencoder using a neural network with two hidden layers, and supervised learning was conducted. We confirmed that the tactile feeling of three different surface materials can be classified with an accuracy of 82.0%.
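A minimal sketch of the stacked-autoencoder pipeline named above, using greedy layer-wise pretraining of two hidden layers followed by supervised fine-tuning with a softmax classifier (layer sizes, training schedule, and data are assumptions, not the paper's values):

import torch
import torch.nn as nn

class AE(nn.Module):
    """One autoencoder layer: sigmoid encoder plus linear decoder."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
        self.dec = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        return self.dec(self.enc(x))

def pretrain(ae, data, epochs=50, lr=1e-3):
    """Greedy layer-wise pretraining: each layer reconstructs its own input."""
    opt, loss_fn = torch.optim.Adam(ae.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(ae(data), data).backward()
        opt.step()
    return ae.enc(data).detach()          # codes feed the next layer

# Assumed sizes: 1024-dim tactile "image" features, 3 material classes.
x = torch.rand(128, 1024)                 # stand-in tactile features
y = torch.randint(0, 3, (128,))           # stand-in material labels

ae1, ae2 = AE(1024, 256), AE(256, 64)
h1 = pretrain(ae1, x)
h2 = pretrain(ae2, h1)

# Stack the two pretrained encoders with a classifier and fine-tune end to end.
model = nn.Sequential(ae1.enc, ae2.enc, nn.Linear(64, 3))
opt, ce = torch.optim.Adam(model.parameters(), lr=1e-4), nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    ce(model(x), y).backward()
    opt.step()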

Uni-CAVE: A Unity3D Plugin for Non-head Mounted VR Display Systems
Ross Tredinnick, Brady Boettcher, Simon Smith, Samuel Solovy, and Kevin Ponto
(University of Wisconsin-Madison, USA)
Unity3D has become a popular, freely available 3D game engine for the design and construction of virtual environments. Unfortunately, the few options that currently exist for adapting Unity3D to distributed immersive tiled or projection-based VR display systems rely on closed commercial products. Uni-CAVE aims to solve this problem by providing a freely available and easy-to-use Unity3D extension package for cluster-based VR display systems. The extension supports head and device tracking, stereo rendering, and display synchronization. Furthermore, Uni-CAVE can be configured within the Unity environment, enabling researchers to get up and running quickly.

3D Action Reconstruction using Virtual Player to Assist Karate Training
Kazumoto Tanaka
(Kindai University, Japan)
It is well known that sport skill learning is facilitated by video observation of players’ actions in the target sport. A viewpoint-change function is desirable when a learner observes these actions in video images. In general, however, viewpoint changes are not possible because most videos are filmed from a fixed point with a single camera. The objective of this research is to develop a method that generates a 3D human model of a player (i.e., a virtual player) from a single image and enables observation of the virtual player's action from any point of view. As a first step, this study focused on karate training and developed a semiautomatic method for 3D reconstruction from video images of sparring in karate.

Immersion and Coherence in a Visual Cliff Environment
Richard Skarbez, Frederick P. Brooks Jr., and Mary C. Whitton
(University of North Carolina at Chapel Hill, USA)
We report on the design and results of an experiment investigating Slater's Place Illusion (PI) and Plausibility Illusion (Psi) in a virtual visual cliff environment. Existing presence questionnaires could not reliably distinguish the effects of PI from those of Psi. They did, however, indicate that high levels of PI-eliciting characteristics and Psi-eliciting characteristics together result in higher presence, compared to any of the other three conditions. Also, participants' heart rates responded markedly differently in the two Psi conditions; no such difference was observed across the PI conditions.

Anatomy Builder VR: Applying a Constructive Learning Method in the Virtual Reality Canine Skeletal System
Jinsil Hwaryoung Seo, Brian Smith, Margaret Cook, Michelle Pine, Erica Malone, Steven Leal, and Jinkyo Suh
(Texas A&M University, USA)
We present Anatomy Builder VR that examines how a virtual reality (VR) system can support embodied learning in anatomy education. The backbone of the project is to pursue an alternative constructivist pedagogical model for learning canine anatomy. The main focus of the study was to identify and assemble bones in the live-animal orientation, using real thoracic limb bones in a bone box and digital pelvic limb bones in the Anatomy Builder VR. Eleven college students participated in the study. The pilot study showed that participants most enjoyed interacting with anatomical contents within the VR program. Participants spent less time assembling bones in the VR, and instead spent a longer time tuning the orientation of each VR bone in the 3D space. This study showed how a constructivist method could support anatomy education while using virtual reality technology in an active and experiential way.

Transfer of a Skilled Motor Learning Task between Virtual and Conventional Environments
Julia Anglin, David Saldana, Allie Schmiesing, and Sook-Lei Liew
(University of Southern California, USA)
Immersive, head-mounted virtual reality (HMD-VR) can be a potentially useful tool for motor rehabilitation. However, it is unclear whether the motor skills learned in HMD-VR transfer to the non-virtual world and vice-versa. Here we used a well-established test of skilled motor learning, the Sequential Visual Isometric Pinch Task (SVIPT), to train individuals in either an HMD-VR or conventional training (CT) environment. Participants were then tested in both environments. Our results show that participants who train in the CT environment have an improvement in motor performance when they transfer to the HMD-VR environment. In contrast, participants who train in the HMD-VR environment show a decrease in skill level when transferring to the CT environment. This has implications for how training in HMD-VR and CT may affect performance in different environments.

A Methodology for Optimized Generation of Virtual Environments Based on Hydroelectric Power Plants
Ígor Andrade Moraes, Alexandre Cardoso, Edgard Lamounier Jr., Milton Miranda Neto, and Isabela Cristina dos Santos Peres
(Federal University of Uberlândia, Brazil)
The benefits offered by Virtual Reality technology have drawn the attention of professionals from several scientific fields, including power systems, whether for training or maintenance. For this purpose, 3D modeling is an imperative process in the conception of a Virtual Environment. Given the complexity of hydroelectric power plants and Virtual Reality’s contribution to the industrial area, planning the three-dimensional construction of the virtual environment becomes necessary. Thus, this paper presents modeling techniques applicable to several hydroelectric structures, aiming to optimize the 3D construction of the target complexes.


Doctoral Consortium
Sat, Mar 18, 08:30 - 17:00, R219

Steering Locomotion by Vestibular Perturbation in Room-Scale VR
Misha Sra
(Massachusetts Institute of Technology, USA)
Advances in consumer virtual reality (VR) technology have made using natural locomotion for navigation in VR a possibility. While walking in VR can enhance immersion and reduce motion sickness, it introduces a few challenges. Walking is only possible within virtual environments (VEs) that fit inside the boundaries of the tracked physical space, which for most users is quite small and carries a high potential for collisions with physical objects around the tracked area. In my thesis, I explore visual and physiological steering techniques that complement the traditional redirected walking technique of scene rotation to alter a user's walking trajectory in the physical space. In this paper, I present the physiological technique.

Cognitive Psychology and Human Factors Engineering of Virtual Reality
Adrian K. T. Ng
(University of Hong Kong, China)
This position paper summarizes the author's research interests in Cognitive Psychology and Human-Computer Interaction in the imseCAVE, a CAVE-like system at the University of Hong Kong. Several areas of interest were explored while searching for a Ph.D. thesis topic. They include perception research on distance estimation with a proposed error-correction mechanism; neurofeedback meditation with EEG in VR and the effects of audio and video; the study of training transfer in VR training; a comparison of cybersickness between an HMD and the imseCAVE; and a comparison of VR gaming on a TV, an HMD, and the imseCAVE in terms of performance, activity level, and time perception. Given these broad interests, the exact direction is still being determined and requires further exploration.

Design and Assessment of Haptic Interfaces: An Essay on Proactive Haptic Articulation
Victor Adriel de Jesus Oliveira
(Federal University of Rio Grande do Sul, Brazil)
We draw on elements of speech articulation to introduce proactive haptic articulation as a novel approach to intercommunication. The ability to use a haptic interface as a tool for implicit communication can supplement communication and support near and remote collaborative tasks in virtual and physical environments. In addition, proactive articulation can be applied during the design process, including the user in the construction of more dynamic and optimized vibrotactile vocabularies. In this proposal, we discuss the thesis of proactive haptic communication and our method to assess and implement it. Our goal is to understand the phenomena related to the proactive articulation of haptic signals and its use for communication and for the design of optimized tactile vocabularies.

Info
Designing Next Generation Marketplace: The Effect of 3D VR Store Interface Design on Shopping Behavior
Hyo Jeong Kang
(University of Wisconsin-Madison, USA)
The medium of virtual reality enables new opportunities for experiencing products and shopping environments that may combine the best features of both physical and digital marketplaces. As little is known about how best to create a virtual reality marketplace, the current research aims to explore the required features for a VR market user interface and its impact on shopping behavior. As a first step toward this endeavor, we will empirically test three different user interfaces: a 2D interface style, a 3D skeuomorphic interface style, and an interface that combines features of both 2D and 3D interaction techniques.

Gaze Estimation Based on Head Movements in Virtual Reality Applications using Deep Learning
Agata Marta Soccini
(University of Torino, Italy)
Gaze detection in Virtual Reality systems is mostly performed using eye-tracking devices. The coordinates of the sight, as well as other data regarding the eyes, are used as input values for applications. While this trend is becoming more and more popular in the interaction design of immersive systems, most visors do not come with an embedded eye-tracker, especially those that are low cost and may be based on mobile phones. We suggest implementing an innovative gaze estimation system in virtual environments as a source of information regarding users' intentions. We propose a solution based on a combination of image features and head movements as input to a deep convolutional neural network capable of inferring the 2D gaze coordinates in the image plane.
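A hedged sketch of the proposed idea, combining a small convolutional image encoder with head-motion features to regress 2D gaze coordinates (the architecture and input dimensions are assumptions for illustration, not the thesis design):

import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN over the rendered scene image -> 32-dim image code.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Head-motion branch (assumed 6-dim: 3 rotational + 3 translational velocities).
        self.head = nn.Sequential(nn.Linear(6, 32), nn.ReLU())
        # Regress 2D gaze coordinates in the imaging plane.
        self.out = nn.Linear(64, 2)

    def forward(self, image, head_motion):
        return self.out(torch.cat([self.cnn(image), self.head(head_motion)], dim=1))

net = GazeNet()
gaze = net(torch.rand(4, 3, 128, 128), torch.rand(4, 6))
print(gaze.shape)   # torch.Size([4, 2])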

On Exploring the Mitigation of Distance Misperception in Virtual Reality
Alex Peer
(University of Wisconsin-Madison, USA)
Misperception of egocentric distances in virtual reality is a well-established effect, with many known influencing factors but no clear cause or correction. Herein, a course of research is proposed that explores this effect on three fronts: exploring perceptual calibrations, i.e., corrections based on known influences and observed misperception rather than a perfect understanding of its causes; exploring when adaptations due to feedback might exhibit undesirable effects; and establishing contexts within practical tasks in which distance misperception should be expected to have an effect.

Optical See-Through vs. Spatial Augmented Reality Simulators for Medical Applications
Salam Daher
(University of Central Florida, USA)
Currently, healthcare practitioners use standardized patients, physical mannequins, and virtual patients as surrogates for real patients to provide a safe learning environment for students. Each of these simulators has different limitations that could be mitigated with various degrees of fidelity in representing medical cues. As we explore different ways to simulate a human patient and their effects on learning, we would like to compare the dynamic visuals of spatial augmented reality with those of optical see-through augmented reality, where a patient is rendered using the HoloLens, and how each affects depth perception, task completion, and social presence.

Design of Collaborative 3D User Interfaces for Virtual and Augmented Reality
Jerônimo G. Grandi
(Federal University of Rio Grande do Sul, Brazil)
We explore design approaches for cooperative work in virtual manipulation tasks. We seek to understand the fundamental aspects of the human cooperation and design interfaces and manipulation actions to enhance the group's ability to solve complex manipulation tasks in various immersion scenarios.

Improve Accessibility of Virtual and Augmented Reality for People with Balance Impairments
Sharif Mohammad Shahnewaz Ferdous
(University of Texas at San Antonio, USA)
Most people experience some imbalance in a fully immersive Virtual Environment (VE), i.e., when wearing a Head-Mounted Display (HMD) that blocks the user's view of the real world. However, this imbalance is significantly worse in People with Balance Impairments (PwBIs), and minimal research has been done to improve this. In addition to the imbalance problem, the lack of proper visual cues can lead to other accessibility problems for PwBIs (e.g., reduced reach from the fear of imbalance, decreased gait performance, etc.). We plan to explore the effects of different visual cues on people’s balance, reach, gait, etc. Based on our preliminary study, we propose to incorporate additional visual cues in VEs that proved to significantly improve the balance of PwBIs while they were standing and playing in a VE. We plan to further investigate whether additional visual cues have similar effects in augmented reality. We are also developing studies on reach and gait in VR as future work.

View-Aware Tile-Based Adaptations in 360 Virtual Reality Video Streaming
Mohammad Hosseini
(University of Illinois at Urbana-Champaign, USA)
We have proposed an adaptive, view-aware, bandwidth-efficient 360 VR video streaming framework based on the tiling features of MPEG-DASH SRD. We extend MPEG-DASH SRD to the 3D space of 360 VR videos and showcase a dynamic view-aware adaptation technique to tackle the high bandwidth demands of streaming 360 VR videos to wireless VR headsets. As part of our contributions, we spatially partition the underlying 3D mesh into multiple 3D sub-meshes and construct an efficient 3D geometry mesh called "hexaface sphere" to optimally represent tiled 360 VR videos in the 3D space. We then spatially divide the 360 videos into multiple tiles while encoding and packaging, use MPEG-DASH SRD to describe the spatial relationship of tiles in the 3D space, and prioritize the tiles in the Field of View (FoV) for view-aware adaptation. Our initial evaluations show that we can save up to 72% of the required bandwidth for 360 VR video streaming, with minor negative quality impacts, compared to the baseline scenario in which no adaptation is applied.
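The view-aware prioritization can be sketched as follows (tile layout, field of view, and bitrates are assumed for illustration, and the geometry is simplified to a yaw/pitch grid rather than the paper's hexaface sphere): tiles whose angular extent intersects the current field of view receive the high bitrate, all others the low one.

def tile_in_fov(tile_yaw, tile_pitch, head_yaw, head_pitch,
                fov_h=100.0, fov_v=90.0, tile_w=60.0, tile_h=60.0):
    """True if a tile centered at (tile_yaw, tile_pitch) overlaps the FoV (degrees)."""
    dyaw = abs((tile_yaw - head_yaw + 180.0) % 360.0 - 180.0)
    dpitch = abs(tile_pitch - head_pitch)
    return dyaw <= (fov_h + tile_w) / 2 and dpitch <= (fov_v + tile_h) / 2

def assign_bitrates(tiles, head_yaw, head_pitch, hi_kbps=8000, lo_kbps=1000):
    """tiles: list of (yaw, pitch) tile centers; returns a bitrate per tile."""
    return [hi_kbps if tile_in_fov(y, p, head_yaw, head_pitch) else lo_kbps
            for (y, p) in tiles]

# Six cube-like tiles as a rough stand-in for the paper's sub-meshes.
tiles = [(0, 0), (90, 0), (180, 0), (-90, 0), (0, 90), (0, -90)]
print(assign_bitrates(tiles, head_yaw=20, head_pitch=5))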


Tutorials

Augmented Reality: Principles and Practice
Dieter Schmalstieg and Tobias Höllerer
(Graz University of Technology, Austria; University of California at Santa Barbara, USA)
This tutorial will provide a detailed introduction to Augmented Reality (AR). AR is a key user-interface technology for personalized, situated information delivery, navigation, on-demand instruction and games. The widespread availability and rapid evolution of smartphones and new devices such as the HoloLens enable software-only solutions for AR, where it was previously necessary to assemble custom hardware solutions. However, ergonomic and technical limitations of existing devices make this a challenging endeavor. In particular, it is necessary to design novel, efficient real-time computer vision and computer graphics algorithms, and to create new lightweight forms of interaction with the environment through small form-factor devices. This tutorial will present selected technical achievements in this field and highlight some examples of successful application prototypes.

Info
Hypertextual Reality: VR on the Web
Diego González-Zúñiga, Peter O'Shaughnessy, and Michael Blix
(Samsung Research, UK; Samsung Research, USA)
The tutorial focuses on Virtual Reality on the web and how researchers and developers can leverage its power to create content. The WebVR specification is presented, along with examples of how it works in a browser. Content creation is addressed by mentioning the available frameworks accompanied by a hands-on session in A-Frame. Additionally, the concept of Progressive Web App is explained and how it enables web experiences to work offline.

Diving into the Multiplicity: Liberating Your Design Process from a Convention-Centered Approach
Rebecca Rouse, Benjamin Chang, and Silvia Ruzanka
(Rensselaer Polytechnic Institute, USA)
Stop feeling bad about not having a language of VR, and embrace the multiplicity! This full day tutorial explores ways of applying the vibrant creativity of early media to VR, AR, and MR work today, using a new cross-historical concept called media of attraction. Participants will be guided through a prototyping process focused not on best practices, but on restriction mining, bespoke solutions, and associative creative strategies inspired by fascinating historical examples and artistic methods. The session concludes with prototype creation, and the development of speculative design work envisioning next technologies for media of attraction of the future.

Human-Centered Design for Immersive Interactions
Jason Jerald
(NextGen Interactions, USA)
VR has the potential to provide experiences and deliver results that cannot be otherwise achieved. However, interacting with immersive applications is not always straightforward and it is not just about an interface for the user to reach their goals. It is also about users working in an intuitive manner that is a pleasurable experience and devoid of frustration. Although VR systems and applications are incredibly complex, it is up to designers to take on the challenge of having the VR application intuitively communicate to users how the virtual world and its tools work so that those users can achieve their goals in an elegant and comfortable manner.

Navigation Interfaces for Virtual Reality and Gaming: Theory and Practice
Ernst Kruijff and Bernhard E. Riecke
(Bonn-Rhein-Sieg University of Applied Sciences, Germany; Simon Fraser University, Canada)
In this course, we will take a detailed look at various breeds of spatial navigation interfaces that allow for locomotion in digital 3D environments such as games, virtual environments, or even the exploration of abstract data sets. We will look closely into the basics of navigation, unraveling both the psychophysics (including wayfinding) and the actual navigation (travel) aspects. The theoretical foundations form the basis for the practical skill set we will develop by providing an in-depth discussion of navigation devices and techniques, and a step-by-step discussion of multiple real-world case studies. In doing so, we will cover the full range of navigation techniques from handheld to full-body, highly engaging and partly unconventional methods, and tackle spatial navigation with hands-on experience and tips for the design and validation of novel interfaces. In particular, we will look at affordable setups, rapid prototyping methods, and ways to "trick" users into a realistic feeling of self-motion in the explored environments. As such, the course unites the theory and practice of spatial navigation, serving as an entry point to understand and improve upon currently existing methods for the application domain at hand.


Videos

The Pull
Quba Michalski, Brendan J. Hogan, and Jamie Hunsdale
(QubaVR, USA; Impossible Acoustic, USA)
In a secret science facility, gravity has been conquered. “Down” is no longer a direction, but a choice. Step into the center of modified chambers and witness the laws of nature be broken in this five-experiment series. VR has all but torn down the barriers between the imagination of the creator and the experience of the viewer. A concept like The Pull simply does not translate into traditional 2D. We can suggest concepts through TVs and monitors, but we can’t truly experience them — and breaking the very laws of nature is something that can only be experienced. While flat media limits us to hinting and coaxing at an experience, by creating in VR, I can more faithfully share my vision with you, the viewer. For a few minutes – for five chambers – I can truly invite you into my world.

Virtual Reality to Save Endangered Animals: Many Eyes on the Wild
June Kim and Tomasz Bednarz
(UNSW, Australia; Queensland University of Technology, Australia; Data61 at CSIRO, Australia)
Immersive technologies, particularly Virtual Reality (VR), provide new and exciting ways to see the world. Here, we introduce our research that successfully applied VR to biodiversity conservation science. Jaguars are an endangered species, and it is critical and compelling to preserve ecosystems for endangered animals. With this awareness, we established a multidisciplinary VR project that combined data from indigenous villagers (jaguar expert group A), conventional knowledge of the jaguar ecosystem (from jaguar expert group B), and mathematical and statistical models. Our fascination lies in these questions: can we effectively bring together VR and analytical capabilities? Can VR be used to make this world a better place for living beings? Please enjoy our 360-degree images of jaguar habitats taken in the Peruvian Amazon.

Video
Defying the Nazis VR
Dylan Southard, Elijah Allan-Blitz, Jordan Halsey, Christina Heller, and Artemis Joukowsky
(VR Playhouse, USA; A-B Productions, USA; Farm Pond Pictures, USA)
"Defying the Nazis VR" uses CGI, motion graphics, and archival documentary footage to re-create a heroic episode from World War II in VR, experimenting with the emotional power of virtual reality and with the medium as an educational tool.

Info
Singapore Inside Out London 2015 in VR
Lionel Chok
(iMMERSiVELY, Singapore)
Singapore: Inside Out 2015 is an international creative showcase featuring a collection of multi-sensorial experiences designed by the country's creative talents. After a successful debut in Beijing, the travelling showcase stopped at Brick Lane Yard in London from 24-28 June, before heading to New York in September and a homecoming finale in Singapore in November 2015. An energetic, cross-disciplinary showcase of contemporary creative disciplines featuring architecture, food, fashion, film, music, literature, design and the visual arts, this celebration of creativity and collaboration spanning three continents invites viewers to revisit existing preconceptions and discover new perspectives of Singapore and its creative landscape. Having captured seven 360-degree spherical videos at the London event, I set out to combine them in a single 360-degree Virtual Reality (VR) Android mobile app, complete with gaze-based visual interactions for directions, graphics, audio and transitions, also demonstrating diegetic means for how these gaze interactions work between and within each 360 video. The development process was as follows: (1) using Unity3D with the Google Cardboard SDK and C# scripting; (2) mapping the 360 videos inside Unity3D; and (3) adding scripted gaze interactions within Unity3D to navigate between the different 360 videos and trigger other forms of interaction. Launched as an Android APK, the final "Singapore Inside Out 2015 (London)" interactive VR mobile app transports the viewer to Brick Lane Yard, where the original travelling showcase was held. Using their line of sight, viewers can look around each 360 video to relive the experience of being there, as well as find designated buttons activated by gaze. In total, there are over a dozen interactive features to gaze at, from choosing between VR and Cardboard mode, transitioning between different 360 videos, and displaying credits and graphics, to a simple back function; implementing these gaze-activated functions alone took more than half of the total development time. Part of the effort went into tuning the dwell time, i.e., the number of seconds required for a gaze to take effect. Because each 360 video cannot be previewed from inside the spherical object, aligning the buttons in the right positions while working in Unity proved tedious and painstaking. In addition, the scripted components did not always work during testing and behaved differently between the PC and the Mac. The most difficult challenge, however, was the C# scripting itself. In spite of these creative and technical challenges, the app was completed and is now available for all to relive the festival experience in VR.

Genome Gazing: A 360° Stereoscopic Animation for Google Cardboard
Kate Patterson
(Garvan Institute of Medical Research, Australia)
Scientific concepts related to genomics and epigenetics can be abstract and are mostly invisible to the human eye. Communication of these concepts to a lay audience is particularly challenging. Visualizing DNA and associated molecules via an embodied experience offers the user an opportunity to interact with this scientific data in a more meaningful way. This project explores new ways to visualize and represent the major molecular players in genomics and epigenetics using virtual reality via portable head mounted displays.

Smoke Water Fire (2016)
Mark J. Stock
Smoke Water Fire (2016) is a digital video for stereoscopic virtual reality systems. It is a radiosity-rendered computer simulation of the collision of many fluid volumes and required 8 months to generate. Instead of the forms being rendered as a natural material, they are presented in their pure, computational form. By stripping the fluid of its physical context, it can exist in a purely ephemeral, numerical, virtual state.


Research Demos

FACETEQ Interface Demo for Emotion Expression in VR
Ifigeneia Mavridou, James T. McGhee, Mahyar Hamedi, Mohsen Fatoorechi, Andrew Cleal, Emili Ballaguer-Balester, Ellen Seiss, Graeme Cox, and Charles Nduka
(Bournemouth University, UK; Sussex Innovation Centre, UK)
Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses for experimental studies in Virtual Reality. Developed by the Emteq Ltd. laboratory, Faceteq can open new avenues for virtual reality research through a combination of high-performance patented dry-sensor technologies, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year.

Diminished Hand: A Diminished Reality-Based Work Area Visualization
Shohei Mori, Momoko Maezawa, Naoto Ienaga, and Hideo Saito
(Keio University, Japan)
Live video from the instructor’s perspective is useful for presenting intuitive visual instructions to trainees in medical and industrial settings. In such videos, however, the instructor’s hands often hide the work area. In this demo, we present a diminished-hand visualization that reveals the work area hidden by the hands by capturing it with multiple cameras. To achieve the diminished reality, we use a light field rendering technique in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from the multiple-viewpoint images.

Video
Jogging with a Virtual Runner using a See-Through HMD
Takeo Hamada, Michio Okada, and Michiteru Kitazaki
(Toyohashi University of Technology, Japan)
We present a novel assistive method for leading casual joggers by showing a virtual runner on a see-through head-mounted display they wear. The runner moves at a constant pace specified in advance, and its motion is synchronized with the user's. Joggers can visually check their pace at any time by looking at the runner as a personal pacemaker, and they are motivated to keep running by regarding it as a jogging companion. Moreover, the proposed method addresses the safety problem of AR apps: most of the runner's body parts are transparent so that it does not obstruct the user's view. This study may thus contribute to augmenting the daily jogging experience.

Demonstration: Rapid One-Shot Acquisition of Dynamic VR Avatars
Charles Malleson, Maggie Kosek, Martin Klaudiny, Ivan Huerta, Jean-Charles Bazin, Alexander Sorkine-Hornung, Mark Mine, and Kenny Mitchell
(Disney Research, UK; Edinburgh Napier University, UK; Disney Research, Switzerland; Walt Disney Imagineering, USA)
In this demonstration, we showcase a system for rapid acquisition of bespoke avatars for each participant (subject) in a social VR environment. For each subject, the system automatically customizes a parametric avatar model to match the captured subject by adjusting its overall height, body and face shape parameters and generating a custom face texture.

Application of Redirected Walking in Room-Scale VR
Eike Langbehn, Paul Lubos, Gerd Bruder, and Frank Steinicke
(University of Hamburg, Germany; University of Central Florida, USA)
Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) through subtle manipulations of the virtual camera. Previous experiments showed that a physical radius of at least 22 meters is required for undetectable RDW. However, we found that it is possible to decrease this radius and apply RDW to room-scale VR, i.e., up to approximately 5m × 5m. This is done by using curved paths in the VE instead of straight paths and by coupling them together in a way that enables continuous walking. Furthermore, the corresponding paths in the real world are laid out in a way that fits perfectly into room-scale VR. In this research demo, users can experience RDW in a room-scale head-mounted display VR setup and explore a VE of approximately 25m × 25m.
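A minimal sketch of the curvature-gain redirection that such curved-path RDW relies on (frame rate, walking speed, and radius are assumed, not the demo's values): while the user walks, a small yaw rotation is injected every frame so that the physically walked path bends onto a circle of the chosen radius.

import math

def redirection_yaw_per_frame(walk_speed_mps, radius_m, dt=1.0 / 90.0):
    """Yaw (radians) to inject this frame so the physical path curves with the given radius."""
    arc_length = walk_speed_mps * dt      # distance walked during one frame
    return arc_length / radius_m          # angle subtended on a circle of that radius

yaw = redirection_yaw_per_frame(walk_speed_mps=1.0, radius_m=2.5)
print(math.degrees(yaw))   # ~0.25 degrees per frame at 90 Hz with these assumed values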

Immersive Virtual Training for Substation Electricians
Eduardo H. Tanaka, Juliana A. Paludo, Rafael Bacchetti, Edgar V. Gadbem, Leonardo R. Domingues, Carlúcio S. Cordeiro, Olavo Giraldi Jr., Guilherme Alcarde Gallo, Adam Mendes da Silva, and Marcos H. Cascone
(Eldorado Research Institute, Brazil)
This research demonstration presents an Immersive Virtual Substation for Electricians Training. A substation is one of the most critical facilities of the electrical distribution system, so it is mandatory to keep it in normal operation to deliver high standards of service and power quality and to avoid blackouts for consumers. Therefore, it is necessary to provide effective training to the electricians who will operate the equipment in the substation and to prepare them to face emergencies on a daily basis. The main purpose of the proposed Immersive Virtual Environment is to provide a realistic experience to trainees in a safe environment where they can interact with equipment, explore the facility and, mainly, practice basic and complex maneuvers to recover substation operations. Users can interact with this Immersive Virtual Environment using HMDs, joysticks or even an ordinary keyboard, mouse and monitor. Feedback from trainees and instructors who used the Immersive Virtual Environment was very positive, indicating that the objectives were fully achieved.

Video
Experiencing Guidance in 3D Spaces with a Vibrotactile Head-Mounted Display
Victor Adriel de Jesus Oliveira, Luca Brayda, Luciana Nedel, and Anderson Maciel
(Federal University of Rio Grande do Sul, Brazil; Fondazione Istituto Italiano di Tecnologia, Italy)
Vibrotactile feedback is broadly used to support different tasks in virtual and augmented reality applications, such as navigation, communication, and attentional redirection, or to enhance the sense of presence in virtual environments. Thus, we aim to add a haptic component to the most popular wearable used in VR applications: the VR headset. After studying the acuity around the head for vibrating stimuli and trying different parameters, actuators, and configurations, we developed a haptic guidance technique for a vibrotactile head-mounted display (HMD). Our vibrotactile HMD renders the position of objects in the 3D space around the subject by varying both stimulus locus and vibration frequency. In this demonstration, participants will interact with different scenarios in which the mission is to select a number of predefined objects. However, instead of displaying occlusive graphical information to point to these objects, vibrotactile cues provide guidance in the VR setup. Participants will see that our haptic guidance technique can be both easy to use and entertaining.
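One way to sketch the described rendering of object positions through stimulus locus and vibration frequency (the ring layout of tactors and the choice to encode proximity as frequency are assumptions for illustration): pick the head-mounted tactor closest to the object's bearing and scale the vibration frequency with how close the object is.

def render_cue(obj_bearing_deg, obj_distance_m, n_tactors=8,
               f_min=60.0, f_max=250.0, max_dist=5.0):
    """Map an object's direction and distance to (tactor index, vibration frequency in Hz).
    Assumes n_tactors equally spaced around the head, tactor 0 at bearing 0 degrees."""
    sector = 360.0 / n_tactors
    tactor = int(((obj_bearing_deg % 360.0) + sector / 2) // sector) % n_tactors
    closeness = max(0.0, 1.0 - min(obj_distance_m, max_dist) / max_dist)
    freq = f_min + closeness * (f_max - f_min)   # nearer objects vibrate at higher frequency
    return tactor, freq

# Example: an object 100 degrees to the right, 1.5 m away.
print(render_cue(obj_bearing_deg=100.0, obj_distance_m=1.5))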

Video Info
3DPS: An Auto-calibrated Three-Dimensional Perspective-Corrected Spherical Display
Qian Zhou, Kai Wu, Gregor Miller, Ian Stavness, and Sidney Fels
(University of British Columbia, Canada; University of Saskatchewan, Canada)
We describe an auto-calibrated 3D perspective-corrected spherical display that uses multiple rear projected pico-projectors. The display system is auto-calibrated via 3D reconstruction of each projected pixel on the display using a single inexpensive camera. With the automatic calibration, the multiple-projector system supports a seamless blended imagery on the spherical screen. Furthermore, we incorporate head tracking with the display to present 3D content with motion parallax by rendering perspective-corrected images based on the viewpoint. To show the effectiveness of this design, we implemented a view-dependent application that allows walk-around visualization from all angles for a single head-tracked user. We also implemented a view-independent application that supports a wall-papered rendering for multi-user viewing. Thus, both view-dependent 3D VR content and spherical 2D content, such as a globe, can be easily experienced with this display.

Video Info
WebVR Meets WebRTC: Towards 360-Degree Social VR Experiences
Simon Gunkel, Martin Prins, Hans Stokking, and Omar Niamut
(TNO, Netherlands)
Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016, new 360-degree cameras and VR headsets entered the consumer market, distribution platforms were established, and new production studios emerged. VR is ever more becoming a hot topic in research and industry, and many new and exciting interactive VR experiences are emerging. The biggest gap we see in these experiences is their social and shared aspects. In this demo we present our ongoing efforts towards social and shared VR: a modular web-based VR framework that extends current video conferencing capabilities with new Virtual and Mixed Reality functionality. It allows us to connect two people for mediated audio-visual interaction while they engage with interactive content. Our framework allows us to run extensive technological and user-based trials in order to evaluate VR experiences and to build immersive multi-user interaction spaces. Our first results indicate that a high level of engagement and interaction between users is possible in our 360-degree VR setup using current web technologies.

Video
mpCubee: Towards a Mobile Perspective Cubic Display using Mobile Phones
Jens Grubert and Matthias Kranz
(Coburg University, Germany; University of Passau, Germany)
While we witness significant changes in display technologies, to date the majority of display form factors remain flat. The research community has investigated other geometric display configurations, giving rise to cubic displays that create the illusion of a 3D virtual scene within the cube.
We present a self-contained mobile perspective cubic display (mpCubee) assembled from multiple smartphones. We achieve perspective-correct projection of 3D content through head tracking using the smartphones' built-in cameras. Furthermore, our prototype allows users to spatially manipulate 3D objects along individual axes thanks to the orthogonal configuration of the touch displays.
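Perspective-correct rendering for a tracked head and a fixed screen is commonly implemented as a generalized off-axis frustum; the sketch below (our illustration, not the mpCubee source) builds such a projection matrix for one display face from its corner positions and the tracked eye position.

import numpy as np

def off_axis_projection(pa, pb, pc, pe, near=0.01, far=100.0):
    """pa, pb, pc: lower-left, lower-right, upper-left screen corners; pe: tracked eye."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)          # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal (toward the eye)
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l, r = np.dot(vr, va) * near / d, np.dot(vr, vb) * near / d
    b, t = np.dot(vu, va) * near / d, np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])                     # asymmetric frustum
    R = np.eye(4); R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn   # rotate into screen frame
    T = np.eye(4); T[:3, 3] = -pe                               # move eye to origin
    return P @ R @ T

# One 10 cm display face, eye tracked 30 cm in front of its center (assumed numbers).
face = [np.array(c) for c in ([-.05, -.05, 0], [.05, -.05, 0], [-.05, .05, 0])]
print(off_axis_projection(*face, pe=np.array([0.0, 0.0, 0.3])).round(2))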

Towards Ad Hoc Mobile Multi-display Environments on Commodity Mobile Devices
Jens Grubert and Matthias Kranz
(Coburg University, Germany; University of Passau, Germany)
We present a demonstration of HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as an external reference frame for registering multiple mobile devices into a common coordinate system. Our approach allows for dynamic repositioning of devices at runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front-facing cameras. In this way, HeadPhones enables spatially aware multi-display applications in mobile contexts.

Video Info
ArcheoVR: Exploring Itapeva's Archeological Site
Eduardo Zilles Borba, Andre Montes, Marcio Almeida, Mario Nagamura, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper
(University of São Paulo, Brazil; Duke University, USA)
This demo presents a fully immersive and interactive virtual environment (VE), ArcheoVR, which represents the Itapeva Rocky Shelter, a prehistoric archeological site in Brazil. The workflow started with real-world data capture using laser scanners, drones, and photogrammetry. The captured information was transformed into a carefully designed, realistic 3D scene with interactive features that allow users to experience the virtual archeological site in real time. The main objective of this VR model is to allow the general public to feel and explore an otherwise restricted and ephemeral site and to assess prototype tools intended for future digital archaeological exploration.

NIVR: Neuro Imaging in Virtual Reality
Tyler Ard, David M. Krum, Thai Phan, Dominique Duncan, Ryan Essex, Mark Bolas, and Arthur Toga
(University of Southern California, USA)
Visualization is a critical component of neuroimaging, and how best to view data that is naturally three-dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscience researchers hope that, with its recent commercialization and popularization, VR can offer the next step in data visualization and exploration.
Neuro Imaging in Virtual Reality (NIVR), is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind.
NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose the public to, and educate them about, recent advancements in the field of neuroimaging. By providing an engaging experience for exploring new techniques and discoveries in neuroimaging, we hope to spark scientific interest in a broad audience.
Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data.

VRAIN: Virtual Reality Assisted Intervention for Neuroimaging
Dominique Duncan, Bradley Newman, Adam Saslow, Emily Wanserski, Tyler Ard, Ryan Essex, and Arthur Toga
(University of Southern California, USA; RareFaction Interactive, USA)
The USC Stevens Neuroimaging and Informatics Institute in the Laboratory of Neuro Imaging (http://loni.usc.edu) has the largest collection/repository of neuroanatomical MRI scans in the world and is at the forefront of both brain imaging and data storage/processing technology. One of our workflow processes involves algorithmic segmentation of MRI scans into labeled anatomical regions (using FreeSurfer, currently the best software for this purpose). This algorithm is imprecise, and users must tediously correct errors manually, using a mouse and keyboard to edit individual MRI slices one at a time. We demonstrate preliminary work to improve the efficiency of this task by translating it into three dimensions and using virtual reality user interfaces to edit multiple slices of data simultaneously.

Video
Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, Tobias Höllerer, and John O’Donovan
(University of California at Santa Barbara, USA)
Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. To explore and evaluate different 3D hand-gesture-based annotation drawing methods, we have developed an annotation drawing application using the HoloLens augmented reality development platform. The application can be used for highlighting objects at a distance and multi-user collaboration by annotating in the real world.

Video
Virtual Field Trips with Networked Depth-Camera-Based Teacher, Heterogeneous Displays, and Example Energy Center Application
Jason W. Woodworth, Sam Ekong, and Christoph W. Borst
(University of Louisiana at Lafayette, USA)
This demo presents an approach to networked educational virtual reality for virtual field trips and guided exploration. It shows an asymmetric collaborative interface in which a remote teacher stands in front of a large display and depth camera (Kinect) while students are immersed in HMDs. The teacher’s front-facing mesh is streamed into the environment to assist students and deliver instruction. Our project uses commodity virtual reality hardware and high-performance networks to provide students who are unable to visit a real facility with an alternative that offers similar educational benefits. Virtual facilities can further be augmented with educational content through interactables or small games. We discuss motivation, features, interface challenges, and ongoing testing.

Video
Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras
Chih-Fan Chen, Mark Bolas, and Evan Suma Rosenberg
(University of Southern California, USA)
Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting the detailed 3D model from a real object can be time and labor intensive. An alternative way is to build a structured camera array such as a light-stage to reconstruct the model from a real object. However, these technologies are very expensive and not practical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from a RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
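The view-dependent rendering step, selecting captured images based on the HMD position, might be sketched as follows (the nearest-neighbor selection and inverse-distance blending are assumptions for illustration, not necessarily the authors' policy):

import numpy as np

def select_and_weight_views(capture_positions, hmd_position, k=3, eps=1e-6):
    """Pick the k captured camera views closest to the current HMD position.

    capture_positions : (N, 3) camera centers from the offline reconstruction.
    Returns the indices of the selected views and normalized blend weights."""
    d = np.linalg.norm(capture_positions - hmd_position, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)          # inverse-distance weighting
    return nearest, w / w.sum()

# Hypothetical capture trajectory and current head position.
captures = np.random.rand(50, 3) * 2.0
idx, weights = select_and_weight_views(captures, hmd_position=np.array([1.0, 1.6, 0.5]))
print(idx, weights.round(2))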

Video
Travel in Large-Scale Head-Worn VR: Pre-oriented Teleportation with WIMs and Previews
Carmine Elvezio, Mengu Sukan, Steven Feiner, and Barbara Tversky
(Columbia University, USA; Teachers College, USA)
We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast with alternative approaches to virtual travel.
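The core mapping behind such WIM-based teleportation can be sketched as follows (coordinate conventions and the anchoring scheme are assumptions for illustration): the pose set on the miniature avatar is scaled back into the full-size environment and applied to the user on confirmation.

import numpy as np

def wim_to_world(p_wim, wim_anchor, world_anchor, wim_scale):
    """Map a point picked inside the miniature to full-scale world coordinates.
    wim_anchor:   where the miniature model sits in the user's space
    world_anchor: the full-scale point that wim_anchor represents
    wim_scale:    e.g. 1/1000 for a 1:1000 city model."""
    return world_anchor + (p_wim - wim_anchor) / wim_scale

def teleport(avatar_pos_wim, avatar_yaw, avatar_pitch, wim_anchor, world_anchor, wim_scale):
    """Return the full-scale pose to apply to the user when teleportation is confirmed."""
    target = wim_to_world(avatar_pos_wim, wim_anchor, world_anchor, wim_scale)
    return {"position": target, "yaw": avatar_yaw, "pitch": avatar_pitch}

pose = teleport(np.array([0.12, 1.0, -0.30]), avatar_yaw=90.0, avatar_pitch=-10.0,
                wim_anchor=np.array([0.0, 1.0, 0.0]),
                world_anchor=np.array([500.0, 0.0, -200.0]), wim_scale=1 / 1000)
print(pose)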

Video
