3DUI 2017 – Author Index
Achibet, Merwan
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g., force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers.
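The abstract above mentions combining passive elastic force-feedback with pseudo-haptics to convey different stiffness levels. As a minimal sketch of that general idea (not the authors' implementation; the linear Hooke's-law scaling and function names are assumptions), the visual compression shown on a virtual object can be derived from the physical finger displacement on the elastic device and the stiffness assigned to the virtual object:

```python
def displayed_compression(finger_displacement_mm: float,
                          device_stiffness: float,
                          virtual_stiffness: float,
                          max_compression_mm: float = 30.0) -> float:
    """Map a physical finger displacement on the elastic device to the
    compression shown on the virtual object (pseudo-haptic stiffness).

    A simple linear control/display gain is assumed: the force exerted on the
    real device (device_stiffness * displacement) would compress the virtual
    object by force / virtual_stiffness, so stiffer virtual objects move less
    on screen for the same finger motion.
    """
    force = device_stiffness * finger_displacement_mm      # Hooke's law on the real device
    visual = force / max(virtual_stiffness, 1e-6)          # displacement implied by the virtual stiffness
    return min(visual, max_compression_mm)                  # clamp to the object's deformation range


if __name__ == "__main__":
    # The same 10 mm finger press reads as a soft or a stiff virtual key.
    print(displayed_compression(10.0, device_stiffness=0.5, virtual_stiffness=0.25))  # soft: 20 mm shown
    print(displayed_compression(10.0, device_stiffness=0.5, virtual_stiffness=2.0))   # stiff: 2.5 mm shown
```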
|
Afonso, Luis
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed through tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no hand avatar, a realistic avatar, and a translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar.
|
Araujo, Astolfo
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archaeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is amenable to exploration, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising.
|
Ardouin, Jérôme
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators' range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace.
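A system like this needs a control loop that keeps the tracked markers inside the moving cameras' images. The sketch below shows a plain proportional re-centering controller for one pan-tilt camera; the field-of-view values, gain, and function name are illustrative assumptions rather than the authors' design:

```python
def pan_tilt_update(marker_px, image_size, fov_deg, gain=0.5):
    """Return incremental (pan, tilt) commands in degrees that re-center a marker.

    marker_px  -- (u, v) marker centroid in pixels
    image_size -- (width, height) of the camera image in pixels
    fov_deg    -- (horizontal_fov, vertical_fov) of the camera in degrees
    gain       -- proportional gain (< 1 damps oscillation)
    """
    (u, v), (w, h), (hfov, vfov) = marker_px, image_size, fov_deg
    # Normalized offset of the marker from the image center, in [-0.5, 0.5].
    du, dv = (u - w / 2) / w, (v - h / 2) / h
    # Convert the offset to an angular error and apply the proportional gain.
    pan = gain * du * hfov      # positive -> rotate the head to the right
    tilt = -gain * dv * vfov    # positive -> rotate up (image v grows downward)
    return pan, tilt


if __name__ == "__main__":
    # Marker sits right and below center of a 640x480 image: pan right, tilt down.
    print(pan_tilt_update((500, 300), (640, 480), (60.0, 45.0)))
```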
|
Argelaguet, Ferran
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g., force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers.
Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios, even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems and fully unlock the potential of natural user interfaces.
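For the gesture-recognition paper above, the underlying idea of classifying a motion sample by its sparse code can be illustrated with a small sparse-representation classifier: code the sample against a dictionary of training gestures with orthogonal matching pursuit and pick the class whose atoms reconstruct it with the lowest residual. The dictionary layout, sparsity level, and synthetic data below are assumptions for illustration; the paper's dictionary learning and invariance handling are not reproduced:

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal matching pursuit: greedily pick dictionary atoms that explain x."""
    residual, support = x.astype(float), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef

def classify(D, labels, x, sparsity=5):
    """Sparse-representation classification: the class with the smallest residual wins."""
    support, coef = omp(D, x, sparsity)
    best_label, best_err = None, np.inf
    for label in set(labels):
        # Reconstruct x using only the selected atoms that belong to this class.
        mask = np.array([labels[j] == label for j in support])
        err = np.linalg.norm(x - D[:, support][:, mask] @ coef[mask]) if mask.any() else np.inf
        if err < best_err:
            best_label, best_err = label, err
    return best_label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(60, 40))                    # 40 training gestures, 60-D motion features
    D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
    labels = ["circle"] * 20 + ["swipe"] * 20
    x = D[:, 3] * 0.9 + 0.05 * rng.normal(size=60)   # noisy copy of a "circle" training example
    print(classify(D, labels, x))                    # expected: "circle"
```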
|
Ariza N., Oscar J.
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless and wearable devices which, once attached to the user's head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task.
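A hedged sketch of how head-worn vibrotactile guidance of this kind could work: the signed angular offset between the head orientation and an off-screen target drives the actuator on the side of the shorter turn, with intensity growing with the offset. The actuator layout, linear intensity mapping, and angle convention are assumptions, not the authors' design:

```python
def guidance_cue(head_yaw_deg, target_yaw_deg, fov_deg=90.0):
    """Return (left, right) vibration intensities in [0, 1] that guide a head turn.

    Targets already inside the field of view produce no vibration; otherwise the
    actuator on the side of the shorter turn vibrates, stronger for larger offsets.
    """
    # Signed shortest angular difference in (-180, 180]: positive -> target to the right.
    delta = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(delta) <= fov_deg / 2:
        return 0.0, 0.0
    intensity = min(abs(delta) / 180.0, 1.0)
    return (0.0, intensity) if delta > 0 else (intensity, 0.0)


if __name__ == "__main__":
    print(guidance_cue(head_yaw_deg=0.0, target_yaw_deg=120.0))   # right actuator, ~0.67
    print(guidance_cue(head_yaw_deg=0.0, target_yaw_deg=-170.0))  # left actuator, ~0.94
```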
|
Attanasio, Giuseppe
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from "the real" towards "the virtual". Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen.
|
Babu, Sabarish V.
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present the Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc., and physical device movement to interact with the environment, making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly.
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and for particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real-world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation, and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive.
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) versus a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner compared to using a large-screen immersive display, due to the similarity between the interactions afforded in the virtual task and those of the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance.
|
Bailenson, Jeremy
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject's sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed.
|
Balcazar, Ruben
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction.
|
Barreto, Armando
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction.
|
Basting, Oliver
Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) The following paper investigates the effect on the intensity of perceived vection of changing the field of view (FOV) using a head-mounted display (HMD) in a virtual environment (VE). For this purpose a study was carried out in which the participants were situated in a vection-evoking VE using an HMD. During the experiment, the VE was presented with different FOVs, and a measurement of the felt intensity of vection was performed. The results indicate that a decrease of the FOV invokes a decrease of the intensity of perceived vection.
|
Bechmann, Dominique
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on various tasks, and results differed among studies. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness.
|
Belloc, Olavo
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change the viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning, or deleting - through unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and from the general public.
|
Bernal, Jonathan
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction.
|
Berndt, Iago
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user.
|
Bertrand, Jeffrey
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present the Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc., and physical device movement to interact with the environment, making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly.
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) versus a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner compared to using a large-screen immersive display, due to the similarity between the interactions afforded in the virtual task and those of the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance.
|
Bhargava, Ayush
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present the Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc., and physical device movement to interact with the environment, making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly.
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) versus a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner compared to using a large-screen immersive display, due to the similarity between the interactions afforded in the virtual task and those of the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance.
|
Billinghurst, Mark
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
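Radial Pursuit builds on smooth-pursuit matching: candidate objects move along distinct paths and the one whose motion best correlates with the recorded gaze is selected. Below is a minimal correlation-based matcher under assumed conditions (2D screen coordinates, a fixed window, Pearson correlation); it is not the authors' exact algorithm:

```python
import numpy as np

def pursuit_target(gaze_xy, object_paths, threshold=0.8):
    """Pick the object whose on-screen trajectory best correlates with the gaze.

    gaze_xy      -- (n, 2) array of gaze samples over a short time window
    object_paths -- dict name -> (n, 2) array of object positions over the same window
    threshold    -- minimum mean correlation required to accept a selection
    """
    best_name, best_score = None, -1.0
    for name, path in object_paths.items():
        # Correlate the x and y coordinate series separately, then average.
        scores = [np.corrcoef(gaze_xy[:, d], path[:, d])[0, 1] for d in range(2)]
        score = float(np.nanmean(scores))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 60)
    circle = np.c_[np.cos(t), np.sin(t)]                 # object moving on a circle
    line = np.c_[t, 0.2 * t]                             # object moving on a line
    gaze = circle + np.random.default_rng(1).normal(scale=0.05, size=circle.shape)
    print(pursuit_target(gaze, {"orb": circle, "slider": line}))   # expected: "orb"
```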
|
Bonds, Grayson
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and for particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real-world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation, and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive.
|
Bönsch, Andrea
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied virtual agents provide users with assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents' behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user's call for an agent and the agent's actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between the PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for any of the behaviors. Instead, the necessity of a better trade-off between a low AT and an agent's realistic behavior is demonstrated.
|
Borba, Eduardo Zilles
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change the viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning, or deleting - through unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and from the general public.
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archaeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is amenable to exploration, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising.
|
Borst, Christoph W.
Prabhakar V. Vemavarapu and Christoph W. Borst (University of Louisiana at Lafayette, USA) We present an indirect touch 3D interface that uses a two-sided handheld touch device for interaction with dense datasets on stereoscopic displays. This work explores the possibilities of a smartphone able to sense touch on both sides. Two Android phones are combined back-to-back. The top touch surface is used for primary or fine interactions (selection/translation/rotation) and the bottom surface for coarser aspects such as mode control or feature extraction. The two surfaces are programmed to recognize input from four digits – two on top and two on the bottom. The four touch areas enable 3D object selection, manipulation, and feature extraction using combinations of simultaneous touches.
Jason W. Woodworth and Christoph W. Borst (University of Louisiana at Lafayette, USA) We address a 3D pointing problem for a "virtual mirror" view used in collaborative VR. The virtual mirror is a large TV display showing a depth-camera-based image of a user in a surrounding virtual environment. There are problems with pointing and communicating to remote users due to the indirectness of pointing in a mirror and a low sense of depth. We propose several visual cues to help the user control pointing depth, and present an initial user study, providing a basis for further refinement and investigation of techniques.
|
Boustila, Sabah
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on various tasks, and results differed among studies. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness.
|
Bowman, Doug A.
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking.
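Cell-based redirection reduces, at its core, to a coordinate mapping: the virtual world is split into cells the size of the tracking space, and only the user's offset inside the current cell has to be walked out physically. The axis-aligned grid and modulo-style mapping below are illustrative assumptions; the narrative transitions (Bookshelf, Bird) that switch cells are not modeled:

```python
def cell_and_offset(virtual_xz, cell_size):
    """Split a virtual ground-plane position into (cell index, offset inside the cell).

    virtual_xz -- (x, z) position in the virtual world, in meters
    cell_size  -- (width, depth) of one cell, equal to the physical tracking space
    """
    cell = tuple(int(v // s) for v, s in zip(virtual_xz, cell_size))
    offset = tuple(v - c * s for v, c, s in zip(virtual_xz, cell, cell_size))
    return cell, offset

def physical_position(virtual_xz, cell_size):
    """The in-cell offset maps one-to-one to physical tracking-space coordinates,
    so every point of the current cell is reachable by real walking."""
    _, offset = cell_and_offset(virtual_xz, cell_size)
    return offset


if __name__ == "__main__":
    room = (4.0, 4.0)                           # a 4 m x 4 m tracked space
    print(cell_and_offset((9.5, 2.0), room))    # cell (2, 0), offset (1.5, 2.0)
    print(physical_position((9.5, 2.0), room))  # stand 1.5 m / 2.0 m from the room corner
```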
|
Bruder, Gerd
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless and wearable devices which, once attached to the user's head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task.
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject's sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed.
|
Cannavò, Alberto
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from "the real" towards "the virtual". Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen.
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D user interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named "cursor mode") exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as "tuning mode") uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control.
|
Capobianco, Antonio
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on various tasks, and results differed among studies. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness.
|
Cermelli, Fabio
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D user interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named "cursor mode") exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as "tuning mode") uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control.
|
Chandrashekar, Vikram
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking.
|
Chang, Yun Suk
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified.
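Surface-Drawing requires projecting the fingertip gesture onto the reconstructed world model. A minimal stand-in for that query is a ray cast from the head through the fingertip against the scene triangles, using the standard Möller-Trumbore test; the mesh representation and function names below are assumptions, not the HoloLens API:

```python
import numpy as np

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection; returns the distance t or None."""
    v0, v1, v2 = (np.asarray(p, float) for p in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                          # ray parallel to the triangle
    inv = 1.0 / det
    t_vec = origin - v0
    u = t_vec.dot(p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def surface_cursor(head, fingertip, mesh):
    """Project the fingertip onto the scene: cast the head->fingertip ray into the mesh."""
    head, fingertip = np.asarray(head, float), np.asarray(fingertip, float)
    direction = fingertip - head
    direction /= np.linalg.norm(direction)
    hits = [t for tri in mesh if (t := ray_triangle(head, direction, tri)) is not None]
    return head + min(hits) * direction if hits else None    # nearest surface point


if __name__ == "__main__":
    wall = [((-5, -5, 3), (5, -5, 3), (0, 5, 3))]             # one scene triangle 3 m ahead
    print(surface_cursor(head=(0, 0, 0), fingertip=(0.1, 0.0, 0.4), mesh=wall))
```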
|
Chardonnet, Jean-Rémy
José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD) (GearVR); and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between both environments. The results obtained can be useful in guiding the development of future VR applications.
|
Chellali, Amine
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle, and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study to assess usability and efficiency allowed us to identify the gestures that are most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented.
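The finger-to-axis mapping described above can be sketched directly: the identified fingers in contact determine which axes a drag may affect (index = X, middle = Y, ring = Z), and the drag delta is zeroed on every other axis. Finger identification is assumed to come from the touch middleware; the data layout is illustrative:

```python
FINGER_AXIS = {"index": 0, "middle": 1, "ring": 2}   # X, Y, Z

def constrained_translation(drag_delta, touching_fingers):
    """Keep only the components of a 3D drag allowed by the chording gesture.

    drag_delta       -- (dx, dy, dz) translation proposed by the raw gesture
    touching_fingers -- set of identified fingers currently on the screen
    """
    allowed = {FINGER_AXIS[f] for f in touching_fingers if f in FINGER_AXIS}
    return tuple(d if axis in allowed else 0.0 for axis, d in enumerate(drag_delta))


if __name__ == "__main__":
    # Index + ring chord: translation constrained to the XZ plane.
    print(constrained_translation((0.4, 0.7, -0.2), {"index", "ring"}))   # (0.4, 0.0, -0.2)
    # Middle finger alone: translation constrained to the Y axis.
    print(constrained_translation((0.4, 0.7, -0.2), {"middle"}))          # (0.0, 0.7, 0.0)
```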
|
Chiaramida, Vincenzo
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D user interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named "cursor mode") exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as "tuning mode") uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control.
|
Chipana, Miriam Luque
Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when the user is teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of the two spatial controllers that accompany commercial headsets to walk by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single controller, the average of both controllers, or that of the head to determine the direction of walking, and speed can be controlled by changing the angle of the controller relative to the frontal plane.
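The stepping rule lends itself to a compact sketch: each trigger pull advances the viewpoint by one step whose direction comes from the chosen source (one controller, the average of both, or the head) and whose length grows with the controller's tilt. The yaw/pitch representation and scaling constants below are illustrative assumptions, not the authors' implementation:

```python
import math

def virtual_step(position, yaw_deg, pitch_deg, base_step=0.7):
    """Advance `position` by one Trigger Walking step.

    position  -- (x, z) current position on the ground plane, in meters
    yaw_deg   -- walking direction from the chosen source (controller, average, or head)
    pitch_deg -- controller angle to the frontal plane; a steeper tilt gives a longer step
    """
    # Scale the step length by the tilt angle (0 deg -> base step, 60+ deg -> double).
    step = base_step * (1.0 + min(abs(pitch_deg), 60.0) / 60.0)
    yaw = math.radians(yaw_deg)
    return (position[0] + step * math.sin(yaw), position[1] + step * math.cos(yaw))

def averaged_yaw(left_yaw_deg, right_yaw_deg):
    """Average two controller headings, handling angle wrap-around correctly."""
    lx, ly = math.cos(math.radians(left_yaw_deg)), math.sin(math.radians(left_yaw_deg))
    rx, ry = math.cos(math.radians(right_yaw_deg)), math.sin(math.radians(right_yaw_deg))
    return math.degrees(math.atan2(ly + ry, lx + rx))


if __name__ == "__main__":
    pos = (0.0, 0.0)
    for _ in range(3):   # three trigger pulls, heading set by the average of both controllers
        pos = virtual_step(pos, yaw_deg=averaged_yaw(40, 50), pitch_deg=20)
    print(pos)
```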
|
Chowdhury, Tanvir Irfan
Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to explain our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation, leading to a high sense of presence, would help the user improve information recall. We conducted a between-subjects experiment in which participants were presented information about multiple sclerosis (MS) in different immersive conditions and afterwards attempted to recall the information. The results from our study suggest that participants who were in immersive conditions were able to recall the information more effectively than the participants who experienced a non-immersive condition.
|
Ciambrone, Andrew
Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extendable and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting down within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to other alternatives, makes it a promising lighting control technique.
|
Cibrario, Francesca
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from "the real" towards "the virtual". Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen.
|
Ciccone, Giovanni
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D user interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named "cursor mode") exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as "tuning mode") uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control.
|
Clergeaud, Damien
Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique which reduces this time by increasing the user's "natural" field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the VE that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks in order to determine whether the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano.
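The PanoWindow re-projects the directions all around the user into a flat 360° image. Assuming an equirectangular projection (the paper does not state which projection is used), the mapping from a 3D direction relative to the user to normalized window coordinates looks like this:

```python
import math

def direction_to_pano_uv(direction):
    """Map a 3D direction (x right, y up, z forward) to (u, v) in a 360-degree
    equirectangular PanoWindow, with u in [0, 1) wrapping around the user and
    v in [0, 1] from bottom to top."""
    x, y, z = direction
    norm = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(x, z)                       # 0 = straight ahead, pi = behind
    pitch = math.asin(y / norm)                  # -pi/2 = straight down, +pi/2 = up
    u = ((yaw + math.pi) / (2 * math.pi)) % 1.0  # wrap the full horizontal circle
    v = (pitch + math.pi / 2) / math.pi
    return u, v


if __name__ == "__main__":
    print(direction_to_pano_uv((0.0, 0.0, 1.0)))    # ahead  -> u = 0.5
    print(direction_to_pano_uv((0.0, 0.0, -1.0)))   # behind -> u = 0.0 (the wrap seam)
    print(direction_to_pano_uv((1.0, 0.0, 0.0)))    # right  -> u = 0.75
```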
|
Clifford, Rory M. S.
Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push and force pull behaviours as examples of the general concept of force extension. We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging.
|
Cordeiro, Eduardo
Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile.
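The selection loop described above can be sketched as user-in-the-loop cone-casting: all objects inside the cone's half-angle are candidates; while more than one remains, the viewpoint advances toward them so that their angular separation grows, and the user re-aims. The half-angle, the step fraction, and the termination handling below are illustrative assumptions:

```python
import numpy as np

def cone_candidates(origin, direction, objects, half_angle_deg=10.0):
    """Objects whose direction from `origin` lies inside the selection cone."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    cos_limit = np.cos(np.radians(half_angle_deg))
    kept = {}
    for name, pos in objects.items():
        to_obj = np.asarray(pos, float) - np.asarray(origin, float)
        if to_obj.dot(direction) / np.linalg.norm(to_obj) >= cos_limit:
            kept[name] = pos
    return kept

def refine_step(origin, direction, objects, half_angle_deg=10.0, approach=0.5):
    """One PRECIOUS-style refinement step.

    Returns ("selected", name) when exactly one object is in the cone, otherwise
    ("moved", new_origin): the viewpoint advances toward the remaining candidates
    so that, on the next aim, they are spread out angularly and easier to isolate.
    """
    candidates = cone_candidates(origin, direction, objects, half_angle_deg)
    if len(candidates) == 1:
        return "selected", next(iter(candidates))
    if not candidates:
        return "moved", np.asarray(origin, float)     # nothing in the cone: stay put, re-aim
    centroid = np.mean([np.asarray(p, float) for p in candidates.values()], axis=0)
    new_origin = np.asarray(origin, float) + approach * (centroid - np.asarray(origin, float))
    return "moved", new_origin


if __name__ == "__main__":
    scene = {"lamp": (0.2, 0.0, 8.0), "vase": (-0.6, 0.1, 8.0), "chair": (3.0, 0.0, 5.0)}
    state, origin = refine_step((0, 0, 0), (0, 0, 1), scene)   # lamp and vase both inside the cone
    print(state, origin)                                       # -> 'moved', closer viewpoint
    aim = np.array(scene["lamp"]) - origin                     # the user re-aims at the lamp
    print(refine_step(origin, aim, scene))                     # -> ('selected', 'lamp')
```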
|
Corrêa, Ana Grasielle
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archaeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is amenable to exploration, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising.
|
Cortes, Guillaume
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators' range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace.
|
Costa, Raphael
Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to explain our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation, leading to a high sense of presence, would help the user improve information recall. We conducted a between-subjects experiment in which participants were presented information about multiple sclerosis (MS) in different immersive conditions and afterwards attempted to recall the information. The results from our study suggest that participants who were in immersive conditions were able to recall the information more effectively than the participants who experienced a non-immersive condition.
|
Daher, Salam
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject's sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed.
|
Debarba, Henrique G.
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user.
|
De Oliveira, Thomas Volpato
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, along with 2D and 3D interfaces to manipulate and modify the objects. Users can move, delete, paint, and duplicate virtual objects using 6-DOF techniques.
|
Dias, Paulo
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed through tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no hand avatar, a realistic avatar, and a translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar.
|
Dorado, José Luis |
![]() José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments in the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD) (GearVR); and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between both environments. The results obtained can be useful in guiding the development of future VR applications. ![]() ![]() Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations in order to decrease cyber-sickness and increase the sense of presence. In this study, we experiment with the same vibration modalities but with a new navigation method. The results show that proprioceptive vibrations affect neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and reduce cyber-sickness to some extent. ![]() |
|
Ducoffe, Mélanie |
![]() Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems in order to fully unlock the potential of natural user interfaces. ![]() ![]() |
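The sparse-representation classification described above can be illustrated with a minimal sketch: assuming one dictionary of motion atoms per gesture class (the paper's exact feature extraction and classifier are not given here), a new motion feature vector is assigned to the class whose dictionary reconstructs it best under a sparsity constraint.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_gesture(x, class_dictionaries, n_nonzero=5):
    """Assign feature vector x to the class whose dictionary yields the
    lowest sparse reconstruction error (illustrative sketch only)."""
    best_label, best_err = None, np.inf
    for label, D in class_dictionaries.items():
        # D: (n_features, n_atoms) dictionary learned from that class's training gestures
        code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
        err = np.linalg.norm(x - D @ code)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```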
|
Ebrahimi, Elham |
![]() Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. ![]() ![]() |
|
Ellingson, Arin M. |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
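The dynamic-gain component mentioned above can be sketched as a generic PRISM-style scaling by hand speed; the constant and the linear ramp below are assumptions for illustration, not the authors' calibrated mapping, and the real technique also applies 2D/3D constraints and handles rotation.

```python
import numpy as np

def scaled_translation(obj_pos, hand_delta, dt, full_speed=0.2):
    """Apply a tracker displacement to the manipulated object with a gain that
    shrinks at low hand speed for precision and reaches 1:1 at `full_speed` m/s."""
    speed = np.linalg.norm(hand_delta) / dt          # hand speed in m/s
    gain = min(1.0, speed / full_speed)              # 0..1, smaller when moving slowly
    return np.asarray(obj_pos) + gain * np.asarray(hand_delta)

# Example: a slow 1 cm hand movement over 0.1 s moves the object only ~5 mm
new_pos = scaled_translation((0.0, 0.0, 0.0), (0.01, 0.0, 0.0), dt=0.1)
```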
|
Elmongui, Hicham G. |
![]() Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extendable and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting down within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to other alternatives, makes it a promising lighting control technique. ![]() ![]() |
|
Erfanian, Aida |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted a better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. The task performance under the co-located setting indicated some degree of combination of the performances under the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations suggest that it remains inconclusive whether MLE integrates both cues in a co-located setting for 3D user interaction. ![]() |
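For reference, the maximum likelihood estimation model tested above is the standard minimum-variance cue combination: each cue's estimate is weighted by its reliability (inverse variance), which also predicts a reduced variance for the combined estimate. With F denoting the force cue and V the vibrotactile cue:

```latex
\hat{s}_{FV} = w_F\,\hat{s}_F + w_V\,\hat{s}_V,\qquad
w_F = \frac{\sigma_V^{2}}{\sigma_F^{2}+\sigma_V^{2}},\quad
w_V = \frac{\sigma_F^{2}}{\sigma_F^{2}+\sigma_V^{2}},\qquad
\sigma_{FV}^{2} = \frac{\sigma_F^{2}\,\sigma_V^{2}}{\sigma_F^{2}+\sigma_V^{2}} .
```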
|
Ferlay, Fabien |
![]() Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is decisive to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and that of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. ![]() ![]() |
|
Ferreira, Alfredo |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this respect, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() ![]() Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. ![]() |
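The Boolean operations at the core of the CSG work above have a compact formulation on signed distance functions, which is one common way to prototype them; this is an illustrative sketch under that assumption, not the mesh-based pipeline the paper evaluates.

```python
import math

# CSG Booleans expressed on signed distance values (negative = inside the solid)
def csg_union(da, db):        return min(da, db)
def csg_intersection(da, db): return max(da, db)
def csg_difference(da, db):   return max(da, -db)   # A minus B

def sphere_sdf(p, center, radius):
    return math.dist(p, center) - radius

# Example: carve a small sphere out of a larger one and test a point
p = (0.45, 0.0, 0.0)
d = csg_difference(sphere_sdf(p, (0, 0, 0), 0.5), sphere_sdf(p, (0.5, 0, 0), 0.2))
inside = d < 0.0   # False: the point lies in the carved-out region
```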
|
Ferreira, Carlos |
![]() Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed in a tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no-hand avatar, realistic avatar, and translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar. ![]() |
|
Ferreira, Ricardo |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this respect, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() |
|
Figueroa, Pablo |
![]() José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments in the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD) (GearVR); and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between both environments. The results obtained can be useful in guiding the development of future VR applications. ![]() |
|
Freitag, Sebastian |
![]() Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility - the information of which parts of the scene are visible from a certain location - can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. ![]() |
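A minimal sketch of the precomputation idea described above, assuming a `sample_points` helper on each navigation-mesh polygon and a `first_hit` ray query against the scene (both hypothetical names; the paper's actual sampling and storage scheme is more refined):

```python
def precompute_visibility(navmesh_polys, scene_objects, first_hit,
                          samples_per_poly=16, eye_height=1.7):
    """Store, for every walkable polygon, the set of scene objects visible
    from sampled eye positions above that polygon (illustrative sketch)."""
    visibility = {}
    for poly in navmesh_polys:
        visible = set()
        for x, y, z in poly.sample_points(samples_per_poly):
            eye = (x, y + eye_height, z)                 # eye level above the floor
            for obj in scene_objects:
                if first_hit(eye, obj.center) is obj:    # ray reaches obj unoccluded
                    visible.add(obj.id)
        visibility[poly.id] = visible
    return visibility
```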
|
Fuhrmann, Arnulph |
![]() Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) This paper investigates the effect of changing the field of view (FOV) of a head-mounted display (HMD) on the intensity of perceived vection in a virtual environment (VE). For this purpose, a study was carried out in which the participants were situated in a vection-evoking VE using an HMD. During the experiment, the VE was presented with different FOVs, and a measurement of the felt intensity of vection was performed. The results indicate that a decrease of the FOV leads to a decrease in the intensity of perceived vection. ![]() |
|
Galvan, Alain |
![]() Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. ![]() ![]() Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments in a manner similar to natural, color-corrected images from telescopes. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. ![]() |
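The pre-computed "spherical ray map" described in the second abstract above amounts to storing, for every cubemap texel, the unit direction it looks along; a sketch follows (the face bases use one possible convention and would need to match the target engine, and the downstream noise shaders are not shown).

```python
import numpy as np

# (forward, right, up) basis per cubemap face; one possible convention
FACES = {
    "+x": ((1, 0, 0), (0, 0, -1), (0, -1, 0)),
    "-x": ((-1, 0, 0), (0, 0, 1), (0, -1, 0)),
    "+y": ((0, 1, 0), (1, 0, 0), (0, 0, 1)),
    "-y": ((0, -1, 0), (1, 0, 0), (0, 0, -1)),
    "+z": ((0, 0, 1), (1, 0, 0), (0, -1, 0)),
    "-z": ((0, 0, -1), (-1, 0, 0), (0, -1, 0)),
}

def face_ray_map(face, size):
    """Unit view direction for every texel of one cubemap face, shape (size, size, 3)."""
    f, r, u = (np.array(v, dtype=float) for v in FACES[face])
    t = (np.arange(size) + 0.5) / size * 2.0 - 1.0      # texel centers in [-1, 1]
    a, b = np.meshgrid(t, t)                            # horizontal, vertical offsets
    d = f + a[..., None] * r + b[..., None] * u
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

rays = face_ray_map("+z", 256)   # feed these directions to the star/dust noise passes
```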
|
Girard, Adrien |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() |
|
Gračanin, Denis |
![]() Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extendable and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting down within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to other alternatives, makes it a promising lighting control technique. ![]() ![]() |
|
Gramopadhye, Anand |
![]() Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. ![]() |
|
Grandi, Jerônimo G. |
![]() Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. ![]() ![]() |
|
Gribonval, Remi |
![]() Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems in order to fully unlock the potential of natural user interfaces. ![]() ![]() |
|
Grubert, Jens |
![]() Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. ![]() |
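A rough sketch of the lightweight front-camera motion estimate described above, assuming OpenCV: it tracks a handful of corners between frames and reports the median pixel shift, which can stand in for the head offset until full face tracking is available (the paper's actual estimator and its hand-off to head tracking are more involved).

```python
import cv2
import numpy as np

def estimate_shift(prev_gray, cur_gray, max_corners=50):
    """Median 2D pixel shift between two grayscale front-camera frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    return np.median(flow, axis=0)        # (dx, dy) in pixels
```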
|
Grünvogel, Stefan M. |
![]() Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) This paper investigates the effect of changing the field of view (FOV) of a head-mounted display (HMD) on the intensity of perceived vection in a virtual environment (VE). For this purpose, a study was carried out in which the participants were situated in a vection-evoking VE using an HMD. During the experiment, the VE was presented with different FOVs, and a measurement of the felt intensity of vection was performed. The results indicate that a decrease of the FOV leads to a decrease in the intensity of perceived vection. ![]() |
|
Guitton, Pascal |
![]() Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique which reduces this time by increasing the user’s “natural” field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the Virtual Environment (VE) that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks in order to know if the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano. ![]() |
|
Guo, Rongkai |
![]() Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. ![]() |
|
Gutenko, Ievgeniia |
![]() Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains such as medicine, engineering, and physics. When a user clicks on a 2D point on the screen rendering, a ray is cast through the volume perpendicular to the screen. The problem lies in determining the intended picking location, that is, the end point of the ray. We introduce picking for 3D volumetric data by utilizing the pressure and angle of a digital stylus pen on a touchscreen device. We map both the pressure and the angle of the stylus to the depth of the selector widget to aid target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes. ![]() |
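The depth mapping above can be sketched as a simple blend of normalized pressure and tilt along the picking ray; the blend weight and the linear ramps are assumptions for illustration, not the authors' calibrated mapping.

```python
def picking_depth(pressure, tilt_deg, ray_length, w_pressure=0.7):
    """Depth of the selector widget along the picking ray, from stylus input.
    pressure: 0..1, tilt_deg: 0 (upright) to 90 (flat), ray_length: metres."""
    tilt = min(max(tilt_deg, 0.0), 90.0) / 90.0
    fraction = w_pressure * pressure + (1.0 - w_pressure) * tilt
    return fraction * ray_length

# Example: pressing harder or tilting more pushes the selector deeper into the volume
depth = picking_depth(pressure=0.6, tilt_deg=30.0, ray_length=0.4)   # ~0.21 m
```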
|
Hachet, Martin |
![]() Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained first feedback from informal interviews. ![]() ![]() ![]() |
|
Han, Dustin T. |
![]() Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. ![]() |
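The redirected-reach offset described above can be sketched as a hand warp that grows as the real hand approaches the prop; the linear ramp and the reach radius are illustrative assumptions.

```python
import numpy as np

def redirected_hand(real_hand, real_prop, virtual_target, reach_radius=0.6):
    """Rendered hand position: the real hand plus an offset toward the virtual
    target, blended in so that the virtual target and the physical prop are
    reached at the same moment (illustrative sketch)."""
    real_hand, real_prop, virtual_target = map(np.asarray, (real_hand, real_prop, virtual_target))
    offset = virtual_target - real_prop            # where virtual and physical objects differ
    dist = np.linalg.norm(real_hand - real_prop)   # how far the hand still has to go
    blend = 1.0 - min(dist / reach_radius, 1.0)    # 0 far away, 1 at the prop
    return real_hand + blend * offset
```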
|
Handosa, Mohamed |
![]() Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extendable and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting down within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to other alternatives, makes it a promising lighting control technique. ![]() ![]() |
|
Harrell, Nathanael |
![]() Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. ![]() ![]() |
|
Hashemian, Abraham M. |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. ![]() |
|
Hashiguchi, Satoshi |
![]() Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under the condition where the masses of the real objects are 500, 750, and 1000 g, and only the CG liquid level is changed. As a result, the difference in mass did not influence the difference threshold of weights obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. ![]() |
|
Heidicker, Paul |
![]() Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. In question is how the appearance of those avatars influences communication and interaction. It might make a difference, if the avatar consists of a complete body representation or if only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearances in a user study. For this, we used estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with full representation of the avatar body lead to an increased sense of presence. Motion-controlled avatars as well as avatars which have only head and hands visible produced an increased feeling of co-presence and behavioral interdependence. This is interesting, since it states that we do not need a complete avatar body in social VR. ![]() |
|
Hernández, José Tiberio |
![]() José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments in the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD) (GearVR); and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between both environments. The results obtained can be useful in guiding the development of future VR applications. ![]() |
|
Höllerer, Tobias |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality are useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. ![]() ![]() |
|
Hu, Yaoping |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted a better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. The task performance under the co-located setting indicated some degree of combination of the performances under the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations suggest that it remains inconclusive whether MLE integrates both cues in a co-located setting for 3D user interaction. ![]() |
|
Johnson, Tiana |
![]() Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. ![]() |
|
Jorge, Joaquim |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this respect, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() ![]() Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. ![]() |
|
Kajimoto, Hiroyuki |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() ![]() Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex on walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which leads to the need for a new active-type device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of the navigation information influenced the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. ![]() ![]() ![]() Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. ![]() |
|
Kalkofen, Denis |
![]() Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. ![]() |
|
Kaufman, Arie E. |
![]() Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains such as medicine, engineering, and physics. When a user clicks on a 2D point on the screen rendering, a ray is cast through the volume perpendicular to the screen. The problem lies in determining the intended picking location, that is, the end point of the ray. We introduce picking for 3D volumetric data by utilizing the pressure and angle of a digital stylus pen on a touchscreen device. We map both the pressure and the angle of the stylus to the depth of the selector widget to aid target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes. ![]() |
|
Kaufmann, Hannes |
![]() Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) The space available for any virtual reality experience is often strictly limited, confining the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively for a dynamic, scalable and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the properties of the layout used on human spatial perception in a physically impossible spatial arrangement. Our first reported study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry and asymmetry of the path used. In addition, in the second study we explore the effect of path smoothing by substituting the right-angled corridors with smooth curves. Our studies show that using smooth curved corridors is more beneficial for spatial compression than the conventional right-angled approach. ![]() |
|
Keefe, Daniel F. |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
|
Kim, Kangsoo |
![]() Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited, alert, had significantly higher measures of Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. ![]() |
|
Kim, Kyungyoon |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
|
Kimura, Asako |
![]() Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under the condition where the masses of the real objects are 500, 750, and 1000 g, and only the CG liquid level is changed. As a result, the difference in mass did not influence the difference threshold of weights obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. ![]() |
|
Kitson, Alexandra |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. ![]() |
|
Kodama, Ryo |
![]() Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. ![]() |
|
Koge, Masahiro |
![]() Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. ![]() |
|
Kon, Yuki |
![]() Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of the navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which necessitates a new active-type device for use as part of a navigation system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of the navigation information affected the effect of our device on walking. Our interpretation conditions were “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigation information. ![]() ![]() |
|
Kondur, Navyaram |
![]() Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. ![]() ![]() |
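The abstract describes cell-based redirection only at a conceptual level; the sketch below shows one way the cell-to-tracking-space mapping could be organized, assuming square cells the size of the tracked room. The class, its methods, and the cell size are hypothetical.

```python
# Minimal sketch of cell-based redirection: the virtual world is tiled into
# cells the size of the tracked space, and the user's physical offset inside
# the room is reused verbatim inside the current virtual cell.
# Cell size and the transition handling are assumptions for illustration.

CELL_SIZE = (4.0, 4.0)  # metres; assumed to equal the physical tracking space

class CellRedirection:
    def __init__(self, start_cell=(0, 0)):
        self.cell = start_cell  # integer cell coordinates in the virtual world

    def virtual_position(self, physical_xy):
        """Map a physical position (within the room) to a virtual position."""
        cx, cy = self.cell
        px, py = physical_xy
        return (cx * CELL_SIZE[0] + px, cy * CELL_SIZE[1] + py)

    def enter_neighbour(self, dx, dy):
        """Called by a narrative transition (e.g. Bookshelf or Bird) when the
        user should continue into an adjacent virtual cell."""
        self.cell = (self.cell[0] + dx, self.cell[1] + dy)
```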
|
Kopper, Regis |
![]() David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real-world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesized that holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. ![]() ![]() Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). From the premise that simultaneous control over navigation and manipulation by the user can make interaction complex, this technique places two users in asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies the separation-of-degrees-of-freedom method between the two viewpoints to make manipulation easier. The technique is evaluated through a user study to test its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control along with precise orientation control, the technique performs with a lower collisions-to-time ratio. ![]() ![]() ![]() Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model can be explored, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses, we conducted experimental tests with ten users, and the results are promising. ![]() |
|
Kruijff, Ernst |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. ![]() |
|
Kuhlen, Torsten W. |
![]() Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility, i.e., the information about which parts of the scene are visible from a certain location, can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. ![]() ![]() Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the Fitts’ law experiment with 33 participants and confirmed that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. ![]() ![]() Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied virtual agents provide assistance to users in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for any of the behaviors. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. ![]() |
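The extended BlowClick trigger classifies blowing events with a support vector machine using a Gaussian (RBF) kernel. As a rough sketch of that kind of classifier, the snippet below trains an RBF-kernel SVM in scikit-learn on simple per-frame audio features; the feature choice (RMS energy and zero-crossing rate) and all parameter values are assumptions, not the features reported in the paper.

```python
# Sketch of the kind of classifier the abstract describes: an SVM with a
# Gaussian (RBF) kernel separating "blow" from "non-blow" audio frames.
# The features and hyperparameters are assumptions for illustration.

import numpy as np
from sklearn.svm import SVC

def frame_features(frame):
    """frame: 1-D numpy array of audio samples."""
    rms = np.sqrt(np.mean(frame ** 2))               # loudness
    zcr = np.mean(np.abs(np.diff(np.sign(frame))))   # noisiness proxy
    return [rms, zcr]

def train_blow_classifier(frames, labels):
    """frames: list of 1-D numpy arrays; labels: 1 = blow, 0 = other."""
    X = np.array([frame_features(f) for f in frames])
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X, np.array(labels))
    return clf

def is_blow(clf, frame):
    return bool(clf.predict([frame_features(frame)])[0])
```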
|
Kunz, Andreas |
![]() Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment, mostly in the form of a skeleton graph representing it, and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them with a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph is done offline, while only the parts directly linked to the user's behavior, such as the prediction, are done online. The prediction uses a target-based long-term predictor, and the targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allow prediction techniques previously only demonstrated in studies to be applied to large-scale virtual environments. ![]() ![]() Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means to explore virtual environments that are even larger than the physical space. Avoiding collisions with the physical walls necessitates a system to warn users. This paper describes the design and implementation of a multi-phase warning system as a solution to this safety necessity. The first phase is a velocity-based warning based on time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. ![]() |
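The multi-phase warning system described above combines a velocity-based phase (triggered by time-to-impact) with a distance-based emergency phase. A minimal sketch of such a two-phase check is given below; the threshold values are assumptions, not the ones used in the study.

```python
# Sketch of a two-phase wall warning, assuming hypothetical thresholds:
# phase 1 fires when the predicted time-to-impact drops below TTI_WARN,
# phase 2 is a purely distance-based emergency warning near the wall.

TTI_WARN = 2.0        # seconds (assumed)
EMERGENCY_DIST = 0.5  # metres (assumed)

def warning_level(distance_to_wall, speed_towards_wall):
    """Return 0 (none), 1 (velocity-based warning) or 2 (emergency)."""
    if distance_to_wall <= EMERGENCY_DIST:
        return 2
    if speed_towards_wall > 0:
        tti = distance_to_wall / speed_towards_wall
        if tti <= TTI_WARN:
            return 1
    return 0
```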
|
Kyllonen, Nikki |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
|
Lages, Wallace S. |
![]() Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. ![]() ![]() |
|
Lamberti, Fabrizio |
![]() Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from “the real” towards “the virtual”. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. ![]() ![]() Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named “cursor mode”) exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grain manipulations by moving the device. The latter (referred to as “tuning mode”) uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. ![]() |
|
Langbehn, Eike |
![]() Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. The question is how the appearance of those avatars influences communication and interaction. It might make a difference whether the avatar consists of a complete body representation or only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearances in a user study. For this, we used estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with full representation of the avatar body led to an increased sense of presence. Motion-controlled avatars as well as avatars which have only head and hands visible produced an increased feeling of co-presence and behavioral interdependence. This is interesting, since it suggests that we do not need a complete avatar body in social VR. ![]() |
|
Lange, Markus |
![]() Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) Selecting objects in virtual reality (VR) that are located in the periphery or outside the field of view (FOV) requires a visual search by rotating the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. ![]() |
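The abstract does not detail how the head-worn actuators encode the guidance cue; the sketch below shows one plausible mapping from the yaw offset between the head and the target to left/right vibration intensity. The field-of-view margin, the intensity ramp, and the left/right sign convention are all assumptions.

```python
# Sketch of a yaw-offset-to-vibration mapping for out-of-view target guidance.
# The actuator assignment and the intensity ramp are assumptions.

def guidance_cue(head_yaw_deg, target_yaw_deg, fov_half_deg=45.0):
    """Return (left_intensity, right_intensity) in [0, 1]."""
    # signed shortest angular difference in (-180, 180]
    diff = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= fov_half_deg:       # target already (nearly) in view
        return (0.0, 0.0)
    strength = min(1.0, (abs(diff) - fov_half_deg) / (180.0 - fov_half_deg))
    # assumed convention: negative offset means the target is to the left
    return (strength, 0.0) if diff < 0 else (0.0, strength)
```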
|
Lank, Edward |
![]() Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large-display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized 3D interaction devices. ![]() ![]() |
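Watchcasting maps forearm rotation to the cursor's z coordinate. A minimal sketch of such a mapping is shown below, assuming a linear relation between a bounded roll range and a bounded depth range; the ranges are illustrative, not the values used in the paper.

```python
# Sketch of a Watchcasting-style depth mapping: the cursor's x/y come from
# pointing, while its z (depth) comes from forearm rotation measured by the
# watch IMU. The rotation range and depth range below are assumptions.

def depth_from_forearm_roll(roll_deg, roll_min=-90.0, roll_max=90.0,
                            z_min=0.0, z_max=3.0):
    """Linearly map forearm roll (degrees) to a z coordinate (metres)."""
    roll = max(roll_min, min(roll_max, roll_deg))
    t = (roll - roll_min) / (roll_max - roll_min)
    return z_min + t * (z_max - z_min)
```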
|
Lawrence, Rebekah L. |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
|
Lécuyer, Anatole |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() ![]() Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems that fully realize the potential of natural user interfaces. ![]() ![]() ![]() Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace providing 6DoF tracking data. We designed the proof-of-concept of such approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (resp. 0.3cm and 0.02cm). Therefore, whenever the final VR application does not require a perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. ![]() |
|
Lee, Gun |
![]() Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. ![]() ![]() ![]() |
|
Lee, Myungho |
![]() Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher scores on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. ![]() |
|
Le Gouis, Benoît |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() |
|
Léziart, Pierre-Alexandre |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() |
|
Lindeman, Robert W. |
![]() Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. ![]() ![]() ![]() ![]() Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of two spatial controllers that accompany commercial headsets to walk by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single controller, the average of both controllers, or that of the head to determine the walking direction, and speed can be controlled by changing the angle of the controller to the frontal plane. ![]() ![]() Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using a slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push and force pull behaviours as examples of the general concept of force extension. We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging. ![]() |
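For Trigger Walking, the abstract states that each trigger pull produces a virtual step whose direction comes from a controller or the head and whose speed depends on the controller's angle to the frontal plane. The sketch below illustrates one possible reading of that mapping, interpreting the angle as controller pitch; the step length and scaling constants are assumptions.

```python
# Sketch of a Trigger-Walking-style step: each trigger pull advances the
# viewpoint one virtual step in the chosen forward direction, with step
# length scaled by controller pitch. The constants are assumptions.

import math

def step_offset(direction_yaw_deg, controller_pitch_deg,
                base_step=0.5, max_scale=2.0):
    """Return the (dx, dy) ground-plane offset for one trigger pull."""
    scale = 1.0 + (max_scale - 1.0) * min(1.0, abs(controller_pitch_deg) / 90.0)
    length = base_step * scale
    yaw = math.radians(direction_yaw_deg)
    return (length * math.sin(yaw), length * math.cos(yaw))
```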
|
Liu, Xiaohan |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. This system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, aided by haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Loffredo, Donald |
![]() Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented which allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions to be executed. ![]() |
|
Lopes, Roseli |
![]() Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model can be explored, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses, we conducted experimental tests with ten users, and the results are promising. ![]() |
|
Louison, Céphise |
![]() Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is crucial to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. ![]() ![]() |
|
Luan, Bo |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and that Beautified annotations are drawn faster than Non-Beautified ones; participants also preferred Surface-Drawing and Beautified. ![]() ![]() |
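The two drawing modes differ mainly in where the stroke points are defined: on the reconstructed surface while drawing (Surface-Drawing) or at the fingertip with projection on release (Air-Drawing). The sketch below contrasts the two, assuming a hypothetical `raycast` query against the scene model; it is an illustration of the idea, not the authors' implementation.

```python
# Sketch contrasting the two drawing modes from the abstract. `raycast` is a
# hypothetical scene query returning the first hit point of a ray against the
# world model (e.g. a spatial mesh), or None if nothing is hit.

def surface_drawing_point(head_pos, finger_dir, raycast):
    """Surface-Drawing: the cursor lives on the reconstructed surface."""
    return raycast(origin=head_pos, direction=finger_dir)

def air_drawing_points(stroke_fingertip_points, head_pos, raycast):
    """Air-Drawing: draw at the fingertip, project the stroke on release."""
    projected = []
    for p in stroke_fingertip_points:
        d = tuple(pi - hi for pi, hi in zip(p, head_pos))
        hit = raycast(origin=head_pos, direction=d)
        projected.append(hit if hit is not None else p)
    return projected
```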
|
Ludewig, Paula M. |
![]() Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. ![]() ![]() |
|
Maciel, Anderson |
![]() Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position (acquired with a camera and fiducial markers) and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. ![]() ![]() |
|
MacKenzie, I. Scott |
![]() Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device, one entirely software-based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare the results with those from a similar study. ![]() |
|
Madathil, Kapil Chalil |
![]() Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experimental design, we compared a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus the absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner than with the large-screen immersive display, due to the similarity between the interactions afforded in the virtual task and those in the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of pre- and post-test cognition questionnaires, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. ![]() |
|
Marchal, Maud |
![]() Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. ![]() |
|
Marchand, Eric |
![]() Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace providing 6DoF tracking data. We designed the proof-of-concept of such approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (resp. 0.3cm and 0.02cm). Therefore, whenever the final VR application does not require a perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. ![]() |
|
McMahan, Ryan P. |
![]() Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. ![]() |
|
Medeiros, Daniel |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Yet, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, head-mounted displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() ![]() Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. ![]() |
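PRECIOUS refines a selection iteratively with cone-casting. The snippet below sketches a single refinement step that keeps every object whose centre lies within the cone's half-angle around the pointing ray; the half-angle and helper functions are assumptions for illustration.

```python
# Sketch of one refinement step of a cone-cast selection: keep every object
# whose centre lies within the cone's half-angle around the pointing ray.
# The half-angle and the vector helpers are assumptions for illustration.

import math

def _norm(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cone_cast(origin, direction, objects, half_angle_deg=15.0):
    """objects: dict name -> (x, y, z) centre. Returns names inside the cone."""
    d = _norm(direction)
    cos_limit = math.cos(math.radians(half_angle_deg))
    selected = []
    for name, centre in objects.items():
        to_obj = _norm(tuple(c - o for c, o in zip(centre, origin)))
        if sum(a * b for a, b in zip(to_obj, d)) >= cos_limit:
            selected.append(name)
    return selected
```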
|
Mendes, Daniel |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Yet, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, head-mounted displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() ![]() Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. ![]() |
|
Merienne, Frédéric |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort has focused, however, on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted a better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. ![]() ![]() José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD; GearVR), and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats, but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. ![]() ![]() Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations in order to decrease cyber-sickness and increase the sense of presence. In this study, we experiment with the same vibration modalities but with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and, to some extent, decrease cyber-sickness. ![]() |
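For reference, the standard maximum likelihood estimation rule for combining two cues, which the first abstract alludes to, weights each cue estimate by its reliability (the inverse of its variance). This is the textbook formulation, not necessarily the exact model tested in the paper:

```latex
% Standard MLE cue combination for a force estimate s_F and a vibrotactile
% estimate s_V; weights are inversely proportional to the cue variances.
\hat{s}_{FV} = w_F\,\hat{s}_F + w_V\,\hat{s}_V,\qquad
w_F = \frac{1/\sigma_F^{2}}{1/\sigma_F^{2} + 1/\sigma_V^{2}},\quad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_F^{2} + 1/\sigma_V^{2}},\qquad
\sigma_{FV}^{2} = \frac{\sigma_F^{2}\,\sigma_V^{2}}{\sigma_F^{2} + \sigma_V^{2}}
```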
|
Mestre, Daniel R. |
![]() Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is crucial to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. ![]() ![]() |
|
Mirhosseini, Seyedkoosha |
![]() Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains such as medicine, engineering, and physics. When a user clicks on a 2D point on the screen rendering, a ray is cast through the volume perpendicular to the screen. The problem lies in determining the intended picking location, that is, the end point of the ray. We introduce picking for 3D volumetric data by utilizing the pressure and angle of a digital stylus pen on a touchscreen device. We map both pressure and angle of the stylus to the depth of the selector widget to help target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes. ![]() |
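The abstract maps stylus pressure and angle to the depth of the selector widget along the picking ray. The sketch below shows one plausible form of that mapping, blending normalized pressure and tilt linearly; the blend weight and ranges are assumptions, not the paper's calibration.

```python
# Sketch of a pressure/tilt-to-depth mapping for volumetric picking:
# harder pressure (and, optionally, a steeper pen tilt) pushes the selection
# widget deeper along the picking ray. The blend weights are assumptions.

def picking_depth(pressure, tilt_deg, max_depth=1.0, tilt_weight=0.3):
    """pressure in [0, 1], tilt_deg in [0, 90]; returns depth in [0, max_depth]."""
    p = max(0.0, min(1.0, pressure))
    t = max(0.0, min(90.0, tilt_deg)) / 90.0
    blended = (1.0 - tilt_weight) * p + tilt_weight * t
    return blended * max_depth

def pick_point(ray_origin, ray_dir_unit, pressure, tilt_deg, max_depth=1.0):
    d = picking_depth(pressure, tilt_deg, max_depth)
    return tuple(o + d * r for o, r in zip(ray_origin, ray_dir_unit))
```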
|
Mohr, Peter |
![]() Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. ![]() |
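One possible reading of the lightweight approach described above is to estimate the dominant image-space shift between consecutive front-camera frames with sparse optical flow and use it to update the assumed head position between (or before) full head-tracking updates. The sketch below does that with OpenCV; the pixel-to-metre scale and the sign convention are assumptions, and this is only an illustration of the general idea, not the authors' method.

```python
# Sketch: estimate the dominant image-space shift between two front-camera
# frames with sparse optical flow. Mapping that shift to a head offset needs
# a scale and sign convention, both of which are assumptions here.

import cv2
import numpy as np

def estimate_image_shift(prev_gray, gray, pixels_per_metre=2000.0):
    """Return an approximate (dx, dy) shift in metres, or (0, 0) if unknown."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return (0.0, 0.0)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good = status.reshape(-1) == 1
    if not good.any():
        return (0.0, 0.0)
    flow = (p1[good] - p0[good]).reshape(-1, 2)
    dx_px, dy_px = np.median(flow, axis=0)   # robust to outlier tracks
    return (dx_px / pixels_per_metre, dy_px / pixels_per_metre)
```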
|
Montuschi, Paolo |
![]() Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. ![]() ![]() Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. ![]() |
|
Nabiyouni, Mahdi |
![]() Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. ![]() ![]() |
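The cell-based redirection concept lends itself to a compact illustration. The Python sketch below (hypothetical function names; the narrative-consistent redirection at cell boundaries is not shown) maps a virtual position to its cell and to the corresponding point in the physical tracking space:

```python
def virtual_to_cell(virtual_pos, cell_size):
    """Return (cell index, position within the cell) for a 2D virtual position.

    Each cell has the footprint of the physical tracking space, so the
    within-cell offset maps one-to-one onto the tracked room.
    """
    cx = int(virtual_pos[0] // cell_size[0])
    cy = int(virtual_pos[1] // cell_size[1])
    local = (virtual_pos[0] - cx * cell_size[0],
             virtual_pos[1] - cy * cell_size[1])
    return (cx, cy), local

# Example: a 4 m x 4 m tracked space; a user standing at (9.5, 2.0) in the VE
# is in cell (2, 0), 1.5 m / 2.0 m from that cell's origin -- and therefore
# 1.5 m / 2.0 m from the corner of the physical room.
print(virtual_to_cell((9.5, 2.0), (4.0, 4.0)))
```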
|
Nagamura, Mario |
![]() André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change the viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging users’ proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. ![]() |
|
Nakamura, Takuto |
![]() Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which makes it necessary to develop a new active-type device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated the effect of our device on walking under different interpretations of the navigation information. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. ![]() ![]() |
|
Nankivil, Derek |
![]() David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. ![]() |
|
Nedel, Luciana |
![]() Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. ![]() ![]() |
|
Neha, Neha |
![]() Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the previously conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. ![]() |
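To illustrate the classification step, the following Python sketch trains a Gaussian-kernel support vector machine on simple spectral features of short audio frames; the features, frame length, and synthetic data are hypothetical and are not taken from the paper:

```python
# Hypothetical sketch: classify "blow" vs. "non-blow" audio frames with a
# Gaussian-kernel SVM, in the spirit of the extended BlowClick trigger.
import numpy as np
from sklearn.svm import SVC

def frame_features(frame, sample_rate=16000):
    """Very simple spectral features for one audio frame: RMS energy and the
    fraction of spectral energy below 1 kHz."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    low_fraction = spectrum[freqs < 1000].sum() / (spectrum.sum() + 1e-9)
    rms = np.sqrt(np.mean(frame ** 2))
    return [rms, low_fraction]

# Placeholder training data: label 1 = blow, 0 = other sounds.
rng = np.random.default_rng(0)
blow_frames = [rng.normal(0, 0.3, 512) for _ in range(50)]                   # noisy, energetic
other_frames = [0.05 * np.sin(np.linspace(0, 50, 512)) for _ in range(50)]   # tonal, quiet
X = np.array([frame_features(f) for f in blow_frames + other_frames])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # RBF = Gaussian kernel
print(clf.predict([frame_features(rng.normal(0, 0.3, 512))]))  # expected: [1]
```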
|
Nguyen-Vo, Thinh |
![]() Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants’ performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance both in navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. ![]() |
|
Nuernberger, Benjamin |
![]() Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users drew on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing, and that Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. ![]() ![]() |
|
Ortega, Francisco R. |
![]() Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised with regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. ![]() ![]() Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments similar to natural color-corrected images from telescopes. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. ![]() |
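The pre-computed spherical ray map mentioned in the second abstract can be illustrated with a small sketch that assigns a unit view direction to every cubemap texel; the face orientation convention below follows a common OpenGL-style layout and is an assumption, as the paper's convention is not given:

```python
import numpy as np

def cubemap_ray_map(face_size):
    """Pre-compute a unit direction (ray) for every texel of the six cubemap
    faces, in +X, -X, +Y, -Y, +Z, -Z face order.

    Returns an array of shape (6, face_size, face_size, 3). Noise-based star
    and dust shaders can then be evaluated per ray direction.
    """
    # Texel centers mapped to [-1, 1] on the face plane.
    t = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    u, v = np.meshgrid(t, t)
    ones = np.ones_like(u)
    faces = [
        np.stack([ ones,   -v,   -u], axis=-1),  # +X
        np.stack([-ones,   -v,    u], axis=-1),  # -X
        np.stack([    u,  ones,    v], axis=-1),  # +Y
        np.stack([    u, -ones,   -v], axis=-1),  # -Y
        np.stack([    u,   -v,  ones], axis=-1),  # +Z
        np.stack([   -u,   -v, -ones], axis=-1),  # -Z
    ]
    rays = np.stack(faces, axis=0)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    return rays

rays = cubemap_ray_map(64)
print(rays.shape)  # (6, 64, 64, 3)
```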
|
Oshima, Kana |
![]() Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects are 500, 750, and 1000 g, and only the CG liquid level is changed. As a result, the difference in mass did not influence the difference threshold of weights obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. ![]() |
|
Otmane, Samir |
![]() Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle, and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the most suitable gestures for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. ![]() |
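A minimal sketch of the finger-to-axis chording idea is shown below (hypothetical names; the actual gesture recognizer and transfer functions are not described in the abstract):

```python
# Hypothetical dispatch from identified fingers to constrained-manipulation axes.
FINGER_TO_AXIS = {"index": "X", "middle": "Y", "ring": "Z"}

def constrained_axes(touching_fingers):
    """Return the axes activated by the chord of dominant-hand fingers
    currently on the screen (finger identification assumed available)."""
    return sorted(FINGER_TO_AXIS[f] for f in touching_fingers if f in FINGER_TO_AXIS)

def apply_translation(position, delta, touching_fingers):
    """Translate a 3D position, but only along the axes selected by the chord."""
    axes = constrained_axes(touching_fingers)
    x, y, z = position
    dx, dy, dz = delta
    return (x + dx * ("X" in axes),
            y + dy * ("Y" in axes),
            z + dz * ("Z" in axes))

# Example: an index + ring chord constrains the drag to the X and Z axes.
print(apply_translation((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), {"index", "ring"}))
```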
|
Oyekoya, Oyewole |
![]() Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. ![]() |
|
Paillot, Damien |
![]() Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations in order to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities but with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and decrease cyber-sickness to some extent. ![]() |
|
Paravati, Gianluca |
![]() Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. ![]() ![]() Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. ![]() |
|
Peer, Alex |
![]() Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes, distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and if contemporary displays induce distance misperceptions. ![]() |
|
Pfeiffer, Thies |
![]() Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As evaluation method simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly-controlled experimental design. ![]() ![]() |
|
Pietroszek, Krzysztof |
![]() Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large-display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. ![]() ![]() |
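The z-from-forearm-rotation mapping can be sketched as a simple linear transfer function; the ranges, names, and units below are hypothetical and not taken from the paper:

```python
def watchcasting_z(forearm_roll_deg, z_min=0.0, z_max=3.0,
                   roll_min_deg=-90.0, roll_max_deg=90.0):
    """Map the forearm (watch) roll angle to a z-coordinate for the 3D cursor.

    Hypothetical linear mapping over a comfortable rotation range; the paper's
    exact transfer function and range are not given in the abstract.
    """
    roll = max(roll_min_deg, min(roll_max_deg, forearm_roll_deg))
    t = (roll - roll_min_deg) / (roll_max_deg - roll_min_deg)
    return z_min + t * (z_max - z_min)

def cursor_position(pointing_xy, forearm_roll_deg):
    """Combine a 2D pointing position on the large display with the
    rotation-controlled depth to form the 3D selection point."""
    x, y = pointing_xy
    return (x, y, watchcasting_z(forearm_roll_deg))

print(cursor_position((0.4, 1.2), forearm_roll_deg=45.0))  # depth = 2.25 here
```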
|
Pinho, Márcio Sarroglia |
![]() Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, as well as 2D and 3D interfaces to manipulate and modify the objects. The users can move, delete, paint, and duplicate virtual objects using 6-DOF techniques. ![]() ![]() Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). From the premise that simultaneous control over navigation and manipulation by the user can make interaction complex, this technique places two users in asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies the separation-of-degrees-of-freedom method between the two viewpoints to make the manipulation easier. The technique is evaluated through a user study to test its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control alongside precise orientation control, the technique performs with a lower collision-to-time ratio. ![]() ![]() |
|
Piumsomboon, Thammathip |
![]() Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. ![]() ![]() ![]() |
|
Plouzeau, Jérémy |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. The task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. ![]() ![]() Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations in order to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities but with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and decrease cyber-sickness to some extent. ![]() |
|
Ponto, Kevin |
![]() Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes, distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and if contemporary displays induce distance misperceptions. ![]() |
|
Quarles, John |
![]() Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to explain our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation that leads to a high sense of presence will help the user improve information recall. We conducted a between subjects experiment in which participants were presented information about multiple sclerosis (MS) in different immersive conditions and afterwards they attempted to recall the information. The results from our study suggest that participants who were in immersive conditions were able to recall the information more effectively than the participants who experienced a non-immersive condition. ![]() ![]() |
|
Quesnel, Denise |
![]() Denise Quesnel and Bernhard E. Riecke (Simon Fraser University, Canada) In the study of transformative experiences, the feeling of awe is found to alter an individual’s perception in positive, lasting manners. Our research aims to understand the potential for interactive virtual reality (VR) in eliciting awe, through a framework based on collection of physiological data alongside self-report and phenomenological observations that demonstrate awe. We conducted a mixed-methods experiment to test whether VR is effective in eliciting awe, and if this effect might be modulated by the type of natural interaction in the form of a “flight” lounger vs. “standing”. Results demonstrate both interaction paradigms were equally awe-inspiring, with overall physiological (in the form of goose bumps with a 43.8% incidence rate) and self-report data (overall awe rating of 79.7%), and females showing more physiological signs of awe than males. Observations revealed 360-degree interaction and operability of hand-held controllers could be improved, with the consequence of designing even more effective transformative experiences. ![]() ![]() ![]() |
|
Ragan, Eric D. |
![]() Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. ![]() |
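The two approaches described above, orientation resetting and redirected reach, can be illustrated with a short Python sketch; the threshold value and function names are hypothetical, not the authors' implementation:

```python
import numpy as np

def redirected_hand(physical_hand, physical_prop, virtual_target):
    """Redirected reach: offset the rendered (virtual) hand by the difference
    between the virtual target and the physical prop, so that touching the
    prop coincides with touching the virtual object."""
    offset = np.asarray(virtual_target) - np.asarray(physical_prop)
    return np.asarray(physical_hand) + offset

def needs_orientation_reset(user_pos, virtual_target, threshold=0.8):
    """Trigger a discrete positional/rotational update when the user virtually
    approaches an interaction target (threshold in meters, hypothetical)."""
    return np.linalg.norm(np.asarray(user_pos) - np.asarray(virtual_target)) < threshold

# Example: the single prop sits at (0.2, 0.9, 0.3) in the room, while the
# currently targeted virtual object is at (4.2, 0.9, 1.3) in the VE.
print(redirected_hand((0.1, 1.0, 0.4), (0.2, 0.9, 0.3), (4.2, 0.9, 1.3)))
print(needs_orientation_reset((4.0, 0.0, 1.0), (4.2, 0.9, 1.3)))
```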
|
Raposo, Alberto |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() |
|
Ray, Brandon |
![]() Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. ![]() ![]() |
|
Regenbrecht, Jace |
![]() Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented that allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions to be executed. ![]() |
|
Rekimoto, Jun |
![]() Jun Rekimoto (University of Tokyo, Japan; Sony CSL, Japan) Traditionally, the field of Human Computer Interaction (HCI) was primarily concerned with designing and investigating interfaces between humans and machines. However, with recent technological advances, the concepts of "enhancing", "augmenting" or even "re-designing" humans themselves are becoming feasible and serious topics of scientific research as well as engineering development. "Augmented Human" is a term that I use to refer to this overall research direction. Augmented Human introduces a fundamental paradigm shift in HCI: from human-computer interaction to human-computer integration, and our abilities will be mutually connected through networks (what we call IoA, or Internet of Abilities, as the next step after IoT: Internet of Things). In this talk, I will discuss rich possibilities and distinct challenges in enhancing human abilities. I will introduce our recent projects, including the design of flying cameras as our remote and external eyes, a home appliance that can increase your happiness, an organic physical wall/window that dynamically mediates the environment, and an immersive human-human connection concept called "JackIn." ![]() |
|
Renner, Patrick |
![]() Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As evaluation method simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly-controlled experimental design. ![]() ![]() |
|
Ricca, Aylen |
![]() Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle, and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the most suitable gestures for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. ![]() |
|
Riecke, Bernhard E. |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. ![]() ![]() Denise Quesnel and Bernhard E. Riecke (Simon Fraser University, Canada) In the study of transformative experiences, the feeling of awe is found to alter an individual’s perception in positive, lasting manners. Our research aims to understand the potential for interactive virtual reality (VR) in eliciting awe, through a framework based on collection of physiological data alongside self-report and phenomenological observations that demonstrate awe. We conducted a mixed-methods experiment to test whether VR is effective in eliciting awe, and if this effect might be modulated by the type of natural interaction in the form of a “flight” lounger vs. “standing”. Results demonstrate both interaction paradigms were equally awe-inspiring, with overall physiological (in the form of goose bumps with a 43.8% incidence rate) and self-report data (overall awe rating of 79.7%), and females showing more physiological signs of awe than males. Observations revealed 360-degree interaction and operability of hand-held controllers could be improved, with the consequence of designing even more effective transformative experiences. ![]() ![]() ![]() ![]() Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. 
Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants’ performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance both in navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. ![]() |
|
Rishe, Naphtali |
![]() Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised with regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. ![]() ![]() Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments similar to natural color-corrected images from telescopes. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. ![]() |
|
Rodrigues, André Montes |
![]() André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change the viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging users’ proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. ![]() |
|
Roo, Joan Sol |
![]() Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained first feedback from informal interviews. ![]() ![]() ![]() |
|
Sangalli, Vicenzo Abichequer |
![]() Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, as well as 2D and 3D interfaces to manipulate and modify the objects. The users can move, delete, paint, and duplicate virtual objects using 6-DOF techniques. ![]() |
|
Santos, Beatriz Sousa |
![]() Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed in a tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no-hand avatar, realistic avatar, and translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar. ![]() |
|
Sargunam, Shyam Prathish |
![]() Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. ![]() |
|
Sarupuri, Bhuvaneswari |
![]() Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of two spatial controllers that accompany commercial headsets to walk by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single-controller, the average of both controllers, or that of the head to determine the direction of walking, and speed can be controlled by changing the angle of the controller to the Frontal plane. ![]() |
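A rough sketch of one Trigger Walking step is given below; the direction selection and the speed-from-angle mapping follow the abstract, while the step length, scaling constants, and function names are hypothetical:

```python
import numpy as np

def step_vector(controller_forward, head_forward, body_forward=(0.0, 0.0, 1.0),
                mode="single", other_controller_forward=None, base_step=0.3):
    """Compute one virtual step triggered by a controller trigger pull.

    Direction: a single controller, the average of both controllers, or the
    head, per the user's choice. Speed: scaled by the controller's angle to
    the frontal plane (the vertical plane whose normal is body_forward).
    """
    f = np.asarray(controller_forward, dtype=float)
    if mode == "both" and other_controller_forward is not None:
        f = f + np.asarray(other_controller_forward, dtype=float)
    elif mode == "head":
        f = np.asarray(head_forward, dtype=float)
    f = f / np.linalg.norm(f)

    n = np.asarray(body_forward, dtype=float)
    n = n / np.linalg.norm(n)
    angle_to_frontal = np.degrees(np.arcsin(abs(float(np.dot(f, n)))))  # 0..90 deg
    step_length = base_step * (0.5 + angle_to_frontal / 90.0)           # hypothetical scaling

    # Walk in the horizontal projection of the chosen direction.
    direction = np.array([f[0], 0.0, f[2]])
    direction = direction / np.linalg.norm(direction)
    return step_length * direction

print(step_vector(controller_forward=(0.0, 0.3, 1.0), head_forward=(0.0, 0.0, 1.0)))
```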
|
Sassard, Emily |
![]() Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. ![]() |
|
Schmalstieg, Dieter |
![]() Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. ![]() |
|
Schubert, Ryan |
![]() Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited, alert, had significantly higher measures of Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. ![]() |
|
Shibata, Fumihisa |
![]() Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects are 500, 750, and 1000 g, and only the CG liquid level is changed. As a result, the difference in mass did not influence the difference threshold of weights obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. ![]() |
|
Soares, Leonardo Pavanatto |
![]() Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, as well as 2D and 3D interfaces to manipulate and modify the objects. The users can move, delete, paint, and duplicate virtual objects using 6-DOF techniques. ![]() ![]() Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). From the premise that simultaneous control over navigation and manipulation by the user can make interaction complex, this technique places two users in asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies the separation-of-degrees-of-freedom method between the two viewpoints to make the manipulation easier. The technique is evaluated through a user study to test its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control alongside precise orientation control, the technique performs with a lower collision-to-time ratio. ![]() ![]() |
|
Sousa, Maurício |
![]() Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. ![]() ![]() Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. ![]() |
|
Steinicke, Frank |
![]() Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located in the periphery or outside the field of view (FOV) requires a visual search by rotating the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless and wearable devices which, once attached to the user's head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. ![]() ![]() Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. A key question is how the appearance of those avatars influences communication and interaction. It might make a difference whether the avatar consists of a complete body representation or whether only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearances in a user study. For this, we used estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with a full representation of the avatar body lead to an increased sense of presence. Motion-controlled avatars as well as avatars with only head and hands visible produced an increased feeling of co-presence and behavioral interdependence. This is interesting, since it suggests that a complete avatar body is not needed in social VR. ![]() |
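For the vibrotactile guidance idea in the first abstract above (Ariza et al.), a minimal sketch of one possible cueing logic is to compute the signed yaw offset between the head and an out-of-view target and drive the actuator on the corresponding side, with intensity growing with the offset. This is an illustrative guess, not the authors' design; function names and the intensity mapping are assumptions.

```python
import math

def guidance_cue(head_yaw, target_yaw, max_intensity=1.0):
    """Map the angular offset between the current head yaw and the yaw of an
    out-of-view target to a (side_a, side_b) vibration intensity pair.
    Angles are in radians; which side corresponds to a positive offset
    depends on the tracking system's coordinate convention."""
    # shortest signed angular difference, wrapped to [-pi, pi]
    delta = (target_yaw - head_yaw + math.pi) % (2 * math.pi) - math.pi
    intensity = min(max_intensity, max_intensity * abs(delta) / math.pi)
    return (intensity, 0.0) if delta > 0 else (0.0, intensity)
```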
|
Stepanova, Ekaterina R. |
![]() Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed-methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked with searching for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity control for rotations when using HMDs. ![]() |
|
Stuerzlinger, Wolfgang |
![]() Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit users in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants' performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance in both navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. ![]() |
|
Suhail, Mohamed |
![]() Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. ![]() |
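The redirected-reach idea mentioned at the end of the abstract above (offsetting the virtual hand by the difference between the virtual object and the physical prop) can be sketched as a blend that grows as the hand approaches the prop. This is a minimal sketch under assumed names and an assumed influence radius; the blending function the authors actually use is not specified in the abstract.

```python
import numpy as np

def redirected_hand(physical_hand, physical_prop, virtual_target,
                    influence_radius=0.6):
    """Offset the rendered (virtual) hand toward the virtual target so that
    touching the physical prop coincides with touching the virtual object.
    The offset is blended in as the hand approaches, reaching the full
    prop-to-target difference at contact."""
    offset = virtual_target - physical_prop
    dist = np.linalg.norm(physical_prop - physical_hand)
    blend = np.clip(1.0 - dist / influence_radius, 0.0, 1.0)  # 0 far away, 1 at contact
    return physical_hand + blend * offset
```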
|
Taguchi, Shun |
![]() Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. ![]() |
|
Tahai, Liudmila |
![]() Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. ![]() ![]() |
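The core mapping named in the abstract (cursor depth from forearm rotation) can be sketched as a clamped linear mapping from the watch's roll angle to a z coordinate. This is a minimal illustration, not Watchcasting's actual transfer function; the angle range and depth range are assumptions.

```python
def depth_from_forearm_roll(roll_deg, roll_min=-90.0, roll_max=90.0,
                            z_min=0.0, z_max=3.0):
    """Linearly map a smartwatch-reported forearm roll angle to a cursor
    depth (z), clamping at the ends of the assumed comfortable range."""
    roll_deg = max(roll_min, min(roll_max, roll_deg))
    t = (roll_deg - roll_min) / (roll_max - roll_min)
    return z_min + t * (z_max - z_min)
```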
|
Tarng, Stanley |
![]() Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE's suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations suggest that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. ![]() |
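For reference, the MLE cue-combination model the abstract tests against is, in its standard textbook form (not the paper's own derivation), a reliability-weighted average of the single-cue estimates:

```latex
\hat{S}_{FV} = w_F S_F + w_V S_V, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_F^2 + 1/\sigma_V^2}, \qquad
\sigma_{FV}^2 = \frac{\sigma_F^2\,\sigma_V^2}{\sigma_F^2 + \sigma_V^2}
```

where S_F and S_V are the estimates obtained from the force and vibrotactile cues and σ_F², σ_V² their variances; MLE predicts that the combined variance σ_FV² is lower than either single-cue variance, which is the signature such studies look for.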
|
Tarre, Katherine |
![]() Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. ![]() |
|
Tatzgern, Markus |
![]() Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. ![]() |
|
Tavakkoli, Alireza |
![]() Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented that allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions to be executed. ![]() |
|
Teather, Robert J. |
![]() Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device: one entirely software-based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare the results with those of a similar study. ![]() |
|
Thomas, Jason-Lee |
![]() Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. ![]() |
|
Tuanquin, Nikita Mae B. |
![]() Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push and force pull behaviours as examples of the general concept of force extension. We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging. ![]() |
|
Valent, Sean |
![]() Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. ![]() ![]() |
|
Vasylevska, Khrystyna |
![]() Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) The space available for any virtual reality experience is often strictly limited, confining the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively for a dynamic, scalable and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the properties of the layout used on human spatial perception in a physically impossible spatial arrangement. Our first reported study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry and asymmetry of the path used. In addition, in the second study we explore the effect of path smoothing by replacing the right-angled corridors with smooth curves. Our studies show that the use of smooth, curved corridors is more beneficial for spatial compression than the conventional right-angled approach. ![]() |
|
Vemavarapu, Prabhakar V. |
![]() Prabhakar V. Vemavarapu and Christoph W. Borst (University of Louisiana at Lafayette, USA) We present an indirect-touch 3D interface that uses a two-sided handheld touch device for interactions with dense datasets on stereoscopic displays. This work explores the possibilities of a smartphone able to sense touch on both sides. Two Android phones are combined back-to-back. The top touch surface is used for primary or fine interactions (selection/translation/rotation) and the bottom surface for coarser aspects such as mode control or feature extraction. The two surfaces are programmed to recognize input from four digits – two on top and two on the bottom. The four touch areas enable 3D object selection, manipulation, and feature extraction using combinations of simultaneous touches. ![]() |
|
Vierjahn, Tom |
![]() Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied, virtual agents provide users assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. ![]() |
|
Wallace, James R. |
![]() Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. ![]() ![]() |
|
Wang, Hongwei |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills in a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display, aided by the haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Wang, Lin |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills in a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display, aided by the haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Wang, Ronghai |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills in a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display, aided by the haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Wang, Xiaojia |
![]() Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. ![]() ![]() |
|
Welch, Gregory F. |
![]() Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject's sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. ![]() |
|
Weyers, Benjamin |
![]() Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. ![]() ![]() Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility - the information about which parts of the scene are visible from a certain location - can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and allows keeping track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene's navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. ![]() ![]() Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for the interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the Fitts' law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger's performance without the influence of pointing and calculated device throughputs to ensure comparability. ![]() |
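To make the precomputation idea in the second abstract above (Freitag et al.) concrete, the sketch below samples candidate viewpoints on the triangles of a navigation mesh and records which objects each sample can see. It is only an illustration of the general scheme, not the paper's algorithm; the sampling density and the `is_visible` occlusion query are placeholders.

```python
import numpy as np

def sample_navmesh(triangles, samples_per_triangle=16, seed=0):
    """Sample candidate viewpoints on each triangle of a navigation mesh,
    given as a list of 3x3 arrays of vertex positions."""
    rng = np.random.default_rng(seed)
    points = []
    for tri in triangles:
        a, b, c = np.asarray(tri, dtype=float)
        for _ in range(samples_per_triangle):
            u, v = rng.random(2)
            if u + v > 1.0:              # fold back into the triangle
                u, v = 1.0 - u, 1.0 - v
            points.append(a + u * (b - a) + v * (c - a))
    return points

def precompute_visibility(viewpoints, scene_objects, is_visible):
    """Build a lookup from each sampled viewpoint to the set of object ids it
    can see. `is_visible(viewpoint, obj)` stands in for whatever occlusion
    query the renderer provides (ray casts, hardware queries, ...)."""
    return {i: {j for j, obj in enumerate(scene_objects) if is_visible(vp, obj)}
            for i, vp in enumerate(viewpoints)}
```

Derived quantities such as viewpoint quality or a common-visibility travel interface would then be computed from these per-sample visibility sets.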
|
Woodworth, Jason W. |
![]() Jason W. Woodworth and Christoph W. Borst (University of Louisiana at Lafayette, USA) We address a 3D pointing problem for a "virtual mirror" view used in collaborative VR. The virtual mirror is a large TV display showing a depth-camera-based image of a user in a surrounding virtual environment. There are problems with pointing and communicating to remote users due to the indirectness of pointing in a mirror and a low sense of depth. We propose several visual cues to help the user control pointing depth, and present an initial user study, providing a basis for further refinement and investigation of techniques. ![]() ![]() |
|
Wu, Siju |
![]() Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study to assess usability and efficiency allowed us to identify the gestures that are most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. ![]() |
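The finger-to-axis mapping described above can be illustrated with a small sketch: each identified finger unlocks one axis, and a chord of fingers unlocks several. This is a simplified guess at how such a constraint could be applied, not the authors' technique; the scalar drag value and gain are assumptions.

```python
import numpy as np

# assumed finger-to-axis assignment, following the abstract: index->X, middle->Y, ring->Z
FINGER_AXIS = {"index": 0, "middle": 1, "ring": 2}

def constrained_translation(active_fingers, drag_delta, gain=1.0):
    """Translate an object only along the axes whose fingers currently touch
    the screen; chording several fingers constrains to several axes.
    `drag_delta` is a scalar displacement derived from the 2D drag."""
    translation = np.zeros(3)
    for finger in active_fingers:
        axis = FINGER_AXIS.get(finger)
        if axis is not None:
            translation[axis] = gain * drag_delta
    return translation
```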
|
Yao, Colin |
![]() Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means to explore virtual environments that are even larger than the available physical space. Avoiding collisions with the physical walls necessitates a system to warn users. This paper describes the design and implementation of a multi-phase warning system as a solution to this safety necessity. The first phase is a velocity-based warning based on time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. ![]() |
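The two phases described above reduce to two simple checks, sketched below: a time-to-impact test driven by the user's velocity toward the nearest wall, and a plain distance test as the emergency fallback. The thresholds and function names are hypothetical; the paper's actual parameters and feedback design are not given in the abstract.

```python
def warning_level(distance_to_wall, speed_toward_wall,
                  tti_threshold=3.0, emergency_distance=0.5):
    """Two-phase wall warning: phase 1 fires when the predicted time to
    impact drops below a threshold, phase 2 (emergency) fires when the user
    is simply too close, regardless of velocity."""
    if distance_to_wall <= emergency_distance:
        return "emergency"                       # distance-based phase
    if speed_toward_wall > 0.0:
        time_to_impact = distance_to_wall / speed_toward_wall
        if time_to_impact <= tti_threshold:
            return "warning"                     # velocity-based phase
    return "none"
```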
|
Yao, Junfeng |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills in a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display, aided by the haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Young, Thomas S. |
![]() Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device: one entirely software-based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare the results with those of a similar study. ![]() |
|
Yu, Run |
![]() Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. ![]() ![]() |
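The cell-based redirection concept above rests on a simple correspondence: each virtual cell has the footprint of the tracked physical room, so a virtual position decomposes into a cell index plus a local offset the user can physically walk to. A minimal sketch of that decomposition follows (the redirection triggers themselves, Bookshelf and Bird, are narrative devices the code does not attempt to capture); all names are assumptions.

```python
def virtual_to_cell(virtual_x, virtual_z, cell_width, cell_depth):
    """Split a virtual ground-plane position into (cell index, local offset).
    Each cell matches the physical tracking space, so the local offset maps
    one-to-one to a physically reachable position."""
    ix = int(virtual_x // cell_width)
    iz = int(virtual_z // cell_depth)
    local_x = virtual_x - ix * cell_width
    local_z = virtual_z - iz * cell_depth
    return (ix, iz), (local_x, local_z)
```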
|
Zank, Markus |
![]() Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment - mostly in the form of a skeleton graph representing the virtual environment - and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them with a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph is done offline, while only the parts directly linked to the user's behavior, such as the prediction, are done online. The prediction uses a target-based long-term approach; the targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allow prediction techniques previously only demonstrated in studies to be applied to large-scale virtual environments. ![]() ![]() Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means to explore virtual environments that are even larger than the available physical space. Avoiding collisions with the physical walls necessitates a system to warn users. This paper describes the design and implementation of a multi-phase warning system as a solution to this safety necessity. The first phase is a velocity-based warning based on time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. ![]() |
|
Zheng, Liling |
![]() Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills in a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display, aided by the haptic feedback. The training records are stored in a database for analysis. ![]() |
|
Zielasko, Daniel |
![]() Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for the interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user's confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the Fitts' law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger's performance without the influence of pointing and calculated device throughputs to ensure comparability. ![]() |
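The classifier choice named in the abstract (a Gaussian-kernel support vector machine over audio input) can be sketched with off-the-shelf tooling as below. This is only an illustration of the general setup, not the authors' feature set or training pipeline; the feature list in the docstring and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_blow_classifier(frames, labels):
    """Train a Gaussian-kernel (RBF) SVM that labels short audio frames as
    blowing vs. other sounds. `frames` is an (n_samples, n_features) array of
    per-frame features (e.g. RMS energy, spectral centroid, zero-crossing
    rate); `labels` holds 1 for blowing and 0 otherwise."""
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(np.asarray(frames), np.asarray(labels))
    return clf

def is_blow(clf, frame_features):
    """Classify one incoming frame; True means a blow event was detected."""
    return bool(clf.predict(np.asarray(frame_features).reshape(1, -1))[0])
```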
|
Zielinski, David J. |
![]() David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. ![]() |
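Rendering "virtual content inside the tracked box", as the abstract puts it, amounts to composing the box's tracked world pose with the content's offset in box-local coordinates, so the content follows the physical box from any head-tracked viewpoint. A minimal sketch under an assumed column-vector convention with the translation in the last column; it is not the paper's implementation.

```python
import numpy as np

def pose_matrix(rotation_3x3, position_xyz):
    """Build a 4x4 world transform from a tracked rotation and position."""
    m = np.eye(4)
    m[:3, :3] = rotation_3x3
    m[:3, 3] = position_xyz
    return m

def content_model_matrix(box_rotation, box_position, content_offset_local=None):
    """Model matrix for content attached to the tracked physical box: the
    box's world pose composed with the content's box-local offset."""
    offset = np.eye(4) if content_offset_local is None else content_offset_local
    return pose_matrix(box_rotation, box_position) @ offset
```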
|
Zuffo, Marcelo Knorich |
![]() André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects by way of natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. ![]() ![]() Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study about the usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate user experiences with interactive archaeometry tools with an archaeologist (not a VR expert) and compare the results with those of VR experts (not archeology experts). Two hypotheses will be tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is open to exploration, is it possible to create 3DUI analytical tools to help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising. ![]() |
257 authors