3DUI 2017 – Author Index
Achibet, Merwan |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
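The pseudo-haptic side of this design is worth unpacking: stiffness can be suggested visually by decoupling the rendered finger displacement from the real one. Below is a minimal Python sketch of that general control/display-ratio idea; the function name and the linear law are illustrative assumptions of this note, not the authors' implementation.

def virtual_displacement(real_displacement_cm, simulated_stiffness,
                         reference_stiffness=1.0):
    """Scale the rendered displacement by a control/display ratio.

    A ratio below 1 makes the virtual finger lag behind the real one
    pressing against the elastic device, which users tend to perceive
    as increased stiffness (hypothetical linear law for illustration).
    """
    cd_ratio = reference_stiffness / simulated_stiffness
    return real_displacement_cm * cd_ratio

# Pressing 2 cm against an object simulated at twice the reference
# stiffness renders only 1 cm of compression.
assert virtual_displacement(2.0, simulated_stiffness=2.0) == 1.0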
|
Afonso, Luis |
3DUI '17: "Effect of Hand-Avatar in a ..."
Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed with tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: a no-hand avatar, a realistic avatar, and a translucent avatar. The participants were faster but made slightly more errors while using the no-hand-avatar condition, and considered it easier to perform the task with the translucent avatar. @InProceedings{3DUI17p247, author = {Luis Afonso and Paulo Dias and Carlos Ferreira and Beatriz Sousa Santos}, title = {Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {247--248}, doi = {}, year = {2017}, } |
|
Araujo, Astolfo |
3DUI '17: "User Experience Evaluation ..."
User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and to compare the results with those of VR experts (not archaeology experts). Two hypotheses are tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model can be explored, is it possible to create 3DUI analytical tools that help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
|
Ardouin, Jérôme |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
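The entry describes cameras that actively follow the markers. Below is a minimal sketch of the kind of proportional visual-servoing loop such a pan-tilt rig would run each frame; the FOV, gain, and interface are illustrative assumptions of this note, not the paper's actual control law.

def centering_command(marker_px, image_size=(1280, 720),
                      fov_deg=(60.0, 40.0), gain=0.5):
    """Proportional law: return (pan, tilt) increments in degrees that
    steer the tracked marker toward the image center. The measured
    pan-tilt angles would then be composed back into the reported
    6DoF pose so tracking stays consistent while the camera moves."""
    err_x = marker_px[0] - image_size[0] / 2.0   # pixels off-center
    err_y = marker_px[1] - image_size[1] / 2.0
    deg_per_px = (fov_deg[0] / image_size[0], fov_deg[1] / image_size[1])
    return gain * err_x * deg_per_px[0], gain * err_y * deg_per_px[1]

# Marker detected right of center -> positive pan step toward it.
pan, tilt = centering_command((1000, 360))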
|
Argelaguet, Ferran |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } 3DUI '17: "Spatial and Rotation Invariant ..." Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed approach, based on sparse representation, enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems and fully unlock the potential of natural user interfaces. 
@InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
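For readers unfamiliar with sparse-representation classification, the generic recipe behind approaches like this one is: code a query gesture's feature vector over a dictionary of training exemplars, then pick the class whose atoms reconstruct it best. A minimal NumPy sketch of that generic recipe follows — not the paper's exact pipeline, features, or invariance mechanism.

import numpy as np

def src_classify(x, dictionary, labels, n_nonzero=5):
    """Classify feature vector x (shape (d,)) against a dictionary of
    column-normalized training exemplars (shape (d, n)) with one class
    label per column. Greedy OMP selects atoms; the class whose atoms
    reconstruct x with the smallest residual wins."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(dictionary.T @ residual))))
        atoms = dictionary[:, support]
        coef, *_ = np.linalg.lstsq(atoms, x, rcond=None)
        residual = x - atoms @ coef
    best_class, best_err = None, float('inf')
    for c in {labels[s] for s in support}:
        idx = [s for s in support if labels[s] == c]
        coef, *_ = np.linalg.lstsq(dictionary[:, idx], x, rcond=None)
        err = float(np.linalg.norm(x - dictionary[:, idx] @ coef))
        if err < best_err:
            best_class, best_err = c, err
    return best_class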
|
Ariza N., Oscar J. |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless, wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } |
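The guidance idea is simple to state concretely: measure the angular error between the head orientation and the target, and drive the left or right actuator with an intensity that grows with the error. A small illustrative sketch, with the intensity law and FOV threshold as assumptions of this note rather than the authors' protocol:

def guidance_intensities(head_yaw_deg, target_yaw_deg, fov_deg=90.0):
    """Return (left, right) vibration intensities in [0, 1]: silent once
    the target enters the FOV, otherwise the actuator on the side to
    turn toward vibrates, stronger for larger angular error."""
    # Signed error wrapped to [-180, 180); negative means target is left.
    error = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(error) <= fov_deg / 2.0:
        return 0.0, 0.0
    strength = min(1.0, abs(error) / 180.0)
    return (strength, 0.0) if error < 0 else (0.0, strength)

assert guidance_intensities(0.0, 10.0) == (0.0, 0.0)  # already in view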
|
Attanasio, Giuseppe |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } |
|
Babu, Sabarish V. |
3DUI '17: "AACT: A Mobile Augmented Reality ..."
AACT: A Mobile Augmented Reality Application for Art Creation
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc. and physical device movement to interact with the environment making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly. @InProceedings{3DUI17p254, author = {Ayush Bhargava and Jeffrey Bertrand and Sabarish V. Babu}, title = {AACT: A Mobile Augmented Reality Application for Art Creation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {254--255}, doi = {}, year = {2017}, } Video 3DUI '17: "Augmented Reality Digital ..." Augmented Reality Digital Sculpture Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video 3DUI '17: "The Effects of Presentation ..." The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. 
Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Bailenson, Jeremy |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Balcazar, Ruben |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
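Elicitation studies such as this one typically quantify consensus per referent with an agreement score. The abstract does not state which measure was used; the sketch below shows Vatavu and Wobbrock's widely used agreement rate (AR) as one common choice.

from collections import Counter

def agreement_rate(proposals):
    """Vatavu & Wobbrock's AR for one referent: proposals is the list
    of gesture labels elicited from the participants for that task."""
    n = len(proposals)
    if n < 2:
        return 0.0
    # Pairs of participants who proposed the same gesture, over all pairs.
    pairs_agreeing = sum(c * (c - 1) for c in Counter(proposals).values())
    return pairs_agreeing / (n * (n - 1))

# 30 participants: 18 propose 'pinch', 12 propose 'two-finger drag'.
print(agreement_rate(['pinch'] * 18 + ['two-finger drag'] * 12))  # ~0.50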
|
Barreto, Armando |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Basting, Oliver |
3DUI '17: "The Effectiveness of Changing ..."
The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion
Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) The following paper investigates the effect on the intensity of perceived vection by changing the field of view (FOV) using a head-mounted display (HMD) in a virtual environment (VE). For this purpose a study was carried out, where the participants were situated in a vection evoking VE using a HMD. During the experiment, the VE was presented with different FOVs, and a measurement of the felt intensity of vection was performed. The results indicate that a decrease of the FOV invokes a decrease of the intensity of perceived vection. @InProceedings{3DUI17p225, author = {Oliver Basting and Arnulph Fuhrmann and Stefan M. Grünvogel}, title = {The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {225--226}, doi = {}, year = {2017}, } |
|
Bechmann, Dominique |
3DUI '17: "Effects of Stereo and Head ..."
Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on several tasks, with differing results. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness. @InProceedings{3DUI17p231, author = {Sabah Boustila and Dominique Bechmann and Antonio Capobianco}, title = {Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {231--232}, doi = {}, year = {2017}, } |
|
Belloc, Olavo |
3DUI '17: "Batmen Beyond: Natural 3D ..."
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects by way of natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging the user’s proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. @InProceedings{3DUI17p258, author = {André Montes Rodrigues and Olavo Belloc and Eduardo Zilles Borba and Mario Nagamura and Marcelo Knorich Zuffo}, title = {Batmen Beyond: Natural 3D Manipulation with the BatWand}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {258--259}, doi = {}, year = {2017}, } |
|
Bernal, Jonathan |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Berndt, Iago |
3DUI '17: "Collaborative Manipulation ..."
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. @InProceedings{3DUI17p264, author = {Jerônimo G. Grandi and Iago Berndt and Henrique G. Debarba and Luciana Nedel and Anderson Maciel}, title = {Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {264--265}, doi = {}, year = {2017}, } Video |
|
Bertrand, Jeffrey |
3DUI '17: "AACT: A Mobile Augmented Reality ..."
AACT: A Mobile Augmented Reality Application for Art Creation
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc. and physical device movement to interact with the environment making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly. @InProceedings{3DUI17p254, author = {Ayush Bhargava and Jeffrey Bertrand and Sabarish V. Babu}, title = {AACT: A Mobile Augmented Reality Application for Art Creation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {254--255}, doi = {}, year = {2017}, } Video 3DUI '17: "The Effects of Presentation ..." The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Bhargava, Ayush |
3DUI '17: "AACT: A Mobile Augmented Reality ..."
AACT: A Mobile Augmented Reality Application for Art Creation
Ayush Bhargava, Jeffrey Bertrand, and Sabarish V. Babu (Clemson University, USA) In this paper, we present Augmented-Art Creation Tool (AACT) as our solution to the IEEE 3DUI 2017 challenge. Our solution employs multi-finger touch gestures along with the built-in camera and accelerometer on a mobile device for interaction in an Augmented Reality (AR) setup. We leverage a user's knowledge of touch gestures like pinching, swiping, etc. and physical device movement to interact with the environment making the metaphor intuitive. The system helps prevent occlusion by using the accelerometer and allowing touch gestures anywhere on the screen. The interaction metaphor allows for successful art piece creation and assembly. @InProceedings{3DUI17p254, author = {Ayush Bhargava and Jeffrey Bertrand and Sabarish V. Babu}, title = {AACT: A Mobile Augmented Reality Application for Art Creation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {254--255}, doi = {}, year = {2017}, } Video 3DUI '17: "The Effects of Presentation ..." The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Billinghurst, Mark |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
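Of the three techniques, Radial Pursuit builds on a well-known smooth-pursuit selection principle: the object whose motion best correlates with the gaze samples is chosen. A minimal sketch of that generic principle follows; the 2D screen-space simplification and the correlation threshold are assumptions of this note, not the paper's implementation.

import numpy as np

def pursuit_target(gaze_xy, object_paths, min_corr=0.8):
    """gaze_xy: (t, 2) gaze samples over a short window; object_paths:
    dict name -> (t, 2) screen positions over the same window. Returns
    the object whose motion best matches the gaze, or None if no
    candidate passes the threshold."""
    best, best_corr = None, min_corr
    for name, path in object_paths.items():
        corr_x = np.corrcoef(gaze_xy[:, 0], path[:, 0])[0, 1]
        corr_y = np.corrcoef(gaze_xy[:, 1], path[:, 1])[0, 1]
        corr = min(corr_x, corr_y)  # require both axes to match
        if corr > best_corr:
            best, best_corr = name, corr
    return best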
|
Bonds, Grayson |
3DUI '17: "Augmented Reality Digital ..."
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video |
|
Bönsch, Andrea |
3DUI '17: "Evaluation of Approaching-Strategies ..."
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied, virtual agents provide users assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } |
|
Borba, Eduardo Zilles |
3DUI '17: "Batmen Beyond: Natural 3D ..."
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects by way of natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging the user’s proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. @InProceedings{3DUI17p258, author = {André Montes Rodrigues and Olavo Belloc and Eduardo Zilles Borba and Mario Nagamura and Marcelo Knorich Zuffo}, title = {Batmen Beyond: Natural 3D Manipulation with the BatWand}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {258--259}, doi = {}, year = {2017}, } 3DUI '17: "User Experience Evaluation ..." User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of the usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and to compare the results with those of VR experts (not archaeology experts). Two hypotheses are tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model can be explored, is it possible to create 3DUI analytical tools that help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
|
Borst, Christoph W. |
3DUI '17: "Indirect Touch Interaction ..."
Indirect Touch Interaction with Stereoscopic Displays using a Two-Sided Handheld Touch Device
Prabhakar V. Vemavarapu and Christoph W. Borst (University of Louisiana at Lafayette, USA) We present an indirect touch 3D interface that uses a two-sided handheld touch device for interactions with dense datasets on stereoscopic displays. This work explores the possibilities of a smartphone able to sense touch on both sides: two Android phones are combined back-to-back. The top touch surface is used for primary or fine interactions (selection/translation/rotation) and the bottom surface for coarser aspects such as mode control or feature extraction. The two surfaces are programmed to recognize input from four digits – two on top and two on the bottom. The four touch areas enable 3D object selection, manipulation, and feature extraction using combinations of simultaneous touches. @InProceedings{3DUI17p209, author = {Prabhakar V. Vemavarapu and Christoph W. Borst}, title = {Indirect Touch Interaction with Stereoscopic Displays using a Two-Sided Handheld Touch Device}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {209--210}, doi = {}, year = {2017}, } 3DUI '17: "Visual Cues to Aid 3D Pointing ..." Visual Cues to Aid 3D Pointing in a Virtual Mirror Jason W. Woodworth and Christoph W. Borst (University of Louisiana at Lafayette, USA) We address a 3D pointing problem for a "virtual mirror" view used in collaborative VR. The virtual mirror is a large TV display showing a depth-camera-based image of a user in a surrounding virtual environment. There are problems with pointing and communicating to remote users due to the indirectness of pointing in a mirror and a low sense of depth. We propose several visual cues to help the user control pointing depth, and present an initial user study, providing a basis for further refinement and investigation of techniques. @InProceedings{3DUI17p251, author = {Jason W. Woodworth and Christoph W. Borst}, title = {Visual Cues to Aid 3D Pointing in a Virtual Mirror}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {251--252}, doi = {}, year = {2017}, } Video |
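The two-sided touch entry above maps combinations of simultaneous touches to operations. One plausible way to structure such a chord-to-action dispatch is sketched below; the specific chord assignments are invented for illustration, as the abstract does not enumerate them.

# Hypothetical chord table: which digits are down (two tracked per
# surface) selects the current 3D operation.
ACTIONS = {
    frozenset({'top1'}): 'select',
    frozenset({'top1', 'top2'}): 'rotate',
    frozenset({'top1', 'bottom1'}): 'translate',
    frozenset({'top1', 'top2', 'bottom1', 'bottom2'}): 'extract-feature',
}

def dispatch(active_touches):
    """Map the set of currently active touch areas to an action."""
    return ACTIONS.get(frozenset(active_touches), 'idle')

assert dispatch({'top1', 'bottom1'}) == 'translate'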
|
Boustila, Sabah |
3DUI '17: "Effects of Stereo and Head ..."
Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on several tasks, with differing results. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness. @InProceedings{3DUI17p231, author = {Sabah Boustila and Dominique Bechmann and Antonio Capobianco}, title = {Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {231--232}, doi = {}, year = {2017}, } |
|
Bowman, Doug A. |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
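The cell bookkeeping at the heart of cell-based redirection follows directly from the abstract: because every cell has the tracked space's dimensions, a virtual position decomposes into a cell index plus a physical offset reachable by real walking. A minimal sketch (axis conventions and the flat-floor simplification are illustrative; the narrative redirection at cell transitions is not modeled here):

def cell_and_offset(virtual_xz, room_size):
    """virtual_xz: (x, z) in meters; room_size: (width, depth) of the
    physical tracked space. Returns the cell index and the physical
    offset inside the room that real walking should reproduce."""
    cell = (int(virtual_xz[0] // room_size[0]),
            int(virtual_xz[1] // room_size[1]))
    offset = (virtual_xz[0] % room_size[0], virtual_xz[1] % room_size[1])
    return cell, offset

# A user at virtual (7.5, 2.0) m in a 4 m x 4 m room stands in cell
# (1, 0) at physical offset (3.5, 2.0) -- reachable by real walking.
assert cell_and_offset((7.5, 2.0), (4.0, 4.0)) == ((1, 0), (3.5, 2.0))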
|
Bruder, Gerd |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless, wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } 3DUI '17: "Can Social Presence Be Contagious? ..." Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Cannavò, Alberto |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } 3DUI '17: "T4T: Tangible Interface for ..." T4T: Tangible Interface for Tuning 3D Object Manipulation Tools Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grain manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Capobianco, Antonio |
3DUI '17: "Effects of Stereo and Head ..."
Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review
Sabah Boustila, Dominique Bechmann, and Antonio Capobianco (University of Strasbourg, France) Stereo and head tracking are considered distance perception cues in virtual environments. Several studies have investigated their influence on several tasks, with differing results. In this paper, we conducted a complete experiment investigating the influence of stereo and head tracking in the specific context of virtual visits of houses during architectural project review with clients. We manipulated stereo and head tracking in four conditions and examined the effects of the two factors on distance perception (room dimensions, habitability, etc.), task difficulty, presence, and simulator sickness. Results reveal a significant effect of stereo on the estimation of habitability, the dimensions of the rooms, and task difficulty. However, the effect of stereo and head tracking was not significant on presence and simulator sickness. @InProceedings{3DUI17p231, author = {Sabah Boustila and Dominique Bechmann and Antonio Capobianco}, title = {Effects of Stereo and Head Tracking on Distance Estimation, Presence, and Simulator Sickness using Wall Screen in Architectural Project Review}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {231--232}, doi = {}, year = {2017}, } |
|
Cermelli, Fabio |
3DUI '17: "T4T: Tangible Interface for ..."
T4T: Tangible Interface for Tuning 3D Object Manipulation Tools
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grain manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Chandrashekar, Vikram |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Chang, Yun Suk |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing, and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
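Surface-Drawing hinges on projecting the user's gesture onto the world model. A minimal sketch of that projection, with a single plane standing in for the HoloLens spatial mesh (the real system raycasts against the reconstructed scene; the geometry below is the generic ray-plane case):

import numpy as np

def surface_point(head, fingertip, plane_point, plane_normal):
    """Intersect the head->fingertip ray with a plane; return the 3D
    stroke point, or None if the ray is parallel or points away."""
    head = np.asarray(head, float)
    d = np.asarray(fingertip, float) - head       # ray direction
    n = np.asarray(plane_normal, float)
    denom = float(n @ d)
    if abs(denom) < 1e-9:
        return None                               # ray parallel to plane
    t = float(n @ (np.asarray(plane_point, float) - head)) / denom
    return head + t * d if t > 0 else None

# A stroke point on the floor plane y = 0, drawn from head height.
p = surface_point((0, 1.6, 0), (0.1, 1.5, 0.3), (0, 0, 0), (0, 1, 0))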
|
Chardonnet, Jean-Rémy |
3DUI '17: "Comparing VR Environments ..."
Comparing VR Environments for Seat Selection in an Opera Theater
José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets with a low-cost head-mounted display (HMD, the GearVR), and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. @InProceedings{3DUI17p221, author = {José Luis Dorado and Pablo Figueroa and Jean-Rémy Chardonnet and Frédéric Merienne and José Tiberio Hernández}, title = {Comparing VR Environments for Seat Selection in an Opera Theater}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {221--222}, doi = {}, year = {2017}, } |
|
Chellali, Amine |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency identified the most suitable gestures for each manipulation task. Design recommendations for efficient 3D constrained manipulation techniques are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
|
Chiaramida, Vincenzo |
3DUI '17: "T4T: Tangible Interface for ..."
T4T: Tangible Interface for Tuning 3D Object Manipulation Tools
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to let the user refine all aspects of an object (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Chipana, Miriam Luque |
3DUI '17: "Trigger Walking: A Low-Fatigue ..."
Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality
Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of the two spatial controllers that accompany commercial headsets: the user walks by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single controller, the average of both controllers, or that of the head to determine the direction of walking, and speed can be controlled by changing the angle of the controller to the frontal plane. @InProceedings{3DUI17p227, author = {Bhuvaneswari Sarupuri and Miriam Luque Chipana and Robert W. Lindeman}, title = {Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {227--228}, doi = {}, year = {2017}, } |
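The stepping rule described above reduces to a few lines of logic. A rough sketch follows, assuming +y up and treating the controller's tilt from the frontal plane as a direct speed multiplier; the constants and names are illustrative, not the authors' values.

```python
import numpy as np

BASE_STEP = 0.7  # meters advanced per trigger pull

def trigger_step(position, direction_source, tilt_rad):
    """Advance the viewpoint by one virtual step.

    direction_source: forward vector of one controller, the average of
    both, or the head; tilt_rad: controller angle to the frontal plane.
    """
    d = np.array(direction_source, dtype=float)
    d[1] = 0.0                         # constrain walking to the ground plane
    d /= np.linalg.norm(d)
    speed = BASE_STEP * (1.0 + abs(tilt_rad) / (np.pi / 2))  # more tilt, faster
    return position + speed * d

pos = np.zeros(3)
pos = trigger_step(pos, [0.0, 0.0, -1.0], tilt_rad=0.3)  # one trigger pull
print(pos)
```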
|
Chowdhury, Tanvir Irfan |
3DUI '17: "Information Recall in VR Disability ..."
Information Recall in VR Disability Simulation
Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to explain our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation that leads to a high sense of presence will help the user improve information recall. We conducted a between-subjects experiment in which participants were presented with information about multiple sclerosis (MS) in different immersive conditions and afterwards attempted to recall the information. The results from our study suggest that participants who were in immersive conditions were able to recall the information more effectively than the participants who experienced a non-immersive condition. @InProceedings{3DUI17p219, author = {Tanvir Irfan Chowdhury and Raphael Costa and John Quarles}, title = {Information Recall in VR Disability Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {219--220}, doi = {}, year = {2017}, } Video |
|
Ciambrone, Andrew |
3DUI '17: "Painting with Light: Gesture ..."
Painting with Light: Gesture Based Light Control in Architectural Settings
Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extensible and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to alternatives, makes it a promising lighting control technique. @InProceedings{3DUI17p249, author = {Mohamed Handosa and Denis Gračanin and Hicham G. Elmongui and Andrew Ciambrone}, title = {Painting with Light: Gesture Based Light Control in Architectural Settings}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {249--250}, doi = {}, year = {2017}, } Video |
|
Cibrario, Francesca |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced included how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } |
|
Ciccone, Giovanni |
3DUI '17: "T4T: Tangible Interface for ..."
T4T: Tangible Interface for Tuning 3D Object Manipulation Tools
Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to let the user refine all aspects of an object (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Clergeaud, Damien |
3DUI '17: "Pano: Design and Evaluation ..."
Pano: Design and Evaluation of a 360° Through-the-Lens Technique
Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique that reduces this time by increasing the user’s “natural” field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the VE that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks in order to determine whether the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano. @InProceedings{3DUI17p2, author = {Damien Clergeaud and Pascal Guitton}, title = {Pano: Design and Evaluation of a 360° Through-the-Lens Technique}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {2--11}, doi = {}, year = {2017}, } |
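The PanoWindow boils down to an image whose pixels span all 360° of directions around the head. Here is a sketch of the equirectangular pixel-to-direction mapping such an image requires; it is an illustrative reconstruction, not the authors' renderer.

```python
import numpy as np

def panorama_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular panorama to a unit view
    direction (x, y, z), with +z forward and +y up."""
    lon = (u / width) * 2.0 * np.pi - np.pi    # -pi .. pi around the head
    lat = np.pi / 2 - (v / height) * np.pi     # +pi/2 (up) .. -pi/2 (down)
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# The image edges look behind the user, which is what lets a PanoWindow-style
# widget show the rear of the VE without any head movement.
print(panorama_ray(0, 180, 720, 360))  # ~ (0, 0, -1): straight backwards
```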
|
Clifford, Rory M. S. |
3DUI '17: "Jedi ForceExtension: Telekinesis ..."
Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor
Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using a slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push and force pull behaviours as examples of the general concept of force extension. We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging. @InProceedings{3DUI17p239, author = {Rory M. S. Clifford and Nikita Mae B. Tuanquin and Robert W. Lindeman}, title = {Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {239--240}, doi = {}, year = {2017}, } |
|
Cordeiro, Eduardo |
3DUI '17: "PRECIOUS! Out-of-Reach Selection ..."
PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR
Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. However, these approaches work for room-sized, sparse environments and do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
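The refinement loop can be sketched compactly: keep only the objects inside a selection cone, step the viewpoint toward them, and narrow the cone. In the actual technique the user re-aims between steps; this sketch fixes the direction for brevity, and every name in it is illustrative.

```python
import numpy as np

def cone_select(objects, apex, direction, half_angle):
    """Return object positions whose centers lie inside the cone."""
    d = direction / np.linalg.norm(direction)
    kept = []
    for pos in objects:
        v = pos - apex
        dist = np.linalg.norm(v)
        if dist > 0 and np.dot(v / dist, d) >= np.cos(half_angle):
            kept.append(pos)
    return kept

def refine(objects, user_pos, direction, half_angle=0.3, shrink=0.5):
    candidates = list(objects)
    while len(candidates) > 1:
        candidates = cone_select(candidates, user_pos, direction, half_angle)
        if not candidates:
            return None, user_pos            # selection lost; abort
        centroid = np.mean(candidates, axis=0)
        user_pos = user_pos + 0.5 * (centroid - user_pos)  # move user closer
        half_angle *= shrink                               # narrower cone
    return (candidates[0] if candidates else None), user_pos
```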
|
Corrêa, Ana Grasielle |
3DUI '17: "User Experience Evaluation ..."
User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of user experience in a cyber-archaeological environment. We researched how users explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate experiences with interactive archaeometry tools with archaeologists (not VR experts) and to compare the results with VR experts (not archaeology experts). Two hypotheses were tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model can be explored, is it possible to create 3DUI analytical tools that help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
|
Cortes, Guillaume |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
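The controlled cameras imply a servo loop that keeps the tracked marker centered in the image so it never leaves the camera's view. A minimal proportional-control sketch follows; the gain and names are assumptions for illustration.

```python
def pan_tilt_update(marker_px, image_size, pan, tilt, gain=0.001):
    """One proportional control step for a pan-tilt head.

    marker_px: (x, y) pixel position of the tracked marker;
    image_size: (width, height) in pixels; pan/tilt: angles in radians.
    Returns updated (pan, tilt) that drive the marker back to center.
    """
    err_x = marker_px[0] - image_size[0] / 2.0
    err_y = marker_px[1] - image_size[1] / 2.0
    return pan - gain * err_x, tilt - gain * err_y

pan, tilt = 0.0, 0.0
pan, tilt = pan_tilt_update((400, 300), (640, 480), pan, tilt)
print(pan, tilt)  # the head turns slightly to re-center the marker
```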
|
Costa, Raphael |
3DUI '17: "Information Recall in VR Disability ..."
Information Recall in VR Disability Simulation
Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to explain our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation that leads to a high sense of presence will help the user improve information recall. We conducted a between-subjects experiment in which participants were presented with information about multiple sclerosis (MS) in different immersive conditions and afterwards attempted to recall the information. The results from our study suggest that participants who were in immersive conditions were able to recall the information more effectively than the participants who experienced a non-immersive condition. @InProceedings{3DUI17p219, author = {Tanvir Irfan Chowdhury and Raphael Costa and John Quarles}, title = {Information Recall in VR Disability Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {219--220}, doi = {}, year = {2017}, } Video |
|
Daher, Salam |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher scores on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Debarba, Henrique G. |
3DUI '17: "Collaborative Manipulation ..."
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. @InProceedings{3DUI17p264, author = {Jerônimo G. Grandi and Iago Berndt and Henrique G. Debarba and Luciana Nedel and Anderson Maciel}, title = {Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {264--265}, doi = {}, year = {2017}, } Video |
|
De Oliveira, Thomas Volpato |
3DUI '17: "SculptAR: An Augmented Reality ..."
SculptAR: An Augmented Reality Interaction System
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, along with 2D and 3D interfaces to manipulate and modify them. The users can move, delete, paint and duplicate virtual objects using 6-DOF techniques. @InProceedings{3DUI17p260, author = {Vicenzo Abichequer Sangalli and Thomas Volpato de Oliveira and Leonardo Pavanatto Soares and Márcio Sarroglia Pinho}, title = {SculptAR: An Augmented Reality Interaction System}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {260--261}, doi = {}, year = {2017}, } |
|
Dias, Paulo |
3DUI '17: "Effect of Hand-Avatar in a ..."
Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed through tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no-hand avatar, realistic avatar and translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar. @InProceedings{3DUI17p247, author = {Luis Afonso and Paulo Dias and Carlos Ferreira and Beatriz Sousa Santos}, title = {Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {247--248}, doi = {}, year = {2017}, } |
|
Dorado, José Luis |
3DUI '17: "Comparing VR Environments ..."
Comparing VR Environments for Seat Selection in an Opera Theater
José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets with a low-cost head-mounted display (HMD, the GearVR), and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. @InProceedings{3DUI17p221, author = {José Luis Dorado and Pablo Figueroa and Jean-Rémy Chardonnet and Frédéric Merienne and José Tiberio Hernández}, title = {Comparing VR Environments for Seat Selection in an Opera Theater}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {221--222}, doi = {}, year = {2017}, } 3DUI '17: "Effect of Footstep Vibrations ..." Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cybersickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cybersickness, while footstep vibrations increase the sense of presence and, to some extent, decrease cybersickness. @InProceedings{3DUI17p241, author = {Jérémy Plouzeau and José Luis Dorado and Damien Paillot and Frédéric Merienne}, title = {Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {241--242}, doi = {}, year = {2017}, } |
|
Ducoffe, Mélanie |
3DUI '17: "Spatial and Rotation Invariant ..."
Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation
Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems that fully unlock the potential of natural user interfaces. @InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
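A generic sparse-representation classifier of the kind the abstract evokes codes a query motion feature over a dictionary of training examples and picks the gesture class with the smallest reconstruction residual. The sketch below is the standard SRC recipe under that reading, not necessarily the authors' exact algorithm; it assumes the dictionary columns are l2-normalized.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Orthogonal matching pursuit: sparse code x with D @ x ~= y."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ sol
    x[idx] = sol
    return x

def classify(D, labels, y):
    """labels[i]: gesture class of dictionary column i; y: query feature."""
    x = omp(D, y)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)          # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)
```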
|
Ebrahimi, Elham |
3DUI '17: "Augmented Reality Digital ..."
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling and particle parameter manipulation in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world that we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video |
|
Ellingson, Arin M. |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
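The dynamic gain mentioned above is typically a control-display gain that drops at low hand speeds, so slow, deliberate motions become precise while fast motions stay roughly one-to-one. A sketch under assumed constants; the curve shape and thresholds are illustrative, not taken from the paper.

```python
import numpy as np

def dynamic_gain(speed, slow=0.02, fast=0.4, g_min=0.2, g_max=1.0):
    """speed: tracker speed in m/s. Returns a gain in [g_min, g_max]."""
    t = np.clip((speed - slow) / (fast - slow), 0.0, 1.0)
    return g_min + t * (g_max - g_min)

def apply_translation(model_pos, tracker_delta, dt):
    speed = np.linalg.norm(tracker_delta) / dt
    return model_pos + dynamic_gain(speed) * tracker_delta

pos = np.zeros(3)
pos = apply_translation(pos, np.array([0.001, 0.0, 0.0]), dt=1 / 90)
print(pos)  # a slow 1 mm hand motion is scaled down for fine shape-matching
```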
|
Elmongui, Hicham G. |
3DUI '17: "Painting with Light: Gesture ..."
Painting with Light: Gesture Based Light Control in Architectural Settings
Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking the full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extendable and supports multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting down within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique together with the ease of use compared to other alternatives makes it a promising lighting control technique. @InProceedings{3DUI17p249, author = {Mohamed Handosa and Denis Gračanin and Hicham G. Elmongui and Andrew Ciambrone}, title = {Painting with Light: Gesture Based Light Control in Architectural Settings}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {249--250}, doi = {}, year = {2017}, } Video |
|
Erfanian, Aida |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort has focused, however, on integrating force and vibrotactile cues, two sub-categories of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting showed some combination of the performances observed under the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. @InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
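For reference, the standard MLE cue-combination model that such studies test against predicts a reliability-weighted average of the force (F) and vibrotactile (V) estimates, with the combined variance never worse than the better single cue:

```latex
\[
\hat{s}_{FV} = w_F\,\hat{s}_F + w_V\,\hat{s}_V,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_F^2 + 1/\sigma_V^2},
\]
\[
\sigma_{FV}^2 = \frac{\sigma_F^2\,\sigma_V^2}{\sigma_F^2 + \sigma_V^2}
\;\le\; \min\left(\sigma_F^2,\,\sigma_V^2\right).
\]
```

Co-located performance that does not show this predicted improvement over the better single cue is what motivates the paper's conclusion that MLE is inconclusive for these sub-categorical haptic cues.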
|
Ferlay, Fabien |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is crucial to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Ferreira, Alfredo |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Yet, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } 3DUI '17: "PRECIOUS! Out-of-Reach Selection ..." PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. However, these approaches work for room-sized, sparse environments and do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
|
Ferreira, Carlos |
3DUI '17: "Effect of Hand-Avatar in a ..."
Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence performance on a button selection task performed through tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no-hand avatar, realistic avatar and translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered it easier to perform the task with the translucent avatar. @InProceedings{3DUI17p247, author = {Luis Afonso and Paulo Dias and Carlos Ferreira and Beatriz Sousa Santos}, title = {Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {247--248}, doi = {}, year = {2017}, } |
|
Ferreira, Ricardo |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Yet, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Figueroa, Pablo |
3DUI '17: "Comparing VR Environments ..."
Comparing VR Environments for Seat Selection in an Opera Theater
José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets with a low-cost head-mounted display (HMD, the GearVR), and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. @InProceedings{3DUI17p221, author = {José Luis Dorado and Pablo Figueroa and Jean-Rémy Chardonnet and Frédéric Merienne and José Tiberio Hernández}, title = {Comparing VR Environments for Seat Selection in an Opera Theater}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {221--222}, doi = {}, year = {2017}, } |
|
Freitag, Sebastian |
3DUI '17: "Efficient Approximate Computation ..."
Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis
Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility, the information about which parts of the scene are visible from a certain location, can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } |
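The precomputation amounts to sampling visibility only between navigable locations. Below is a toy sketch, with navmesh polygons reduced to their centers and `ray_clear` standing in for a scene raycast; both simplifications are assumptions, not the paper's exact sampling scheme.

```python
import itertools
import numpy as np

def precompute_visibility(cell_centers, ray_clear):
    """cell_centers: list of 3D points, one per navmesh polygon;
    ray_clear(a, b) -> True if the segment a-b is unoccluded.
    Returns a symmetric boolean cell-to-cell visibility matrix."""
    n = len(cell_centers)
    vis = np.eye(n, dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        vis[i, j] = vis[j, i] = ray_clear(cell_centers[i], cell_centers[j])
    return vis

# With the table in hand, viewpoint-quality or "already seen" queries reduce
# to cheap lookups over the cells visible from the user's current cell.
```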
|
Fuhrmann, Arnulph |
3DUI '17: "The Effectiveness of Changing ..."
The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion
Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) This paper investigates the effect of changing the field of view (FOV) of a head-mounted display (HMD) on the intensity of perceived vection in a virtual environment (VE). For this purpose a study was carried out in which the participants were situated in a vection-evoking VE using an HMD. During the experiment, the VE was presented with different FOVs, and the felt intensity of vection was measured. The results indicate that a decrease of the FOV invokes a decrease in the intensity of perceived vection. @InProceedings{3DUI17p225, author = {Oliver Basting and Arnulph Fuhrmann and Stefan M. Grünvogel}, title = {The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {225--226}, doi = {}, year = {2017}, } |
|
Galvan, Alain |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised regarding how to make user interaction more intuitive -- in particular, which gestures users prefer for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } 3DUI '17: "Procedural Celestial Rendering ..." Procedural Celestial Rendering for 3D Navigation Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments similarly to natural color-corrected telescope images. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. @InProceedings{3DUI17p211, author = {Alain Galvan and Francisco R. Ortega and Naphtali Rishe}, title = {Procedural Celestial Rendering for 3D Navigation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {211--212}, doi = {}, year = {2017}, } |
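The pre-computed spherical ray map pairs each cubemap texel with the unit direction it looks along, which the noise shaders can then evaluate to place stars and dust. An illustrative version follows; the face-axis layout is an assumed convention, not necessarily the one the authors used.

```python
import numpy as np

FACES = {  # face name -> (forward, right, up) axes
    "+x": (np.array([1, 0, 0]), np.array([0, 0, -1]), np.array([0, 1, 0])),
    "-x": (np.array([-1, 0, 0]), np.array([0, 0, 1]), np.array([0, 1, 0])),
    "+y": (np.array([0, 1, 0]), np.array([1, 0, 0]), np.array([0, 0, -1])),
    "-y": (np.array([0, -1, 0]), np.array([1, 0, 0]), np.array([0, 0, 1])),
    "+z": (np.array([0, 0, 1]), np.array([1, 0, 0]), np.array([0, 1, 0])),
    "-z": (np.array([0, 0, -1]), np.array([-1, 0, 0]), np.array([0, 1, 0])),
}

def ray_map(face, size):
    """Unit view direction for every texel of one size x size cubemap face."""
    f, r, u = FACES[face]
    s = (np.arange(size) + 0.5) / size * 2.0 - 1.0   # texel centers, -1 .. 1
    sx, sy = np.meshgrid(s, -s)
    d = f + sx[..., None] * r + sy[..., None] * u
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

print(ray_map("+z", 4).shape)  # (4, 4, 3): one direction per texel
```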
|
Girard, Adrien |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Gračanin, Denis |
3DUI '17: "Painting with Light: Gesture ..."
Painting with Light: Gesture Based Light Control in Architectural Settings
Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extensible and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user is able to control lights from different positions in the room while standing or sitting within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to alternatives, makes it a promising lighting control technique. @InProceedings{3DUI17p249, author = {Mohamed Handosa and Denis Gračanin and Hicham G. Elmongui and Andrew Ciambrone}, title = {Painting with Light: Gesture Based Light Control in Architectural Settings}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {249--250}, doi = {}, year = {2017}, } Video |
|
Gramopadhye, Anand |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation for precision metrology education. In a 2 x 2 experimental design, we compared a large-screen immersive display (LSID) with a head-mounted display (HMD), under the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner than with the LSID, owing to the similarity between the interactions afforded in the virtual task and those in the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of pre- and post-test cognition questionnaires, quantitative performance measures, perceived workload and system usefulness ratings, and a psychomotor assessment measuring the extent to which learning transferred from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Grandi, Jerônimo G. |
3DUI '17: "Collaborative Manipulation ..."
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. @InProceedings{3DUI17p264, author = {Jerônimo G. Grandi and Iago Berndt and Henrique G. Debarba and Luciana Nedel and Anderson Maciel}, title = {Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {264--265}, doi = {}, year = {2017}, } Video |
|
Gribonval, Remi |
3DUI '17: "Spatial and Rotation Invariant ..."
Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation
Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems that fully unlock the potential of natural user interfaces. @InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
|
Grubert, Jens |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Grünvogel, Stefan M. |
3DUI '17: "The Effectiveness of Changing ..."
The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion
Oliver Basting, Arnulph Fuhrmann, and Stefan M. Grünvogel (TH Köln, Germany) This paper investigates how changing the field of view (FOV) of a head-mounted display (HMD) affects the intensity of perceived vection in a virtual environment (VE). For this purpose, a study was carried out in which the participants were situated in a vection-evoking VE using an HMD. During the experiment, the VE was presented with different FOVs, and the felt intensity of vection was measured. The results indicate that a decrease of the FOV invokes a decrease in the intensity of perceived vection. @InProceedings{3DUI17p225, author = {Oliver Basting and Arnulph Fuhrmann and Stefan M. Grünvogel}, title = {The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {225--226}, doi = {}, year = {2017}, } |
|
Guitton, Pascal |
3DUI '17: "Pano: Design and Evaluation ..."
Pano: Design and Evaluation of a 360° Through-the-Lens Technique
Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique which reduces this time by increasing the user’s “natural” field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the Virtual Environment (VE) that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks in order to know if the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano. @InProceedings{3DUI17p2, author = {Damien Clergeaud and Pascal Guitton}, title = {Pano: Design and Evaluation of a 360° Through-the-Lens Technique}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {2--11}, doi = {}, year = {2017}, } |
|
Guo, Rongkai |
3DUI '17: "Augmented Reality Exhibits ..."
Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest
Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. @InProceedings{3DUI17p253, author = {Rongkai Guo and Ryan P. McMahan and Benjamin Weyers}, title = {Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {253--253}, doi = {}, year = {2017}, } |
|
Gutenko, Ievgeniia |
3DUI '17: "Angle and Pressure-Based Volumetric ..."
Angle and Pressure-Based Volumetric Picking on Touchscreen Devices
Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains, such as medicine, engineering, and physics. When a user clicks a 2D point on the screen, a ray is cast through the volume at an angle perpendicular to the screen. The problem lies in determining the intended location of picking, that is, the end point of the ray. We introduce picking for 3D volumetric data that utilizes the pressure and angle of a digital stylus pen on a touchscreen device. We map both pressure and angle of the stylus to the depth of the selector widget to aid target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes. @InProceedings{3DUI17p235, author = {Ievgeniia Gutenko and Seyedkoosha Mirhosseini and Arie E. Kaufman}, title = {Angle and Pressure-Based Volumetric Picking on Touchscreen Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {235--236}, doi = {}, year = {2017}, } |
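As an editorial illustration of the pressure/angle-to-depth mapping described in this entry, the sketch below shows one plausible form such a mapping could take; the function name, the linear blend, and its weights are assumptions for illustration, not the authors' implementation.

```python
def picking_depth(pressure, tilt_deg, max_depth,
                  pressure_weight=0.7, tilt_weight=0.3):
    """Map stylus pressure (0..1) and tilt angle (0..90 degrees) to a
    depth along the picking ray. The linear blend and its weights are
    assumed, not taken from the paper."""
    p = min(max(pressure, 0.0), 1.0)            # clamp pressure to [0, 1]
    t = min(max(tilt_deg, 0.0), 90.0) / 90.0    # normalize tilt to [0, 1]
    return (pressure_weight * p + tilt_weight * t) * max_depth

# The selector widget would then sit at:
#   ray_origin + picking_depth(...) * ray_direction
```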
|
Hachet, Martin |
3DUI '17: "Towards a Hybrid Space Combining ..."
Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality
Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained first feedback from informal interviews. @InProceedings{3DUI17p195, author = {Joan Sol Roo and Martin Hachet}, title = {Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {195--198}, doi = {}, year = {2017}, } Video Info |
|
Han, Dustin T. |
3DUI '17: "Redirected Reach in Virtual ..."
Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics
Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore design considerations that allow physical object manipulation, using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user's virtual hand based on the positional difference of the virtual and physical objects. @InProceedings{3DUI17p245, author = {Mohamed Suhail and Shyam Prathish Sargunam and Dustin T. Han and Eric D. Ragan}, title = {Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {245--246}, doi = {}, year = {2017}, } |
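To make the redirected-reach offset concrete, here is a minimal sketch under the assumption that the hand offset is blended in by distance to the prop; redirected_hand, reach_radius, and the linear blend are illustrative guesses rather than the paper's method.

```python
import numpy as np

def redirected_hand(physical_hand, physical_prop, virtual_target,
                    reach_radius=0.6):
    """Offset the virtual hand so it lands on the virtual object exactly
    when the physical hand touches the physical prop. The offset is the
    positional difference of the virtual and physical objects; the
    distance-based blend is an assumed interpolation."""
    physical_hand = np.asarray(physical_hand, dtype=float)
    physical_prop = np.asarray(physical_prop, dtype=float)
    offset = np.asarray(virtual_target, dtype=float) - physical_prop
    d = np.linalg.norm(physical_hand - physical_prop)
    blend = max(0.0, 1.0 - d / reach_radius)    # 0 far from the prop, 1 at the prop
    return physical_hand + blend * offset
```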
|
Handosa, Mohamed |
3DUI '17: "Painting with Light: Gesture ..."
Painting with Light: Gesture Based Light Control in Architectural Settings
Mohamed Handosa, Denis Gračanin, Hicham G. Elmongui, and Andrew Ciambrone (Virginia Tech, USA; Alexandria University, Egypt) Lighting can play an essential role in supporting user tasks as well as creating an ambiance. Although users may feel excited about the supported functionality when a complex indoor lighting system is first deployed, the lack of a convenient interface may prevent them from taking full advantage of the system. We propose a system and a 3D interaction technique for controlling indoor lights. The system is extensible and allows multiple users to control lights using either gestures or a GUI. Using a single Kinect sensor, the user can control lights from different positions in the room, standing or sitting, within the tracking range of the sensor. The selection and manipulation accuracy of the proposed technique, together with its ease of use compared to alternatives, makes it a promising lighting control technique. @InProceedings{3DUI17p249, author = {Mohamed Handosa and Denis Gračanin and Hicham G. Elmongui and Andrew Ciambrone}, title = {Painting with Light: Gesture Based Light Control in Architectural Settings}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {249--250}, doi = {}, year = {2017}, } Video |
|
Harrell, Nathanael |
3DUI '17: "Augmented Reality Digital ..."
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling, as well as particle parameter manipulation, in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real-world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world that we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video |
|
Hashemian, Abraham M. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Hashiguchi, Satoshi |
3DUI '17: "Analysis of R-V Dynamics Illusion ..."
Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object
Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (rendered in CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects were 500, 750, and 1000 g and only the CG liquid level was changed. As a result, the difference in mass did not influence the weight difference threshold obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. @InProceedings{3DUI17p213, author = {Kana Oshima and Satoshi Hashiguchi and Fumihisa Shibata and Asako Kimura}, title = {Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {213--214}, doi = {}, year = {2017}, } |
|
Heidicker, Paul |
3DUI '17: "Influence of Avatar Appearance ..."
Influence of Avatar Appearance on Presence in Social VR
Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. An open question is how the appearance of those avatars influences communication and interaction. It might make a difference whether the avatar consists of a complete body representation or only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearance in a user study, using estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with a full representation of the avatar body lead to an increased sense of presence. Motion-controlled avatars, as well as avatars with only head and hands visible, produced an increased feeling of co-presence and behavioral interdependence. This is interesting, as it suggests that a complete avatar body is not needed in social VR. @InProceedings{3DUI17p233, author = {Paul Heidicker and Eike Langbehn and Frank Steinicke}, title = {Influence of Avatar Appearance on Presence in Social VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {233--234}, doi = {}, year = {2017}, } |
|
Hernández, José Tiberio |
3DUI '17: "Comparing VR Environments ..."
Comparing VR Environments for Seat Selection in an Opera Theater
José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD, GearVR); and a virtual wand, head tracking, and headsets in a CAVE, two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects select similar seats, but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. @InProceedings{3DUI17p221, author = {José Luis Dorado and Pablo Figueroa and Jean-Rémy Chardonnet and Frédéric Merienne and José Tiberio Hernández}, title = {Comparing VR Environments for Seat Selection in an Opera Theater}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {221--222}, doi = {}, year = {2017}, } |
|
Höllerer, Tobias |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users drew on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing, and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
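For orientation, the Air-Drawing projection on release can be pictured as a head-through-fingertip raycast onto the scene model; the sketch below assumes a hypothetical scene.raycast API and is not the HoloLens implementation.

```python
import numpy as np

def project_annotation_point(head_pos, fingertip, scene, max_dist=10.0):
    """Cast a ray from the head through the fingertip and return the hit
    point on the approximate scene model, or None if nothing is hit.
    'scene.raycast' is a hypothetical stand-in for the platform's mesh
    raycast; head_pos and fingertip are 3D points."""
    head_pos = np.asarray(head_pos, dtype=float)
    direction = np.asarray(fingertip, dtype=float) - head_pos
    direction /= np.linalg.norm(direction)
    hit = scene.raycast(origin=head_pos, direction=direction, max_dist=max_dist)
    return hit.point if hit is not None else None
```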
|
Hu, Yaoping |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categories of the haptic modality. Hence, this paper presents an investigation of MLE's suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the role of tactile cues in sensing surface properties and sets a baseline for using MLE. Task performance under the co-located setting showed some degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations suggest that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. @InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
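For reference, the standard MLE cue-integration model that this entry tests weights each cue estimate by its inverse variance; the sketch below restates that textbook formula (only the function name is ours).

```python
def mle_combine(s1, var1, s2, var2):
    """Textbook maximum-likelihood cue integration: weight each cue
    estimate by its reliability (inverse variance). The combined
    variance is lower than that of either cue alone."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    combined = w1 * s1 + (1.0 - w1) * s2
    combined_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return combined, combined_var
```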
|
Johnson, Tiana |
3DUI '17: "VizSpace: Interaction in the ..."
VizSpace: Interaction in the Positive Parallax Screen Plane
Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. @InProceedings{3DUI17p229, author = {Oyewole Oyekoya and Emily Sassard and Tiana Johnson}, title = {VizSpace: Interaction in the Positive Parallax Screen Plane}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {229--230}, doi = {}, year = {2017}, } |
|
Jorge, Joaquim |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Yet natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques for performing Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. In this respect, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } 3DUI '17: "PRECIOUS! Out-of-Reach Selection ..." PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized, sparse environments and do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
|
Kajimoto, Hiroyuki |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires to manipulate and feel virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine Flexifingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } 3DUI '17: "Interpretation of Navigation ..." Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex on walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive; i.e. must be operated by the user, which leads to the necessity of developing a new active type device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated different interpretations of navigation information on the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, “Follow”, and “Resist”, in which they actively followed, or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on user’s interpretation of the navigational information. 
@InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video 3DUI '17: "COMS-VR: Mobile Virtual Reality ..." COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system that uses a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installing motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Kalkofen, Denis |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Kaufman, Arie E. |
3DUI '17: "Angle and Pressure-Based Volumetric ..."
Angle and Pressure-Based Volumetric Picking on Touchscreen Devices
Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains, such as medicine, engineering, and physics. When a user clicks a 2D point on the screen, a ray is cast through the volume at an angle perpendicular to the screen. The problem lies in determining the intended location of picking, that is, the end point of the ray. We introduce picking for 3D volumetric data that utilizes the pressure and angle of a digital stylus pen on a touchscreen device. We map both pressure and angle of the stylus to the depth of the selector widget to aid target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes. @InProceedings{3DUI17p235, author = {Ievgeniia Gutenko and Seyedkoosha Mirhosseini and Arie E. Kaufman}, title = {Angle and Pressure-Based Volumetric Picking on Touchscreen Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {235--236}, doi = {}, year = {2017}, } |
|
Kaufmann, Hannes |
3DUI '17: "Towards Efficient Spatial ..."
Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments
Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) The space available for a virtual reality experience is often strictly limited, restricting the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively in a dynamic, scalable, and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the properties of the layout used on human spatial perception in a physically impossible spatial arrangement. Our first reported study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry or asymmetry of the path used. In addition, in the second study, we explore the effect of path smoothing by substituting smooth curves for the right-angled corridors. Our studies show that smooth curved corridors are more beneficial for spatial compression than the conventional right-angled approach. @InProceedings{3DUI17p12, author = {Khrystyna Vasylevska and Hannes Kaufmann}, title = {Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {12--21}, doi = {}, year = {2017}, } |
|
Keefe, Daniel F. |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
|
Kim, Kangsoo |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject's sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. Compared to those who were not primed, the primed subjects reported being significantly more excited and alert, and had significantly higher scores on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Kim, Kyungyoon |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
|
Kimura, Asako |
3DUI '17: "Analysis of R-V Dynamics Illusion ..."
Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object
Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (rendered in CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects were 500, 750, and 1000 g and only the CG liquid level was changed. As a result, the difference in mass did not influence the weight difference threshold obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. @InProceedings{3DUI17p213, author = {Kana Oshima and Satoshi Hashiguchi and Fumihisa Shibata and Asako Kimura}, title = {Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {213--214}, doi = {}, year = {2017}, } |
|
Kitson, Alexandra |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Kodama, Ryo |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system that uses a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installing motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Koge, Masahiro |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system that uses a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installing motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Kon, Yuki |
3DUI '17: "Interpretation of Navigation ..."
Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking
Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which necessitates a new active-type device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of navigation information change the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user's interpretation of the navigational information. @InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video |
|
Kondur, Navyaram |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
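The cell-based mapping described in this entry can be pictured as tiling the virtual ground plane with room-sized cells; this minimal sketch of that bookkeeping omits the per-cell reorientation the techniques apply, and all names are illustrative.

```python
def virtual_to_physical(virtual_xz, cell_size):
    """Map a virtual ground position to (cell index, in-room position)
    under cell-based redirection, where every virtual cell has the same
    footprint as the physical tracking space. The per-cell rotation
    applied by Bookshelf and Bird during redirection is omitted here."""
    vx, vz = virtual_xz
    cell = (int(vx // cell_size), int(vz // cell_size))   # which virtual cell
    local = (vx % cell_size, vz % cell_size)              # position inside the room
    return cell, local
```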
|
Kopper, Regis |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real-world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study involving a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } 3DUI '17: "Design and Preliminary Evaluation ..." Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). From the premise that simultaneous control over navigation and manipulation by the user can make interaction complex, this technique places two users in asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies the separation-of-degrees-of-freedom method between the two viewpoints to make the manipulation easier. The technique is evaluated through a user study to test its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control along with precise orientation control, the technique performs with a lower collisions-to-time ratio. @InProceedings{3DUI17p203, author = {Leonardo Pavanatto Soares and Márcio Sarroglia Pinho and Regis Kopper}, title = {Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {203--204}, doi = {}, year = {2017}, } Video 3DUI '17: "User Experience Evaluation ..."
User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study about usability experience of users in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through archaeometry conventional techniques. Our objective is to evaluate users experiences with interactive archaeometry tools with archaeologist (not a VR expert) and compare results with VR experts (not an archeology expert). Two hypothesis will be tested: a) it’s possible to simulate the virtual world realistically as the real one?; b) if this VR model is passive of exploration, is it possible to create 3DUI analytical tools to help archaeologist to manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users and the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
|
Kruijff, Ernst |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
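The motion cueing interfaces compared by Kitson et al. above all map a leaning posture onto velocity (rate) control with two degrees of freedom. A minimal sketch of such a mapping, with a deadzone against postural sway; all constants are hypothetical and not taken from the study:

```python
import numpy as np

# Hypothetical tuning constants (not from the paper).
DEADZONE_DEG = 2.0    # ignore tiny postural sway
MAX_LEAN_DEG = 15.0   # lean angle mapped to full speed
MAX_SPEED = 3.0       # m/s at full forward lean
MAX_TURN = 45.0       # deg/s at full twist

def rate_control(lean_deg, twist_deg):
    """Map leaning/twisting angles to forward speed and turn rate
    (velocity control, i.e. the posture sets a rate, not a position)."""
    def scale(angle, max_angle):
        if abs(angle) < DEADZONE_DEG:
            return 0.0
        a = np.clip((abs(angle) - DEADZONE_DEG) / (max_angle - DEADZONE_DEG), 0.0, 1.0)
        return np.sign(angle) * a
    forward_speed = scale(lean_deg, MAX_LEAN_DEG) * MAX_SPEED
    turn_rate = scale(twist_deg, MAX_LEAN_DEG) * MAX_TURN
    return forward_speed, turn_rate
```

The paper's design recommendation (avoid velocity-controlled rotation with HMDs) would correspond to replacing the `turn_rate` channel with physical rotation.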
|
Kuhlen, Torsten W. |
3DUI '17: "Efficient Approximate Computation ..."
Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis
Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility - the information about which parts of the scene are visible from a certain location - can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } 3DUI '17: "A Reliable Non-verbal Vocal ..." A Reliable Non-verbal Vocal Input Metaphor for Clicking Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input, has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } 3DUI '17: "Evaluation of Approaching-Strategies ..." 
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied virtual agents provide users with assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, were evaluated by 40 subjects. The results indicate no clear preference for any of the three behaviors. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } |
|
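The BlowClick extension by Zielasko et al. above classifies blowing events with a Gaussian-kernel support vector machine. A rough sketch of that classification stage using scikit-learn; the feature set and the training data here are placeholders, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: per-frame audio feature vectors (hypothetical features, e.g. frame
# energy and spectral shape); y: 1 for blowing, 0 for other sounds.
# In practice these would be labeled offline from recorded sessions.
X = np.random.rand(200, 4)           # placeholder training data
y = np.random.randint(0, 2, 200)     # placeholder labels

# Gaussian (RBF) kernel SVM, the classifier the paper reports performed best.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

def is_blow(frame_features):
    """Classify one audio frame; fire the click trigger only on a positive."""
    return clf.predict(np.asarray(frame_features).reshape(1, -1))[0] == 1
```

Gating the trigger on the classifier's decision, rather than on a raw signal threshold, is what suppresses the false positives of the original detector.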
Kunz, Andreas |
3DUI '17: "Optimized Graph Extraction ..."
Optimized Graph Extraction and Locomotion Prediction for Redirected Walking
Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment - mostly in the form of a skeleton graph representing the virtual environment - and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them with a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph are done offline, while only parts directly linked to the user's behavior, such as the prediction, are done online. The prediction uses a target-based long-term approach; the targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allow prediction techniques previously only demonstrated in studies to be applied to large-scale virtual environments. @InProceedings{3DUI17p120, author = {Markus Zank and Andreas Kunz}, title = {Optimized Graph Extraction and Locomotion Prediction for Redirected Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {120--129}, doi = {}, year = {2017}, } 3DUI '17: "Multi-phase Wall Warner System ..." Multi-phase Wall Warner System for Real Walking in Virtual Environments Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means to explore virtual environments that are even larger than the available physical space. Avoiding collisions with the physical walls necessitates a system to warn users. This paper describes the design and implementation of a multi-phase warning system as a solution to this safety necessity. The first phase is a velocity-based warning based on time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. @InProceedings{3DUI17p223, author = {Markus Zank and Colin Yao and Andreas Kunz}, title = {Multi-phase Wall Warner System for Real Walking in Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {223--224}, doi = {}, year = {2017}, } |
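The two phases of the wall warner by Zank, Yao, and Kunz above reduce to a time-to-impact test followed by a hard distance test. A minimal sketch; the thresholds are hypothetical, since the calibrated values are not given in the abstract:

```python
# Hypothetical thresholds; the paper tunes these empirically.
TTI_WARN = 1.5        # s: first phase fires when impact is this close in time
DIST_EMERGENCY = 0.4  # m: second phase fires regardless of speed

def warning_phase(dist_to_wall, speed_towards_wall):
    """Two-phase wall warning: velocity-based time-to-impact first,
    distance-based emergency warning second."""
    if dist_to_wall < DIST_EMERGENCY:
        return "emergency"                    # distance phase, always active
    if speed_towards_wall > 0.0:
        tti = dist_to_wall / speed_towards_wall
        if tti < TTI_WARN:
            return "warning"                  # time-to-impact phase
    return "none"
```

The velocity-based phase adapts to fast walkers automatically, while the distance-based phase catches slow drift towards a wall that a pure time-to-impact test would miss.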
|
Kyllonen, Nikki |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
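The shape-matching interface by Kim et al. above relies on dynamic gain for precise 6-DOF input. The abstract does not specify the gain function; a PRISM-style, speed-dependent control-display gain is one plausible reading, sketched below with hypothetical constants:

```python
# Hypothetical PRISM-style mapping; the paper's exact gain function is not given.
MIN_SPEED = 0.02   # m/s: below this, treat hand motion as noise (gain -> 0)
SC_SPEED = 0.20    # m/s: at or above this, 1:1 mapping

def translation_gain(hand_speed):
    """Speed-dependent control-display gain: slow, deliberate motion is
    scaled down for precision; fast motion passes through unchanged."""
    if hand_speed <= MIN_SPEED:
        return 0.0
    if hand_speed >= SC_SPEED:
        return 1.0
    return (hand_speed - MIN_SPEED) / (SC_SPEED - MIN_SPEED)

def apply_gain(delta_pos, hand_speed):
    """Scale a per-frame tracker displacement by the current gain."""
    g = translation_gain(hand_speed)
    return [g * d for d in delta_pos]
```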
|
Lages, Wallace S. |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
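The cell-based redirection underlying Bookshelf and Bird keeps physical and virtual coordinates aligned within each cell. A minimal sketch of that static mapping, assuming a square tracking space of hypothetical size; the redirection applied at cell transitions (the narrative devices themselves) is omitted:

```python
import math

# Hypothetical tracking-space footprint (metres); the virtual cells have
# exactly this size, so the whole cell is walkable.
CELL_W, CELL_D = 4.0, 4.0

def cell_of(virtual_x, virtual_z):
    """Index of the virtual cell containing a virtual-world position."""
    return math.floor(virtual_x / CELL_W), math.floor(virtual_z / CELL_D)

def physical_from_virtual(virtual_x, virtual_z):
    """Within a cell, physical and virtual coordinates stay 1:1, so any
    point of the current cell is reachable by real walking."""
    cx, cz = cell_of(virtual_x, virtual_z)
    return virtual_x - cx * CELL_W, virtual_z - cz * CELL_D
```

Because the 1:1 mapping is never altered inside a cell, all redirection is deferred to the moments when the user crosses from one cell to the next.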
|
Lamberti, Fabrizio |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } 3DUI '17: "T4T: Tangible Interface for ..." T4T: Tangible Interface for Tuning 3D Object Manipulation Tools Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grained manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Langbehn, Eike |
3DUI '17: "Influence of Avatar Appearance ..."
Influence of Avatar Appearance on Presence in Social VR
Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. The question is how the appearance of those avatars influences communication and interaction. It might make a difference whether the avatar consists of a complete body representation or only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearances in a user study. For this, we used estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with full representation of the avatar body led to an increased sense of presence. Motion-controlled avatars as well as avatars which have only head and hands visible produced an increased feeling of co-presence and behavioral interdependence. This is interesting, since it suggests that we do not need a complete avatar body in social VR. @InProceedings{3DUI17p233, author = {Paul Heidicker and Eike Langbehn and Frank Steinicke}, title = {Influence of Avatar Appearance on Presence in Social VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {233--234}, doi = {}, year = {2017}, } |
|
Lange, Markus |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery or outside the field of view (FOV) requires a visual search through rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless, wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } |
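Ariza et al.'s head-worn devices above cue the direction and remaining angle of the turn towards an off-screen target. The abstract does not give the actuation scheme; a simple proportional mapping from signed yaw error to left/right vibration intensity might look like this (side convention and scaling are assumptions):

```python
def guidance_cue(head_yaw_deg, target_yaw_deg):
    """Map the angular offset to a target into a vibrotactile cue.

    Returns (side, intensity in 0..1). Intensity grows with the angle
    still to be turned, so the cue fades as the user faces the target.
    """
    # Wrap the error into [-180, 180) degrees.
    err = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    side = "left" if err < 0 else "right"   # assumed actuator convention
    intensity = min(abs(err) / 180.0, 1.0)
    return side, intensity
```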
|
Lank, Edward |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides performance comparable to the smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
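Watchcasting's core mapping, per the abstract above, assigns the cursor's z-coordinate from forearm rotation. A minimal sketch of that one-dimensional mapping, with a hypothetical calibration range (the paper's actual range and smoothing are not given in the abstract):

```python
# Hypothetical calibration range of comfortable forearm rotation (degrees)
# and the depth range it should cover in the scene (metres).
ROLL_MIN, ROLL_MAX = -60.0, 60.0
Z_MIN, Z_MAX = 0.0, 2.0

def z_from_forearm_roll(roll_deg):
    """Watchcasting-style depth control: forearm rotation, read from the
    smartwatch's IMU, linearly sets the cursor's z-coordinate."""
    clamped = min(max(roll_deg, ROLL_MIN), ROLL_MAX)
    t = (clamped - ROLL_MIN) / (ROLL_MAX - ROLL_MIN)
    return Z_MIN + t * (Z_MAX - Z_MIN)
```

The x/y cursor position would come from pointing, leaving forearm roll as a dedicated, decoupled depth channel.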
|
Lawrence, Rebekah L. |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
|
Lécuyer, Anatole |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } 3DUI '17: "Spatial and Rotation Invariant ..." Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios, even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems and fully unlock the potential of natural user interfaces. 
@InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video 3DUI '17: "Increasing Optical Tracking ..." Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace while providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
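The controlled-camera tracking of Cortes et al. above requires the pan-tilt head to keep the tracked markers in view as the user moves. One simple way to do this is a proportional controller on the marker's pixel offset from the image centre; the gain, resolution, and axis signs below are assumptions, not the paper's values:

```python
# Proportional controller keeping the tracked marker centred in the image.
K_P = 0.08              # deg of actuator motion per pixel of error (hypothetical)
IMG_W, IMG_H = 640, 480 # camera resolution (hypothetical)

def pan_tilt_update(marker_px_x, marker_px_y):
    """Return (d_pan, d_tilt) in degrees from the marker's pixel position.

    Driving the actuators by a fraction of the error each frame re-centres
    the marker smoothly instead of overshooting.
    """
    ex = marker_px_x - IMG_W / 2.0
    ey = marker_px_y - IMG_H / 2.0
    # Signs depend on how the pan-tilt axes are mounted relative to the sensor.
    return -K_P * ex, K_P * ey
```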
|
Lee, Gun |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
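Radial Pursuit, from the entry above, selects cluttered objects via smooth pursuit. The abstract does not give the exact matching criterion; the standard pursuit-based approach correlates the recent gaze trajectory with each candidate object's on-screen trajectory, sketched here:

```python
import numpy as np

def pursuit_target(gaze_xy, object_tracks, threshold=0.8):
    """Pick the moving object whose on-screen path correlates best with gaze.

    gaze_xy: (N, 2) array of recent gaze samples.
    object_tracks: dict name -> (N, 2) array of on-screen positions for
    objects that are deliberately kept moving (a constant track has no
    defined correlation). threshold is a hypothetical acceptance level.
    Returns the best-matching object name, or None below the threshold.
    """
    best, best_r = None, threshold
    for name, track in object_tracks.items():
        # Correlate x and y components separately and average them.
        rx = np.corrcoef(gaze_xy[:, 0], track[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], track[:, 1])[0, 1]
        r = (rx + ry) / 2.0
        if r > best_r:
            best, best_r = name, r
    return best
```

Because the eye can only follow a moving target this smoothly when it is actually pursuing it, a high correlation is a reliable selection signal even among overlapping objects.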
|
Lee, Myungho |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher scores on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Le Gouis, Benoît |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Léziart, Pierre-Alexandre |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Lindeman, Robert W. |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info 3DUI '17: "Trigger Walking: A Low-Fatigue ..." Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when the user is teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of the two spatial controllers that accompany commercial headsets to walk by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single controller, the average of both controllers, or that of the head to determine the direction of walking, and speed can be controlled by changing the angle of the controller to the frontal plane. @InProceedings{3DUI17p227, author = {Bhuvaneswari Sarupuri and Miriam Luque Chipana and Robert W. Lindeman}, title = {Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {227--228}, doi = {}, year = {2017}, } 3DUI '17: "Jedi ForceExtension: Telekinesis ..." Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using a slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push and force pull behaviours as examples of the general concept of force extension. 
We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging. @InProceedings{3DUI17p239, author = {Rory M. S. Clifford and Nikita Mae B. Tuanquin and Robert W. Lindeman}, title = {Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {239--240}, doi = {}, year = {2017}, } |
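Trigger Walking, from the Lindeman entries above, advances the viewpoint by one virtual step per trigger pull, with the walking direction taken from a user-chosen orientation source. A minimal sketch; the step length and the yaw convention are hypothetical:

```python
import math

STEP = 0.7  # metres per virtual step (hypothetical value)

def on_trigger_pull(pos_xz, yaw_sources, mode="head"):
    """Take one virtual step when a controller trigger is pulled.

    yaw_sources: dict with 'left', 'right', and 'head' yaw angles (degrees).
    mode: 'single' uses one controller, 'average' both, 'head' the HMD,
    mirroring the three direction options described in the abstract.
    """
    if mode == "average":
        # Naive average of the two controller yaws; angle wrap-around
        # handling is omitted in this sketch.
        yaw = (yaw_sources["left"] + yaw_sources["right"]) / 2.0
    elif mode == "single":
        yaw = yaw_sources["right"]
    else:
        yaw = yaw_sources["head"]
    rad = math.radians(yaw)  # yaw assumed measured clockwise from +z
    return pos_xz[0] + STEP * math.sin(rad), pos_xz[1] + STEP * math.cos(rad)
```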
|
Liu, Xiaohan |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. This system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can assign training tasks in which trainees or students learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, aided by haptic feedback. Training records are stored in a database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Loffredo, Donald |
3DUI '17: "A Robust and Intuitive 3D ..."
A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments
Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented that allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions. @InProceedings{3DUI17p199, author = {Jace Regenbrecht and Alireza Tavakkoli and Donald Loffredo}, title = {A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {199--200}, doi = {}, year = {2017}, } |
|
Lopes, Roseli |
3DUI '17: "User Experience Evaluation ..."
User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment
Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of user experience in a cyber-archaeological environment. We researched how users explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and to compare the results with those of VR experts (not archaeology experts). Two hypotheses were tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is amenable to exploration, is it possible to create 3DUI analytical tools that help archaeologists manipulate archaeometry tools? To explore these hypotheses, we conducted experimental tests with ten users; the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
|
Louison, Céphise |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VEs), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is crucial to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Luan, Bo |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing, and Beautified annotations are drawn faster than Non-Beautified ones; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
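The two methods compared by Chang et al. above differ mainly in when gesture input is projected onto the scene model. A schematic sketch of that difference; `raycast` stands in for the AR platform's scene-mesh raycast and is an assumed helper, not a HoloLens API:

```python
# `raycast(origin, direction)` is assumed to return the hit point on the
# reconstructed world mesh, or None on a miss.

def surface_draw_point(head_pos, gaze_dir, raycast):
    """Surface-Drawing: the cursor is the ray's hit point on the world
    model, so each stroke point lands directly on a real-world surface."""
    return raycast(head_pos, gaze_dir)

def air_draw_stroke(fingertip_points, head_pos, raycast):
    """Air-Drawing: points are captured freely at the fingertip and only
    projected onto the world model when the stroke is released."""
    projected = []
    for p in fingertip_points:
        direction = [p[i] - head_pos[i] for i in range(3)]
        hit = raycast(head_pos, direction)
        projected.append(hit if hit is not None else p)
    return projected
```

Projecting continuously (Surface-Drawing) keeps the annotation and the target surface at the same depth, which is one plausible reason for the accuracy advantage the study reports.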
|
Ludewig, Paula M. |
3DUI '17: "Anatomical 2D/3D Shape-Matching ..."
Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging
Kyungyoon Kim, Rebekah L. Lawrence, Nikki Kyllonen, Paula M. Ludewig, Arin M. Ellingson, and Daniel F. Keefe (University of Minnesota, USA) We introduce a virtual reality 3D user interface (3DUI) for anatomical 2D/3D shape-matching, a challenging task that is part of medical imaging processes required by biomechanics researchers. Manual shape-matching can be thought of as a nuanced version of classic 6 degree-of-freedom docking tasks studied in the 3DUI research community. Our solution combines dynamic gain for precise translation and rotation from 6 degree-of-freedom tracker input, constraints based on both 2D and 3D data, and immersive visualization and visual feedback. @InProceedings{3DUI17p243, author = {Kyungyoon Kim and Rebekah L. Lawrence and Nikki Kyllonen and Paula M. Ludewig and Arin M. Ellingson and Daniel F. Keefe}, title = {Anatomical 2D/3D Shape-Matching in Virtual Reality: A User Interface for Quantifying Joint Kinematics with Radiographic Imaging}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {243--244}, doi = {}, year = {2017}, } Video |
|
Maciel, Anderson |
3DUI '17: "Collaborative Manipulation ..."
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. @InProceedings{3DUI17p264, author = {Jerônimo G. Grandi and Iago Berndt and Henrique G. Debarba and Luciana Nedel and Anderson Maciel}, title = {Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {264--265}, doi = {}, year = {2017}, } Video |
|
MacKenzie, I. Scott |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device, one entirely software based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare the results with those from a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
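Young et al.'s controller above derives a 3D cursor from arm movement sensed by inertial measurement units. The abstract does not describe the exact model; one plausible reading is a two-segment forward-kinematics chain from the shoulder, sketched here with hypothetical segment lengths:

```python
import numpy as np

# Hypothetical segment lengths (metres); calibrated per user in practice.
UPPER_ARM, FOREARM = 0.30, 0.28

def cursor_position(shoulder_pos, R_upper, R_forearm):
    """Derive a 3D cursor from arm posture.

    R_upper, R_forearm: 3x3 world-frame orientations from the two IMUs.
    Each segment extends along its local -z axis (an assumed convention);
    chaining them from the shoulder outward yields the wrist, used as cursor.
    """
    elbow = shoulder_pos + R_upper @ np.array([0.0, 0.0, -UPPER_ARM])
    wrist = elbow + R_forearm @ np.array([0.0, 0.0, -FOREARM])
    return wrist
```

Because only orientations are integrated, the cursor needs no external cameras or base stations, which is the device's stated advantage over fixed-frame tracking.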
|
Madathil, Kapil Chalil |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experimental design, we investigated a large-screen immersive display (LSID) versus a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner than with a large-screen immersive display, due to the similarity between the interactions afforded in the virtual task and those of the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of pre- and post-test cognition questionnaires, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure the extent to which transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Marchal, Maud |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Marchand, Eric |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace while providing 6DoF tracking data. We designed a proof-of-concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
|
McMahan, Ryan P. |
3DUI '17: "Augmented Reality Exhibits ..."
Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest
Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. @InProceedings{3DUI17p253, author = {Rongkai Guo and Ryan P. McMahan and Benjamin Weyers}, title = {Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {253--253}, doi = {}, year = {2017}, } |
|
Medeiros, Daniel |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. However, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. In this regard, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } 3DUI '17: "PRECIOUS! Out-of-Reach Selection ..." PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs an iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
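PRECIOUS, described above, iteratively refines an out-of-reach selection: a cone-cast gathers candidates, the viewpoint is moved closer, and the process repeats until the selection is unambiguous. A minimal sketch; the cone angle, the step fraction, and moving towards the candidates' centroid are all assumptions, not the paper's parameters:

```python
import numpy as np

def cone_select(eye, direction, objects, half_angle_deg=15.0):
    """Names of objects whose centers fall inside a selection cone."""
    d = direction / np.linalg.norm(direction)
    cos_limit = np.cos(np.radians(half_angle_deg))
    hits = []
    for name, center in objects.items():
        v = center - eye
        if np.dot(v, d) / np.linalg.norm(v) >= cos_limit:
            hits.append(name)
    return hits

def precious_step(eye, direction, objects, step_fraction=0.5):
    """One refinement step: cast the cone, then move the viewpoint towards
    the remaining candidates so the next cast can discriminate better.
    Returns (candidates, new_eye); iteration stops at <= 1 candidate."""
    hits = cone_select(eye, direction, objects)
    if len(hits) <= 1:
        return hits, eye                       # unique (or empty) result
    centroid = np.mean([objects[h] for h in hits], axis=0)
    return hits, eye + step_fraction * (centroid - eye)
```

Each step shrinks the apparent angular spread of the candidates, so a fixed cone angle becomes progressively more selective without requiring precise pointing from afar.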
|
Mendes, Daniel |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Yet natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, head-mounted displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques for performing Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, the results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. Here, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } 3DUI '17: "PRECIOUS! Out-of-Reach Selection ..." PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects beyond the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches only work for room-sized, sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
|
Merienne, Frédéric |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations suggest that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. @InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } 3DUI '17: "Comparing VR Environments ..." Comparing VR Environments for Seat Selection in an Opera Theater José Luis Dorado, Pablo Figueroa, Jean-Rémy Chardonnet, Frédéric Merienne, and José Tiberio Hernández (University of Andes, Colombia; LE2I, France) This study presents a comparison of the influence of different VR environments on the task of selecting a preferred seat in an opera theater. We used gaze-based raycasting and headsets in a low-cost head-mounted display (HMD, a GearVR), and a virtual wand, head tracking, and headsets in a CAVE – two somewhat opposing technologies in the spectrum of current VR systems. Visual rendering and the selection technique depend on the capabilities of each environment, whereas the sound is approximated in both environments. Results show that subjects can select similar seats, but their decisions differ between the two environments. The results obtained can be useful in guiding the development of future VR applications. @InProceedings{3DUI17p221, author = {José Luis Dorado and Pablo Figueroa and Jean-Rémy Chardonnet and Frédéric Merienne and José Tiberio Hernández}, title = {Comparing VR Environments for Seat Selection in an Opera Theater}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {221--222}, doi = {}, year = {2017}, } 3DUI '17: "Effect of Footstep Vibrations ..." 
Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and decrease cyber-sickness to some extent. @InProceedings{3DUI17p241, author = {Jérémy Plouzeau and José Luis Dorado and Damien Paillot and Frédéric Merienne}, title = {Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {241--242}, doi = {}, year = {2017}, } |
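For reference, the maximum likelihood estimation model tested in the Erfanian et al. abstract above is usually stated as reliability-weighted cue averaging; in the standard formulation (not specific to this paper), the combined estimate from force (f) and vibrotactile (v) cues is

    \hat{S}_{fv} = w_f\,\hat{S}_f + w_v\,\hat{S}_v,
    \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_f^2 + 1/\sigma_v^2},
    \qquad \sigma_{fv}^2 = \frac{\sigma_f^2\,\sigma_v^2}{\sigma_f^2 + \sigma_v^2}

MLE thus predicts a combined variance lower than either single-cue variance, which is the signature the co-located condition failed to show conclusively.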
|
Mestre, Daniel R. |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is critical to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Mirhosseini, Seyedkoosha |
3DUI '17: "Angle and Pressure-Based Volumetric ..."
Angle and Pressure-Based Volumetric Picking on Touchscreen Devices
Ievgeniia Gutenko, Seyedkoosha Mirhosseini, and Arie E. Kaufman (Stony Brook University, USA) Target picking in 3D volumetric data is a common interaction problem in a variety of domains such as medicine, engineering, and physics. When a user clicks on a 2D point on the screen rendering, a ray is cast through the volume at an angle perpendicular to the screen. The problem lies in determining the intended location of picking, that is, the end point of the ray. We introduce picking for 3D volumetric data by utilizing the pressure and angle of the digital stylus pen on a touchscreen device. We map both pressure and angle of the stylus to the depth of the selector widget to help target acquisition. We evaluate several methods of selection: single-finger selection, stylus with pressure but without an angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of digital stylus picking methods and a relation between the methods and target sizes. @InProceedings{3DUI17p235, author = {Ievgeniia Gutenko and Seyedkoosha Mirhosseini and Arie E. Kaufman}, title = {Angle and Pressure-Based Volumetric Picking on Touchscreen Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {235--236}, doi = {}, year = {2017}, } |
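A minimal sketch of the depth mapping described above, assuming normalized pressure in [0, 1] and a tilt angle in degrees; the blend weights and ranges are illustrative, not the authors' calibration:

    import numpy as np

    def picking_depth(pressure, tilt_deg, max_depth,
                      pressure_weight=0.7, max_tilt_deg=60.0):
        # Blend normalized pressure and tilt into a depth along the pick ray
        # (assumed linear weighting; the paper's exact mapping may differ).
        tilt = np.clip(tilt_deg, 0.0, max_tilt_deg) / max_tilt_deg
        t = pressure_weight * pressure + (1.0 - pressure_weight) * tilt
        return t * max_depth

    def pick_point(origin, direction, pressure, tilt_deg, max_depth):
        # The ray starts at the touch point and runs into the volume.
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        depth = picking_depth(pressure, tilt_deg, max_depth)
        return np.asarray(origin, dtype=float) + depth * d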
|
Mohr, Peter |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head-tracked user perspective rendering, as well as to fixed-point-of-view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
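As a sketch of the lightweight tracking idea (sparse optical flow on the front camera as a cheap stand-in for full face tracking), using standard OpenCV calls; the thresholds and the median aggregation are assumptions:

    import cv2
    import numpy as np

    def estimate_user_shift(prev_gray, curr_gray):
        # Track corner features between consecutive front-camera frames.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=8)
        if pts is None:
            return np.zeros(2)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        good = status.ravel() == 1
        if not good.any():
            return np.zeros(2)
        # Median flow approximates the user's apparent head shift in pixels,
        # which can steer the user-perspective rendering between face-tracking updates.
        return np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)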
|
Montuschi, Paolo |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } 3DUI '17: "T4T: Tangible Interface for ..." T4T: Tangible Interface for Tuning 3D Object Manipulation Tools Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grain manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Nabiyouni, Mahdi |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
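The core cell mapping is simple enough to sketch: because every virtual cell has the tracking space's footprint, a tracked position maps into the current cell by a per-cell offset. The function name and the 2D cell grid below are illustrative assumptions:

    def to_virtual(physical_pos, cell_index, cell_size):
        # physical_pos: (x, y, z) in tracking-space coordinates.
        # cell_index:   (i, j) of the virtual cell the user currently occupies.
        # Cells tile the virtual world at exactly the tracking-space size.
        x, y, z = physical_pos
        i, j = cell_index
        return (i * cell_size[0] + x, y, j * cell_size[1] + z)

A narrative transition then only has to update cell_index and re-align the physical space with the new cell; the 1:1 walking mapping inside a cell is never broken.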
|
Nagamura, Mario |
3DUI '17: "Batmen Beyond: Natural 3D ..."
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects by way of natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - by unconstrained natural interactions, leveraging users’ proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. @InProceedings{3DUI17p258, author = {André Montes Rodrigues and Olavo Belloc and Eduardo Zilles Borba and Mario Nagamura and Marcelo Knorich Zuffo}, title = {Batmen Beyond: Natural 3D Manipulation with the BatWand}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {258--259}, doi = {}, year = {2017}, } |
|
Nakamura, Takuto |
3DUI '17: "Interpretation of Navigation ..."
Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking
Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which necessitates the development of a new active-type device for use as part of a navigation system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of navigation information modulate the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. @InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video |
|
Nankivil, Derek |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } |
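A sketch of the rendering relationship described above: the virtual specimen is anchored in the tracked box's frame, so its world pose is the box pose composed with a local offset, viewed from the user's tracked head. The 4x4 homogeneous matrices and their names are assumptions for illustration:

    import numpy as np

    def content_world_matrix(world_T_box, box_T_content):
        # The specimen rides along with the physical box as it is handled.
        return world_T_box @ box_T_content

    def content_mvp(world_T_box, box_T_content, view_T_world, projection):
        # The view matrix comes from head tracking in the world-fixed display,
        # so the content appears to sit inside the clear physical box.
        return projection @ view_T_world @ content_world_matrix(
            world_T_box, box_T_content)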
|
Nedel, Luciana |
3DUI '17: "Collaborative Manipulation ..."
Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices
Jerônimo G. Grandi, Iago Berndt, Henrique G. Debarba, Luciana Nedel, and Anderson Maciel (Federal University of Rio Grande do Sul, Brazil; Artanim Foundation, Switzerland) Interaction in augmented reality environments may be very complex, depending on the degrees of freedom (DOFs) required for the task. In this work we present a 3D user interface for collaborative manipulation of virtual objects in augmented reality (AR) environments. It maps position -- acquired with a camera and fiducial markers -- and touchscreen input of a handheld device into gestures to select, move, rotate and scale virtual objects. As these transformations require the control of multiple DOFs, collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed directly on the mobile device as an overlay of the camera capture, providing an individual point of view of the AR environment to each user. @InProceedings{3DUI17p264, author = {Jerônimo G. Grandi and Iago Berndt and Henrique G. Debarba and Luciana Nedel and Anderson Maciel}, title = {Collaborative Manipulation of 3D Virtual Objects in Augmented Reality Scenarios using Mobile Devices}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {264--265}, doi = {}, year = {2017}, } Video |
|
Neha, Neha |
3DUI '17: "A Reliable Non-verbal Vocal ..."
A Reliable Non-verbal Vocal Input Metaphor for Clicking
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } |
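The classifier choice above maps directly onto a standard toolkit; a minimal sketch with scikit-learn, where the two audio features and the frame handling are hypothetical stand-ins for the paper's actual feature set:

    import numpy as np
    from sklearn.svm import SVC

    def frame_features(frame, rate):
        # Hypothetical features: short-time energy and spectral centroid.
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / rate)
        energy = float(np.mean(frame ** 2))
        centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))
        return [energy, centroid]

    # X: features of labeled microphone frames; y: 1 = blowing, 0 = other sounds.
    clf = SVC(kernel="rbf")  # Gaussian (RBF) kernel, as reported in the paper
    # clf.fit(X, y)
    # blow_detected = bool(clf.predict([frame_features(frame, 44100)])[0])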
|
Nguyen-Vo, Thinh |
3DUI '17: "Moving in a Box: Improving ..."
Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames
Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants’ performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance both in navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. @InProceedings{3DUI17p207, author = {Thinh Nguyen-Vo and Bernhard E. Riecke and Wolfgang Stuerzlinger}, title = {Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {207--208}, doi = {}, year = {2017}, } |
|
Nuernberger, Benjamin |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and that Beautified annotations are drawn faster than Non-Beautified ones; participants also preferred Surface-Drawing and Beautified input. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
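The distinction between the two drawing methods reduces to where each stroke point comes from; a sketch under the assumption of a raycast(origin, direction) helper that queries the reconstructed scene model:

    def stroke_point(head_pos, fingertip_pos, raycast, method="surface"):
        # Surface-Drawing: project the head-to-fingertip ray onto the world
        # model and draw at the hit point on the real surface.
        if method == "surface":
            direction = [f - h for f, h in zip(fingertip_pos, head_pos)]
            return raycast(head_pos, direction)  # None if nothing is hit
        # Air-Drawing: record the fingertip itself; the whole stroke is
        # projected onto the world model only when the gesture is released.
        return tuple(fingertip_pos)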
|
Ortega, Francisco R. |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } 3DUI '17: "Procedural Celestial Rendering ..." Procedural Celestial Rendering for 3D Navigation Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments similarly to natural, color-corrected images from telescopes. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. @InProceedings{3DUI17p211, author = {Alain Galvan and Francisco R. Ortega and Naphtali Rishe}, title = {Procedural Celestial Rendering for 3D Navigation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {211--212}, doi = {}, year = {2017}, } |
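A toy version of the pre-computation step for the celestial rendering: recover a world-space direction for each cubemap texel (one common face convention; engines differ), then seed sparse stars from a spatial hash of the quantized direction. The constants and the hash are illustrative assumptions:

    import numpy as np

    def cubemap_dir(face, u, v):
        # u, v in [-1, 1] on the given cubemap face (convention may vary).
        d = np.array({
            "+x": (1.0, -v, -u), "-x": (-1.0, -v, u),
            "+y": (u, 1.0, v),   "-y": (u, -1.0, -v),
            "+z": (u, -v, 1.0),  "-z": (-u, -v, -1.0),
        }[face])
        return d / np.linalg.norm(d)

    def star_intensity(d, grid=256, density=0.003):
        # Quantize the direction and hash it so stars stay view-consistent.
        ix, iy, iz = (int(c) for c in np.floor((d * 0.5 + 0.5) * grid))
        h = ((ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)) & 0xFFFFFFFF
        return 1.0 if (h % 1000) < int(density * 1000) else 0.0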
|
Oshima, Kana |
3DUI '17: "Analysis of R-V Dynamics Illusion ..."
Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object
Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable CG portion is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects are 500, 750, and 1000 g, and only the CG liquid level is changed. As a result, the difference in mass did not influence the difference threshold of weights when the virtual liquid level was changed. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. @InProceedings{3DUI17p213, author = {Kana Oshima and Satoshi Hashiguchi and Fumihisa Shibata and Asako Kimura}, title = {Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {213--214}, doi = {}, year = {2017}, } |
|
Otmane, Samir |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the most suitable gestures for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
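The finger-to-axis idea lends itself to a small dispatch table; which chord triggers which task below is an illustrative guess, not the authors' final assignment:

    # Finger identification yields named contacts of the dominant hand
    # (index -> X, middle -> Y, ring -> Z, per the abstract above).
    CHORDS = {
        frozenset({"index"}):                   ("translate", "X"),
        frozenset({"middle"}):                  ("translate", "Y"),
        frozenset({"ring"}):                    ("translate", "Z"),
        frozenset({"index", "middle"}):         ("rotate", "Z"),
        frozenset({"index", "middle", "ring"}): ("scale", "uniform"),
    }

    def resolve_chord(touching_fingers):
        # Returns (task, constraint) or None for unmapped chords.
        return CHORDS.get(frozenset(touching_fingers))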
|
Oyekoya, Oyewole |
3DUI '17: "VizSpace: Interaction in the ..."
VizSpace: Interaction in the Positive Parallax Screen Plane
Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. @InProceedings{3DUI17p229, author = {Oyewole Oyekoya and Emily Sassard and Tiana Johnson}, title = {VizSpace: Interaction in the Positive Parallax Screen Plane}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {229--230}, doi = {}, year = {2017}, } |
|
Paillot, Damien |
3DUI '17: "Effect of Footstep Vibrations ..."
Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method
Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and decrease cyber-sickness to some extent. @InProceedings{3DUI17p241, author = {Jérémy Plouzeau and José Luis Dorado and Damien Paillot and Frédéric Merienne}, title = {Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {241--242}, doi = {}, year = {2017}, } |
|
Paravati, Gianluca |
3DUI '17: "HOT: Hold your Own Tools for ..."
HOT: Hold your Own Tools for AR-Based Constructive Art
Giuseppe Attanasio, Alberto Cannavò, Francesca Cibrario, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) Using digital instruments to support artistic expression and creativity is a hot topic. In this work, we focused on the design of a suitable interface for Augmented Reality-based constructive art on handheld devices. Issues to be faced encompassed how to give artists a sense of spatial dimensions, how to provide them with different tools for realizing artworks, and how far to move away from ``the real'' towards ``the virtual''. Through a touch-capable device, such as a smartphone or a tablet, we offer artists a clean workspace, where they can decide when to introduce artworks and tools. In fact, besides exploiting the multi-touch functionality and the gyroscopes/accelerometers to manipulate artworks in six degrees of freedom (6DOF), the proposed solution exploits a set of printed markers that can be brought into the camera's field of view to make specific virtual tools appear in the augmented scene. With such tools, artists can decide to control, e.g., manipulation speed, scale factor, scene parameters, etc., thus complementing functionalities that can be accessed via the device's screen. @InProceedings{3DUI17p256, author = {Giuseppe Attanasio and Alberto Cannavò and Francesca Cibrario and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {HOT: Hold your Own Tools for AR-Based Constructive Art}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {256--257}, doi = {}, year = {2017}, } 3DUI '17: "T4T: Tangible Interface for ..." T4T: Tangible Interface for Tuning 3D Object Manipulation Tools Alberto Cannavò, Fabio Cermelli, Vincenzo Chiaramida, Giovanni Ciccone, Fabrizio Lamberti, Paolo Montuschi, and Gianluca Paravati (Politecnico di Torino, Italy) A 3D User Interface for manipulating virtual objects in Augmented Reality scenarios on handheld devices is presented. The proposed solution takes advantage of two interaction techniques. The former (named ``cursor mode'') exploits a cursor whose position and movement are bound to the view of the device; the cursor allows the user to select objects and to perform coarse-grain manipulations by moving the device. The latter (referred to as ``tuning mode'') uses the physical affordances of a tangible interface to provide the user with the possibility to refine objects in all their aspects (position, rotation, scale, color, and so forth) with fine-grained control. @InProceedings{3DUI17p266, author = {Alberto Cannavò and Fabio Cermelli and Vincenzo Chiaramida and Giovanni Ciccone and Fabrizio Lamberti and Paolo Montuschi and Gianluca Paravati}, title = {T4T: Tangible Interface for Tuning 3D Object Manipulation Tools}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {266--267}, doi = {}, year = {2017}, } |
|
Peer, Alex |
3DUI '17: "Evaluating Perceived Distance ..."
Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays
Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes called distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and whether contemporary displays induce distance misperceptions. @InProceedings{3DUI17p83, author = {Alex Peer and Kevin Ponto}, title = {Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {83--86}, doi = {}, year = {2017}, } |
|
Pfeiffer, Thies |
3DUI '17: "Attention Guiding Techniques ..."
Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems
Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in such a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human eye. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. As part of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As the evaluation method, simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly controlled experimental design. @InProceedings{3DUI17p186, author = {Patrick Renner and Thies Pfeiffer}, title = {Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } Video |
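A sketch of the off-screen guidance geometry: if the target direction falls outside the narrow display FOV, emit a 2D edge direction for a peripherally visible cue. The FOV value, the world-up assumption, and a non-vertical view direction are illustrative assumptions:

    import numpy as np

    def peripheral_cue(forward, to_target, fov_deg=20.0):
        # forward: gaze/display direction; to_target: vector to the next item.
        f = np.asarray(forward, dtype=float); f /= np.linalg.norm(f)
        t = np.asarray(to_target, dtype=float); t /= np.linalg.norm(t)
        angle = np.degrees(np.arccos(np.clip(np.dot(f, t), -1.0, 1.0)))
        if angle <= fov_deg / 2.0:
            return None  # target is on-screen; highlight it directly
        # Project the target direction into display axes to aim the cue
        # (assumes forward is not parallel to world up).
        right = np.cross(f, (0.0, 1.0, 0.0)); right /= np.linalg.norm(right)
        up = np.cross(right, f)
        v = np.array([np.dot(t, right), np.dot(t, up)])
        return v / np.linalg.norm(v)  # 2D direction toward the display edge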
|
Pietroszek, Krzysztof |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
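The core mapping is worth stating explicitly: forearm roll, read from the watch's IMU, drives the cursor's depth while pointing drives x/y. The roll range and depth limits below are assumptions, not the paper's calibration:

    def watchcast_depth(roll_deg, z_min=0.0, z_max=3.0, roll_range_deg=90.0):
        # Center of the comfortable roll range maps to the middle depth;
        # rotating the forearm pushes or pulls the cursor along z.
        t = (roll_deg + roll_range_deg / 2.0) / roll_range_deg
        t = max(0.0, min(1.0, t))
        return z_min + t * (z_max - z_min)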
|
Pinho, Márcio Sarroglia |
3DUI '17: "SculptAR: An Augmented Reality ..."
SculptAR: An Augmented Reality Interaction System
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, as well as 2D and 3D interfaces to manipulate and modify the objects. The users can move, delete, paint and duplicate virtual objects using 6-DOF techniques. @InProceedings{3DUI17p260, author = {Vicenzo Abichequer Sangalli and Thomas Volpato de Oliveira and Leonardo Pavanatto Soares and Márcio Sarroglia Pinho}, title = {SculptAR: An Augmented Reality Interaction System}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {260--261}, doi = {}, year = {2017}, } 3DUI '17: "Design and Preliminary Evaluation ..." Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). From the premise that simultaneous control over navigation and manipulation by the user can make interaction complex, this technique places two users in asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies separation of degrees of freedom between the two viewpoints to make manipulation easier. The technique is evaluated through a user study testing its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control along with precise orientation control, the technique performs with a lower collisions-to-time ratio. @InProceedings{3DUI17p203, author = {Leonardo Pavanatto Soares and Márcio Sarroglia Pinho and Regis Kopper}, title = {Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {203--204}, doi = {}, year = {2017}, } Video |
|
Piumsomboon, Thammathip |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
|
Plouzeau, Jérémy |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations suggest that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. @InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } 3DUI '17: "Effect of Footstep Vibrations ..." Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method Jérémy Plouzeau, José Luis Dorado, Damien Paillot, and Frédéric Merienne (LE2I, France; HeSam, France) This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations impact neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and decrease cyber-sickness to some extent. @InProceedings{3DUI17p241, author = {Jérémy Plouzeau and José Luis Dorado and Damien Paillot and Frédéric Merienne}, title = {Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {241--242}, doi = {}, year = {2017}, } |
|
Ponto, Kevin |
3DUI '17: "Evaluating Perceived Distance ..."
Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays
Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes called distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and whether contemporary displays induce distance misperceptions. @InProceedings{3DUI17p83, author = {Alex Peer and Kevin Ponto}, title = {Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {83--86}, doi = {}, year = {2017}, } |
|
Quarles, John |
3DUI '17: "Information Recall in VR Disability ..."
Information Recall in VR Disability Simulation
Tanvir Irfan Chowdhury, Raphael Costa, and John Quarles (University of Texas at San Antonio, USA) The purpose of this poster is to present our study on the effect of the sense of presence on one aspect of learning, information recall, in an immersive (vs. non-immersive) virtual reality (VR) disability simulation (DS). We hypothesized that a higher level of immersion and involvement in a VR disability simulation, leading to a high sense of presence, would help users improve information recall. We conducted a between-subjects experiment in which participants were presented information about multiple sclerosis (MS) in different immersion conditions, after which they attempted to recall the information. The results from our study suggest that participants in the immersive conditions were able to recall the information more effectively than participants who experienced a non-immersive condition. @InProceedings{3DUI17p219, author = {Tanvir Irfan Chowdhury and Raphael Costa and John Quarles}, title = {Information Recall in VR Disability Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {219--220}, doi = {}, year = {2017}, } Video |
|
Quesnel, Denise |
3DUI '17: "Awestruck: Natural Interaction ..."
Awestruck: Natural Interaction with Virtual Reality on Eliciting Awe
Denise Quesnel and Bernhard E. Riecke (Simon Fraser University, Canada) In the study of transformative experiences, the feeling of awe is found to alter an individual’s perception in positive, lasting ways. Our research aims to understand the potential of interactive virtual reality (VR) to elicit awe, through a framework based on the collection of physiological data alongside self-reports and phenomenological observations that demonstrate awe. We conducted a mixed-methods experiment to test whether VR is effective in eliciting awe, and whether this effect might be modulated by the type of natural interaction, in the form of a “flight” lounger vs. “standing”. Results demonstrate that both interaction paradigms were equally awe-inspiring, as shown by physiological data (goose bumps, with a 43.8% incidence rate) and self-report data (an overall awe rating of 79.7%), with females showing more physiological signs of awe than males. Observations revealed that 360-degree interaction and the operability of hand-held controllers could be improved, informing the design of even more effective transformative experiences. @InProceedings{3DUI17p205, author = {Denise Quesnel and Bernhard E. Riecke}, title = {Awestruck: Natural Interaction with Virtual Reality on Eliciting Awe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {205--206}, doi = {}, year = {2017}, } Video Info |
|
Ragan, Eric D. |
3DUI '17: "Redirected Reach in Virtual ..."
Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics
Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. @InProceedings{3DUI17p245, author = {Mohamed Suhail and Shyam Prathish Sargunam and Dustin T. Han and Eric D. Ragan}, title = {Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {245--246}, doi = {}, year = {2017}, } |
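A sketch of the redirected reach offset described above: the rendered hand is warped by the virtual-minus-physical target difference, faded in with proximity so the real hand lands on the prop exactly when the virtual hand reaches the virtual object. The proximity blend and its radius are assumptions about one plausible implementation, not the paper's exact formulation:

    import numpy as np

    def redirected_hand(physical_hand, physical_prop, virtual_target, radius=0.5):
        # Offset that would place the hand on the virtual object when it
        # touches the physical prop.
        offset = np.asarray(virtual_target) - np.asarray(physical_prop)
        dist = np.linalg.norm(np.asarray(physical_hand) - np.asarray(physical_prop))
        blend = max(0.0, 1.0 - dist / radius)  # 0 far from the prop, 1 at it
        return np.asarray(physical_hand) + blend * offset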
|
Raposo, Alberto |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
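As background for the Boolean operations mentioned above, here is a minimal sketch of CSG semantics using signed distance functions; the paper itself operates on meshes, and SDFs are used here only because they reduce union, intersection and difference to one-liners.

    # CSG Boolean semantics on signed distance functions (negative = inside).
    def union(a, b):        return lambda p: min(a(p), b(p))
    def intersection(a, b): return lambda p: max(a(p), b(p))
    def difference(a, b):   return lambda p: max(a(p), -b(p))  # A minus B

    def sphere(center, radius):
        return lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5 - radius

    # Example: a sphere with a spherical bite taken out of it.
    shape = difference(sphere((0, 0, 0), 1.0), sphere((0.8, 0, 0), 0.6))
    print(shape((0, 0, 0)) < 0)  # True: the origin is inside the result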
|
Ray, Brandon |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
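The cell-based mapping admits a compact sketch. The room dimensions below are assumed, and the code illustrates the concept rather than the authors' implementation.

    # The virtual world is split into cells the size of the tracked room, and
    # the user's physical position always equals their offset within the
    # current virtual cell, so every cell is fully reachable by real walking.
    CELL_W, CELL_D = 4.0, 4.0  # assumed tracked-space size in meters

    def virtual_to_cell(vx, vz):
        cell = (int(vx // CELL_W), int(vz // CELL_D))  # which virtual cell
        local = (vx % CELL_W, vz % CELL_D)             # == physical position in the room
        return cell, local

    def cell_to_virtual(cell, px, pz):
        return cell[0] * CELL_W + px, cell[1] * CELL_D + pz

    cell, local = virtual_to_cell(9.5, 2.0)
    print(cell, local)                    # (2, 0) (1.5, 2.0)
    print(cell_to_virtual(cell, *local))  # (9.5, 2.0): round trip is lossless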
|
Regenbrecht, Jace |
3DUI '17: "A Robust and Intuitive 3D ..."
A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments
Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented that allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions to be executed. @InProceedings{3DUI17p199, author = {Jace Regenbrecht and Alireza Tavakkoli and Donald Loffredo}, title = {A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {199--200}, doi = {}, year = {2017}, } |
|
Rekimoto, Jun |
3DUI '17: "Internet of Abilities: Human ..."
Internet of Abilities: Human Augmentation, and Beyond (Keynote)
Jun Rekimoto (University of Tokyo, Japan; Sony CSL, Japan) Traditionally, the field of Human Computer Interaction (HCI) was primarily concerned with designing and investigating interfaces between humans and machines. However, with recent technological advances, the concepts of "enhancing", "augmenting" or even "re-designing" humans themselves are becoming feasible and serious topics of scientific research as well as engineering development. "Augmented Human" is a term that I use to refer to this overall research direction. Augmented Human introduces a fundamental paradigm shift in HCI: from human-computer interaction to human-computer integration, and our abilities will be mutually connected through networks (what we call IoA, or the Internet of Abilities, as the next step after IoT: the Internet of Things). In this talk, I will discuss rich possibilities and distinct challenges in enhancing human abilities. I will introduce our recent projects, including the design of flying cameras as our remote and external eyes, a home appliance that can increase your happiness, an organic physical wall/window that dynamically mediates the environment, and an immersive human-human connection concept called "JackIn." @InProceedings{3DUI17p1, author = {Jun Rekimoto}, title = {Internet of Abilities: Human Augmentation, and Beyond (Keynote)}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {1--1}, doi = {}, year = {2017}, } |
|
Renner, Patrick |
3DUI '17: "Attention Guiding Techniques ..."
Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems
Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the user. In addition, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. As part of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As the evaluation method, simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly controlled experimental design. @InProceedings{3DUI17p186, author = {Patrick Renner and Thies Pfeiffer}, title = {Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } Video |
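Placing a cue for an off-screen target can be illustrated with a short sketch; the display FOV value is an assumption, and the function name is hypothetical.

    import math

    DISPLAY_FOV = math.radians(20.0)  # assumed horizontal FOV of the smart glasses

    def guidance_cue(head_yaw, target_yaw):
        # Signed yaw error wrapped to [-pi, pi]; targets inside the display get
        # highlighted directly, otherwise a cue is drawn at the nearer display
        # edge so it stays detectable in peripheral vision.
        delta = math.atan2(math.sin(target_yaw - head_yaw),
                           math.cos(target_yaw - head_yaw))
        if abs(delta) <= DISPLAY_FOV / 2:
            return ("on_screen", delta)
        side = "right" if delta > 0 else "left"
        return ("edge_cue", side)

    print(guidance_cue(0.0, math.radians(90)))  # ('edge_cue', 'right')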
|
Ricca, Aylen |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the gestures that are most suitable for each manipulation task. Design recommendations for efficient 3D constrained manipulation techniques are also presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
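The finger-to-axis chording can be sketched as a simple lookup; the encoding below is assumed for illustration, not taken from the paper.

    # With finger identification, each touching finger of the dominant hand
    # unlocks one axis, so a chord selects the active constraint set.
    FINGER_AXIS = {"index": "X", "middle": "Y", "ring": "Z"}

    def constrained_axes(touching_fingers):
        # The set of identified touching fingers determines which axes move.
        return sorted(FINGER_AXIS[f] for f in touching_fingers if f in FINGER_AXIS)

    print(constrained_axes({"index"}))            # ['X']: translate along X only
    print(constrained_axes({"index", "middle"}))  # ['X', 'Y']: move in the XY plane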
|
Riecke, Bernhard E. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } 3DUI '17: "Awestruck: Natural Interaction ..." Awestruck: Natural Interaction with Virtual Reality on Eliciting Awe Denise Quesnel and Bernhard E. Riecke (Simon Fraser University, Canada) In the study of transformative experiences, the feeling of awe is found to alter an individual’s perception in positive, lasting ways. Our research aims to understand the potential of interactive virtual reality (VR) for eliciting awe, through a framework based on the collection of physiological data alongside self-report and phenomenological observations that demonstrate awe. We conducted a mixed-methods experiment to test whether VR is effective in eliciting awe, and whether this effect might be modulated by the type of natural interaction, in the form of a “flight” lounger vs. “standing”. Results demonstrate that both interaction paradigms were equally awe-inspiring, as indicated by physiological data (goose bumps with a 43.8% incidence rate) and self-report data (an overall awe rating of 79.7%), with females showing more physiological signs of awe than males. Observations revealed that 360-degree interaction and the operability of hand-held controllers could be improved, informing the design of even more effective transformative experiences. @InProceedings{3DUI17p205, author = {Denise Quesnel and Bernhard E. Riecke}, title = {Awestruck: Natural Interaction with Virtual Reality on Eliciting Awe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {205--206}, doi = {}, year = {2017}, } Video Info 3DUI '17: "Moving in a Box: Improving ..." Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit users in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants’ performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance both in navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. @InProceedings{3DUI17p207, author = {Thinh Nguyen-Vo and Bernhard E. Riecke and Wolfgang Stuerzlinger}, title = {Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {207--208}, doi = {}, year = {2017}, } |
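The velocity (rate) control shared by these leaning interfaces can be illustrated as follows; the dead-zone and gain constants are assumptions, not the study's calibration.

    import math

    DEAD_ZONE = 2.0  # degrees of lean ignored, so standing upright is stable
    GAIN = 0.25      # assumed speed gain, m/s per degree of lean

    def lean_to_velocity(lean_deg):
        # Rate control: lean angle past the dead zone maps to forward/backward speed.
        if abs(lean_deg) < DEAD_ZONE:
            return 0.0
        return GAIN * (lean_deg - math.copysign(DEAD_ZONE, lean_deg))

    print(lean_to_velocity(1.0))   # 0.0 m/s: inside the dead zone
    print(lean_to_velocity(10.0))  # 2.0 m/s forward
    print(lean_to_velocity(-6.0))  # -1.0 m/s backward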
|
Rishe, Naphtali |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } 3DUI '17: "Procedural Celestial Rendering ..." Procedural Celestial Rendering for 3D Navigation Alain Galvan, Francisco R. Ortega, and Naphtali Rishe (Florida International University, USA) Finding the most suitable environment for 3D navigation that utilizes at least six degrees of freedom is still difficult. Furthermore, creating a system to procedurally generate a large virtual environment provides an opportunity for researchers to understand this problem further. Therefore, we present a novel technique to render a parametric celestial skybox with the ability to light environments similarly to natural color-corrected images from telescopes. We first pre-compute a spherical ray map that corresponds to the cubemap coordinates, then generate stars and dust through a combination of different noise generation shaders. @InProceedings{3DUI17p211, author = {Alain Galvan and Francisco R. Ortega and Naphtali Rishe}, title = {Procedural Celestial Rendering for 3D Navigation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {211--212}, doi = {}, year = {2017}, } |
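The pre-computed ray map mentioned in the second abstract can be sketched as a per-texel direction lookup; the face conventions below are one common choice and not necessarily the authors'.

    import math

    def cubemap_texel_to_ray(face, u, v):
        # u, v in [-1, 1] across the face; the six faces follow the usual
        # +X/-X/+Y/-Y/+Z/-Z layout. The returned unit ray is what the noise
        # shaders would then sample to place stars and dust.
        x, y, z = {
            "+x": ( 1.0,   -v,   -u), "-x": (-1.0,   -v,    u),
            "+y": (   u,  1.0,    v), "-y": (   u, -1.0,   -v),
            "+z": (   u,   -v,  1.0), "-z": (  -u,   -v, -1.0),
        }[face]
        n = math.sqrt(x * x + y * y + z * z)
        return (x / n, y / n, z / n)

    print(cubemap_texel_to_ray("+x", 0.0, 0.0))  # unit ray straight along +x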
|
Rodrigues, André Montes |
3DUI '17: "Batmen Beyond: Natural 3D ..."
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects - simultaneous translation and rotation, scaling, cloning or deleting - through unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and from the general public. @InProceedings{3DUI17p258, author = {André Montes Rodrigues and Olavo Belloc and Eduardo Zilles Borba and Mario Nagamura and Marcelo Knorich Zuffo}, title = {Batmen Beyond: Natural 3D Manipulation with the BatWand}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {258--259}, doi = {}, year = {2017}, } |
|
Roo, Joan Sol |
3DUI '17: "Towards a Hybrid Space Combining ..."
Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality
Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained first feedback from informal interviews. @InProceedings{3DUI17p195, author = {Joan Sol Roo and Martin Hachet}, title = {Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {195--198}, doi = {}, year = {2017}, } Video Info |
|
Sangalli, Vicenzo Abichequer |
3DUI '17: "SculptAR: An Augmented Reality ..."
SculptAR: An Augmented Reality Interaction System
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, together with 2D and 3D interfaces to manipulate and modify them. Users can move, delete, paint and duplicate virtual objects using 6-DOF techniques. @InProceedings{3DUI17p260, author = {Vicenzo Abichequer Sangalli and Thomas Volpato de Oliveira and Leonardo Pavanatto Soares and Márcio Sarroglia Pinho}, title = {SculptAR: An Augmented Reality Interaction System}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {260--261}, doi = {}, year = {2017}, } |
|
Santos, Beatriz Sousa |
3DUI '17: "Effect of Hand-Avatar in a ..."
Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment
Luis Afonso, Paulo Dias, Carlos Ferreira, and Beatriz Sousa Santos (University of Aveiro, Portugal) How does the virtual representation of the user's hands influence the performance on a button selection task performed in a tablet-based interaction within an immersive virtual environment? To answer this question, we asked 55 participants to use three conditions: no-hand avatar, realistic avatar and translucent avatar. The participants were faster but made slightly more errors while using the no-avatar condition, and considered easier to perform the task with the translucent avatar. @InProceedings{3DUI17p247, author = {Luis Afonso and Paulo Dias and Carlos Ferreira and Beatriz Sousa Santos}, title = {Effect of Hand-Avatar in a Selection Task Using a Tablet as Input Device in an Immersive Virtual Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {247--248}, doi = {}, year = {2017}, } |
|
Sargunam, Shyam Prathish |
3DUI '17: "Redirected Reach in Virtual ..."
Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics
Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. @InProceedings{3DUI17p245, author = {Mohamed Suhail and Shyam Prathish Sargunam and Dustin T. Han and Eric D. Ragan}, title = {Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {245--246}, doi = {}, year = {2017}, } |
|
Sarupuri, Bhuvaneswari |
3DUI '17: "Trigger Walking: A Low-Fatigue ..."
Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality
Bhuvaneswari Sarupuri, Miriam Luque Chipana, and Robert W. Lindeman (University of Canterbury, New Zealand) We present Trigger Walking, a low-fatigue travel technique for immersive virtual reality which uses hand-held controllers to move about more naturally within a limited physical space. Most commercial applications use some form of teleportation or physical walking for moving around in a virtual space. However, teleportation can be disorienting, due to the sudden change in the environment when the user is teleported to another location. Physical walking techniques are more physically demanding, leading to fatigue. Hence, we explore the use of the two spatial controllers that accompany commercial headsets to walk by taking a virtual step each time a controller trigger is pulled. The user has the choice of using the orientation of a single controller, the average of both controllers, or that of the head to determine the direction of walking, and speed can be controlled by changing the angle of the controller to the frontal plane. @InProceedings{3DUI17p227, author = {Bhuvaneswari Sarupuri and Miriam Luque Chipana and Robert W. Lindeman}, title = {Trigger Walking: A Low-Fatigue Travel Technique for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {227--228}, doi = {}, year = {2017}, } |
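The step update described above can be sketched as follows; the base step length and the angle-to-speed gain are assumed values, not the paper's calibration.

    import math

    def on_trigger_pull(position, direction_yaw, controller_pitch_deg,
                        base_step=0.7, pitch_gain=0.01):
        # One virtual step per trigger pull; the chosen direction source
        # (single controller, average of both, or head) supplies direction_yaw,
        # and the controller's angle to the frontal plane scales the step.
        step = base_step * (1.0 + pitch_gain * controller_pitch_deg)
        dx = step * math.sin(direction_yaw)
        dz = step * math.cos(direction_yaw)
        return (position[0] + dx, position[1], position[2] + dz)

    print(on_trigger_pull((0.0, 0.0, 0.0), math.radians(90), 20.0))  # ~0.84 m along +x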
|
Sassard, Emily |
3DUI '17: "VizSpace: Interaction in the ..."
VizSpace: Interaction in the Positive Parallax Screen Plane
Oyewole Oyekoya, Emily Sassard, and Tiana Johnson (Clemson University, USA) The VizSpace is a physically situated interactive system that combines touch and hand interactions behind the screen to create the effect that users are reaching inside and interacting in a 3D virtual workspace. It extends the conventional touch table interface with hand tracking and 3D visualization to enable interaction in the positive parallax plane, where the binocular focus falls behind the screen so as not to occlude projected images. This paper covers the system design, human factors and ergonomics considerations for an interactive and immersive gesture-based visualization system. Results are presented from a preliminary user study that validates the usability of VizSpace. @InProceedings{3DUI17p229, author = {Oyewole Oyekoya and Emily Sassard and Tiana Johnson}, title = {VizSpace: Interaction in the Positive Parallax Screen Plane}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {229--230}, doi = {}, year = {2017}, } |
|
Schmalstieg, Dieter |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Schubert, Ryan |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures of the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Shibata, Fumihisa |
3DUI '17: "Analysis of R-V Dynamics Illusion ..."
Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object
Kana Oshima, Satoshi Hashiguchi, Fumihisa Shibata, and Asako Kimura (Ritsumeikan University, Japan) Previously, we discovered the “R-V Dynamics Illusion,” a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. We confirmed that a real object with a movable portion (CG) is perceived as lighter under MR visual stimulation. Here, we analyze whether the difference in the mass of real objects affects the R-V Dynamics Illusion. We conducted experiments to determine the difference threshold of weights under conditions where the masses of the real objects were 500, 750, and 1000 g, and only the CG liquid level was changed. As a result, the difference in mass did not influence the difference threshold of weights obtained by changing the virtual liquid level. On the other hand, under the same mass conditions, the difference threshold of weights becomes smaller when the R-V Dynamics Illusion occurs. @InProceedings{3DUI17p213, author = {Kana Oshima and Satoshi Hashiguchi and Fumihisa Shibata and Asako Kimura}, title = {Analysis of R-V Dynamics Illusion Behavior Caused by Varying the Weight of Real Object}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {213--214}, doi = {}, year = {2017}, } |
|
Soares, Leonardo Pavanatto |
3DUI '17: "SculptAR: An Augmented Reality ..."
SculptAR: An Augmented Reality Interaction System
Vicenzo Abichequer Sangalli, Thomas Volpato de Oliveira, Leonardo Pavanatto Soares, and Márcio Sarroglia Pinho (PUCRS, Brazil) In this work, a 3D mobile interface to create sculptures in an augmented reality environment tracked by AR markers is presented. A raycasting technique was used to interact with the objects in the scene, together with 2D and 3D interfaces to manipulate and modify them. Users can move, delete, paint and duplicate virtual objects using 6-DOF techniques. @InProceedings{3DUI17p260, author = {Vicenzo Abichequer Sangalli and Thomas Volpato de Oliveira and Leonardo Pavanatto Soares and Márcio Sarroglia Pinho}, title = {SculptAR: An Augmented Reality Interaction System}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {260--261}, doi = {}, year = {2017}, } 3DUI '17: "Design and Preliminary Evaluation ..." Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation Leonardo Pavanatto Soares, Márcio Sarroglia Pinho, and Regis Kopper (PUCRS, Brazil; Duke University, USA) This work proposes and evaluates the EGO-EXO technique for cooperative manipulation in a Collaborative Virtual Environment (CVE). Based on the premise that simultaneous control over navigation and manipulation can make interaction complex for the user, this technique places two users at asymmetric viewpoint positions during the cooperative manipulation of an object, allowing one of them to follow the object. It applies the separation of degrees of freedom between the two viewpoints to make the manipulation easier. The technique is evaluated through a user study to test its efficiency in handling cooperative manipulation. Results indicate that, for manipulation tasks that require high-amplitude position control along with precise orientation control, the technique performs with a lower collision-to-time ratio. @InProceedings{3DUI17p203, author = {Leonardo Pavanatto Soares and Márcio Sarroglia Pinho and Regis Kopper}, title = {Design and Preliminary Evaluation of an Ego-Exocentric Technique for Cooperative Manipulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {203--204}, doi = {}, year = {2017}, } Video |
|
Sousa, Maurício |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } 3DUI '17: "PRECIOUS! Out-of-Reach Selection ..." PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR Daniel Mendes, Daniel Medeiros, Eduardo Cordeiro, Maurício Sousa, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal) Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. However, these approaches only work for room-sized and sparse environments, and they do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile. @InProceedings{3DUI17p237, author = {Daniel Mendes and Daniel Medeiros and Eduardo Cordeiro and Maurício Sousa and Alfredo Ferreira and Joaquim Jorge}, title = {PRECIOUS! Out-of-Reach Selection using Iterative Refinement in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {237--238}, doi = {}, year = {2017}, } |
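One step of the cone-cast refinement behind PRECIOUS can be illustrated with a short sketch; all constants are assumed, and in the real technique the user re-aims (and the cone may be narrowed) at each step until a single object remains.

    import math

    def angle_between(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

    def refine_step(eye, aim_dir, objects, half_angle):
        # One refinement step: keep the objects inside the aimed cone, then
        # move the viewpoint toward them so the next aimed cone can separate
        # the survivors more easily.
        inside = [o for o in objects
                  if angle_between([o[i] - eye[i] for i in range(3)], aim_dir) <= half_angle]
        if not inside:
            return eye, []  # empty cone: treat as a cancel
        centroid = [sum(o[i] for o in inside) / len(inside) for i in range(3)]
        new_eye = [e + 0.5 * (c - e) for e, c in zip(eye, centroid)]
        return new_eye, inside

    eye, left = refine_step([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                            [(0.5, 0, 5.0), (-0.4, 0, 5.0), (4.0, 0, 1.0)],
                            math.radians(15))
    print(left)  # the two distant on-axis objects remain; (4.0, 0, 1.0) is culled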
|
Steinicke, Frank |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) Selecting objects in virtual reality (VR) that are located in the periphery or outside the field of view (FOV) requires a visual search by rotating the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless and wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance, reducing the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } 3DUI '17: "Influence of Avatar Appearance ..." Influence of Avatar Appearance on Presence in Social VR Paul Heidicker, Eike Langbehn, and Frank Steinicke (University of Hamburg, Germany) Social virtual reality (VR) has enormous potential to allow several physically separated users to collaborate in an immersive virtual environment (IVE). These users and their actions are represented by avatars in the IVE. The question is how the appearance of those avatars influences communication and interaction. It might make a difference whether the avatar consists of a complete body representation or whether only certain body parts are visible. Moreover, a one-to-one mapping of the user's movements to the avatar's movements might have advantages compared to pre-defined avatar animations. To answer these questions, we compared three different types of avatar appearance in a user study. For this, we used estimations of presence, social presence, and cognitive load. The evaluation showed that motion-controlled avatars with a full representation of the avatar body led to an increased sense of presence. Motion-controlled avatars as well as avatars with only head and hands visible produced an increased feeling of co-presence and behavioral interdependence. This is interesting, since it suggests that a complete avatar body is not needed in social VR. @InProceedings{3DUI17p233, author = {Paul Heidicker and Eike Langbehn and Frank Steinicke}, title = {Influence of Avatar Appearance on Presence in Social VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {233--234}, doi = {}, year = {2017}, } |
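One plausible mapping from angular error to vibrotactile amplitude, purely to illustrate the guidance idea in the first abstract (the mapping is assumed, not the authors' firmware):

    import math

    def vibration_command(head_yaw, target_yaw, max_amp=1.0):
        # Signed angular error wrapped to [-pi, pi]; the actuator on the side
        # of the shorter turn vibrates, with amplitude growing with the error.
        err = math.atan2(math.sin(target_yaw - head_yaw),
                         math.cos(target_yaw - head_yaw))
        amp = max_amp * min(1.0, abs(err) / math.pi)
        return {"left": amp if err < 0 else 0.0,
                "right": amp if err > 0 else 0.0}

    print(vibration_command(0.0, math.radians(45)))  # right actuator at amplitude 0.25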
|
Stepanova, Ekaterina R. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Stuerzlinger, Wolfgang |
3DUI '17: "Moving in a Box: Improving ..."
Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames
Thinh Nguyen-Vo, Bernhard E. Riecke, and Wolfgang Stuerzlinger (Simon Fraser University, Canada) Despite recent advances in virtual reality, locomotion in a virtual environment is still restricted because of spatial disorientation. Previous research has shown the benefits of reference frames in maintaining spatial orientation. Here, we propose using a visually simulated reference frame in virtual reality to provide users with a better sense of direction in landmark-free virtual environments. Visually overlaid rectangular frames simulate different variations of frames of reference. We investigated how two different types of visually simulated reference frames might benefit users in a navigational search task through a mixed-method study. Results showed that the presence of a reference frame significantly affects participants’ performance in a navigational search task. Though the egocentric frame of reference (simulated CAVE) that translates with the observer did not significantly help, an allocentric frame of reference (a simulated stationary room) significantly improved user performance both in navigational search time and overall travel distance. Our study suggests that adding a variation of the reference frame to virtual reality applications might be a cost-effective solution to enable more effective locomotion in virtual reality. @InProceedings{3DUI17p207, author = {Thinh Nguyen-Vo and Bernhard E. Riecke and Wolfgang Stuerzlinger}, title = {Moving in a Box: Improving Spatial Orientation in Virtual Reality using Simulated Reference Frames}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {207--208}, doi = {}, year = {2017}, } |
|
Suhail, Mohamed |
3DUI '17: "Redirected Reach in Virtual ..."
Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics
Mohamed Suhail, Shyam Prathish Sargunam, Dustin T. Han, and Eric D. Ragan (Texas A&M University, USA) In many virtual reality applications, it would be ideal if users could use their physical hands to directly interact with virtual objects while experiencing realistic haptic feedback. While this can be achieved via interaction with tracked physical props that correspond to virtual objects, practical limitations can make it difficult to achieve a physical environment that exactly represents the virtual world, and virtual environments are often much larger than the available tracked physical space. Our approach maps a single physical prop to multiple virtual objects distributed throughout a virtual environment. Additionally, our work explores scenarios using one physical prop to control multiple types of object interactions. We explore considerations that allow physical object manipulation using orientation resetting to physically align the user with a physical prop for interaction. The resetting approach applies a discrete positional and rotational update to the user's location when the user virtually approaches a target for interaction, and the redirected reach approach applies a translational offset to the user’s virtual hand based on the positional difference of the virtual and physical objects. @InProceedings{3DUI17p245, author = {Mohamed Suhail and Shyam Prathish Sargunam and Dustin T. Han and Eric D. Ragan}, title = {Redirected Reach in Virtual Reality: Enabling Natural Hand Interaction at Multiple Virtual Locations with Passive Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {245--246}, doi = {}, year = {2017}, } |
|
Taguchi, Shun |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Tahai, Liudmila |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping the z-coordinate position to forearm rotation. By replicating a large-display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized 3D interaction devices. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
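The depth mapping at the heart of Watchcasting can be sketched as a linear map from forearm roll to the cursor's z coordinate; both ranges below are assumed.

    Z_MIN, Z_MAX = 0.0, 3.0  # assumed depth range of the display volume, meters
    ROLL_RANGE = 180.0       # assumed usable forearm rotation, degrees

    def roll_to_z(roll_deg):
        # Clamp the watch's roll angle into the usable range, then map it
        # linearly onto depth; x/y come from pointing, as in Smartcasting.
        t = max(0.0, min(1.0, (roll_deg + ROLL_RANGE / 2) / ROLL_RANGE))
        return Z_MIN + t * (Z_MAX - Z_MIN)

    print(roll_to_z(0.0))   # 1.5: neutral forearm sits mid-depth
    print(roll_to_z(90.0))  # 3.0: fully rotated forearm reaches the far plane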
|
Tarng, Stanley |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on the multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues – two sub-categories of the haptic modality. Hence, this paper presents an investigation of MLE's suitability for integrating these cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and via their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. @InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
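For reference, the standard MLE cue-combination model that the abstract tests predicts a reliability-weighted average of the single-cue estimates (subscripts f and v denote force and vibrotactile) with a combined variance no larger than either cue's alone; the dislocated-setting result, which tracked the vibrotactile cue alone, is what conflicts with this prediction.

    \hat{s}_{fv} = w_f \hat{s}_f + w_v \hat{s}_v,
    \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_f^2 + 1/\sigma_v^2},
    \qquad \sigma_{fv}^2 = \frac{\sigma_f^2 \sigma_v^2}{\sigma_f^2 + \sigma_v^2} \le \min(\sigma_f^2, \sigma_v^2)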
|
Tarre, Katherine |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Tatzgern, Markus |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Tavakkoli, Alireza |
3DUI '17: "A Robust and Intuitive 3D ..."
A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments
Jace Regenbrecht, Alireza Tavakkoli, and Donald Loffredo (University of Houston-Victoria, USA) In this paper, an intuitive human interface is presented that allows an operator immersed in a virtual environment to remotely control a teleoperated agent with minimal cognitive overload and minimal risk of accidental input. Additionally, a cursor-based interface is presented that allows the placement of navigation nodes for the agent, thus enabling the robot's autonomous navigation functions to be executed. @InProceedings{3DUI17p199, author = {Jace Regenbrecht and Alireza Tavakkoli and Donald Loffredo}, title = {A Robust and Intuitive 3D Interface for Teleoperation of Autonomous Robotic Agents through Immersive Virtual Reality Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {199--200}, doi = {}, year = {2017}, } |
|
Teather, Robert J. |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device: one entirely software-based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare our results with those of a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
|
Thomas, Jason-Lee |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions has been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Tuanquin, Nikita Mae B. |
3DUI '17: "Jedi ForceExtension: Telekinesis ..."
Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor
Rory M. S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman (University of Canterbury, New Zealand) Virtual Reality (VR) enables us to freely operate in a space that is unconstrained by physical laws and limitations. To take advantage of this aspect, we have developed a technique for pseudo-telekinetic object manipulation in VR using a slight downward tilt of the head to simulate Jedi concentration. This telekinetic ability draws inspiration from The Force abilities exhibited in the Star Wars universe, and is particularly well suited to VR because it provides the ability to interact with and manipulate objects at a distance. We implemented force translate, force rotate, force push, and force pull behaviours as examples of the general concept of force extension. We conducted exploratory user testing to assess telekinesis as a suitable interaction metaphor. Subject performance and feedback varied between participants but were generally encouraging. @InProceedings{3DUI17p239, author = {Rory M. S. Clifford and Nikita Mae B. Tuanquin and Robert W. Lindeman}, title = {Jedi ForceExtension: Telekinesis as a Virtual Reality Interaction Metaphor}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {239--240}, doi = {}, year = {2017}, } |
|
Valent, Sean |
3DUI '17: "Augmented Reality Digital ..."
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling, as well as particle parameter manipulation, in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real-world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video |
|
Vasylevska, Khrystyna |
3DUI '17: "Towards Efficient Spatial ..."
Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments
Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) The space available for any virtual reality experience is often strictly limited, confining the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively in a dynamic, scalable, and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the layout's properties on human spatial perception in a physically impossible spatial arrangement. Our first study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry or asymmetry of the path. In a second study, we explore the effect of path smoothing by replacing the right-angled corridors with smooth curves. Our studies show that smooth curved corridors are more beneficial for spatial compression than the conventional right-angled approach. @InProceedings{3DUI17p12, author = {Khrystyna Vasylevska and Hannes Kaufmann}, title = {Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {12--21}, doi = {}, year = {2017}, } |
|
Vemavarapu, Prabhakar V. |
3DUI '17: "Indirect Touch Interaction ..."
Indirect Touch Interaction with Stereoscopic Displays using a Two-Sided Handheld Touch Device
Prabhakar V. Vemavarapu and Christoph W. Borst (University of Louisiana at Lafayette, USA) We present an indirect touch 3D interface that uses a two-sided handheld touch device for interaction with dense datasets on stereoscopic displays. This work explores the possibilities of a smartphone able to sense touch on both sides. Two Android phones are combined back-to-back. The top touch surface is used for primary or fine interactions (selection/translation/rotation) and the bottom surface for coarser aspects such as mode control or feature extraction. The two surfaces are programmed to recognize input from four digits – two on top and two on the bottom. The four touch areas enable 3D object selection, manipulation, and feature extraction using combinations of simultaneous touches. @InProceedings{3DUI17p209, author = {Prabhakar V. Vemavarapu and Christoph W. Borst}, title = {Indirect Touch Interaction with Stereoscopic Displays using a Two-Sided Handheld Touch Device}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {209--210}, doi = {}, year = {2017}, } |
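As a rough illustration of how the four touch areas could be routed to actions, here is a hypothetical Python dispatcher; the specific chord-to-action chart is our invention, since the abstract names only the action categories.

    def classify_action(top_touches, bottom_touches):
        """Map simultaneous touch counts on the two surfaces to actions.
        Illustrative only: the paper assigns fine interaction to the top
        surface and coarser control to the bottom, but not this exact chart."""
        if bottom_touches == 0:
            return {1: "select", 2: "translate/rotate"}.get(top_touches, "idle")
        if top_touches == 0:
            return "mode-control"
        return "feature-extraction"   # chords spanning both surfaces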
|
Vierjahn, Tom |
3DUI '17: "Evaluation of Approaching-Strategies ..."
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied, virtual agents provide users assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } |
|
Wallace, James R. |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large-display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
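The core of Watchcasting is the z-from-rotation mapping: forearm roll, read from the watch's inertial sensors, sets the cursor's depth along the pointing direction. A minimal Python sketch, assuming a linear mapping and made-up depth and rotation ranges:

    import numpy as np

    Z_MIN, Z_MAX = 0.0, 2.0          # usable cursor depth in metres (assumed)
    ROLL_RANGE = np.radians(120.0)   # comfortable forearm rotation span (assumed)

    def cast_cursor(ray_origin, ray_dir, roll_rad):
        """Place the 3D cursor along the pointing ray; forearm roll sets depth."""
        t = np.clip(roll_rad / ROLL_RANGE, 0.0, 1.0)      # normalise the rotation
        depth = Z_MIN + (Z_MAX - Z_MIN) * t
        d = np.asarray(ray_dir, dtype=float)
        return np.asarray(ray_origin, dtype=float) + depth * d / np.linalg.norm(d)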
|
Wang, Hongwei |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, supported by haptic feedback. The training records are stored in a database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Wang, Lin |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, supported by haptic feedback. The training records are stored in a database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Wang, Ronghai |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, supported by haptic feedback. The training records are stored in a database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Wang, Xiaojia |
3DUI '17: "Augmented Reality Digital ..."
Augmented Reality Digital Sculpture
Nathanael Harrell, Grayson Bonds, Xiaojia Wang, Sean Valent, Elham Ebrahimi, and Sabarish V. Babu (Clemson University, USA) We present our metaphor for object translation, rotation, and rescaling, as well as particle parameter manipulation, in an augmented reality environment using an Android smartphone or tablet for the 2017 3DUI Competition in Los Angeles, California. Our metaphor aims to map the three-dimensional interaction of objects in a real-world space to the two-dimensional plane of a smartphone or tablet screen. Our final product is the result of experimentation with different metaphors for translation, rotation, rescaling, and particle parameter manipulation and was guided by the feedback of voluntary product testers. The result is an interaction technique between a mobile device and the virtual world which we believe to be intuitive. @InProceedings{3DUI17p262, author = {Nathanael Harrell and Grayson Bonds and Xiaojia Wang and Sean Valent and Elham Ebrahimi and Sabarish V. Babu}, title = {Augmented Reality Digital Sculpture}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {262--263}, doi = {}, year = {2017}, } Video |
|
Welch, Gregory F. |
3DUI '17: "Can Social Presence Be Contagious? ..."
Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans
Salam Daher, Kangsoo Kim, Myungho Lee, Gerd Bruder, Ryan Schubert, Jeremy Bailenson, and Gregory F. Welch (University of Central Florida, USA; Stanford University, USA) We explore whether a peripheral observation of apparent mutual social presence between a virtual human (VH) and a virtual human confederate (VHC) can increase a subject’s sense of social presence with the VH. Human subjects were asked to play a game with a VH. Half of the subjects were primed by being exposed to a brief but apparently engaging conversation between the VHC and the VH. The primed subjects reported being significantly more excited and alert, and had significantly higher measures on the Co-Presence, Attentional Allocation, and Message Understanding dimensions of social presence for the VH, compared to those who were not primed. @InProceedings{3DUI17p201, author = {Salam Daher and Kangsoo Kim and Myungho Lee and Gerd Bruder and Ryan Schubert and Jeremy Bailenson and Gregory F. Welch}, title = {Can Social Presence Be Contagious? Effects of Social Presence Priming on Interaction with Virtual Humans}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {201--202}, doi = {}, year = {2017}, } |
|
Weyers, Benjamin |
3DUI '17: "Augmented Reality Exhibits ..."
Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest
Rongkai Guo, Ryan P. McMahan, and Benjamin Weyers (Kennesaw State University, USA; University of Texas at Dallas, USA; RWTH Aachen University, Germany) The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. @InProceedings{3DUI17p253, author = {Rongkai Guo and Ryan P. McMahan and Benjamin Weyers}, title = {Augmented Reality Exhibits of Constructive Art: 8th Annual 3DUI Contest}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {253--253}, doi = {}, year = {2017}, } 3DUI '17: "Efficient Approximate Computation ..." Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility -- the information about which parts of the scene are visible from a certain location -- can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and allows keeping track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } 3DUI '17: "A Reliable Non-verbal Vocal ..." A Reliable Non-verbal Vocal Input Metaphor for Clicking Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and is thus limited in its potential use. Therefore, we extended the existing approach with machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was stated as a limitation of the previous work. With this extended technique, we repeated the previously conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we measured reaction times to assess the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } |
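For the blow classifier described above, a Gaussian-kernel SVM over per-frame audio features is straightforward to prototype. This Python sketch uses scikit-learn; the two features (frame energy and spectral centroid) are our stand-ins, since the abstract does not list the paper's feature set.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def frame_features(frames, sr=16000):
        """frames: iterable of 1-D numpy arrays of audio samples.
        Energy and spectral centroid are assumed stand-in features."""
        feats = []
        for f in frames:
            spec = np.abs(np.fft.rfft(f))
            freqs = np.fft.rfftfreq(len(f), 1.0 / sr)
            energy = float(np.mean(f ** 2))
            centroid = float((spec * freqs).sum() / (spec.sum() + 1e-12))
            feats.append([energy, centroid])
        return np.array(feats)

    # Gaussian (RBF) kernel, matching the kernel the paper reports as best.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    # clf.fit(frame_features(train_frames), labels)   # labels: 1 = blow, 0 = other
    # is_blow = clf.predict(frame_features(live_frames))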
|
Woodworth, Jason W. |
3DUI '17: "Visual Cues to Aid 3D Pointing ..."
Visual Cues to Aid 3D Pointing in a Virtual Mirror
Jason W. Woodworth and Christoph W. Borst (University of Louisiana at Lafayette, USA) We address a 3D pointing problem for a "virtual mirror" view used in collaborative VR. The virtual mirror is a large TV display showing a depth-camera-based image of a user in a surrounding virtual environment. Pointing and communicating with remote users is problematic due to the indirectness of pointing in a mirror and a low sense of depth. We propose several visual cues to help the user control pointing depth, and present an initial user study, providing a basis for further refinement and investigation of techniques. @InProceedings{3DUI17p251, author = {Jason W. Woodworth and Christoph W. Borst}, title = {Visual Cues to Aid 3D Pointing in a Virtual Mirror}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {251--252}, doi = {}, year = {2017}, } Video |
|
Wu, Siju |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle, and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the most suitable gestures for each manipulation task. Some design recommendations for efficient constrained 3D manipulation techniques are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
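The finger-to-axis mapping lends itself to a small Python sketch. Assuming the touchscreen already reports identified fingers (the finger-identification layer itself is outside this snippet), a chord of touching fingers selects the constrained axes:

    FINGER_AXIS = {"index": "X", "middle": "Y", "ring": "Z"}   # mapping from the abstract

    def constrained_axes(touching_fingers):
        """A chord of identified fingers selects the constraint axes, e.g.
        {'index'} -> X only, {'index', 'middle'} -> the XY plane."""
        return sorted(FINGER_AXIS[f] for f in touching_fingers if f in FINGER_AXIS)

    # constrained_axes({"index", "ring"}) -> ['X', 'Z']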
|
Yao, Colin |
3DUI '17: "Multi-phase Wall Warner System ..."
Multi-phase Wall Warner System for Real Walking in Virtual Environments
Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means of exploring virtual environments that are even larger than the available physical space. Avoiding collisions with the physical walls necessitates a system that warns users. This paper describes the design and implementation of a multi-phase warning system that addresses this safety requirement. The first phase is a velocity-based warning driven by time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. @InProceedings{3DUI17p223, author = {Markus Zank and Colin Yao and Andreas Kunz}, title = {Multi-phase Wall Warner System for Real Walking in Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {223--224}, doi = {}, year = {2017}, } |
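The two phases reduce to two simple tests: a time-to-impact check while the user is moving, and a hard distance check near the wall. A Python sketch, with both thresholds as our assumptions rather than the paper's values:

    import numpy as np

    TTI_THRESHOLD = 1.5    # seconds to impact that arms phase 1 (assumed)
    EMERGENCY_DIST = 0.4   # metres from the wall that arms phase 2 (assumed)

    def warning_phase(pos, vel, wall_point, wall_normal):
        """Phase 1 is velocity-based (time to impact), phase 2 is distance-based.
        wall_normal is a unit vector pointing from the wall into the walkable area."""
        dist = float(np.dot(np.asarray(pos) - np.asarray(wall_point), wall_normal))
        closing = float(-np.dot(vel, wall_normal))    # speed toward the wall
        if dist <= EMERGENCY_DIST:
            return "emergency"                        # phase 2: too close, warn hard
        if closing > 0 and dist / closing <= TTI_THRESHOLD:
            return "warning"                          # phase 1: impact imminent at this speed
        return "safe"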
|
Yao, Junfeng |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed by PHP. This system provides the immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture and abdominal paracentesis that we call four medical punctures. Trainers or teachers can release training tasks in which trainees or students are able to learn surgery skills at a 3D visual scene. Furthermore, they will feel a sense of immediacy when putting on the head-mounted display and with the help of haptic feedback. The training records will be put into database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Young, Thomas S. |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device: one entirely software-based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare the results with those of a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
|
Yu, Run |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
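The cell bookkeeping behind this idea is compact: because each virtual cell has the tracked space's footprint, the user's offset inside the current cell is their physical position in the room. A Python sketch, assuming a 4 m x 4 m tracked space and axis-aligned cells (both our assumptions):

    import numpy as np

    CELL = np.array([4.0, 4.0])   # tracked-space footprint in metres (assumed)

    def to_cell(virtual_xy):
        """Return (cell index, in-cell offset) for a virtual floor position.
        The in-cell offset maps one-to-one onto the physical room, so the
        whole cell stays reachable by real walking."""
        v = np.asarray(virtual_xy, dtype=float)
        cell_index = np.floor(v / CELL).astype(int)   # which discrete cell of the VE
        local = v - cell_index * CELL                 # physical position in the room
        return cell_index, local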
|
Zank, Markus |
3DUI '17: "Optimized Graph Extraction ..."
Optimized Graph Extraction and Locomotion Prediction for Redirected Walking
Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment -- mostly in the form of a skeleton graph -- and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them on a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph are done offline, while only the parts directly linked to the user's behavior, such as the prediction, are done online. The prediction uses a target-based long-term approach; targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allow prediction techniques previously demonstrated only in studies to be applied to large-scale virtual environments. @InProceedings{3DUI17p120, author = {Markus Zank and Andreas Kunz}, title = {Optimized Graph Extraction and Locomotion Prediction for Redirected Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {120--129}, doi = {}, year = {2017}, } 3DUI '17: "Multi-phase Wall Warner System ..." Multi-phase Wall Warner System for Real Walking in Virtual Environments Markus Zank, Colin Yao, and Andreas Kunz (ETH Zurich, Switzerland) Real walking is a means of exploring virtual environments that are even larger than the available physical space. Avoiding collisions with the physical walls necessitates a system that warns users. This paper describes the design and implementation of a multi-phase warning system that addresses this safety requirement. The first phase is a velocity-based warning driven by time-to-impact. The second phase is distance-based and designed as an emergency warning. Combinations of acoustic and visual feedback mechanisms were tested in a study with 13 participants. The quantitative measures show that the system keeps users safe, while allowing them to freely explore the virtual environment. @InProceedings{3DUI17p223, author = {Markus Zank and Colin Yao and Andreas Kunz}, title = {Multi-phase Wall Warner System for Real Walking in Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {223--224}, doi = {}, year = {2017}, } |
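For the target-based long-term prediction in the first entry above, one plausible reading is to score each candidate target by how well it aligns with the user's current walking direction. A Python sketch; the softmax-style scoring and the sharpness constant are our assumptions, not the paper's method:

    import numpy as np

    def predict_target(pos, heading, targets, kappa=4.0):
        """Return a probability per candidate target, weighted by alignment
        between the walking direction and the direction to each target.
        kappa (assumed) sharpens the distribution; argmax gives the goal."""
        pos = np.asarray(pos, dtype=float)
        heading = np.asarray(heading, dtype=float)
        scores = []
        for t in np.asarray(targets, dtype=float):
            d = t - pos
            cosine = np.dot(d, heading) / (np.linalg.norm(d) * np.linalg.norm(heading) + 1e-12)
            scores.append(np.exp(kappa * cosine))
        scores = np.array(scores)
        return scores / scores.sum()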
|
Zheng, Liling |
3DUI '17: "A Surgical Training System ..."
A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback
Ronghai Wang, Junfeng Yao, Lin Wang, Xiaohan Liu, Hongwei Wang, and Liling Zheng (Quanzhou Normal University, China; Xiamen University, China; Sanming University, China; Quanzhou First Affiliated Hospital of Fujian Medical University, China) This poster presents a surgical training system for four medical punctures based on virtual reality and haptic feedback, including a client program developed in the Unity3D game engine and a server program developed in PHP. The system provides immersive surgery simulation for thoracentesis, lumbar puncture, bone marrow puncture, and abdominal paracentesis, which we call the four medical punctures. Trainers or teachers can release training tasks in which trainees or students learn surgical skills in a 3D visual scene. Furthermore, trainees experience a sense of immediacy when wearing the head-mounted display, supported by haptic feedback. The training records are stored in a database for analysis. @InProceedings{3DUI17p215, author = {Ronghai Wang and Junfeng Yao and Lin Wang and Xiaohan Liu and Hongwei Wang and Liling Zheng}, title = {A Surgical Training System for Four Medical Punctures Based on Virtual Reality and Haptic Feedback}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {215--216}, doi = {}, year = {2017}, } |
|
Zielasko, Daniel |
3DUI '17: "A Reliable Non-verbal Vocal ..."
A Reliable Non-verbal Vocal Input Metaphor for Clicking
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment, a suitable trigger metaphor is often needed, e.g., for interaction with objects out of physical reach or for system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and is thus limited in its potential use. Therefore, we extended the existing approach with machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was stated as a limitation of the previous work. With this extended technique, we repeated the previously conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we measured reaction times to assess the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } |
|
Zielinski, David J. |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } |
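The rendering side of Specimen Box reduces to re-anchoring virtual content to the tracked box pose each frame. A minimal Python sketch with 4x4 homogeneous matrices; the pose source and the local offset matrix are assumptions for illustration:

    import numpy as np

    def content_world_transform(box_pose, content_local):
        """box_pose: 4x4 tracker-to-world matrix of the physical box (assumed
        to come from the tracking system); content_local: 4x4 placement of the
        virtual content in the box's own frame. The product is the world
        transform used to render the content 'inside' the clear box."""
        return np.asarray(box_pose) @ np.asarray(content_local)

    # Recomputed every frame, so the virtual specimen follows the held box.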
|
Zuffo, Marcelo Knorich |
3DUI '17: "Batmen Beyond: Natural 3D ..."
Batmen Beyond: Natural 3D Manipulation with the BatWand
André Montes Rodrigues, Olavo Belloc, Eduardo Zilles Borba, Mario Nagamura, and Marcelo Knorich Zuffo (University of São Paulo, Brazil) In this work we present an interactive 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows editing 3D objects through natural interactions based on tangible interface paradigms. The set-up consists of a mobile device, an interactive wand marker, and AR markers laid on a table. The system allows users to change viewpoint and execute operations on 3D objects -- simultaneous translation and rotation, scaling, cloning, or deleting -- through unconstrained natural interactions, leveraging users' proficiency in daily object manipulation tasks and speeding up such typical 3D manipulation operations. Depth perception was significantly enhanced with dynamic shadows, allowing fast alignment and accurate positioning of objects. The prototype presented here allows successful completion of the three challenges proposed by the 2017 3DUI Contest, as validated by a preliminary informal user study with participants from the target audience and also from the general public. @InProceedings{3DUI17p258, author = {André Montes Rodrigues and Olavo Belloc and Eduardo Zilles Borba and Mario Nagamura and Marcelo Knorich Zuffo}, title = {Batmen Beyond: Natural 3D Manipulation with the BatWand}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {258--259}, doi = {}, year = {2017}, } 3DUI '17: "User Experience Evaluation ..." User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment Ana Grasielle Corrêa, Eduardo Zilles Borba, Roseli Lopes, Marcelo Knorich Zuffo, Astolfo Araujo, and Regis Kopper (University of São Paulo, Brazil; Duke University, USA) In this work we present a study of users' experience in a cyber-archeological environment. We researched how they explore a realistic 3D environment in Virtual Reality (VR) through conventional archaeometry techniques. Our objective is to evaluate the experience of an archaeologist (not a VR expert) with interactive archaeometry tools and compare the results with those of VR experts (not archaeology experts). Two hypotheses are tested: a) is it possible to simulate the virtual world as realistically as the real one?; b) if this VR model is amenable to exploration, is it possible to create 3DUI analytical tools that help archaeologists manipulate archaeometry tools? To explore these hypotheses we conducted experimental tests with ten users, and the results are promising. @InProceedings{3DUI17p217, author = {Ana Grasielle Corrêa and Eduardo Zilles Borba and Roseli Lopes and Marcelo Knorich Zuffo and Astolfo Araujo and Regis Kopper}, title = {User Experience Evaluation with Archaeometry Interactive Tools in Virtual Reality Environment}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {217--218}, doi = {}, year = {2017}, } |
257 authors