3DUI 2017 – Author Index |
Achibet, Merwan |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g., force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Ardouin, Jérôme |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof of concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators' range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
|
Argelaguet, Ferran |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
(See entry under Achibet, Merwan.) 3DUI '17: "Spatial and Rotation Invariant ..."
Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation
Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems and fully unlock the potential of natural user interfaces. @InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
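The sparse-representation classification idea summarized in this abstract can be illustrated compactly. The following sketch is hypothetical and not the authors' implementation: it sparse-codes a gesture feature vector against a dictionary of labeled training exemplars using a greedy orthogonal matching pursuit, then assigns the class whose exemplars yield the smallest reconstruction residual. The names (`omp`, `src_classify`) and the toy dictionary are invented for the example.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of D to approximate x."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

def src_classify(D, labels, x, k=2):
    """Classify x by the class whose atoms give the smallest reconstruction residual."""
    a = omp(D, x, k)
    best_class, best_res = None, np.inf
    for c in sorted(set(labels)):
        # Keep only the coefficients belonging to class c.
        mask = np.array([l == c for l in labels], dtype=float)
        res = np.linalg.norm(x - D @ (a * mask))
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

In this toy form the dictionary columns are training feature vectors; the paper's contribution of spatial and rotation invariance would come from how those features are extracted from raw motion data, which is not modeled here.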
|
Ariza N., Oscar J. |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) Selecting objects in virtual reality (VR) that are located in the periphery or outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce interaction performance. Our work explores the use of a pair of self-made wireless, wearable devices that, once attached to the user's head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } |
|
Babu, Sabarish V. |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation for precision metrology education. In a 2 x 2 experimental design, we compared a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus the absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display, due to the similarities between the interactions afforded in the virtual and the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of pre- and post-cognition questionnaires, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure the extent to which transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Balcazar, Ruben |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Barreto, Armando |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
(See entry under Balcazar, Ruben.) |
|
Bernal, Jonathan |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
(See entry under Balcazar, Ruben.) |
|
Bertrand, Jeffrey |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
(See entry under Babu, Sabarish V.) |
|
Bhargava, Ayush |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
(See entry under Babu, Sabarish V.) |
|
Billinghurst, Mark |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
|
Bönsch, Andrea |
3DUI '17: "Evaluation of Approaching-Strategies ..."
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied virtual agents provide users with assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents' behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user's calling for an agent and the agent's actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent's realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } |
|
Bowman, Doug A. |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only "room-scale" tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Bruder, Gerd |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
(See entry under Ariza N., Oscar J.) |
|
Chandrashekar, Vikram |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
(See entry under Bowman, Doug A.) |
|
Chang, Yun Suk |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
|
Chellali, Amine |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle, and ring fingers are mapped to the X, Y, and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the gestures that are most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained-manipulation technique are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
|
Clergeaud, Damien |
3DUI '17: "Pano: Design and Evaluation ..."
Pano: Design and Evaluation of a 360° Through-the-Lens Technique
Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique that reduces this time by increasing the user's "natural" field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the VE that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks to determine whether the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano. @InProceedings{3DUI17p2, author = {Damien Clergeaud and Pascal Guitton}, title = {Pano: Design and Evaluation of a 360° Through-the-Lens Technique}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {2--11}, doi = {}, year = {2017}, } |
|
Cortes, Guillaume |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
(See entry under Ardouin, Jérôme.) |
|
Ducoffe, Mélanie |
3DUI '17: "Spatial and Rotation Invariant ..."
Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation
(See entry under Argelaguet, Ferran.) |
|
Erfanian, Aida |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction.
@InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
|
Ferlay, Fabien |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is decisive to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and that of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Ferreira, Alfredo |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper, we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Ferreira, Ricardo |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper, we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Freitag, Sebastian |
3DUI '17: "Efficient Approximate Computation ..."
Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis
Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility - the information about which parts of the scene are visible from a certain location - can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } |
|
Galvan, Alain |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Girard, Adrien |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Gouis, Benoît Le |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Gramopadhye, Anand |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Gribonval, Remi |
3DUI '17: "Spatial and Rotation Invariant ..."
Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation
Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. Compared to existing gesture recognition systems, the proposed sparse-representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios, even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems and fully unlock the potential of natural user interfaces. @InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
|
Grubert, Jens |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Guitton, Pascal |
3DUI '17: "Pano: Design and Evaluation ..."
Pano: Design and Evaluation of a 360° Through-the-Lens Technique
Damien Clergeaud and Pascal Guitton (Inria, France; University of Bordeaux, France) Virtual Reality experiments have enabled immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance in the virtual task. If a user takes too long to locate and select an object, the duration of the task is increased. Moreover, both the comfort and global efficiency of users deteriorate as search and selection time increase. We have developed Pano, a technique which reduces this time by increasing the user’s “natural” field of view. More precisely, we provide a 360° panoramic virtual image which is displayed on a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the Virtual Environment (VE) that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. For the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks in order to know if the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano. @InProceedings{3DUI17p2, author = {Damien Clergeaud and Pascal Guitton}, title = {Pano: Design and Evaluation of a 360° Through-the-Lens Technique}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {2--11}, doi = {}, year = {2017}, } |
|
Hachet, Martin |
3DUI '17: "Towards a Hybrid Space Combining ..."
Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality
Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained first feedback from informal interviews. @InProceedings{3DUI17p195, author = {Joan Sol Roo and Martin Hachet}, title = {Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {195--198}, doi = {}, year = {2017}, } Video Info |
|
Hashemian, Abraham M. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Höllerer, Tobias |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
|
Hu, Yaoping |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction.
@InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
|
Jorge, Joaquim |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing complex objects to be created from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper, we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. Here, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Kajimoto, Hiroyuki |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g., force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } 3DUI '17: "Interpretation of Navigation ..."
Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which makes it necessary to develop a new active device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of navigation information modulate the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. @InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video 3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Kalkofen, Denis |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Kaufmann, Hannes |
3DUI '17: "Towards Efficient Spatial ..."
Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments
Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) Space available for any virtual reality experience is often strictly limited and abridges the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively for a dynamic, scalable, and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the properties of the layout on human spatial perception in a physically impossible spatial arrangement. Our first reported study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry and asymmetry of the path used. In addition, in the second study we explore the effect of path smoothing by substituting the right-angled corridors with smooth curves. Our studies show that the use of smooth curved corridors is more beneficial for spatial compression than the conventional right-angled approach. @InProceedings{3DUI17p12, author = {Khrystyna Vasylevska and Hannes Kaufmann}, title = {Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {12--21}, doi = {}, year = {2017}, } |
|
Kitson, Alexandra |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Kodama, Ryo |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Koge, Masahiro |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory, and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car, and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Kon, Yuki |
3DUI '17: "Interpretation of Navigation ..."
Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking
Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which makes it necessary to develop a new active device for use as part of a navigational system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of navigation information modulate the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. @InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video |
|
Kondur, Navyaram |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Kopper, Regis |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } |
|
Kruijff, Ernst |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Kuhlen, Torsten W. |
3DUI '17: "Evaluation of Approaching-Strategies ..."
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied, virtual agents provide users assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } 3DUI '17: "A Reliable Non-verbal Vocal ..." A Reliable Non-verbal Vocal Input Metaphor for Clicking Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or system control. 
The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the previously conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } 3DUI '17: "Efficient Approximate Computation ..." Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility, i.e., the information of which parts of the scene are visible from a certain location, can be used to derive various properties of a virtual environment.
For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and makes it possible to keep track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } |
|
Kunz, Andreas |
3DUI '17: "Optimized Graph Extraction ..."
Optimized Graph Extraction and Locomotion Prediction for Redirected Walking
Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment - mostly in the form of a skeleton graph representing the virtual environment - and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them with a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph is done offline while only parts directly linked to the user's behavior such as the prediction are done online. The prediction is done using a target-based long-term prediction and the targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allows prediction techniques previously only demonstrated in studies to be applied to large scale virtual environments. @InProceedings{3DUI17p120, author = {Markus Zank and Andreas Kunz}, title = {Optimized Graph Extraction and Locomotion Prediction for Redirected Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {120--129}, doi = {}, year = {2017}, } |
|
Lages, Wallace S. |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Lange, Markus |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce the performance of interaction. Our work explores the use of a pair of self-made wireless and wearable devices which once attached to the user’s head provide assistive vibrotactile cues for guidance in order to reduce the time used to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } |
|
Lank, Edward |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
|
Lécuyer, Anatole |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g., force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. These examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } 3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof of concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } 3DUI '17: "Spatial and Rotation Invariant ..." Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation Ferran Argelaguet, Mélanie Ducoffe, Anatole Lécuyer, and Remi Gribonval (Inria, France; IRISA, France; ENS, France) Advances in motion tracking technology, especially for commodity hardware, still require robust 3D gesture recognition in order to fully exploit the benefits of natural user interfaces. In this paper, we introduce a novel 3D gesture recognition algorithm based on the sparse representation of 3D human motion. The sparse representation of human motion provides a set of features that can be used to efficiently classify gestures in real-time. 
Compared to existing gesture recognition systems, the proposed sparse representation approach enables full spatial and rotation invariance and provides high tolerance to noise. Moreover, the proposed classification scheme takes into account the inter-user variability, which increases gesture classification accuracy in user-independent scenarios. We validated our approach with existing motion databases for gestural interaction and performed a user evaluation with naive subjects to show its robustness to arbitrarily defined gestures. The results showed that our classification scheme has high classification accuracy for user-independent scenarios even with users who have different handedness. We believe that sparse representation of human motion will pave the way for a new generation of 3D gesture recognition systems, fully unlocking the potential of natural user interfaces. @InProceedings{3DUI17p158, author = {Ferran Argelaguet and Mélanie Ducoffe and Anatole Lécuyer and Remi Gribonval}, title = {Spatial and Rotation Invariant 3D Gesture Recognition Based on Sparse Representation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {158--167}, doi = {}, year = {2017}, } Video |
|
Lee, Gun |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
|
Léziart, Pierre-Alexandre |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Lindeman, Robert W. |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
|
Louison, Céphise |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is critical to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Luan, Bo |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality is useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users drew on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
|
MacKenzie, I. Scott |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device, one entirely software based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare our results with those from a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
|
Madathil, Kapil Chalil |
3DUI '17: "The Effects of Presentation ..."
The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation
Jeffrey Bertrand, Ayush Bhargava, Kapil Chalil Madathil, Anand Gramopadhye, and Sabarish V. Babu (Clemson University, USA) In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 x 2 experimental design, we compared a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include more natural interaction with the simulation than with a large-screen immersive display, owing to the similarity between the interactions afforded in the virtual task and those in the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of pre- and post-test cognition questionnaires, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display on several metrics, while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance. @InProceedings{3DUI17p59, author = {Jeffrey Bertrand and Ayush Bhargava and Kapil Chalil Madathil and Anand Gramopadhye and Sabarish V. Babu}, title = {The Effects of Presentation Method and Simulation Fidelity on Psychomotor Education in a Bimanual Metrology Training Simulation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {59--68}, doi = {}, year = {2017}, } |
|
Marchal, Maud |
3DUI '17: "FlexiFingers: Multi-finger ..."
FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics
Merwan Achibet, Benoît Le Gouis, Maud Marchal, Pierre-Alexandre Léziart, Ferran Argelaguet, Adrien Girard, Anatole Lécuyer, and Hiroyuki Kajimoto (Inria, France; INSA, France; IRISA, France; ENS, France; University of Electro-Communications, Japan) 3D interaction in virtual reality often requires manipulating and feeling virtual objects with our fingers. Although existing haptic interfaces can be used for this purpose (e.g. force-feedback exoskeleton gloves), they are still bulky and expensive. In this paper, we introduce a novel multi-finger device called "FlexiFingers" that constrains each digit individually and produces elastic force-feedback. FlexiFingers leverages passive haptics in order to offer a lightweight, modular, and affordable alternative to active devices. Moreover, we combine FlexiFingers with a pseudo-haptic approach that simulates different levels of stiffness when interacting with virtual objects. We illustrate how this combination of passive haptics and pseudo-haptics can benefit multi-finger interaction through several use cases related to music learning and medical training. Those examples suggest that our approach could find applications in various domains that require an accessible and portable way of providing haptic feedback to the fingers. @InProceedings{3DUI17p103, author = {Merwan Achibet and Benoît Le Gouis and Maud Marchal and Pierre-Alexandre Léziart and Ferran Argelaguet and Adrien Girard and Anatole Lécuyer and Hiroyuki Kajimoto}, title = {FlexiFingers: Multi-finger Interaction in VR Combining Passive Haptics and Pseudo-Haptics}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {103--106}, doi = {}, year = {2017}, } |
|
Marchand, Eric |
3DUI '17: "Increasing Optical Tracking ..."
Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras
Guillaume Cortes, Eric Marchand, Jérôme Ardouin, and Anatole Lécuyer (Realyz, France; University of Rennes 1, France; Inria, France) We propose an approach to greatly increase the tracking workspace of VR applications without adding new sensors. Our approach relies on controlled cameras able to follow the tracked markers all around the VR workspace, providing 6DoF tracking data. We designed a proof of concept of this approach based on two consumer-grade cameras and a pan-tilt head. The resulting tracking workspace could be greatly increased depending on the actuators’ range of motion. The accuracy error and jitter were found to be rather limited during camera motion (0.3 cm and 0.02 cm, respectively). Therefore, whenever the final VR application does not require perfect tracking accuracy over the entire workspace, we recommend using our approach in order to enlarge the tracking workspace. @InProceedings{3DUI17p22, author = {Guillaume Cortes and Eric Marchand and Jérôme Ardouin and Anatole Lécuyer}, title = {Increasing Optical Tracking Workspace of VR Applications using Controlled Cameras}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {22--25}, doi = {}, year = {2017}, } |
|
Medeiros, Daniel |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, making it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Mendes, Daniel |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) remain elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self, making it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this matter, the simplified hands-only representation improved efficiency in CSG modelling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Merienne, Frédéric |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). However, little effort has focused on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation on MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users; whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted a better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. Task performance under the co-located setting showed some degree of combination of the performance levels observed under the individual cues. In contrast, performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations suggest that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. 
@InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
|
Mestre, Daniel R. |
3DUI '17: "Spatialized Vibrotactile Feedback ..."
Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments
Céphise Louison, Fabien Ferlay, and Daniel R. Mestre (CEA, France; Aix-Marseille University, France; CNRS, France) In virtual reality (VR), spatial awareness is a dominant research topic. It plays an essential role in the assessment of human operators’ behavior within virtual environments (VE), notably for the evaluation of the feasibility of manual maintenance tasks in cluttered industrial settings. In such contexts, it is critical to evaluate the spatial and temporal correspondence between the operator’s movement kinematics and those of his/her virtual avatar in the virtual environment. Often, in a cluttered VE, direct kinesthetic (force) feedback is limited or absent. We tested whether vibrotactile (cutaneous) feedback would increase visuo-proprioceptive consistency, spatial awareness, and thus the validity of VR studies, by augmenting the perception of the operator’s contact(s) with virtual objects. We present preliminary experimental results, obtained using a head-mounted display (HMD) during a goal-directed task in a cluttered VE. Data suggest that spatialized vibrotactile feedback contributes to visuo-proprioceptive consistency. @InProceedings{3DUI17p99, author = {Céphise Louison and Fabien Ferlay and Daniel R. Mestre}, title = {Spatialized Vibrotactile Feedback Contributes to Goal-Directed Movements in Cluttered Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {99--102}, doi = {}, year = {2017}, } Video |
|
Mohr, Peter |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Nabiyouni, Mahdi |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Nakamura, Takuto |
3DUI '17: "Interpretation of Navigation ..."
Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking
Yuki Kon, Takuto Nakamura, and Hiroyuki Kajimoto (University of Electro-Communications, Japan) The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. This phenomenon is caused by physical pressure on the skin, and the direction of the Hanger Reflex is modulated by the direction of skin deformation. A previous study examined the use of the head-, waist-, and ankle-type Hanger Reflex for walking navigation without interpretation of navigation information, and found that the waist-type Hanger Reflex had the strongest effect on walking. However, the existing waist-type Hanger Reflex device is passive, i.e., it must be operated by the user, which necessitates a new active device for use as part of a navigation system. In this paper, we developed a controlled waist-type Hanger Reflex device with four pneumatic actuators. We investigated how different interpretations of navigation information modulate the effect of our device on walking. Our interpretation conditions included “Natural”, in which users did not attempt to interpret the navigation information, and “Follow” and “Resist”, in which they actively followed or resisted the navigation information, respectively. We confirmed that our waist-type Hanger Reflex device could control the walking path and body direction, depending on the user’s interpretation of the navigational information. @InProceedings{3DUI17p107, author = {Yuki Kon and Takuto Nakamura and Hiroyuki Kajimoto}, title = {Interpretation of Navigation Information Modulates the Effect of the Waist-Type Hanger Reflex on Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {107--115}, doi = {}, year = {2017}, } Video |
|
Nankivil, Derek |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } |
|
Neha, Neha |
3DUI '17: "A Reliable Non-verbal Vocal ..."
A Reliable Non-verbal Vocal Input Metaphor for Clicking
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input (NVVI), has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found a support vector machine with Gaussian kernel performing the best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the previously conducted Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } |
|
Nuernberger, Benjamin |
3DUI '17: "Evaluating Gesture-Based Augmented ..."
Evaluating Gesture-Based Augmented Reality Annotation
Yun Suk Chang, Benjamin Nuernberger, Bo Luan, and Tobias Höllerer (University of California at Santa Barbara, USA) Drawing annotations with 3D hand gestures in augmented reality are useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified. @InProceedings{3DUI17p182, author = {Yun Suk Chang and Benjamin Nuernberger and Bo Luan and Tobias Höllerer}, title = {Evaluating Gesture-Based Augmented Reality Annotation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {182--185}, doi = {}, year = {2017}, } Video |
|
Ortega, Francisco R. |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Otmane, Samir |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the gestures most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
|
Peer, Alex |
3DUI '17: "Evaluating Perceived Distance ..."
Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays
Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes, distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and if contemporary displays induce distance misperceptions. @InProceedings{3DUI17p83, author = {Alex Peer and Kevin Ponto}, title = {Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {83--86}, doi = {}, year = {2017}, } |
|
Pfeiffer, Thies |
3DUI '17: "Attention Guiding Techniques ..."
Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems
Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As the evaluation method, simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly controlled experimental design. @InProceedings{3DUI17p186, author = {Patrick Renner and Thies Pfeiffer}, title = {Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } Video |
|
Pietroszek, Krzysztof |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
|
Piumsomboon, Thammathip |
3DUI '17: "Exploring Natural Eye-Gaze-Based ..."
Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst (University of South Australia, Australia; University of Canterbury, New Zealand) Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. @InProceedings{3DUI17p36, author = {Thammathip Piumsomboon and Gun Lee and Robert W. Lindeman and Mark Billinghurst}, title = {Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {36--39}, doi = {}, year = {2017}, } Video Info |
|
Plouzeau, Jérémy |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort focuses, however, on integrating force and vibrotactile cues – two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted a better performance than the force cue alone. This was in agreement with the role of tactile cues in sensing surface properties, herein setting a baseline for using MLE. The task performance under the co-located setting indicated a certain degree of combination of the performances under the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction.
@InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
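As background for the MLE claim in the abstract above: the standard maximum-likelihood cue-combination rule from the multisensory integration literature (a textbook formula, not one given in this paper) weights each cue's estimate by its reliability, i.e. its inverse variance. For a force-cue estimate $s_f$ and a vibrotactile estimate $s_t$:

```latex
% MLE integration of two cue estimates s_f (force) and s_t (vibrotactile)
\hat{s} = w_f s_f + w_t s_t,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_f^2 + 1/\sigma_t^2},
\qquad
\sigma_{\hat{s}}^2 = \frac{\sigma_f^2\,\sigma_t^2}{\sigma_f^2 + \sigma_t^2}
```

Under this rule the combined estimate is at least as reliable as either cue alone ($\sigma_{\hat{s}}^2 \le \min(\sigma_f^2, \sigma_t^2)$); the paper's observation that dislocated-setting performance tracked the vibrotactile cue alone is what leaves MLE integration inconclusive there.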
|
Ponto, Kevin |
3DUI '17: "Evaluating Perceived Distance ..."
Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays
Alex Peer and Kevin Ponto (University of Wisconsin-Madison, USA) Distance misperception (sometimes, distance compression) in immersive virtual environments is an active area of study, and the recent availability of consumer-grade display and tracking technologies raises new questions. This work explores the plausibility of measuring misperceptions within the small tracking volumes of consumer-grade technology, whether measures practical within this space are directly comparable, and if contemporary displays induce distance misperceptions. @InProceedings{3DUI17p83, author = {Alex Peer and Kevin Ponto}, title = {Evaluating Perceived Distance Measures in Room-Scale Spaces using Consumer-Grade Head Mounted Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {83--86}, doi = {}, year = {2017}, } |
|
Raposo, Alberto |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) are elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this regard, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Ray, Brandon |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques can then redirect users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
|
Renner, Patrick |
3DUI '17: "Attention Guiding Techniques ..."
Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems
Patrick Renner and Thies Pfeiffer (Bielefeld University, Germany) A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As the evaluation method, simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly controlled experimental design. @InProceedings{3DUI17p186, author = {Patrick Renner and Thies Pfeiffer}, title = {Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-Based Assistance Systems}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {186--194}, doi = {}, year = {2017}, } Video |
|
Ricca, Aylen |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency allowed us to identify the gestures most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
|
Riecke, Bernhard E. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Rishe, Naphtali |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Roo, Joan Sol |
3DUI '17: "Towards a Hybrid Space Combining ..."
Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality
Joan Sol Roo and Martin Hachet (Inria, France) Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience. We developed a set of interaction techniques and obtained initial feedback from informal interviews. @InProceedings{3DUI17p195, author = {Joan Sol Roo and Martin Hachet}, title = {Towards a Hybrid Space Combining Spatial Augmented Reality and Virtual Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {195--198}, doi = {}, year = {2017}, } Video Info |
|
Schmalstieg, Dieter |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Sousa, Maurício |
3DUI '17: "Mid-Air Modeling with Boolean ..."
Mid-Air Modeling with Boolean Operations in VR
Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim Jorge (INESC-ID, Portugal; University of Lisbon, Portugal; PUC-Rio, Brazil) Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations are still major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) are elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of users' hands. In this regard, the simplified hands-only representation improved efficiency in CSG modeling tasks. @InProceedings{3DUI17p154, author = {Daniel Mendes and Daniel Medeiros and Maurício Sousa and Ricardo Ferreira and Alberto Raposo and Alfredo Ferreira and Joaquim Jorge}, title = {Mid-Air Modeling with Boolean Operations in VR}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {154--157}, doi = {}, year = {2017}, } |
|
Steinicke, Frank |
3DUI '17: "Vibrotactile Assistance for ..."
Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved
Oscar J. Ariza N., Markus Lange, Frank Steinicke, and Gerd Bruder (University of Hamburg, Germany; University of Würzburg, Germany; University of Central Florida, USA) The selection of objects in virtual reality (VR) located on the periphery and outside the field of view (FOV) requires a visual search by rotations of the HMD, which can reduce the performance of interaction. Our work explores the use of a pair of self-made wireless and wearable devices which, once attached to the user’s head, provide assistive vibrotactile cues for guidance in order to reduce the time needed to turn and locate a target object. We present an experiment based on a dual-tasking method to analyze cognitive demands and performance metrics during a selection task. @InProceedings{3DUI17p95, author = {Oscar J. Ariza N. and Markus Lange and Frank Steinicke and Gerd Bruder}, title = {Vibrotactile Assistance for User Guidance Towards Selection Targets in VR and the Cognitive Resources Involved}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {95--98}, doi = {}, year = {2017}, } |
|
Stepanova, Ekaterina R. |
3DUI '17: "Comparing Leaning-Based Motion ..."
Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion
Alexandra Kitson, Abraham M. Hashemian, Ekaterina R. Stepanova, Ernst Kruijff, and Bernhard E. Riecke (Simon Fraser University, Canada; Bonn-Rhein-Sieg University of Applied Sciences, Germany) In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs. @InProceedings{3DUI17p73, author = {Alexandra Kitson and Abraham M. Hashemian and Ekaterina R. Stepanova and Ernst Kruijff and Bernhard E. Riecke}, title = {Comparing Leaning-Based Motion Cueing Interfaces for Virtual Reality Locomotion}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {73--82}, doi = {}, year = {2017}, } |
|
Taguchi, Shun |
3DUI '17: "COMS-VR: Mobile Virtual Reality ..."
COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display
Ryo Kodama, Masahiro Koge, Shun Taguchi, and Hiroyuki Kajimoto (TOYOTA Central R&D Labs, Japan; University of Electro-Communications, Japan) We propose a novel virtual reality entertainment system using a car as a motion platform. Motion platforms present a sensation of motion to the user using powerful actuators. Combined with virtual reality content, including surrounding visual, auditory and tactile displays, such systems can provide an immersive experience. However, the space and cost requirements for installation of motion platforms are large. To overcome this issue, we propose to use a car as a motion platform. We developed a prototype system composed of a head-mounted display, a one-person electric car and an automatic driving algorithm. We developed and tested immersive content in which users ride on a trolley in a virtual space. All users responded quite positively to the experience. @InProceedings{3DUI17p130, author = {Ryo Kodama and Masahiro Koge and Shun Taguchi and Hiroyuki Kajimoto}, title = {COMS-VR: Mobile Virtual Reality Entertainment System using Electric Car and Head-Mounted Display}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {130--133}, doi = {}, year = {2017}, } |
|
Tahai, Liudmila |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
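The core mapping the abstract describes, z-coordinate from forearm rotation, can be sketched as follows. This is an illustrative assumption, not the authors' code: the function name, the roll range, and the linear mapping are all hypothetical.

```python
def watchcast_z(roll_deg, roll_min=-90.0, roll_max=90.0, z_depth=1.0):
    """Linearly map a forearm roll angle (degrees, e.g. from a watch IMU)
    to a cursor depth in [0, z_depth]; angles outside the range are clamped."""
    clamped = max(roll_min, min(roll_max, roll_deg))
    t = (clamped - roll_min) / (roll_max - roll_min)  # normalize to [0, 1]
    return t * z_depth
```

The x and y cursor coordinates would come from mid-air pointing as in the study; only depth is driven by rotation.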
|
Tarng, Stanley |
3DUI '17: "Force and Vibrotactile Integration ..."
Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments
Aida Erfanian, Stanley Tarng, Yaoping Hu, Jérémy Plouzeau, and Frédéric Merienne (University of Calgary, Canada; LE2I, France) Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues revealed that the integration follows maximum likelihood estimation (MLE). Little effort, however, has focused on integrating force and vibrotactile cues, two sub-categorical cues of the haptic modality. Hence, this paper presents an investigation of MLE’s suitability for integrating these sub-categorical cues. Within a stereoscopic VE, human users performed a 3D interactive task of navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. The users had to identify defects via individual force or vibrotactile cues, and their combinations in co-located and dislocated settings. The co-located setting provided both cues on the right hand of the users, whereas the dislocated setting delivered the force and vibrotactile cues on the right hand and forearm of the users, respectively. Task performance of the users, such as completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the role of tactile cues in sensing surface properties, setting a baseline for using MLE. The task performance under the co-located setting indicated certain degrees of combining those under the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue. These observations imply that MLE is inconclusive for integrating both cues in a co-located setting for 3D user interaction. 
@InProceedings{3DUI17p87, author = {Aida Erfanian and Stanley Tarng and Yaoping Hu and Jérémy Plouzeau and Frédéric Merienne}, title = {Force and Vibrotactile Integration for 3D User Interaction within Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {87--94}, doi = {}, year = {2017}, } |
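The MLE cue-combination model the study tests is standard in the multi-sensory integration literature: each cue's estimate is weighted by its inverse variance, so the more reliable cue dominates. A minimal sketch (the function name is an assumption; the formula is the textbook one, not code from the paper):

```python
def mle_integrate(estimates, variances):
    """Combine unimodal estimates by inverse-variance weighting (MLE).
    Returns the combined estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]      # reliability of each cue
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total             # never larger than any input variance
    return combined, combined_variance
```

With equal variances the result is the plain average; with unequal variances the estimate shifts toward the more reliable cue, which is the signature behavior the study checks for.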
|
Tarre, Katherine |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Tatzgern, Markus |
3DUI '17: "Adaptive User Perspective ..."
Adaptive User Perspective Rendering for Handheld Augmented Reality
Peter Mohr, Markus Tatzgern, Jens Grubert, Dieter Schmalstieg, and Denis Kalkofen (Graz University of Technology, Austria; Salzburg University of Applied Sciences, Austria; Coburg University, Germany) Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user’s real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user’s head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user’s motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering. @InProceedings{3DUI17p176, author = {Peter Mohr and Markus Tatzgern and Jens Grubert and Dieter Schmalstieg and Denis Kalkofen}, title = {Adaptive User Perspective Rendering for Handheld Augmented Reality}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {176--181}, doi = {}, year = {2017}, } |
|
Teather, Robert J. |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device, one entirely software based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare our results with those from a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
|
Thomas, Jason-Lee |
3DUI '17: "Gesture Elicitation for 3D ..."
Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas (Florida International University, USA) With the introduction of new input devices, a series of questions have been raised in regard to making user interaction more intuitive -- in particular, preferred gestures for different tasks. Our study looks into how to find a gesture set for 3D travel using a multi-touch display and a mid-air device to improve user interaction. We conducted a user study with 30 subjects, concluding that users preferred simple gestures for multi-touch. In addition, we found that multi-touch user legacy carried over to mid-air interaction. Finally, we propose a gesture set for both types of interaction. @InProceedings{3DUI17p144, author = {Francisco R. Ortega and Alain Galvan and Katherine Tarre and Armando Barreto and Naphtali Rishe and Jonathan Bernal and Ruben Balcazar and Jason-Lee Thomas}, title = {Gesture Elicitation for 3D Travel via Multi-Touch and Mid-Air Systems for Procedurally Generated Pseudo-Universe}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {144--153}, doi = {}, year = {2017}, } |
|
Vasylevska, Khrystyna |
3DUI '17: "Towards Efficient Spatial ..."
Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments
Khrystyna Vasylevska and Hannes Kaufmann (Vienna University of Technology, Austria) The space available for a virtual reality experience is often strictly limited, restricting the virtual world to the size of a room. To extend the amount of virtual space accessible by walking within the same real workspace, methods of spatial compression have been proposed. Scene manipulation with a controlled spatial overlap has been shown to be an efficient method. However, in order to apply space compression effectively for a dynamic, scalable and robust 3D user interface, it is important to study how humans perceive different layouts with overlapping spaces. In this paper, we explore the influence of the properties of the layout on human spatial perception in a physically impossible spatial arrangement. Our first reported study focuses on the following parameters of the path within a simple self-overlapping layout: number of turns, relative door positions, sequences of counter- and clockwise turns, and symmetry or asymmetry of the path used. In the second study, we explore the effect of path smoothing by replacing the right-angled corridors with smooth curves. Our studies show that using smooth curved corridors is more beneficial for spatial compression than the conventional right-angled approach. @InProceedings{3DUI17p12, author = {Khrystyna Vasylevska and Hannes Kaufmann}, title = {Towards Efficient Spatial Compression in Self-Overlapping Virtual Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {12--21}, doi = {}, year = {2017}, } |
|
Vierjahn, Tom |
3DUI '17: "Evaluation of Approaching-Strategies ..."
Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments
Andrea Bönsch, Tom Vierjahn, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Embodied, virtual agents provide users assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability. This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated. @InProceedings{3DUI17p69, author = {Andrea Bönsch and Tom Vierjahn and Torsten W. Kuhlen}, title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {69--72}, doi = {}, year = {2017}, } |
|
Wallace, James R. |
3DUI '17: "Watchcasting: Freehand 3D ..."
Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch
Krzysztof Pietroszek, Liudmila Tahai, James R. Wallace, and Edward Lank (California State University at Monterey Bay, USA; University of Waterloo, Canada) We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping z-coordinate position to forearm rotation. By replicating a large display 3D selection study, we show that Watchcasting provides comparable performance to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction. @InProceedings{3DUI17p172, author = {Krzysztof Pietroszek and Liudmila Tahai and James R. Wallace and Edward Lank}, title = {Watchcasting: Freehand 3D Interaction with Off-the-Shelf Smartwatch}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {172--175}, doi = {}, year = {2017}, } Video |
|
Weyers, Benjamin |
3DUI '17: "A Reliable Non-verbal Vocal ..."
A Reliable Non-verbal Vocal Input Metaphor for Clicking
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input, has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } 3DUI '17: "Efficient Approximate Computation ..." Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis Sebastian Freitag, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) Scene visibility, i.e., the information about which parts of the scene are visible from a certain location, can be used to derive various properties of a virtual environment. For example, it enables the computation of viewpoint quality to determine the informativeness of a viewpoint, helps in constructing virtual tours, and allows keeping track of the objects a user may already have seen. However, computing visibility at runtime may be too computationally expensive for many applications, while sampling the entire scene beforehand introduces a costly precomputation step and may include many samples not needed later on. Therefore, in this paper, we propose a novel approach to precompute visibility information based on navigation meshes, a polygonal representation of a scene’s navigable areas. We show that with only limited precomputation, high accuracy can be achieved in these areas. Furthermore, we demonstrate the usefulness of the approach by means of several applications, including viewpoint quality computation, landmark and room detection, and exploration assistance. In addition, we present a travel interface based on common visibility that we found to result in less cybersickness in a user study. @InProceedings{3DUI17p134, author = {Sebastian Freitag and Benjamin Weyers and Torsten W. Kuhlen}, title = {Efficient Approximate Computation of Scene Visibility Based on Navigation Meshes and Applications for Navigation and Scene Analysis}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {134--143}, doi = {}, year = {2017}, } |
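The precomputation idea in the scene-visibility abstract, sampling navigable locations offline and storing pairwise visibility so runtime queries become table lookups, can be illustrated with a minimal 2D sketch. Everything here is an assumption for illustration (flat point samples, walls as line segments, a boolean matrix); the paper's actual method works on navigation meshes in 3D scenes.

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (touching endpoints not counted)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def precompute_visibility(samples, walls):
    """Offline step: boolean matrix where visible[i][j] iff no wall blocks the ray i->j."""
    n = len(samples)
    visible = [[True] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            blocked = any(segments_intersect(samples[i], samples[j], a, b)
                          for a, b in walls)
            visible[i][j] = visible[j][i] = not blocked
    return visible
```

At runtime, a visibility query between two sampled locations is then a single array lookup instead of a ray cast, which is the performance trade-off the abstract describes.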
|
Wu, Siju |
3DUI '17: "Classic3D and Single3D: Two ..."
Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs
Siju Wu, Aylen Ricca, Amine Chellali, and Samir Otmane (University of Evry, France) Standard 3D widgets are used for object manipulation in desktop CAD applications but are less suited for use on touchscreens. We propose two 3D constrained manipulation techniques for Tablet PCs. Using finger identification, the dominant hand's index, middle and ring fingers are mapped to the X, Y and Z axes. Users can then trigger different manipulation tasks using specific chording gestures. A user study assessing usability and efficiency identified the gestures that are most suitable for each manipulation task. Some design recommendations for an efficient 3D constrained manipulation technique are presented. @InProceedings{3DUI17p168, author = {Siju Wu and Aylen Ricca and Amine Chellali and Samir Otmane}, title = {Classic3D and Single3D: Two Unimanual Techniques for Constrained 3D Manipulations on Tablet PCs}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {168--171}, doi = {}, year = {2017}, } |
|
Young, Thomas S. |
3DUI '17: "An Arm-Mounted Inertial Controller ..."
An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation
Thomas S. Young, Robert J. Teather, and I. Scott MacKenzie (York University, Canada; Carleton University, Canada) We designed a low-cost arm-mounted wearable 3D input device that uses inertial measurement units as an alternative to existing tracking systems requiring fixed frames of reference. The device employs inertial sensors to derive a 3D cursor position through natural arm movement. We also explore three methods of indicating selection with the device, one entirely software based (dwell), one using a twist gesture, and one relying on buttons. To address the paucity of research reporting human performance metrics across comparable interfaces, we quantify the performance of this new device through a point selection experiment and compare our results with those from a similar study. @InProceedings{3DUI17p26, author = {Thomas S. Young and Robert J. Teather and I. Scott MacKenzie}, title = {An Arm-Mounted Inertial Controller for 6DOF Input: Design and Evaluation}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {26--35}, doi = {}, year = {2017}, } |
|
Yu, Run |
3DUI '17: "Bookshelf and Bird: Enabling ..."
Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection
Run Yu, Wallace S. Lages, Mahdi Nabiyouni, Brandon Ray, Navyaram Kondur, Vikram Chandrashekar, and Doug A. Bowman (Virginia Tech, USA) We present two novel redirection techniques to enable real walking in large virtual environments (VEs) using only “room-scale” tracked spaces. The techniques, called Bookshelf and Bird, provide narrative-consistent redirection to keep the user inside the physical space, and require the user to walk to explore the VE. The underlying concept, called cell-based redirection, is to divide the virtual world into discrete cells that have the same size as the physical tracking space. The techniques then can redirect the users without altering the relationship between the user, the physical space, and the virtual cell. In this way, users can always access the entire cell using real walking. @InProceedings{3DUI17p116, author = {Run Yu and Wallace S. Lages and Mahdi Nabiyouni and Brandon Ray and Navyaram Kondur and Vikram Chandrashekar and Doug A. Bowman}, title = {Bookshelf and Bird: Enabling Real Walking in Large VR Spaces through Cell-Based Redirection}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {116--119}, doi = {}, year = {2017}, } Video |
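The cell-based redirection concept above, virtual cells the size of the physical tracking space so the whole current cell is reachable by real walking, reduces to a simple coordinate mapping. A hypothetical 2D sketch (function and parameter names are assumptions, not the authors' implementation):

```python
def virtual_position(physical_pos, cell_index, cell_size):
    """Map a physical tracking-space position (meters) into the virtual world:
    the origin of the user's current cell plus their physical offset."""
    return (cell_index[0] * cell_size[0] + physical_pos[0],
            cell_index[1] * cell_size[1] + physical_pos[1])
```

A redirection technique such as Bookshelf or Bird then changes `cell_index` at a narrative-consistent moment, without altering the user's offset within the physical space.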
|
Zank, Markus |
3DUI '17: "Optimized Graph Extraction ..."
Optimized Graph Extraction and Locomotion Prediction for Redirected Walking
Markus Zank and Andreas Kunz (ETH Zurich, Switzerland) Redirected walking with advanced planners such as MPCRed or FORCE requires both knowledge about the virtual environment, mostly in the form of a skeleton graph representing it, and a robust prediction of the user's actions. This paper presents methods for both parts and evaluates them with a number of test cases. Since frame rate is crucial for a virtual reality application, the computationally heavy extraction and preprocessing of the skeleton graph is done offline, while only parts directly linked to the user's behavior, such as the prediction, are done online. The prediction uses a target-based long-term approach; the targets are determined automatically and combined with targets predefined by the designer of the virtual environment. The methods presented here provide a graph that is well suited for planning redirection and allow prediction techniques previously only demonstrated in studies to be applied to large-scale virtual environments. @InProceedings{3DUI17p120, author = {Markus Zank and Andreas Kunz}, title = {Optimized Graph Extraction and Locomotion Prediction for Redirected Walking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {120--129}, doi = {}, year = {2017}, } |
|
Zielasko, Daniel |
3DUI '17: "A Reliable Non-verbal Vocal ..."
A Reliable Non-verbal Vocal Input Metaphor for Clicking
Daniel Zielasko, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen (RWTH Aachen University, Germany; JARA-HPC, Germany) While experiencing an immersive virtual environment a suitable trigger metaphor is often needed, e.g. for the interaction with objects out of physical reach or system control. The BlowClick approach [Zielasko2015], which is based on non-verbal vocal input, has been proven to be a valuable trigger technique in previous work. However, its original detection method is vulnerable to false positives and, thus, is limited in its potential use. Therefore, we extended the existing approach by adding machine learning methods to reliably classify blowing events. We found that a support vector machine with a Gaussian kernel performed best, with at least the same latency and more precision than before. Furthermore, we added acoustic feedback to the NVVI trigger, which increases the user’s confidence and whose absence was also stated as a limitation of the previous work. With this extended technique, we repeated the Fitts’ law experiment with 33 participants and could confirm that it is possible to use NVVI as a reliable trigger as part of a hands-free point-and-click interface. Furthermore, we tested reaction times to measure the trigger’s performance without the influence of pointing and calculated device throughputs to ensure comparability. @InProceedings{3DUI17p40, author = {Daniel Zielasko and Neha Neha and Benjamin Weyers and Torsten W. Kuhlen}, title = {A Reliable Non-verbal Vocal Input Metaphor for Clicking}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {40--49}, doi = {}, year = {2017}, } |
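The device throughput this abstract mentions is the standard Fitts' law measure used to compare input techniques. A minimal sketch of the usual computation, using the Shannon formulation of the index of difficulty; the function name is an assumption and this is the textbook formula, not code from the paper:

```python
import math

def fitts_throughput(distance, effective_width, movement_time):
    """Throughput in bits/s: the Shannon index of difficulty
    ID = log2(D/We + 1) divided by the mean movement time in seconds."""
    index_of_difficulty = math.log2(distance / effective_width + 1.0)
    return index_of_difficulty / movement_time
```

For example, a 7 cm movement to a 1 cm effective target completed in one second has ID = log2(8) = 3 bits, giving a throughput of 3 bits/s.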
|
Zielinski, David J. |
3DUI '17: "Specimen Box: A Tangible Interaction ..."
Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays
David J. Zielinski, Derek Nankivil, and Regis Kopper (Duke University, USA) Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. @InProceedings{3DUI17p50, author = {David J. Zielinski and Derek Nankivil and Regis Kopper}, title = {Specimen Box: A Tangible Interaction Technique for World-Fixed Virtual Reality Displays}, booktitle = {Proc.\ 3DUI}, publisher = {IEEE}, pages = {50--58}, doi = {}, year = {2017}, } |
108 authors