Mixed Reality Light Fields:


Overview. (a) Scene capture: The local user shares the environment by capturing a local light field. The sampling process is visually guided by a 3D sphere that surrounds the object of interest. The sphere's color encodes the current sampling density per subtended angle, allowing the local user to identify those regions of the light field that require more samples. The target sampling density is set automatically by the system but may be adjusted by the remote user on demand. (b) Scene exploration: The remote user explores the light field using image-based rendering techniques. (c, d) Scene annotation: Once a suitable viewpoint has been reached, the remote user places a plane in 3D and annotates it with drawings sketched on the touchscreen of the mobile device. (e) AR visualization: The visual instructions are sent to the local user and presented within the 3D coordinate system that was used for capturing the light field. The visual instructions therefore naturally appear as 3D-registered augmentations in the local user's environment.
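The color-coded guidance sphere in step (a) can be thought of as a binning of captured view directions into angular patches, with each patch colored by how close its sample count is to the target density. The following Python sketch illustrates one possible realization of this idea; the function names, the equirectangular patch layout, and the red-to-green color ramp are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sphere_bin_density(cam_dirs, n_theta=18, n_phi=36):
    """Count view-direction samples per angular patch of a sphere.

    cam_dirs: (N, 3) array of unit vectors from the object center
    toward the capture positions. Patches are an equirectangular
    grid over polar angle theta and azimuth phi (an assumption; any
    roughly uniform sphere tessellation would do).
    """
    theta = np.arccos(np.clip(cam_dirs[:, 2], -1.0, 1.0))       # polar angle in [0, pi]
    phi = np.arctan2(cam_dirs[:, 1], cam_dirs[:, 0]) % (2 * np.pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    counts = np.zeros((n_theta, n_phi), dtype=int)
    np.add.at(counts, (ti, pj), 1)                              # accumulate samples per patch
    return counts

def density_to_color(counts, target):
    """Map per-patch counts to RGB: red (unsampled) to green (target met)."""
    frac = np.clip(counts / float(target), 0.0, 1.0)
    return np.stack([1.0 - frac, frac, np.zeros_like(frac)], axis=-1)
```

In a live system, `counts` would be updated as the camera moves and the colors re-applied to the sphere mesh each frame, so under-sampled regions stay visibly red until the local user covers them.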

Abstract: Remote assistance is an important use case for mixed reality. With the rise of handheld and wearable devices, remote assistance has become practical in the wild. However, spontaneous provisioning of remote assistance requires an easy, fast, and robust approach for capturing and sharing unprepared environments. In this work, we make a case for utilizing interactive light fields for remote assistance. We demonstrate the advantages of representing objects as light fields over conventional geometric reconstruction. Moreover, we introduce an interaction method for quickly annotating light fields in 3D space without requiring surface geometry to anchor annotations. We present results from a user study demonstrating the effectiveness of our interaction techniques, and we provide feedback on the usability of our overall system.

Acknowledgements: This work was enabled by the Competence Center VRVis, the FFG (grant 859208 - Matahari) and the Austrian Science Fund grant P30694.
