Preaching Voxels:


Overview: The principle of voxel-based mixed reality, illustrated with our prototype implementation, which uses a voxel grid resolution of 8 mm (5/16 in). Real, recorded, and virtual objects and people are experienced coherently. (Consent was obtained from the individuals depicted.)

Abstract: For mixed reality applications, where reality and virtual reality are spatially merged and aligned interactively and in real time, we propose a pure voxel representation as the rendering and interaction method of choice. We show that voxels (gap-less volumetric pixels in a regular spatial grid) allow for an actual user experience of a mixed reality environment, for a seamless blending of the virtual and the real, and for a sense of presence and co-presence in such an environment. If everything is based on voxels, even at a coarse resolution, visual coherence is achieved inherently. We argue the case for voxels by (1) conceptually defining and illustrating voxel-based mixed reality, (2) describing its computational feasibility, (3) presenting a fully functioning, low-resolution prototype, (4) empirically exploring the user experience, and finally (5) discussing current work and future directions for voxel-based mixed reality. This work is not the first to utilize voxels for mixed reality, but it is the first to use voxels for all internal, external, and user-interface representations as an effective way of experiencing and interacting with mixed reality environments.
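To make the voxel idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how world-space points, whether sensed from reality or generated from virtual models, can be quantized into a shared regular voxel grid; the 8 mm cell size matches the prototype resolution stated above, while the function names and parameters are hypothetical.

```python
import numpy as np

VOXEL_SIZE_M = 0.008  # assumed grid resolution: 8 mm per voxel edge

def points_to_voxels(points_m: np.ndarray, origin_m=np.zeros(3)) -> np.ndarray:
    """Quantize an (N, 3) array of world-space points (metres) into
    integer voxel indices on a regular grid anchored at origin_m."""
    return np.floor((points_m - origin_m) / VOXEL_SIZE_M).astype(np.int64)

def voxel_centers(indices: np.ndarray, origin_m=np.zeros(3)) -> np.ndarray:
    """Map voxel indices back to the world-space centres of their cells,
    i.e. the positions at which gap-less cubes would be rendered."""
    return origin_m + (indices + 0.5) * VOXEL_SIZE_M

# Example: a sensed (real) point and a virtual point that fall into the
# same cell receive the same index, so both are rendered as the same
# kind of cube in the same grid, which is why visual coherence between
# real and virtual content follows inherently from the representation.
sensed = np.array([[0.101, 0.252, 1.003]])   # e.g. from a depth camera
virtual = np.array([[0.099, 0.250, 1.001]])  # e.g. from a CAD model
print(points_to_voxels(sensed), points_to_voxels(virtual))
```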

Acknowledgements: We would like to thank Katrin Meng and Arne Reepen for their contributions to earlier versions of the system, our participants and the HCI Lab's people for their time and effort, the makers of the CAD models used in our studies, and Kevin from Weta for encouraging discussions about voxels.

Funding: Parts of this project were funded by University of Otago Research Grants 2016 and 2018 and by New Zealand's National Science Challenge (SfTI) funding.

Video: