Lab space

Our lab space is the environment where we spend most of our time. We currently use a large open space as our main lab. It offers just enough room for our workplaces, for individual and group meetings, and for dedicated areas to develop, test, and evaluate our prototypes. We also have a dedicated space for soldering and other work that requires tools. We usually give our demos in the lab, so if you are interested in what we are doing or in the equipment we are using (see below), please get in contact with us or drop by.

Augmented Reality displays

A large part of our research is on Augmented Reality. Examples are our work on identifying new application areas for Augmented Reality (e.g., stroke rehabilitation, accessing everyday information using Pervasive Augmented Reality, and industrial Augmented Reality). For this we use, in particular, different AR head-mounted displays such as the Microsoft HoloLens (left) and more affordable options such as Epson's Moverio series or the Meta Glasses (middle). We also develop and use our own novel optical see-through head-mounted displays (not depicted here) and build our own versions of video see-through head-mounted displays (right).

Virtual Reality displays

We also have a large number of head-worn Virtual Reality displays that are heavily used in different research projects. We currently work mostly with different versions of the Oculus Rift (currently three Oculus Rift DK2s and two consumer-version Oculus Rifts, the latter seen on the left) and the HTC Vive (middle picture; three installations currently running), but we also use mobile VR displays such as Samsung's Gear VR (right) and Google's Daydream devices for our research.

Mobile devices

Several of our research projects are in the domain of mobile Human-Computer Interaction and Mobile Computing. Consequently, we usually have a variety of recent mobile devices at hand. We develop for both main platforms (Android and iOS), but the majority of our devices are higher-end Android devices, ranging from tablets and phablets (there is a good chance we are the only group in Australia/Oceania with a Lenovo Phab 2 Pro, which integrates a Kinect-like depth sensor; see middle picture) to mobile phones (Samsung S series, Google Pixel, Apple iPhones).

Cameras

A large part of our research requires capturing the environment to track the device position, enable 3D interaction, or model the environment. For this purpose we have a range of cameras that get the job done. In addition to industrial-grade cameras (different Point Grey models) and panoramic cameras (e.g., Ricoh Theta S), we heavily use different kinds of depth cameras in our research. The models we use depend on the use case and range from depth cameras for capturing larger areas (e.g., Microsoft Kinect 1 and Kinect 2) to mobile depth cameras (e.g., the Lenovo Phab 2 Pro integrating Google's Tango platform) and depth cameras for hand tracking, such as Intel or Creative cameras or Leap Motion's hand/finger tracker.