Oculus Research (WIP)


The overall project investigates how far we need to push binaural audio rendering quality in VR for players to react in a life-like way across various types of games. The first two experiments will focus on the benefits of binaural rendering for players' accuracy and efficiency: the first assessing the impact of HRTF individualization, the second that of training on a given set of HRTFs.
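As a rough illustration of the kind of accuracy measure involved (our assumption here, not the study's defined metric), localization accuracy can be scored as the angular error between the true target direction and the direction the player reports:

    # Hedged sketch: one plausible accuracy score for such an experiment is the
    # angular error between the true target direction and the reported direction.
    # This is an assumption for illustration, not the study's actual metric.
    import numpy as np

    def angular_error_deg(target_dir, response_dir):
        """Great-circle angle (in degrees) between two 3D direction vectors."""
        t = np.asarray(target_dir, dtype=float)
        r = np.asarray(response_dir, dtype=float)
        t /= np.linalg.norm(t)
        r /= np.linalg.norm(r)
        return np.degrees(np.arccos(np.clip(np.dot(t, r), -1.0, 1.0)))

    # Target straight ahead, response 30 degrees to the right -> ~30.0
    print(angular_error_deg([0.0, 0.0, -1.0],
                            [np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))]))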

The headset just arrived.. :)

CAD design of the VR Room

VR Room install: check

HRTF individualization experiment: let's play

This is short footage of a subject running the HRTF individualization experiment. The point here was to assess the impact of individualized HRTFs in a VR game whose whole design, needless to say, was built around target localization. HRTF individualization can roughly be seen as an adaptation of the audio rendering to the player's “listening profile”. With individualized rendering comes, for example, an improved capacity to localize audio sources in the VR scene. The drawback is that establishing a player's profile is, at the moment, a pain (for the player at least). The goal of the experiment was to check whether this pain is worth it, and if so, for which “level” of gameplay (there is no real need for fast, accurate localization when everything moves so slowly that you can shoot randomly and still get away with it).
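To make the idea concrete, here is a minimal sketch (assumed for illustration, not the project's actual audio pipeline) of binaural rendering with an HRIR pair: the mono game sound is convolved with the left and right head-related impulse responses for the source direction, and individualization simply means using the listener's own measured pair instead of a generic one.

    # Minimal sketch, not the project's actual pipeline: binaural rendering of a
    # mono source by convolving it with a left/right HRIR pair for one direction.
    # "Individualization" then means loading the pair measured for this listener
    # instead of a generic database entry (both stubbed out below).
    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48000                                    # sample rate (Hz), assumed
    mono = np.random.randn(fs)                    # 1 s of noise standing in for a game sound

    # Placeholder HRIRs; in practice these would come from a measured (individual)
    # or generic HRTF set for the target's direction.
    hrir_left = np.zeros(256);  hrir_left[0] = 1.0      # stub: direct path to left ear
    hrir_right = np.zeros(256); hrir_right[4] = 0.8     # stub: small delay + attenuation

    left = fftconvolve(mono, hrir_left)           # signal reaching the left ear
    right = fftconvolve(mono, hrir_right)         # signal reaching the right ear
    binaural = np.stack([left, right], axis=-1)   # 2-channel output for headphones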

HRTF training experiment: design proposal