On June 16th and 17th, the SONICOM consortium held its first face-to-face project meeting at the National and Kapodistrian University of Athens. The event involved two days of intensive discussions about immersive audio experiences from both sensory and psychological points of view. The main topic of the meeting was the estimation of Head-Related Transfer Functions (HRTFs), i.e., the particular function that maps a sound reaching our ears into the percept we actually hear. Here are the most important lessons I learned:
- The HRTF is as individual as a fingerprint: it depends on the anatomy of our ears, head, and torso, and no two people have the same HRTF;
- There are two main ways to obtain the HRTF of an individual: one is to model the way perception works (through psychophysiological means), while the other is to compute the perception from measurable physical characteristics (through data-driven approaches, including machine learning);
- Computing an individual's HRTF greatly improves immersive audio experiences (an approach called “personalisation”);
- There is an established community working on HRTF computation, but most of the work is based on laboratory experiments, and it is unclear if and how the results generalise to the “real world”.
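For readers curious about what “using” an HRTF looks like in practice, here is a minimal sketch of binaural rendering: a mono signal is convolved with the left- and right-ear head-related impulse responses (HRIRs, the time-domain form of the HRTF) for one source direction. The HRIRs below are random placeholders, not measured or personalised filters; real ones would come from an acoustic measurement or a personalisation model like those discussed at the meeting.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialise a mono signal for one source direction by convolving
    it with the left- and right-ear HRIRs. Returns a (samples, 2) array."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example: 1 s of noise and made-up 256-tap HRIRs (placeholders
# standing in for measured or personalised filters).
fs = 48000
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs)
hrir_l = rng.standard_normal(256) * np.hanning(256)
hrir_r = rng.standard_normal(256) * np.hanning(256)

stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)  # (48255, 2): full convolution adds 255 samples
```

The point of personalisation is precisely that swapping in one listener's HRIRs for another's changes the spatial percept, even though the rendering code stays identical.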
My role in this setting is to study whether the perceived artificial distance between speakers and listeners affects the perception of the speakers’ personality. This would be one step towards connecting the studies on HRTFs to possible real-world outcomes. We are still at the beginning of the project, and there are a few more exciting years of learning and discussion ahead.