Q: What is the future of personalization as it relates to interactive audio applications and technologies?
Head-worn computers, smartphones that are essentially powerful PCs we carry in our pockets at all times, fitness trackers: the list goes on, and computing keeps getting more "personal." Customization is the next step, both in how we grow closer to these new, more personal computing devices and in what integrates them more deeply into our lives. We assign special ringtones to important contacts, adjust the inter-pupillary distance on VR/AR headsets, and enter personal details such as weight, height, and birth date into health trackers. Digital AI assistants like Siri and Cortana interact with us conversationally and learn our preferences. Active earbuds like the Bragi Dash and Here let you personalize the sound of the world around you, to the point where we can almost dial in our own personal "mix" for a live musical performance.
Sample questions or discussion topics:
- How does personalization relate to our ability to experience things like spatial audio?
- How does personalization improve listening in general?
- Why do we settle for a spatial audio effect based on an HRTF profile of some guy at Cambridge?
- Will your HRTF profile someday be a standard personal measurement, just like your height and weight?
- Is there a way we can personalize user interface audio to make it more informative?
- Does the mass market care about personalization? If not, how do we get them interested?
- How can machine learning be applied to continuously refine or tune personalization over time for a better listening experience?
- Does inevitable hearing loss present an opportunity to adjust or tune audio systems over time to accommodate the individual hearing needs of the user?
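On that last topic, the idea of tuning a system to an individual's hearing can be sketched very simply: take a user's audiogram (hearing threshold shift in dB at standard frequencies) and derive per-band playback gains. The function names and the use of the audiological "half-gain" rule of thumb here are illustrative assumptions, not any product's actual algorithm:

```python
# Illustrative sketch: map a user's audiogram to per-band makeup gains
# for a playback EQ. Assumes the classic "half-gain" rule of thumb
# (boost each band by half the threshold shift), capped to avoid
# uncomfortable loudness. Names and values are hypothetical.

STANDARD_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def compensation_gains(audiogram_db, max_gain_db=25.0):
    """Return {frequency: gain_dB} for one listener's audiogram."""
    gains = {}
    for freq, loss in zip(STANDARD_FREQS_HZ, audiogram_db):
        # Half-gain rule: compensate with 50% of the measured loss,
        # never cutting (negative loss) and never exceeding the cap.
        gains[freq] = min(max(loss, 0.0) * 0.5, max_gain_db)
    return gains

# Example: mild high-frequency loss, typical of age-related decline.
audiogram = [5, 10, 15, 30, 45, 60]  # dB HL per band
print(compensation_gains(audiogram))
```

A system like this could be refined over time, which is where the machine-learning question above comes in: repeated listening tests or in-situ preference feedback could adjust these gains continuously rather than relying on a one-time audiogram.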