Adding audio properties to glTF

Posted in 2016 Workgroup Topic Proposals

Let’s get together and make a formal recommendation for adding audio properties to glTF, the open asset delivery format for digital 3D objects. Currently the format has no notion of audio.

Possible recommendations could include (a rough extension sketch follows the list):

  • Acoustic properties: absorption, reflection
  • Density
  • Diffusion relating to material types
  • Sound “textures”
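
As a concrete starting point for discussion, here is a minimal sketch of how per-material acoustic properties might look as a glTF extension. The extension name “EXT_audio_material” and all of its fields are invented for illustration; nothing like this exists in the spec today. Since glTF is JSON-based, the sketch is written as TypeScript interfaces plus an example material:

    // Hypothetical glTF extension: per-material acoustic properties.
    // "EXT_audio_material" and its fields are placeholders for discussion,
    // not part of any published glTF specification.
    interface AudioMaterialExtension {
      absorption: number[];   // absorption coefficients per frequency band (0..1)
      reflection: number;     // broadband reflection coefficient (0..1)
      density: number;        // material density in kg/m^3
      diffusion: number;      // scattering/diffusion coefficient (0..1)
      soundTexture?: number;  // index into a hypothetical list of looping "sound textures"
    }

    const brickAcoustics: AudioMaterialExtension = {
      absorption: [0.02, 0.03, 0.04, 0.05], // e.g. at 250 Hz, 500 Hz, 1 kHz, 2 kHz
      reflection: 0.95,
      density: 1800,
      diffusion: 0.4,
    };

    // How it might appear on a glTF material object:
    const material = {
      name: "brick_wall",
      extensions: { EXT_audio_material: brickAcoustics },
    };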

The format as described by the website:

The GL Transmission Format (glTF) is a runtime asset delivery format for GL APIs: WebGL, OpenGL ES, and OpenGL. glTF bridges the gap between 3D content creation tools and modern GL applications by providing an efficient, extensible, interoperable format for the transmission and loading of 3D content.

https://github.com/KhronosGroup/glTF

 

Also…there is a session at the W3C workshop where audio will be discussed. If we can formalize our recommendation quickly, it can be presented and submitted there:

https://www.w3.org/2016/06/vr-workshop/schedule.html

Personalization

Posted in 2016 Workgroup Topic Proposals

Q: What is the future of personalization as it relates to interactive audio applications and technologies?

Head-worn computers, smartphones that are essentially powerful PCs we carry in our pockets at all times, fitness trackers…the list goes on: computing keeps getting more “personal”. Customization is the next step: it is how we get closer to these new, more personal computing devices, and it is what integrates them more deeply into our lives. We assign special ringtones to important contacts, adjust our inter-pupillary distance for VR/AR headsets, and enter personal details such as weight, height, and birth date into health trackers. Digital AI assistants like Siri and Cortana interface with us conversationally and learn our preferences. Active earbuds like the Bragi Dash and Here allow us to personalize the sound of the world around us, to the point where we can almost dial in our own personal “mix” for a live musical performance.

Sample questions or discussion topics:

  • How does personalization relate to our ability to experience things like spatial audio?
  • How does personalization improve listening in general?
  • Why do we settle for a spatial audio effect based on an HRTF profile of some guy at Cambridge? (See the sketch after this list.)
  • Will your HRTF profile be just like your height and weight someday?
  • Is there a way we can personalize user interface audio to make it more informative?
  • Does the mass market care about personalization? If not, how do we get people interested?
  • How can machine learning be applied to continuously refine or tune personalization over time for a better listening experience?
  • Does inevitable hearing loss present an opportunity to adjust or tune audio systems over time to accommodate the individual hearing needs of the user?
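
To make the HRTF questions above concrete, here is a minimal sketch, assuming the Web Audio API, of routing a source through a listener-specific binaural impulse response instead of the browser’s generic HRTF panner. The file path and the idea of a single stereo per-user HRIR are invented for illustration; a real renderer would need measurements covering many source directions:

    // Sketch: static binaural render using a per-user impulse response.
    // "/profiles/hrir-user-1234.wav" is a hypothetical measurement file.
    async function playWithPersonalHRIR(ctx: AudioContext, source: AudioNode): Promise<void> {
      // Fetch and decode the user's measured stereo HRIR (left/right ear).
      const response = await fetch("/profiles/hrir-user-1234.wav");
      const hrir = await ctx.decodeAudioData(await response.arrayBuffer());

      // Convolve the source with the personal HRIR and send it to the output.
      const convolver = ctx.createConvolver();
      convolver.buffer = hrir;
      source.connect(convolver);
      convolver.connect(ctx.destination);
    }

Today a PannerNode’s “HRTF” panning model offers no hook for supplying the listener’s own measurements, which is exactly the gap these questions point at.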

Some references:
https://www.geteven.co/
https://hereplus.me/