2016 Workgroup Topic Proposals

Adding audio properties to glTF

Let’s get together and make a formal recommendation for adding audio properties to the glTF open asset delivery format for digital 3D objects. Currently the format has no notion of audio at all.

Possible recommendations could include (a rough sketch of how these might look in the format follows below):

  • Acoustic properties: absorption, reflection
  • Density
  • Diffusion relating to material types
  • Sound “textures”

The format as described on its website:

The GL Transmission Format (glTF) is a runtime asset delivery format for GL APIs: WebGL, OpenGL ES, and OpenGL. glTF bridges the gap between 3D content creation tools and modern GL applications by providing an efficient, extensible, interoperable format for the transmission and loading of 3D content.

https://github.com/KhronosGroup/glTF
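
To seed the discussion, here is a minimal sketch of how such properties might be attached to a material, written as a Python dict that serializes to glTF-style JSON. The extension name ("EXT_audio_material") and all property names are hypothetical, invented purely for illustration; nothing like this exists in the glTF specification today.

import json

# Hypothetical acoustic-material extension on a glTF material.
# The extension name and property keys are invented for discussion;
# they are not part of the glTF specification.
acoustic_material = {
    "materials": [
        {
            "name": "brick_wall",
            "extensions": {
                "EXT_audio_material": {
                    "absorption": 0.04,      # fraction of incident energy absorbed
                    "reflection": 0.90,      # fraction reflected specularly
                    "diffusion": 0.30,       # fraction scattered diffusely
                    "density": 1800.0,       # kg/m^3, for transmission/occlusion
                    "soundTexture": "bricks_scrape.wav"  # sound "texture" for contact/sliding
                }
            }
        }
    ]
}

print(json.dumps(acoustic_material, indent=2))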

 

Also…there is a session at the W3C workshop where audio will be discussed. If we can formalize our recommendation quickly, it can be presented and submitted there:

https://www.w3.org/2016/06/vr-workshop/schedule.html

Augmented Audio Reality: Another “Google Glass” or “the Future of Listening”?

“Augmented Audio Reality” (AAR) has become another buzzword with the advent of AirPods, apps like RjDj Here, and innovations from audio companies like Doppler Labs and Harman, as well as the popularity of its big brother, “Augmented Reality,” in applications like Pokémon Go. Will consumers embrace AAR, or will it be another personal intrusion only geeks will embrace?

I suspect it will be a mixed bag, with broad adoption of AAR applications that enhance the user experience and resistance to applications that intrude on the user’s brain. But considering that AAR is a close cousin of Interactive Audio, who is better qualified than the BarBQ (y’all) to sort the wheat from the chaff?

I pose these questions to the BarBQ Brain:

  • What AAR applications will be embraced by consumers and why?
  • What audio technology is needed to make advanced AAR compelling?
  • Will all AAR solutions be proprietary or is there a need for any standards?
  • What will AAR applications look like in 2021?

I’m sure others can add even more meaningful questions to this list.

Scott

Why do my YouTube (next: livestream) videos still sound like sh*t?

Anyone and everyone – literally, anyone and everyone – is recording videos on their phones and uploading them to YouTube to share with friends and family. While picture quality steadily improves with each device iteration, audio quality consistently leaves a lot to be desired.

We have dialogue-disrupting ambiences!
We have diaphragm-distorting wind!
We have directionality-driven dynamics fluctuations!

noise noise noise!

With recent trends toward livestreaming, this audio quality problem has now evolved from a mere “fix it in post” problem into a no-holds-barred, guns-a-blazing, real-time audio challenge for us to solve!

It’s a physics problem.
No, it’s a transducer problem.
No, it’s a DSP problem.
No, it’s a latency problem.
No, it’s a UI/UX problem.
No, it’s a marketing/education problem.
No, it’s a product problem.

It’s all of the above!

Only the Big Brain can solve it.
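
To make the DSP slice of the problem concrete, here is a minimal, illustrative sketch of one small piece: a block-based high-pass filter (the kind of thing that could plausibly run in real time on a phone) to knock down low-frequency wind rumble. The sample rate, cutoff, and block size are arbitrary assumptions; a real solution would layer noise suppression, limiting, and directionality handling on top of this.

import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

SR = 48000          # sample rate (Hz); assumed
CUTOFF = 120.0      # high-pass cutoff (Hz); arbitrary starting point for wind rumble
BLOCK = 1024        # samples per real-time block

# 2nd-order Butterworth high-pass, applied block by block with carried filter state
b, a = butter(2, CUTOFF / (SR / 2), btype="highpass")
state = lfilter_zi(b, a) * 0.0

def process_block(block):
    """Filter one block of mono samples, preserving filter state across calls."""
    global state
    out, state = lfilter(b, a, block, zi=state)
    return out

# Example: run a noisy test signal through the filter in real-time-sized blocks
noisy = np.random.randn(SR)  # stand-in for one second of captured audio
cleaned = np.concatenate(
    [process_block(noisy[i:i + BLOCK]) for i in range(0, len(noisy), BLOCK)]
)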

Personalization

Q: What is the future of personalization as it relates to interactive audio applications and technologies?

Head-worn computers, smartphones that are essentially powerful PCs we carry in our pockets at all times, fitness trackers…the list goes on…computing is just getting more “personal”. Customization is the next step: it is how we get closer to these new, more personal computing devices, and it is what integrates them more deeply into our lives. We assign special ringtones to important contacts, adjust the inter-pupillary distance on VR/AR headsets, and enter personal details like weight, height, and birth date into health trackers. Digital AI assistants like Siri and Cortana interface with us conversationally and learn our preferences. Active earbuds like the Bragi Dash and Here allow us to personalize the sound of the world around us, to the point where we can almost dial in our own personal “mix” for a live musical performance.

Sample questions or discussion topics:

  • How does personalization relate to our ability to experience things like spatial audio?
  • How does personalization improve listening in general?
  • Why do we settle for a spatial audio effect based on an HRTF profile of some guy at Cambridge?
  • Will your HRTF profile be just like your height and weight someday? (See the sketch after the references below.)
  • Is there a way we can personalize user interface audio to make it more informative?
  • Does the mass-market care about personalization? If not, how do we get them interested?
  • How can machine learning be applied to continuously refine or tune personalization over time for a better listening experience?
  • Does inevitable hearing loss present an opportunity to adjust or tune audio systems over time to accommodate the individual hearing needs of the user?

Some references:
https://www.geteven.co/
https://hereplus.me/
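
As a toy illustration of the HRTF questions above, the sketch below renders a mono source binaurally by convolving it with a listener’s own measured head-related impulse responses (HRIRs). The file names, and the assumption that per-listener HRIRs are already available, are placeholders for illustration; swapping a generic HRTF for your own is exactly the personalization question.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a dry mono source and a personally measured HRIR pair
# for one direction (e.g., 30 degrees to the left). File names are placeholders.
sr, source = wavfile.read("dry_mono_source.wav")
_, hrir_left = wavfile.read("my_hrir_left_30deg.wav")
_, hrir_right = wavfile.read("my_hrir_right_30deg.wav")

source = source.astype(np.float32)

# Convolve the source with each ear's impulse response to get a binaural pair
left = fftconvolve(source, hrir_left.astype(np.float32))
right = fftconvolve(source, hrir_right.astype(np.float32))

binaural = np.stack([left, right], axis=-1)
binaural /= np.max(np.abs(binaural))  # normalize to avoid clipping

wavfile.write("binaural_personalized.wav", sr, binaural.astype(np.float32))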

From Audinary to Visionary

YouTube is now the largest music-streaming service. Facebook lets you post videos, but not audio. Many musicians get around this by posting non-moving movies consisting of a stereo music track and a picture of an album cover. But what if music could become the foundation for dynamic, compelling video? That could make the whole audio chain more popular.

Imagine a visualizer driven by a combination of audio metadata, DSP, and artificial intelligence…perhaps even influenced by other sensor inputs. Instead of wiggling wireframes, this system could approach cinematic storytelling. And not just in video, but in AR and VR as well.
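
As a rough sketch of the DSP layer of such a system, the snippet below pulls two per-frame features (loudness and spectral centroid) out of an audio buffer; a renderer or AI layer could map these to visual parameters like brightness and color. Every number and mapping here is invented purely for illustration.

import numpy as np

def frame_features(samples, sr=48000, frame=2048, hop=1024):
    """Yield (rms, spectral_centroid_hz) per frame -- raw material a
    visualizer or AI layer could map to brightness, color, camera moves, etc."""
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    for start in range(0, len(samples) - frame, hop):
        chunk = samples[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(chunk))
        rms = float(np.sqrt(np.mean(chunk ** 2)))
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        yield rms, centroid

# Example: an invented mapping from audio features to visual parameters
audio = np.random.randn(48000)  # stand-in for one second of decoded music
frames = [
    (min(1.0, rms * 4.0), min(1.0, centroid / 8000.0))  # (brightness, hue)
    for rms, centroid in frame_features(audio)
]
# Each (brightness, hue) pair could drive one rendered frame of the visualizer.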

What hooks could we add to audio files to generate more immersive visuals? What are the opportunities in production and delivery? And why are people who imagine the future called visionaries?

Footnote: Creative Labs did some groundbreaking work on music visualization back in 1999 with Lava/Oozic. The system used a proprietary file format and web player, and it died around the dot-com crash, but there were some ambitious ideas in there.

Augmenting Augmented Reality with Audio

What advances in audio can enhance the effectiveness of augmented reality solutions? Is state-of-the-art audio enough, or are there sound barriers to be broken to make the augmented reality user experience more compelling? What AR applications will drive these needs?

While I considered including virtual reality, I believe there are so many diverse applications for augmented reality that it is a huge topic in itself.