0 votes

Adding audio properties to glTF

Let’s get together and make a formal recommendation for adding audio properties to the glTF open asset delivery format for digital 3D objects. Currently there is zero notion of audio in the format.

Possible recommendations could include:

  • Acoustic properties: absorption, reflection
  • Density
  • Diffusion relating to material types
  • Sound “textures”

The format as described by the website:

The GL Transmission Format (glTF) is a runtime asset delivery format for GL APIs: WebGL, OpenGL ES, and OpenGL. glTF bridges the gap between 3D content creation tools and modern GL applications by providing an efficient, extensible, interoperable format for the transmission and loading of 3D content.
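If the group wants a concrete starting point, glTF's existing JSON `extensions` mechanism on materials is the obvious attachment point. Everything below (the `EXT_audio_material` name and every property) is hypothetical, invented purely to frame the discussion:

```python
import json

# A glTF material carrying a *hypothetical* audio extension. Neither the
# extension name nor any of these properties exists in the glTF spec today.
material = {
    "name": "brick_wall",
    "extensions": {
        "EXT_audio_material": {           # hypothetical extension name
            "absorption": 0.55,           # fraction of sound energy absorbed, 0..1
            "reflection": 0.40,           # fraction reflected, 0..1
            "diffusion": 0.30,            # scattering coefficient per material type
            "density": 1800.0,            # kg/m^3
            "soundTexture": "brick.wav",  # a "sound texture" sample reference
        }
    },
}

print(json.dumps(material, indent=2))
```

Because glTF loaders are expected to ignore extensions they don't recognize (unless listed in `extensionsRequired`), a proposal shaped like this would stay backward compatible with existing viewers.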



Also…there is a session at the W3C workshop where audio will be discussed. If we can formalize our recommendation quickly, it can be presented and submitted there.


1 vote

Augmented Audio Reality: Another “Google Glass” or “the Future of Listening”

“Augmented Audio Reality” (AAR) has become another buzzword with the advent of AirPods, apps like RjDj and Here, and innovations from audio companies like Doppler Labs and Harman, as well as the popularity of applications of its big brother, “Augmented Reality,” like Pokémon Go.  Will consumers embrace AAR, or will it be another personal intrusion only geeks will embrace?

I suspect it will be a mixed bag, with broad adoption of AAR applications that enhance the user experience and resistance to applications that intrude on the user’s brain.  But considering AAR is a close cousin of Interactive Audio, who is better qualified than the BarBQ (y’all) to sort the wheat from the chaff?

I pose these questions to the BarBQ Brain:

  • What AAR applications will be embraced by consumers and why?
  • What audio technology is needed to make advanced AAR compelling?
  • Will all AAR solutions be proprietary or is there a need for any standards?
  • What will AAR applications look like in 2021?

I’m sure others can add even more meaningful questions to this list.


1 vote

Why do my YouTube (next: livestream) videos still sound like sh*t?

Anyone and everyone – literally, anyone and everyone – is recording videos on their phones and uploading them to YouTube to share with friends and family. While picture quality steadily improves with each device iteration, audio quality consistently leaves a lot to be desired.

We have dialogue-disrupting ambiences!
We have diaphragm-distorting wind!
We have directionality-driven dynamics fluctuations!

noise noise noise!

With recent trends toward livestreaming, this audio quality problem has now evolved from a mere “fix it in post” problem into a no-holds-barred, guns-a-blazing, real-time audio challenge for us to solve!

It’s a physics problem.
No, it’s a transducer problem.
No, it’s a DSP problem.
No, it’s a latency problem.
No, it’s a UI/UX problem.
No, it’s a marketing/education problem.
No, it’s a product problem.

It’s all of the above!

Only the Big Brain can solve it.

1 vote


Q: What is the future of personalization as it relates to interactive audio applications and technologies?

Head-worn computers, smartphones that are essentially powerful PCs in our pockets that we carry with us at all times, fitness trackers…the list goes on…computing is just getting more “personal”. Customization is the next step: it is how we get closer to these new, more personal computing devices, and it is what makes them more deeply integrated into our lives. We assign special ringtones to important contacts, adjust our inter-pupillary distance for VR/AR headsets, and enter personal details into health trackers like weight, height, and birth date. Digital AI assistants like Siri and Cortana interface with us conversationally and learn our preferences. Active earbuds like the Bragi Dash and Here allow for personalization of the sound of the world around you, to the point where we can almost dial in our own personal “mix” for a live musical performance.

Sample questions or discussion topics:

  • How does personalization relate to our ability to experience things like spatial audio?
  • How does personalization improve listening in general?
  • Why do we settle for a spatial audio effect based on an HRTF profile of some guy at Cambridge?
  • Will your HRTF profile be just like your height and weight someday?
  • Is there a way we can personalize user interface audio to make it more informative?
  • Does the mass-market care about personalization? If not, how do we get them interested?
  • How can machine learning be applied to continuously refine or tune personalization over time for a better listening experience?
  • Does inevitable hearing loss present an opportunity to adjust or tune audio systems over time to accommodate the individual hearing needs of the user?

Some references:

1 vote

From Audinary to Visionary

YouTube is now the largest music-streaming service. Facebook lets you post videos, but not audio. Many musicians get around this by posting non-moving movies consisting of a stereo music track and a picture of an album cover. But what if music could become the foundation for dynamic, compelling video? That could make the whole audio chain more popular.

Imagine a visualizer driven by a combination of audio metadata, DSP, and artificial intelligence…perhaps even influenced by other sensor inputs. Instead of wiggling wireframes, this system could approach cinematic storytelling. And not just in video, but AR and VR as well.
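At its lowest level, that pipeline starts with per-frame audio features; the open question is mapping them to something cinematic rather than wiggling wireframes. A minimal sketch of the feature-extraction side (frame size and feature choice here are arbitrary illustrations, not a proposal):

```python
import numpy as np

def frame_features(signal, frame=1024):
    """Per-frame RMS energy and spectral centroid: the kind of low-level
    features a smarter, AI-driven visualizer would build on."""
    feats = []
    for i in range(0, len(signal) - frame, frame):
        x = signal[i:i + frame]
        rms = np.sqrt(np.mean(x ** 2))                     # loudness proxy
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(frame)                     # cycles/sample
        centroid = (freqs * spec).sum() / max(spec.sum(), 1e-12)  # "brightness"
        feats.append((rms, centroid))
    return feats

# A steady test tone yields steady features; real music would not.
tone = np.sin(2 * np.pi * 0.1 * np.arange(8192))
feats = frame_features(tone)
print(len(feats))  # 7 frames
```

Audio metadata (tempo, key, lyrics) and other sensor inputs would then steer how those features drive the visuals.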

What hooks could we add to audio files to generate more immersive visuals? What are the opportunities in production and delivery? And why are people who imagine the future called visionaries?

Footnote: Creative Labs did some groundbreaking work on music visualization back in 1999 with Lava/Oozic. The system used a proprietary file format and web player, and it died around the dot-com crash, but there were some ambitious ideas in there.

1 vote

Augmenting Augmented Reality with Audio

What advances in audio can enhance the effectiveness of augmented reality solutions?  Is state-of-the-art audio enough or are there sound barriers to be broken to make the augmented reality user experience more compelling? What AR applications will drive these needs?

While I considered including virtual reality, I believe there are so many diverse applications for augmented reality that it is a huge topic in itself.


0 votes

Open DSP Islands: how does the Open DSP ecosystem evolve to support an all wireless world?

Ok, so we championed the open DSP architecture in 2014. Now it’s 2020: audio accessories skipped our Smart Connector interface (also from 2014) and went straight to wireless (really, Devon? BT inside the chassis?); our Open DSPs are now little islands isolated by high-latency, low-bandwidth links.

How does the Open DSP ecosystem evolve to support – and maximize – an all wireless world?

Are signal processing entities portable? How does the framework optimize processing?

Wireless Open DSP

SPE = Signal Processing Element

Monkey Bus Wireless = A wireless bus from a future bbq. It must be better than Bluetooth.

0 votes

Collaborative music creation – shit or get off the pot

In 2006, 9 years ago, the Big Brain had a look at facilitating remote jam sessions.  Since then we’ve seen a plethora of online collaborative music creation platforms that have come and gone.

Is there a successful collaborative music creation platform out there today?

Or is this just Dropbox?

Is there a meaningful demographic of musicians and producers who actually want to collaborate online, whether in real-time or offline?

Or is this just a cool technical challenge that engineers enjoy solving and marketeers think is a differentiator, because Social and virality coefficients?

If there is a market, why are there no clear leaders?  What do we have to do to make a giant leap forward?

If there isn’t a market, let’s prove that and get off the pot.

0 votes

Sonic Omniscience: If Everything Had Ears

What if every sound in the world was being recorded, and tagged with location and time?  What if it was all searchable, reusable and accessible from any device?  What new information could we learn from a sonic omniscience?  What could we detect and automate?  What problems could a system like this create or solve?  What would it disrupt?  What new forms of art could emerge?

As our world becomes increasingly filled with sensors and microphones, and the services we use are paid for with disclosure of data, it seems as though a system like this might one day be possible.  What are the long term implications of a sonic omniscience?  Is it all NSA and 1984, or are there opportunities to mitigate an Orwellian dystopia and use a system like this to create a better world?  What responsibilities should those developing sensor networks and search algorithms have to ensure the best possible outcome?  What should the equivalent be to Asimov’s “Laws of Robotics?”

0 votes

When To Standardize, vs. When Not To?

Over the years we’ve all seen several promising standards efforts fail to bear timely fruit, consuming huge amounts of valuable volunteer time and energy in the process. I posit that this is a relevant problem for at least some of the industries represented at BBQ, and worthy of careful thought inside those industries.

Therefore, let the Big BBQ Brain think together upon: When to Standardize, vs. When Not To?  Each path has its peculiar advantages and disadvantages, which some people understand well but others don’t.  A BBQ Workgroup Report gathering knowledge on this subject could perhaps have practical use as inception-time advice for future efforts, helping them choose the path that is best for their particular project.


Under certain circumstances standards development can be slow and contentious, and therefore frustrating.  Participants may burn out, then drop out, making subsequent progress even slower.  Sometimes standards efforts fail as a result.

When progress toward any important thing is perceived as excessively process-heavy, technical people naturally become impatient and seek a faster workaround… and start thinking of open-source projects etc. … but this is also not always a perfect solution.  After the feel-good launch and coding-party stages, the practical end results from that path don’t always display quite the required level of technical rigor, nor succeed quite as widely, nor attract quite the kinds of companies needed, nor exhibit quite the kind of technical stability over time that a large market may require.

A timely and good quality standard from a recognized standards development organization that’s created by major relevant companies can, by contrast, powerfully succeed and prevail in the market for many years, even as individual vendors come and go.  And for the right kind of project with the right individual participants, the fluidity of an open source project is absolutely the best and most productive way to go.

What exactly is it about a given project that makes it likely to fail as a standard, or fail as an open-source project?  This topic is all about characterizing the two ways, and characterizing projects.

Key Questions

  • How to funnel precious volunteer-hours toward more (vs. less) productive outcomes?
  • What are the characteristics of successful standards efforts?
  • What are the characteristics of unsuccessful standards efforts?
  • What characteristics make something other than a standards effort – for example, an open-source project, or establishing a new community – a more effective path for a given project?
  • What does taking a standards path achieve that other approaches (open-source, etc.) don’t, or can’t?
  • What does taking a non-standards path achieve that a standard doesn’t, or can’t?
  • What about IPR models?
  • Is there anything standards bodies could be doing differently to help troubled projects succeed?
  • Which of standardization’s many inconveniences are simply unavoidable?
  • How about hybrid models, for example combining standardized specifications with open-source implementations?
0 votes

Protecting tomorrow’s ears…

‘…or meaningful ways to safeguard hearing without becoming a nanny state.’

I know that protecting hearing is a major hot-button issue with several BBQers, and I think it is an important issue that we should talk about.

I have not recently been involved in any updates to the EU hearing protection rules, but when I last read the proposed changes I nearly fainted. What I read:

-Dupe users into thinking they’re deaf or suffering from tinnitus when they’ve listened to too much loud music.

-Plaster ugly UI elements all over otherwise beautiful OSes.

-Hosts/players must psychically intuit rendering devices in order to know their output parameters.

-Track users across devices to monitor exposure.

If you are involved with the EU rulemaking and these do not reflect the current state of affairs (and assuming that you are permitted to do so), please correct my understanding. Note that I have taken some liberty in describing my observations.

What is the best way to protect hearing? What role (or controls) should content creators/parents/governments/police have in protecting their fans/children/citizens/sheep? What can we do as technologists to help?
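One place technologists can stand on firm ground: occupational standards already quantify exposure. NIOSH's recommended limit is 85 dBA over 8 hours with a 3 dB exchange rate, which gives a simple allowed-time formula a player or OS could use for feedback without nagging (the function name is mine, not from any regulation):

```python
def allowed_hours(level_dba, limit_dba=85.0, reference_hours=8.0, exchange_db=3.0):
    """Permissible daily listening time under a NIOSH-style exposure limit:
    every `exchange_db` increase above the limit halves the allowed time."""
    return reference_hours / 2 ** ((level_dba - limit_dba) / exchange_db)

for level in (85, 94, 100):
    print(f"{level} dBA -> {allowed_hours(level):.2f} h")
# 85 dBA -> 8.00 h
# 94 dBA -> 1.00 h
# 100 dBA -> 0.25 h
```

The hard parts the post identifies remain: knowing the actual SPL at the ear requires knowing the rendering device, and tracking cumulative dose requires following the user across devices.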

0 votes

Singing with Your Thumbs

What could replace ringtones as the next big personalized audio upgrade?

I think Peter Drescher heard the future back in 2007 when he wrote “Singing With Your Thumbs: How To Make User Interfaces Musical.” (If you’re lucky, the little JavaScript audio player I wrote to present his examples will still work.)

In the article, Peter, who’s also a two-time BBQ speaker, shares his insights on adding warmth and personality to devices through evocative sound. In the emerging Internet of Things, imagine how much further that could go if devices not only sung beautifully, but also harmonized with each other and the environment.

Peter Drescher

Who says there’s no sound in space?

0 votes

That Droning Sound

What’s the hottest new consumer technology? Helicopter drone video. What’s the worst thing about helicopter drone video? That droning sound. (Or no sound at all.) Imagine…

  • A drone-mounted mic that cancels the propeller sound, producing pristine soundtracks
  • A ring of drone-mounted speakers that follow you around, for mobile surround sound (“wingtones”)
  • An app that synthesizes music from silent drone video

More practically, future noise cancellation algorithms will offer numerous opportunities for adding sound and music to previously hostile environments. What are some scenarios that would encourage that development?
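As one sketch of where the real-time DSP could go: classic adaptive noise cancellation needs a reference signal correlated with the noise, and a drone can supply one (motor telemetry or a pickup near the propellers). A toy LMS canceller, with synthetic signals standing in for real recordings:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation: an LMS filter learns to predict the
    noise in `primary` from a correlated `reference` (e.g. a motor-side
    pickup) and subtracts it. Returns the error signal = cleaned audio."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i][::-1]  # most recent reference samples
        e = primary[i] - w @ x             # residual after cancellation
        w += mu * e * x                    # LMS weight update
        out[i] = e
    return out

# Toy example: "voice" buried in a propeller-like hum the reference observes.
rng = np.random.default_rng(0)
hum = np.sin(2 * np.pi * 0.05 * np.arange(8000))
voice = 0.1 * rng.standard_normal(8000)
cleaned = lms_cancel(voice + hum, hum)
```

After the filter converges, the residual is mostly the voice; the real challenge is doing this with acoustic coupling, wind, and milliseconds of latency budget.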

Party Drone

The 450-watt flying party speaker. Just need four more.

0 votes

Beyond Binaural: Mixing Realities With Sound

While the creators of augmented and mixed reality are pioneering great experiences in the realm of binaural audio, one might ask the question, what’s next?  Where are the greatest opportunities for understanding our environment through sound, and seamlessly blending audio content with the world around us?  What would we do with greater contextual awareness and responsiveness?  What problems could we solve?  What are the limitations and the possibilities?

0 votes

Vehicle Audio: Where do we go next?

From the earliest sputtering combustion engine of the Ford Model T, or the clackity-clack of a Mickey Mantle card in your bicycle spokes, to the modern stealthy sounds of the Tesla, the symphony of transportation continues to evolve.  Knight Rider’s KITT sold us on the dream of a car that could carry on a conversation, although we’re not there yet.  Car enthusiasts modify their exhausts to make them louder, and researchers are designing tires to make them quieter.  As vehicle sound systems become more complex, what will this mean for our interactions with them?  How will the sonic experience of vehicles impact the emotional relationships that users or bystanders have with them?  What risks and opportunities does the vehicle give us that are different from other platforms?

0 votes

iOS device sales just topped Win PC sales. What does this mean for music and audio products and development?

As Apple’s dominance continues to soar, most recently with iOS device sales overtaking Win PC sales, what does this mean for us developers? How much do the current popular platforms define the products we build? How much should they define what we build? In the case of iOS and music production there are some serious hurdles, namely screen real estate and, until recently, the lack of a good standard for inter-app communication. In the case of musical instruments, the lack of tactile feedback creates serious design challenges. And in the case of all iOS products (software at least) there are significant economic challenges, namely: how do you fund and profit from a serious development effort with a $5 product (if you’re lucky enough to charge any money at all)?  Is anybody other than Apple making money on iOS apps? Will El Capitan’s AU3 and Audio Extensions change things? Will Windows 10 audio updates change things? Can touch devices really change the way music is produced without Android attending the party?

0 votes

AoT (Audio of Things)

The IoT buzzword and concept has a lot of push behind it. The topic I think would be interesting: does the Audio of Things in a given place want or need to use the internet? The companies with server-side services are pushing it, but that doesn’t mean it is the “right” answer. Given the implied power cost (radios) and security concerns (information on shared servers), can I keep the information local and still accomplish all of my home automation goals and get a benefit?

0 votes

tomorrow’s headset – what’s needed beyond a pair of headphones, a mic and a button?

With the advent of digital interfaces for headset accessories, what kind of functionality are end-users wanting in their next gen headset/accessories?

Let’s keep the conversation away from the “how” and “over what interface” and think bigger, to what end users are really looking for in future systems. Sensors, lights, multichannel audio, floating cameras that take selfies! Etc.


0 votes

Multifactor Interfaces Between Human and Machine?

Humans communicate with each other not just with pure speech but with speech augmented by a variety of cues: audio, facial expression, posture, and gesture.  Do any of these cues also add value in human-to-machine communication?  Could they serve as another form of contextual awareness, making ASR more accurate and machine responses more meaningful?

Some suggest these cues are too individual in nature to augment human-to-machine communication.  Others point out that experts can identify and interpret these audio and visual cues in anyone they observe, which suggests computer intelligence could easily be programmed to interpret them.

Let’s identify specific applications where a multifactor interface (speech plus other visual and audio cues) would add value in the new world where your voice becomes the primary interface to consumer products.

0 votes

Audio Opportunities in Emerging Wearable and IoT

What wearable and IoT applications featuring audio will make a splash and stick?  What applications will just splat?  What sound voice, speech and audio technologies will make a difference in these emerging applications?  What advances in sound technologies are needed in the next five years to create compelling and sustainable applications in this space?  Let’s brainstorm potential applications with real consumer value propositions, debate their merit, prioritize them and define a sound technology roadmap to support them.

0 votes

Let’s play lean startup

We’ve got two days to validate a business model for a new company that we have created out of the ether of beer, sweat and BBQ.

We will identify our target customers, understand what they’re bitching about, and propose what we’re going to do about it & blow their socks off. We’ll run experiments on other groups (our target customers) to validate the hypotheses of our model.  We’ll iterate, and pivot and all that good stuff. And when our model is complete, we will be ready to put a dent in the universe and make a ton of cash.  Yeehaw!

With some sincerity, this topic is really about tech and business, not just tech.  I’ve heard as many grumblings about business and politics at BBQ as I have about tech problems.  Like Love and Marriage, you can’t have one without the other.  So, what if we set out to solve some business problems?

0 votes

The Chorus of Bats: “Do we REALLY want to support High-Resolution Audio?”

The Problem:

Regardless of where you stand on the “Is High-Resolution Audio Worth It” debate, the marketing departments have already opened the barn door and the cows are out.  In the corporate world’s never-ending quest for brand differentiation, market relevance, and lavish CEO compensation packages, “High Resolution Audio” is already being sold as the “Next Big Thing” in audio.   As the owners of audio for the next five years, do we:

1:  Ignore it:  “That’s Snake Oil and we don’t want any part of it!”

2:  Sell it:  “We’ve got the BEST Snake Oil, and we’re gonna milk this to stay employed for a few more years!”

3:  Build it:  “We love it, you really want it, and we’re gonna do it right, even if that means ultrasonic  tweeters in your headphone cans!”

I see that the 2011 “Galileo” group might be real big on topic 1, where we discuss whether or not High-Resolution Audio really does bring audio happiness.  I think we’ll need some Screaming Monkeys when we get to topic 2, because we need to know if the new Monkey Bus can support High-Res.  Finally, for #3, I think we need some support of the Doppler Chickens.  As engineers, let’s do it right.  I know mics can go ultrasonic, but what about speakers?  How do we engineer speakers, microspeakers, and headphone receivers that can cleanly go up to 40kHz?  What about low-power portable audio amplifiers with no intermodulation distortion up there?  What about headphones (both circumaural and in-ear) design considerations?  What kinds of transducers are we looking at?  How about test systems and standards?  I don’t think the type 3.3 ears go that high.  Where do we go from here?
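Whichever of the three options we pick, the raw transport cost of the format itself is easy to put numbers on, since uncompressed PCM bandwidth scales linearly with sample rate, bit depth, and channel count:

```python
def pcm_kbps(sample_rate, bits, channels=2):
    """Uncompressed PCM bit rate in kilobits per second."""
    return sample_rate * bits * channels / 1000

print(pcm_kbps(44100, 16))   # CD quality: 1411.2 kbps
print(pcm_kbps(96000, 24))   # 4608.0 kbps
print(pcm_kbps(192000, 24))  # 9216.0 kbps, roughly 6.5x CD
```

The transducer questions (40 kHz speakers, low-IMD amps, test rigs) are the harder half; the bandwidth at least tells the Monkey Bus folks what they have to carry.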

0 votes

The Dream Dugout: New Best Practices for Dream-Team-Building

When it’s time to build the dream product, or series of products, yer gonna want yer best possible noises.

So you bring in the Dream Team, naturally, but how best to set them up?  What tricks have Time and Experience taught us about the environment, the structure, the attitude–and how can we anticipate changes to those lessons over the next few years?

How do these things change in the future, with vastly improved collaboration tools, teleconferencing, telepresence, lifecasting, and with new tools blurring lines between “integration,” “music” and “sound design”?  Who punches the clock at the factory, and who commutes to his garage studio in bunny slippers?   How big are the teams?  How do we account for slippery job titles?  How frequent are physical/virtual meetings?  What flow of control and command will work best?  What about the interactions Audio has with the deeper technical teams and loftier Vision-Holders?  Collaborations with outside contributors?  What about it?  HUH?  WHAT ABOUT IT?!?!?!?


0 votes

Making Binaural Work: Bringing back “Handsome’s” suggestion from 2013

Looks like there’s lots of interest in Binaural and Headphones this year.  Hmmmm.

I don’t recall who is handsome, let alone who “Handsome” is (Howard Brown?) but I like this topic, and it appears to have slipped through the cracks of 2013’s Giant Brain.  Can we take another swing at it?  



Make Binaural work

We often talk about ‘immersive audio’, where one feels like they are in the middle of a game, orchestra or movie. The use of spatial audio (HRTFs, room models, BRIRs, etc.) to render these immersive scenes is usually the ‘go-to’ idea. Some of the problems with synthetic spatial audio, as well as binaural field recordings, are:

1) The visual cues are missing or wrong.
2) Head motion is not taken into account.
3) HRTFs are generic and not individualized.
4) The listener’s environment is not taken into account.

That last point is particularly important. If you have a binaural recording made in a small room, but you listen to it in a large room, it will sound terribly colored. In fact, if the room you are listening in is not taken into account, any synthetic or binaural recording will have coloration.

Another big issue is that, if the visual cue is missing, the listener tends to localize the sound behind them (or at least somewhere outside of their field of vision).

So what can be done to mitigate these issues? Is this something that we can engineer (i.e., build me some new, celebrity endorsed headphones), or is it a matter of getting the signal processing just right (can you say ‘head tracker’, hallelujah!), or are there limitations at the cognitive level that need to be addressed?
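For grounding, the synthetic spatial rendering in question is, at its core, per-ear convolution with a head-related impulse response pair; individualization, head tracking, and room modeling are all about choosing and updating those filters. A toy sketch, where the impulse responses are made-up placeholders rather than measured HRIRs:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with a left/right
    head-related impulse response pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs: a real system would use measured (ideally
# individualized) responses, swapped per listener pose by a head tracker
# and combined with a model of the listening room.
mono = np.random.randn(1000)
hrir_l = np.array([0.0, 1.0, 0.5, 0.25])   # made-up near-ear response
hrir_r = np.array([0.6, 0.3, 0.15, 0.05])  # made-up far-ear response, attenuated
out = binaural_render(mono, hrir_l, hrir_r)
print(out.shape)  # (1003, 2)
```

Every problem in the numbered list above maps onto this picture: wrong HRIRs, static HRIRs, and no room term in the filters.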

0 votes

Reinventing audio for headphones

According to a 2013 survey by Motorola, more people watch TV and movies on tablets than on television sets/home theatres. More people than ever consume most of their sound over headphones, rather than speakers.

  • How do we adjust our creative practices for headphone listening?
  • What improvements to headphone sound do we need?
  • How can we prevent people from damaging their ears when so much sound is consumed loudly on headphones?
  • What other questions do we need to explore and address related to  headphone listening?

There are elements that could tie in with the binaural tracking workgroup proposal.

0 votes

The sound of one hand clapping

More and more gesture-based computer controllers are being developed (Leap Motion, Myo, etc.). In the movies and on TV (hello, Star Trek fans), these always have sounds, but as yet the devices are usually released without sounds, leaving each implementation or application using the device to supply its own effects.

Should gestural sounds have some form of standardization, in the way that keyboard sounds and mouse clicks have?  What are some “universal” gestures that might need a standard set of interface sounds?

0 votes

Recording studio and software design for game dialogue recording

If you were to build a recording studio for games dialogue recording, what would it be like? Regular recording studios and existing recording software are the round hole that the square peg, nay, the multi-dimensional spaghetti peg, of games gets hammered through. What needs to change to make the games specialist studio technically on the nail and creatively inspiring?

Anyone up for brainstorming the ultimate studio design and recording software wish list? Is the market for games production big enough for the likes of Avid, Steinberg, Adobe, Sony, etc. to take commercial interest in game-specific tools/features?

0 votes

Headtracking for Binaural Audio

Topics of discussion: the effective use of head-tracking methods in binaural and augmented reality applications. How much center-image stability is gained by the use of a head-tracking device? Can a realistic 2D & 3D experience happen without the use of head-tracking methods? Will augmented reality devices make head tracking mandatory? What is the minimum degree of accuracy attainable, and what is required?
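The arithmetic at the heart of head-tracked rendering is small: re-express each source direction in head-relative coordinates every frame before selecting HRTFs. A sketch reduced to 2D azimuth (real systems track full 3-axis rotation; the function name is illustrative):

```python
def head_relative_azimuth(source_az_deg, head_yaw_deg):
    """World-space source azimuth -> head-relative azimuth in (-180, 180].
    Updating this every frame is what keeps the center image stable as
    the listener turns their head."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

# Source dead ahead; listener turns head 30 degrees to the left (-30 yaw):
print(head_relative_azimuth(0.0, -30.0))  # 30.0: source now off to the right
```

The accuracy and latency questions above then become: how coarse can this update be, and how stale, before the image wanders?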

0 votes

Audio Component and Sensor Fusion

Many portable devices are already loaded with sensors and the Internet of Things will provide even more sensor data. The amount of available information will be enormous and we should figure out how to use it to the advantage of Audio. The question is, how can a combination of audio components (microphones, speakers, earpieces) and sensors (accelerometers, gyroscopes, pressure sensors / altimeters, thermometers, humidity sensors, etc) be much more than the sum of the individual components? What sort of sensor data could be used to improve audio? Could data from the microphone (or even speaker/earpiece..?) be used to complement sensors? What sort of new applications can we come up with when we have sensor data available?

Coming up with the higher level ideas could be enough but if there’s time, here are some technical questions that could also be discussed:

– What are the requirements for the interface? (There’s the i-word again…)

– What applications are latency critical? What are the latency requirements?

– Data bandwidth requirements?

– Should sensors and audio components use the same interface (bus)?

– Would it be beneficial to have a direct connection between sensors and audio components?

– Would audio/sensor hubs be the right way to go?


0 votes

Social Stereo

It saddens me to see two people sharing a single set of earbuds, one lonely channel per listener. What if there were an app that let one person broadcast music to the mobile devices of nearby friends? Add voice and text chat, plus some ways to monetize the service through DSP plug-ins, hardware add-ons, and referral fees for promoting music.

What are the requirements, and what types of businesses could result?

Failing that, how about some really immersive headphones?

0 votes

Multiple microphones in a system

Is there value in connecting 16 microphones in a system?
        – Is there a simple way of connecting these 16 microphones to the system?
        – What requirements would be needed to fulfill the use case?

Always-on voice recognition is becoming popular. It is a tough use case, as the system must accurately recognize a voice amongst a crowd of people and react to the right voice. This could require adding more microphones. How do we connect those microphones easily to the main chipset, and what are the requirements for doing so?

Other considerations are: standby current (while being always on)
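On the "why 16 microphones" question: arrays enable beamforming, which is how a system can favor one talker in a crowd. A minimal delay-and-sum sketch (integer-sample delays, circular signals, no real acoustics; just the shape of the idea):

```python
import numpy as np

def delay_and_sum(mics, delays_samples):
    """Steer a microphone array toward one talker: delay each channel so
    the target direction aligns across mics, then average.
    mics: array of shape (n_mics, n_samples)."""
    n_mics, n = mics.shape
    out = np.zeros(n)
    for ch, d in zip(mics, delays_samples):
        out += np.roll(ch, -d)  # undo each channel's arrival delay
    return out / n_mics

# Toy example: the same signal arrives at 4 mics with different delays.
sig = np.sin(np.linspace(0, 20 * np.pi, 1024))
delays = [0, 3, 6, 9]
mics = np.stack([np.roll(sig, d) for d in delays])
aligned = delay_and_sum(mics, delays)
print(np.allclose(aligned, sig))  # True: the steered sum recovers the source
```

Signals from off-axis directions do not align and partially cancel, which is the spatial selectivity that always-on voice pickup needs; the interface question is then moving 16 synchronized channels to wherever this runs.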

0 votes

When is hardware offloading truly effective?

There is a movement underway to take audio processing off the host processor and put it onto dedicated hardware intended for audio processing. However, a lot of information is coming to light showing that offloading doesn’t provide many benefits for the most common use cases; in fact, the only scenario where it may be useful is listening to music for hours on analog headphones. Let’s flesh out what is truly useful, and what use cases justify the extra engineering, expense, and segmentation required for hardware offloading on the most popular computing platforms.

0 votes

Electric vehicle noise generation

With the Tesla Model S earning Car of the Year nods from several leading auto publications, and the company’s subsequent increase in sales and government incentives to push alternative fuels, the impact that the silence of electric vehicles has on pedestrians is being considered by a number of car manufacturers. For example, Audi has a sound generation system on the outside of the electric version of its R8. See this video: http://www.youtube.com/watch?v=Yungwc92gFo

Should these kinds of sounds, artificially generated on the outside of electric (or otherwise silent) vehicles, be made to sound like a traditional gasoline engine and exhaust, or should some iconic, industry-wide sound be developed, similar to the chirping currently used to assist the sight-impaired at crosswalks? Should sound generation happen only when proximity to a collision is detected via peer-to-peer communication with other connected devices, or should it be in an “always on” state? This decision could have a profound impact on how humans perceive silent or near-silent approaching vehicles in the future.

0 votes

Driverless Car

Working on the assumption that you have a personal car (not a train or public transport), and that it is completely safe and goes point to point smoothly: what does it look and sound like on the inside?

Steering wheel? Chairs point inward / rotating? Speaker placement change? Microphone arrays? Kinect-like IR?