Audio Component and Sensor Fusion
Many portable devices are already loaded with sensors, and the Internet of Things will provide even more sensor data. The amount of available information will be enormous, and we should figure out how to use it to the advantage of audio. The question is: how can a combination of audio components (microphones, speakers, earpieces) and sensors (accelerometers, gyroscopes, pressure sensors/altimeters, thermometers, humidity sensors, etc.) be much more than the sum of the individual parts? What sort of sensor data could be used to improve audio? Could data from the microphone (or even the speaker or earpiece?) be used to complement sensors? What sort of new applications can we come up with when sensor data is available?
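As one hypothetical example of this kind of fusion, accelerometer data could tell an audio pipeline whether the device is lying still or being handled, and the noise-suppression strategy could be switched accordingly. The sketch below is purely illustrative (the function names, the variance threshold, and the two profile labels are all assumptions, not any particular product's API):

```python
import math

def is_stationary(accel_samples, threshold=0.05):
    """Guess whether the device is at rest from raw accelerometer triples.

    accel_samples: list of (x, y, z) readings in m/s^2.
    A low variance of the acceleration magnitude suggests the device
    is sitting still (only gravity is measured).
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var < threshold

def choose_noise_profile(accel_samples):
    # Stationary device: background noise is likely steady, so a
    # slowly adapting noise estimate is safe. A moving device is
    # likely being handled, so handling noise should be expected.
    return "stationary" if is_stationary(accel_samples) else "handheld"

# A device resting on a table measures roughly constant gravity:
resting = [(0.0, 0.0, 9.81)] * 50
# A device being handled shows fluctuating magnitudes:
handled = [(0.0, 0.0, 9.81), (0.0, 0.0, 12.0)] * 25

print(choose_noise_profile(resting))  # stationary
print(choose_noise_profile(handled))  # handheld
```

Even this trivial case hints at the interface questions below: the audio processor needs the sensor data at low latency and with known timing, which is exactly where the bus and hub questions come in.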
Coming up with the higher-level ideas could be enough, but if there’s time, here are some technical questions that could also be discussed:
– What are the requirements for the interface? (There’s the i-word again…)
– What applications are latency-critical? What are the latency requirements?
– What are the data bandwidth requirements?
– Should sensors and audio components use the same interface (bus)?
– Would it be beneficial to have a direct connection between sensors and audio components?
– Would audio/sensor hubs be the right way to go?