The Fifteenth Annual Interactive Audio Conference
PROJECT BAR-B-Q 2010
Group Report: Wherever You Go, There You Are: Audio that Understands Context and Mobility
Participants: A.K.A. "Nomadic Hoarders"
Peter Kirn, Create Digital Media
David Roach, Optimal Sound
Moshe Sheier, CEVA
Ron Kuper, Sonos
Devon Worrell, Intel
Dawn Leonetti, Dolby
Facilitator: Jim Rippie, Invisible Industries
Diverse media sources, devices, and contexts fragment the experience of using sound.
A proliferation of devices and media sources has made accessing your music complicated and cumbersome.
Additionally, audio routed through multiple devices today passes through multiple signal processors that may not be aware of each other and may be incompatible.
Content is spread across a variety of sources, streamed and local. We use a variety of devices, each of which mixes music listening and voice communication functions. There’s no consolidated access to all of those sources, and no information about what you’re doing or where you are doing it. As you move from place to place, you want your audio to be associated with you, not only with content sources and devices.
Various components attempt to solve pieces of this problem. UPnP, DLNA, and zeroconf (Bonjour) address device discovery. Aggregators like lockers, RadioTime, Rovi, and Muse consolidate some media sources. Services like Last.fm track user listening data across players and devices. But these services don’t talk to each other, and collectively they do not create or share enough information to allow seamless playback across systems and devices. This content will also need to be socialized and shareable with others in our social networks or living environments.

The solution:
Use the cloud to make audio devices aware of where you are and what you're doing.
We believe the remedy to these problems centers on an online, connected service that catalogs a user’s devices, content, and listening environments.
We propose a cloud-based service that provides:
Using this service, the user can move seamlessly from context to context, by automatically performing one of two basic operations:
In addition to building the cloud system, client apps would need to be developed to access this service.
Clients with DSP capabilities can use the DSP catalog on the service to prevent conflicts between processing algorithms.
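As a sketch of how a client might consult such a DSP catalog, consider the following. The report does not define an API, so every name and the catalog layout here are illustrative assumptions; the point is simply that a client can walk its signal chain and disable processing stages that would otherwise be applied twice.

```python
# Hypothetical sketch: a client uses the cloud DSP catalog to avoid
# stacking the same processing algorithm twice in one signal chain.
# All names and the catalog structure are assumptions, not a defined API.

def resolve_dsp_chain(devices, catalog):
    """Walk the signal chain in order; keep the first instance of each
    DSP type and disable duplicates downstream (e.g. two loudness stages)."""
    seen = set()
    plan = []
    for device in devices:
        for dsp in catalog.get(device, []):
            enabled = dsp not in seen
            plan.append((device, dsp, enabled))
            seen.add(dsp)
    return plan

# Example: the phone and the headset both apply loudness processing.
catalog = {
    "phone":   ["loudness", "eq"],
    "headset": ["loudness", "anc"],
}
plan = resolve_dsp_chain(["phone", "headset"], catalog)
# The headset's redundant loudness stage is disabled:
assert ("headset", "loudness", False) in plan
assert ("phone", "loudness", True) in plan
```

A real service would need richer metadata (algorithm versions, whether two stages genuinely conflict), but the conflict-resolution shape would be similar.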
See previous Project BBQ work for more thoughts on this issue; most recently, “I Hear The Future: The Binaural Headset as Audio Contact Lens and Our Inevitable Mixed-in Lifestyle of Personal Audio Networks” (2007), “Smart Ambient Sound Sensor” (2008), “Here, There, and Everywhere” (2009), and “Mobile Infrastructure” (2009).

Action items from workgroup
Peter Kirn, assisted by Jim Rippie – complete report for publication
Sensing methods for detecting how a user moves between devices, environments, and contexts
Moshe Sheier; additional device hardware capabilities: Peter Kirn
Two main methods could be used for observing a user’s movement between devices, environments, and contexts – manual (no sensing involved) and automatic.
Manually (“act”), a user would register for our aggregator service (described elsewhere in this report), using the device they would like to listen through. Only one device could be registered at a time. This way, once registered through the user’s home, office, or mobile device, the last played music track would continue on the newly registered device. While this is a straightforward approach, it adds the hassle of registering after every small change of device or location, so we would also like to consider a more automatic way of moving around.
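The one-device-at-a-time registration described above can be sketched as a small state machine. All names here (`AggregatorService`, `register`, `now_playing`) are hypothetical; the report proposes the behavior, not an interface.

```python
# Hypothetical sketch of manual ("act") registration: exactly one active
# device, with the last-played track carried over to the newly registered
# device. Class and method names are illustrative assumptions.

class AggregatorService:
    def __init__(self):
        self.active_device = None
        self.now_playing = None   # (track, position_sec) to resume elsewhere

    def register(self, device):
        """Registering a new device deactivates the previous one and
        returns what should resume on the new device."""
        previous = self.active_device
        self.active_device = device
        return previous, self.now_playing

    def update_playback(self, track, position_sec):
        self.now_playing = (track, position_sec)

svc = AggregatorService()
svc.register("office-speakers")
svc.update_playback("song.mp3", 42.0)
old, resume = svc.register("car-stereo")   # user moves to the car
assert old == "office-speakers"
assert resume == ("song.mp3", 42.0)
assert svc.active_device == "car-stereo"
```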
Automatically (“react”), several methods could be used to sense a user’s location:
RFID – in a pre-configured environment (home/office), a user could be equipped with an RFID tag that would be sensed in each room (using appropriate RFID readers). Music would be played in the room the user is in, through the audio equipment in that room (recall that we assume all audio equipment is “connected”).
By extension, advances in NFC (Near Field Communication) wireless sensors, expected to be widely available in Android mobile devices (as publicly demonstrated for Android's “Gingerbread” release) and likely other mobile platforms (iOS, etc.), would make automatic sensing ubiquitous. Hand-off could be designed as an intuitive gesture, like swiping an NFC-equipped cell phone across an NFC-equipped car audio system to pass a conversation to hands-free speaker operation, or music playback from headphones to the car. A commonly understood protocol for communicating sound use “intents,” irrespective of platform, would therefore be essential to letting users traverse various sound contexts.
GPS – as our smartphone is equipped with a GPS receiver, it is an ideal way to determine our location while on the move. Whenever our aggregator system gets notified that we are away from the home/office location, music should be played either through the smartphone itself, or through a Bluetooth add-on (e.g., a car audio system) paired with the smartphone.
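The hand-off gesture described above implies a platform-neutral “sound use intent” message passing between devices. The report proposes the idea, not a concrete format, so the schema below is purely an assumption, sketched to show how little such a message would need to carry.

```python
# Hypothetical sketch of a platform-independent "sound use intent"
# message, of the kind an NFC swipe from phone to car audio might emit.
# The field names and JSON encoding are assumptions, not a real protocol.
import json

def make_handoff_intent(source, target, stream_uri, position_sec):
    """Build a hand-off intent: move a stream from one device to another,
    resuming at the given position."""
    return json.dumps({
        "type": "audio.handoff",
        "source_device": source,
        "target_device": target,
        "stream_uri": stream_uri,
        "resume_at": position_sec,
    })

intent = make_handoff_intent("phone", "car-audio",
                             "http://example.com/stream/123", 97.5)
msg = json.loads(intent)
assert msg["type"] == "audio.handoff"
assert msg["target_device"] == "car-audio"
assert msg["resume_at"] == 97.5
```

Because the message names devices and a stream rather than platform-specific players, any receiver that understands the schema could act on it, which is the cross-platform property the group argues for.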
In addition to sensing a user’s location and the device being used, the system should also sense the environment the user is in: its context (quiet office, noisy party), other people in the room, and their music preferences.
Aside from the obvious (and precise) GPS, onboard hardware sensors in mobile devices (increasingly available not only via native hardware APIs but also through next-generation mobile browsers) can provide other clues to context:
By combining automatic sensing with manual registration (where necessary), our system becomes aware of both the device a user plans to use for audio playback and the environment that device is in, enabling the nomadic audio experience.
Smart(er) cloud, dumb(er) client: In one implementation case, intelligence is built into the cloud, which hosts a centralized catalog of user devices, content, and contexts.
In usage, a single activity – like going to work – raises a number of queries for that cloud-based catalog:
Once those issues are resolved, a resulting action is taken – like setting up, then forwarding a stream from the old device to the new.
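The query-then-act sequence above can be sketched end to end. The catalog layout and function names below are illustrative assumptions; the essential shape is that a context change raises catalog lookups (which device is here? what was playing? which device was active?) whose answers drive the stream hand-off.

```python
# Hypothetical sketch of the smart-cloud case: a context change ("going
# to work", "getting in the car") is resolved against a centralized
# catalog, producing a hand-off action. All names are assumptions.

def handle_context_change(user, new_context, catalog):
    """Resolve the new context to a device, look up the user's current
    stream and active device, then forward the stream."""
    device = catalog["devices"][new_context]      # which device serves this context?
    stream = catalog["now_playing"][user]         # what was the user listening to?
    old = catalog["active_device"][user]          # where was it playing?
    catalog["active_device"][user] = device
    return f"forward {stream} from {old} to {device}"

catalog = {
    "devices": {"car": "car-stereo", "home": "living-room"},
    "now_playing": {"alice": "stream-42"},
    "active_device": {"alice": "living-room"},
}
action = handle_context_change("alice", "car", catalog)
assert action == "forward stream-42 from living-room to car-stereo"
assert catalog["active_device"]["alice"] == "car-stereo"
```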
Dumb(er) cloud, smart(er) client: In our second implementation case, the cloud is still an essential ingredient, but greater intelligence is built into the client. (Indeed, the expanding array of sensors described elsewhere in this report suggests one reason such a scenario might come into play: the local device may be able to sense more information and immediate context than the cloud could collect or anticipate.)