The Sixteenth Annual Interactive Audio Conference
PROJECT BAR-B-Q 2011
Group Report: Cloud Music Services
Participants (a.k.a. "The Zombie Dinosaurs"):
Peter Drescher, Twittering Machine
Kurt Heiden, Interactive Audio SIG
Ives Chor, Zenph Sound Innovations
Tim Howe, Cirrus Logic
Karen Collins, University of Waterloo
Tom White, MIDI Manufacturers Association
Facilitator: Aaron Higgins, Sound Trends, LLC
Brief statement of the problem(s) on which the group worked

In a world where ultra-wideband wireless network connectivity is completely ubiquitous, how will people listen to music? How can a system be designed that provides universal access for consumers while satisfying the technology and business needs of stakeholders?
The group's solution is based on the following assumptions: a multi-layered system for transparent delivery of music to consumers was envisioned, comprising these layers (sketched in code below):

- Music Content Producers
- Music Storage Providers
- The MNS Consortium
- Music Access Services
- Cloud Access
- Client Apps
- Server components to apps
- Music Playback Devices
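To make the layering concrete, here is a minimal, hypothetical Python sketch of how a request might flow from a client app through an access service to stored content. Every name in it (Track, MusicStorageProvider, MusicAccessService, stream, and so on) is an illustrative assumption, not an API defined by the report.

```python
# Hypothetical sketch of the multi-layered delivery model described above.
# All names are illustrative; the report does not define an API.

from dataclasses import dataclass


@dataclass
class Track:
    """A piece of music registered by a Music Content Producer."""
    track_id: str
    title: str
    producer: str


class MusicStorageProvider:
    """Holds master audio files in the Cloud."""
    def __init__(self):
        self._store = {}

    def put(self, track: Track, audio: bytes) -> None:
        self._store[track.track_id] = audio

    def get(self, track_id: str) -> bytes:
        return self._store[track_id]


class MusicAccessService:
    """Resolves a user's request to a stored track and streams it.
    In the envisioned system, the MNS Consortium would standardize
    this interface across competing services."""
    def __init__(self, catalog: dict, storage: MusicStorageProvider):
        self._catalog = catalog      # title -> Track
        self._storage = storage

    def stream(self, title: str, chunk_size: int = 4096):
        track = self._catalog[title]
        audio = self._storage.get(track.track_id)
        for i in range(0, len(audio), chunk_size):
            yield audio[i:i + chunk_size]   # chunks sent to the client app


# A client app on a Music Playback Device just asks for a title;
# where the bytes actually live is transparent to the listener.
if __name__ == "__main__":
    storage = MusicStorageProvider()
    track = Track("t-001", "Zombie Dinosaur Stomp", "Twittering Machine")
    storage.put(track, b"\x00" * 10_000)     # stand-in for audio data
    service = MusicAccessService({track.title: track}, storage)
    total = sum(len(chunk) for chunk in service.stream("Zombie Dinosaur Stomp"))
    print(f"streamed {total} bytes")
```

The point of the sketch is the transparency the group envisioned: the client never names a storage provider, only a piece of music, and the access layer resolves the rest.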
In a world where ultra-wideband wireless network connectivity is completely ubiquitous, how will people listen to music? How does fast, reliable access to vast libraries of streaming music files stored on Cloud servers change the current pay-per-download business models? If you can listen to any song on the Internet, at any time, from anywhere, how many gigabytes of local storage do you need to carry around in your pocket? (Answer: none!)

Current stakeholders, such as Apple's iTunes, will be highly resistant to changing over to this new "radio on demand" technology, but it seems inevitable that such a system will be developed, in part because we are already seeing rudimentary versions of it in services like Pandora and Spotify. The current generation of users, accustomed to owning large numbers of audio files on disk drives and mobile devices (if not on physical media like CDs and vinyl), may be unhappy about giving up the idea of music ownership in favor of a playlist of selections on a server, but new users growing up with the system will simply take it for granted that streaming Cloud music servers are just "the way it works".

Many problems exist in getting from here to there. How does music media get aggregated, so that users can access any and all available music in the Cloud? How does everyone in the streaming media business get paid? Where does the money come from, particularly when users are unwilling to pay for information that is perceived to be freely available?
There was a fair amount of discussion about how music subscription services would be paid for, and who would collect the money. A three-tiered payment method was proposed. However, others in the group felt that eventually all music royalties would be funded by a "Cloud access utility" tax levied on all citizens (as with roads, firefighting, and police). This method provides the most transparent music-listening user experience, and is thus the most desirable: it supports free (as in unrestricted) and legal access to any music file available in the Cloud, financed in an inconspicuous, seamless manner. A hypothetical sketch of how such a pooled payout might work follows.
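As one concrete illustration of the "everyone pays into a pool, rights holders are paid per play" idea, here is a minimal Python sketch of pro-rata royalty distribution. The pool size, play counts, and function name are assumptions for illustration only; the report does not specify a payout formula.

```python
# Hypothetical pro-rata payout: each rights holder receives a share of the
# collected pool proportional to their tracks' share of total plays.

def distribute_royalties(pool_cents: int, plays: dict[str, int]) -> dict[str, int]:
    """Split pool_cents across rights holders in proportion to play counts."""
    total_plays = sum(plays.values())
    if total_plays == 0:
        return {holder: 0 for holder in plays}
    return {
        holder: pool_cents * count // total_plays
        for holder, count in plays.items()
    }


if __name__ == "__main__":
    # e.g. a monthly pool of $1,000,000 collected via the access utility tax
    monthly_pool = 100_000_000  # cents
    play_counts = {"Producer A": 600_000, "Producer B": 300_000, "Producer C": 100_000}
    for holder, cents in distribute_royalties(monthly_pool, play_counts).items():
        print(f"{holder}: ${cents / 100:,.2f}")
```

Whatever the tiering, the design question the group debated stays the same: the money is collected invisibly to the listener, and divided according to what actually got played.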
In the year 2061:

- Music will be organically distributed throughout the environment, and the appropriate music will be played at the appropriate times, anywhere you go, automatically, based on preset preferences and determined by past usage. The Cloud knows where you are, what you want to hear, and when you want to hear it, and simply plays it without even being asked to do so.
- Music content producers will be highly respected and appreciated. Their output will be accessed directly from the Cloud by nanobots in your brain, so no physical devices, and no speakers, are required.
- Payment is based on the number of times people have listened to a particular piece, which can be shared with your social network simply by thinking about it.
- Recommendations for new music will be completely on target, because they will be based on a wide variety of criteria, including all your friends' recommendations, brain function models, cultural identifiers, etc.
- New digital audio art forms and performances will be based on groups of dancer/musicians playing motion-capture instruments.
- You'll be able to play with virtual musicians that accurately model playing styles based on Zenph-type analysis of their recorded works.
- Instruments will be able to morph into multiple forms during performance, and will produce previously unheard-of tones.
- Given downloadable memories and enhanced education techniques, the usual ten thousand hours of practice required to master a musical instrument will be reduced to two and a half weeks.
- Genetic modifications will produce super-talented musicians and composers.

However, sometimes things will go horribly wrong:

- Beamed speaker systems, used to send audio advertising at specific targets, will be used for direct brain control ... and when they malfunction, will deafen entire populations.
- Even worse, in a perversion of sonic goodness, the military will use beamed acoustic emitters as weapons capable of killing the entire crew of a battleship while leaving the ship itself untouched.
- In the most incredible twist of fate, new virtual speaker/microphone systems composed of force fields capable of measuring and modifying the movement of individual air molecules will feed back uncontrollably, creating a singularity in the space/time continuum and releasing a ravaging horde of zombie dinosaurs!!

And yet, even after all these fantastic technology advances, MIDI will still sound bad ...