The Twenty-second Annual Interactive Audio Conference
PROJECT BAR-B-Q 2017
Abusing Technology for Creative Purposes
Participants: A.K.A. "Unintended Consequencers"
Lawrence Sarkar, Cirrus Logic; Philip Nicol, Dolby Laboratories
Matthew Johnston, Microsoft; Bobby Lombardi, G-Technology
Rick Cohen, Qubiq Audio
Facilitator: Linda Law
Creative output is limited by the tools we use. More voices would be represented in music and art if the barrier to entry were lowered through more unintended/unofficial uses of music production tools and other technologies.
The group initially explored existing types of unintended uses of technology that produce creative output. These were grouped into three categories: historical musical unintended uses, data-driven input, and sensor-driven input.
Historical musical unintended uses
We looked at a number of historical examples. Autotune originally served a different purpose in the oil industry, where it was used to interpret seismic data when searching for potential drill sites. A breakthrough in music production came when it was applied to pitch correction (enhancing the performances of out-of-tune singers). A further unintended use of Autotune was to create a robotic-sounding musical effect.
Further examples include a YouTube video by the band The Academic, who exploited the delay in Facebook Live feeds as an audio-visual looper, building up a polyphonic rock song layer by layer.
Data-driven input
The workgroup also explored the repurposing of big data. Large collections of data such as weather trends, traffic patterns, maps, population/census records, and stored biometric data can be mapped to audio parameters or converted to musical note data.
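As a minimal sketch of the mapping idea, the snippet below linearly maps a series of data values onto MIDI note numbers in a pentatonic scale. The data (daily temperatures), the scale choice, and the function names are illustrative assumptions, not taken from the report.

```python
# Hypothetical sketch: convert data values to musical note data.
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees, in semitones above the root

def value_to_midi_note(value, lo, hi, root=60, octaves=2):
    """Linearly map value in [lo, hi] onto a pentatonic scale starting at root."""
    span = len(PENTATONIC) * octaves
    # clamp to [0, 1], then scale into a scale-step index
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    idx = min(int(t * span), span - 1)
    octave, degree = divmod(idx, len(PENTATONIC))
    return root + 12 * octave + PENTATONIC[degree]

# Example: a week of temperatures becomes a short melody
temperatures = [12.5, 14.0, 19.2, 23.8, 21.1, 16.4]
notes = [value_to_midi_note(t, lo=10, hi=25) for t in temperatures]
```

The same scaffolding would work for any of the data sources above; only the normalization range and the target parameter change.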
Brian Foo has created a website with audio examples of data-driven music. One of his examples is “Rhapsody In Grey,” which uses brain wave data to convert a seizure to song.
We also found a related patent in this space, assigned to BioBeats, Inc.
Sensor-driven input
The easy availability of USB-based controllers and sensors has been a boon to creators. Max/MSP makes it easy to connect sensor outputs to musical controls. Musicians are no longer restricted to knobs and wheels; products such as the Leap Motion infrared sensor allow them to wave their hands and map those gestures to parameter control, mixing, and conducting.
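A sketch of the kind of mapping involved, assuming a sensor that reports a normalized reading (for example, hand height from a Leap Motion scaled to 0.0–1.0) being routed to a filter cutoff frequency. The exponential curve is a common choice because perceived brightness tracks frequency ratios rather than differences; everything here is illustrative, not a specific Max/MSP or Leap Motion API.

```python
# Hypothetical sketch: map a normalized sensor reading to a synth parameter.
def sensor_to_cutoff(x, f_min=100.0, f_max=8000.0):
    """Map x in [0, 1] to a cutoff frequency in Hz on an exponential curve."""
    x = max(0.0, min(1.0, x))  # clamp noisy sensor readings
    return f_min * (f_max / f_min) ** x

low = sensor_to_cutoff(0.0)   # hand at the bottom of the range -> 100 Hz
mid = sensor_to_cutoff(0.5)   # mid height -> geometric mean of the range
high = sensor_to_cutoff(1.0)  # top of the range -> 8000 Hz
```

In practice the same one-line curve sits between any sensor and any continuous parameter; choosing linear versus exponential mapping is the main musical decision.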
We created multiple prototypes showing existing technologies being used unconventionally to make new art.
None of the examples above presents an entry point for a person with limited or no knowledge of music production. The group attempted to bridge this gap by targeting easily accessible social media platforms.
Prototype 1: Sonic Snapchat. A typical use case for an active phone user is sharing short video clips with friends and family. This prototype focuses on sharing that media content and translating a video portfolio of a user's daily ‘story’ into a musical composition that represents that person's sonic persona. The barrier to musical entry is lowered because a user is able to contribute to a musical composition based on their daily activities.
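One way the "sonic persona" idea could work is to derive a stable, repeatable melody from a user's daily activity tags, so the same activities always produce the same motif. The sketch below is a hypothetical illustration; the tag names, hashing scheme, and scale are all invented here, not part of the prototype.

```python
import hashlib

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # major-scale degrees in semitones

def tag_to_note(tag, root=60):
    """Hash an activity tag to a stable scale degree (same tag -> same note)."""
    digest = hashlib.sha256(tag.encode("utf-8")).digest()
    degree = digest[0] % len(MAJOR)
    return root + MAJOR[degree]

# A day's 'story' becomes a four-note melody unique to those activities
story = ["coffee", "commute", "gym", "sunset"]
melody = [tag_to_note(tag) for tag in story]
```

Because the mapping is deterministic, two users with different days get different melodies, while a user's recurring routine yields a recognizable signature.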
Prototype 2: Sonic Browsing. An aggregator of social media feeds that enables sonic representations of users' activities. This prototype focuses on leveraging social media platforms to feed an aggregator service that triggers sound experiences based on users' activities. Users' mobile devices capture soundscapes from their locations and activities that can be played individually or together in a sonic cacophony.
Items from the brainstorming lists that the group thought were worth reporting:
Copyright 2000-2017, Fat Labs, Inc., ALL RIGHTS RESERVED