The Twenty-second Annual Interactive Audio Conference
BBQ Group Report:
Abusing Technology for Creative Purposes
Participants: A.K.A. "Unintended Consequencers"
Lawrence Sarkar, Cirrus Logic
Philip Nicol, Dolby Laboratories
Matthew Johnston, Microsoft
Bobby Lombardi, G-Technology
Rick Cohen, Qubiq Audio  
Facilitator: Linda Law  

Problem Statement

Creative output is limited by the tools that we use. More voices would be represented in music and art if the barrier to entry were lowered by more unintended/unofficial uses of music production tools and other technologies.

The group initially explored existing kinds of unintended uses of technology that produce creative output. These were broken into three categories: historical musical unintended uses, data driven input, and sensor driven input.

Historical musical unintended uses

We looked at a number of historical examples. Auto-Tune's underlying technology originally served a different purpose in the oil industry, where it was used to interpret seismic data when searching for potential drill sites. Its intended use in music production was pitch correction (transparently enhancing the performance of out-of-tune singers); the unintended use, which became a breakthrough of its own, was pushing that correction to extremes to create a robotic-sounding vocal effect.

Turntable scratching is now a common rhythmic effect used by DJs, but it was not initially possible on early turntables due to their design. Artists had to modify their turntables to allow the backwards motion.

A further example is a post on YouTube by the band The Academic, who exploited the delay in Facebook Live feeds, using it as an audio-visual looper to create a polyphonic rock song by building up the track layer by layer.


Data driven input

The workgroup also explored the repurposing of big data. Large collections of data, such as weather trends, traffic patterns, maps, population/census records, and stored biometric data, can be mapped to audio parameters or converted to musical note data.

Brian Foo has created a website with audio examples of data driven music. One of his examples is “Rhapsody In Grey” - Using Brain Wave Data to Convert a Seizure to Song.
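This kind of data-to-music mapping can be sketched simply: normalize a data series, then quantize each value onto the notes of a musical scale. The snippet below is an illustrative assumption (a pentatonic mapping of made-up temperature readings), not Foo's actual method:

```python
# Sketch: map an arbitrary data series onto MIDI note numbers in a
# pentatonic scale. The input data and scale choice are illustrative
# assumptions, not taken from any of the works mentioned above.

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees as semitone offsets from the root

def data_to_notes(values, root=60, octaves=2):
    """Scale each value into the available range and snap it to the scale."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    steps = len(PENTATONIC) * octaves
    notes = []
    for v in values:
        step = round((v - lo) / span * (steps - 1))
        octave, degree = divmod(step, len(PENTATONIC))
        notes.append(root + 12 * octave + PENTATONIC[degree])
    return notes

# e.g. a week of daily temperatures becomes a short melody
temps = [12.1, 14.3, 13.8, 17.0, 19.5, 18.2, 15.6]
print(data_to_notes(temps))  # → [60, 67, 64, 74, 81, 76, 69]
```

The same mapping works for any one-dimensional series (traffic counts, census figures, biometric readings); only the input list changes.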


We found the following patent:
Biometric-Music Interaction Methods and Systems
US 20140074479 A1


It is assigned to BioBeats, Inc. Here is the abstract:
A system and method for the automatic, procedural generation of musical content in relation to biometric data. The systems and methods use a user's device, such as a cell phone to capture image data of a body part, and derive a biometric signal from the image data. The biometric signal includes biometric parameters, which are used by a music generation engine to generate music. The music generation can also be based on user-specific data and quality data related to the biometric detection process.
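The general idea the abstract describes, biometric parameters steering a music generation engine, can be illustrated with a toy mapping. Everything below (the heart-rate input, the ranges, and the parameter names) is an assumption for illustration, not the patented system:

```python
# Toy illustration of biometric-driven generation: map a heart-rate reading
# (beats per minute) to tempo, note density, and register. The mapping and
# ranges are assumed for illustration, not the method claimed in the patent.

def music_params_from_heart_rate(bpm):
    """Derive simple generative-music parameters from one biometric value."""
    bpm = max(40, min(bpm, 180))           # clamp to a plausible range
    tempo = bpm                            # play in time with the heart
    # calmer pulse -> sparser, lower music; faster pulse -> denser, higher
    density = (bpm - 40) / 140             # 0.0 (sparse) .. 1.0 (dense)
    register = 48 + round(density * 24)    # base MIDI note, from C3 upward
    return {"tempo_bpm": tempo,
            "note_density": round(density, 2),
            "base_note": register}

print(music_params_from_heart_rate(72))
# → {'tempo_bpm': 72, 'note_density': 0.23, 'base_note': 53}
```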

Sensor driven input

The easy availability of USB-based controllers and sensors has been a boon to creators, and environments such as Max/MSP make it easy to connect sensor outputs to musical controls. Musicians are no longer restricted to knobs and wheels; products such as the Leap Motion infrared sensor allow them to wave their hands and map those gestures to parameter control, mixing, and conducting.
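A sensor-to-parameter mapping of this kind typically normalizes the raw reading and rescales it to the control's range. The sketch below is a Python illustration (not Max/MSP), and the sensor and cutoff ranges are hypothetical:

```python
# Sketch of sensor-to-parameter mapping: normalize a raw sensor reading
# (e.g. hand height in millimetres from an infrared controller) and map it
# exponentially to a filter cutoff frequency. All ranges are assumptions.

import math

def map_sensor(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly interpolate, clamping the input to its expected range."""
    value = max(in_lo, min(value, in_hi))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def hand_height_to_cutoff(height_mm):
    """Map 100-600 mm hand height to 200 Hz - 8 kHz, log-scaled for the ear."""
    t = map_sensor(height_mm, 100, 600, 0.0, 1.0)
    return 200 * math.exp(t * math.log(8000 / 200))  # exponential sweep

print(round(hand_height_to_cutoff(350)))  # → 1265 (the geometric midpoint)
```

The exponential rescaling matters: a linear sweep would bunch all the perceived change into the low end of the hand's travel, since pitch and brightness are heard logarithmically.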



We created multiple prototypes showing existing technologies being used unconventionally to make new art.

None of the examples above presents an entry point for a person with limited or no knowledge of music production. The group attempted to bridge this gap by targeting easily accessible social media platforms.

Prototype 1: Sonic Snapchat. A typical use case for an active phone user is sharing short video clips with friends and family. This prototype focuses on sharing that media content and translating a video portfolio of a user's daily ‘story’ into a musical composition that represents that person’s sonic persona. The barrier to musical entry is lowered because a user is able to contribute to a musical composition based on their daily activities.

Prototype 2: Sonic Browsing. An aggregator of social media feeds that enables sonic representations of users' activities. This prototype focuses on leveraging social media platforms to feed an aggregator service that triggers sound experiences based on users’ activities. Users’ mobile devices capture soundscapes from their locations and activities that can be played individually or together in a sonic cacophony.

Items from the brainstorming lists that the group thought were worth reporting:

  • Social media seemed to be the most accessible input source for a typical technology user.
