We’re coming up on 23 years of collective brainstorming and problem-solving across a broad swath of audio- and music-related topics. We’ve gone from trying to get crappy 8-bit Sound Blasters to work, to pervasive microphones listening to everything we say.
Have themes arisen and faded away? Why did they arise? Why did they fade away?
Is any old problem becoming new again?
Is the world becoming a better place for music and sound, or worse?
Why aren’t we celebrating the durability of MIDI???
I propose we review the subjects covered in past BBQs and look for patterns and predictors.
Why are music visualizers so artless? Most look like hardware spectrum analyzers or jiggling oscilloscopes. Perhaps it’s because the input data is limited to snapshots of bass and treble levels.
Let’s explore ideas for extracting emotionally meaningful information from audio to produce next-gen visual experiences. Imagine if a screen, garment, or virtual-reality experience could react to chord quality (e.g., major/minor/augmented), subtle tempo changes, solos, or even lyrics. A combination of deeper audio analysis and metadata could create experiences as exciting as movies and live dancers.
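To make the idea concrete, here is a minimal sketch of the kind of analysis involved, assuming the open-source librosa library in Python: it estimates tempo and beat times and makes a very rough per-frame major/minor guess from chroma (pitch-class energy). The chord-quality heuristic is purely illustrative; real chord recognition would use chord templates or an HMM, and the function and variable names are invented for this example.

```python
# Sketch: pull richer features than bass/treble snapshots out of an audio file,
# producing data a visualizer could map to color, motion, or lighting.
import librosa
import numpy as np

def analyze_for_visuals(path):
    y, sr = librosa.load(path, mono=True)

    # Global tempo estimate plus per-beat times, so visuals can track tempo drift.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    # Chroma: energy in each of the 12 pitch classes, per analysis frame.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

    # Toy chord-quality guess: compare energy at the major vs. minor third
    # above the strongest pitch class in each frame.
    root = chroma.argmax(axis=0)
    frames = np.arange(chroma.shape[1])
    major_third = chroma[(root + 4) % 12, frames]
    minor_third = chroma[(root + 3) % 12, frames]
    quality = np.where(major_third >= minor_third, "maj", "min")

    return {
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "beat_times": beat_times,
        "frame_quality": quality,
    }
```

Even a toy pipeline like this already yields more to react to than bass and treble levels; layering in metadata (lyrics, song structure, performer credits) is where the movie-like experiences could come from.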