Project Bar-B-Q
2019 Workgroup Topic Proposals

Audio Visualizers 2025: Extracting emotional information from audio

Posted July 16, 2019 by David Battino

Why are music visualizers so artless? Most look like hardware spectrum analyzers or jiggling oscilloscopes. Perhaps it’s because the input data is limited to snapshots of bass and treble levels.

Let’s explore ideas for extracting emotionally meaningful information from audio to produce next-gen visual experiences. Imagine if a screen, garment, or virtual-reality experience could react to chord quality (e.g., major/minor/augmented), subtle tempo changes, solos, or even lyrics. A combination of deeper audio analysis and metadata could create experiences as exciting as movies and live dancers.
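As a starting point for discussion, here is a minimal, hypothetical sketch (Python with librosa; not part of the original proposal) of how a visualizer might pull two of the suggested signals, rough major/minor chord quality and tempo, out of an audio file. The file name, the simple triad-template matching, and the per-beat labeling are all illustrative assumptions, not a finished analysis pipeline.

```python
# Hedged sketch: estimate tempo and a coarse major/minor label per beat,
# the kind of data a next-gen visualizer could map to color, motion, or mood.
import numpy as np
import librosa

def chord_quality(chroma_frame):
    """Guess major vs. minor by correlating one chroma frame with triad templates."""
    best_quality, best_score = None, -np.inf
    for root in range(12):
        for quality, intervals in (("major", (0, 4, 7)), ("minor", (0, 3, 7))):
            template = np.zeros(12)
            template[[(root + i) % 12 for i in intervals]] = 1.0
            score = float(np.dot(chroma_frame, template))
            if score > best_score:
                best_quality, best_score = quality, score
    return best_quality

y, sr = librosa.load("song.wav")                     # hypothetical input file
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)   # beat frames + global tempo
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)      # 12-bin pitch-class energy

# One quality label per detected beat.
qualities = [chord_quality(chroma[:, f]) for f in beats if f < chroma.shape[1]]
print("tempo (BPM):", float(np.atleast_1d(tempo)[0]))
print("first beats:", qualities[:8])
```

Richer cues such as augmented chords, solos, or lyric sentiment would need additional analysis or sidecar metadata, which is exactly the design space this topic proposes to explore.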

[Image: a bog-standard visualizer. There’s gotta be more than this.]
Tags: AI, AR, contextual awareness, metadata, visualizer, VR
