Our system comprised real-time, art-directed assets and custom software for audio processing and reaction. Hooked directly into the booth, we received an audio feed that was run through our software to detect kicks, hi-hats, and snares, while also measuring the overall energy of the music. This real-time analysis of the processed audio then drove the on-screen graphics, which were output over standard HDMI and mapped to eight large-format LED displays.
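The analysis described above can be sketched as simple band-energy detection: split each audio frame into low, mid, and high frequency bands and flag a "hit" when a band's energy jumps well above its recent average, while tracking overall loudness. The band edges, frame size, and thresholds below are illustrative assumptions, not the project's actual values.

```python
import numpy as np

SR = 44100     # sample rate (Hz), assumed
FRAME = 1024   # samples per analysis frame, assumed
BANDS = {      # rough spectral ranges (Hz) for each percussive element
    "kick":  (20, 150),
    "snare": (150, 2500),
    "hihat": (5000, 16000),
}

def band_energies(frame, sr=SR):
    """Return the energy in each named band plus overall RMS for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energies = {
        name: float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2))
        for name, (lo, hi) in BANDS.items()
    }
    energies["rms"] = float(np.sqrt(np.mean(frame ** 2)))  # overall energy
    return energies

def detect_hits(frames, threshold=2.5):
    """Flag a band as 'hit' when its energy exceeds threshold x its running mean."""
    history = {name: [] for name in BANDS}
    hits = []
    for frame in frames:
        e = band_energies(frame)
        frame_hits = set()
        for name in BANDS:
            if history[name] and e[name] > threshold * np.mean(history[name]):
                frame_hits.add(name)
            history[name] = (history[name] + [e[name]])[-43:]  # ~1 s of history
        hits.append(frame_hits)
    return hits
```

In a live setup, each detected hit and the running RMS value would be forwarded to the graphics layer as parameters driving the visuals.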
This is the first outcome of our research into art-directed generative content that can exist across multiple platforms, from large-scale LED walls to smartphones and the web.