How RenderWave 1.5 Turns Audio Into Live Shader Control

A source-backed guide to RenderWave 1.5 audio reactivity: mel-scale analysis, 14 routable signals, CPU-side parameter modulation, tempo sync, and live-safe defaults.

RenderWave 1.5 does more than make visuals louder when the music gets louder. The audio system analyzes a live CoreAudio input, turns that signal into musically useful control data, and routes it into shader parameters, FX parameters, integrated motion phases, MIDI-driven performance states, and tempo-aware visuals.

The important word is control. Audio can move a value, trigger an accent, drive a color cycle, or feed a rate that becomes phase over time. If the route is disabled, the parameter stays manual. That is the difference between a background visualizer and a live VJ instrument.

What RenderWave Listens For

RenderWave captures a selected CoreAudio input and includes CoreAudio hot-swap handling so setup changes are less fragile during a show. The analyzer works from overlapping FFT windows, maps the spectrum through 40 mel bands, and groups those bands into signals that are useful at performance speed.

The public modulation surface has 14 routable audio signals, listed below with a code sketch after the list:

  • Overall energy across the full spectrum
  • Bass, mid, mid-high, and high continuous levels
  • Onset strength from spectral flux
  • Bass hit, mid hit, mid-high hit, and high hit transient signals
  • Bass presence, mid presence, and high presence long-envelope signals
  • Beat detection for trigger-style behavior
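
As a rough mental model, the signal set can be pictured as an enum the router selects from, with the four continuous band levels derived from ranges of the 40 mel bands. The names, band boundaries, and types below are illustrative assumptions, not RenderWave's actual API.

    // Hypothetical sketch of the 14 routable signals; the names are
    // illustrative, not RenderWave's internal identifiers.
    enum AudioSignal: CaseIterable {
        case overallEnergy                                // full-spectrum level
        case bassLevel, midLevel, midHighLevel, highLevel // continuous band levels
        case onsetStrength                                // spectral flux
        case bassHit, midHit, midHighHit, highHit         // transient signals
        case bassPresence, midPresence, highPresence      // long envelopes
        case beat                                         // trigger-style detection
    }

    // Example of grouping the 40 mel bands into the four continuous band levels.
    // The band boundaries are assumptions chosen only for illustration.
    func bandLevels(melBands: [Float]) -> (bass: Float, mid: Float, midHigh: Float, high: Float) {
        precondition(melBands.count == 40)
        func mean(_ range: Range<Int>) -> Float {
            melBands[range].reduce(0, +) / Float(range.count)
        }
        return (mean(0..<8), mean(8..<20), mean(20..<30), mean(30..<40))
    }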

That gives a performer more than a bass/mid/treble split. A bass hit can flash a ring for a single impact. Bass presence can make tunnel density breathe over several seconds. Mid-high hit can sharpen spark and edge motion without making the whole scene jump.

How Audio Reaches The Shader

RenderWave does not ask every Metal shader to inspect raw audio. Audio reactivity is CPU-side parameter modulation applied before the uniform buffer is written to the GPU. The shader receives the same kind of parameter values it already understands, while the modulation layer handles audio routing, smoothing, ranges, and mode behavior.

Each routed parameter stores the following, sketched as a small struct after the list:

  • The parameter index and name
  • A low and high modulation range
  • The selected audio band
  • The modulation mode
  • A smoothing amount from snappy to very smooth
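
A minimal sketch of what such a route could look like, and how it might be applied before the uniform write. It continues the illustrative AudioSignal enum above; every name, type, and formula here is an assumption for explanation, not RenderWave's implementation.

    // Hypothetical mode and route types matching the fields listed above.
    enum ModulationMode { case fade, hit, loop, pingPong }

    struct AudioRoute {
        let parameterIndex: Int    // which shader parameter this route drives
        let parameterName: String
        let low: Float             // low end of the modulation range
        let high: Float            // high end of the modulation range
        let signal: AudioSignal    // selected audio band or signal
        let mode: ModulationMode   // behavior is sketched in the next section
        let smoothing: Float       // 0 = snappy ... 1 = very smooth
        var smoothed: Float = 0    // internal smoothing state

        // One-pole smoothing, then map the signal into [low, high].
        // (Mode-specific shaping is left to the mode sketch below.)
        mutating func value(for raw: Float) -> Float {
            smoothed += (raw - smoothed) * (1 - smoothing)
            return low + (high - low) * smoothed
        }
    }

    // CPU-side modulation pass: routed parameters are overwritten here, and
    // only then would `params` be copied into the uniform buffer the shader reads.
    func applyRoutes(_ routes: inout [AudioRoute],
                     signalLevel: (AudioSignal) -> Float,
                     params: inout [Float]) {
        for i in routes.indices {
            let raw = signalLevel(routes[i].signal)
            let value = routes[i].value(for: raw)
            params[routes[i].parameterIndex] = value
        }
    }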

RenderWave 1.5 currently has 258 default audio-modulation routes across the generated audio reactivity matrix. They are authored per shader. A structural control, a glow control, a hue control, and a transient flash should not all respond the same way.

The Four Modulation Modes

Fade maps audio energy into a range. It is best for continuous amounts like scale, density, brightness, glow, contrast, frequency, or texture intensity.

Hit reacts to transient signals. It is the right tool for flashes, impacts, beam bursts, pulse accents, and other moments that should fire on a kick, snare, or sharp onset.

Loop uses the selected source to drive a wraparound cycle. It fits hue shifts, palette travel, and motion that should keep cycling musically.

Ping Pong bounces between the low and high ends of the range. It is useful when a back-and-forth motion reads better than a wrap.

These are different musical behaviors, not four names for the same amount slider.
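
One way to picture the difference, continuing the illustrative sketch above. The formulas and decay constants are assumptions made for the example, not RenderWave's code.

    // Illustrative evaluation of the four modes for one routed parameter.
    // `level` is the smoothed signal in 0...1, `dt` is the frame time in
    // seconds, and `phase` is per-route state carried between frames.
    func evaluate(mode: ModulationMode, level: Float, dt: Float,
                  low: Float, high: Float, phase: inout Float) -> Float {
        switch mode {
        case .fade:
            // Continuous amount: map the level straight into the range.
            return low + (high - low) * level
        case .hit:
            // Transient accent: jump up on a hit, then fall back toward `low`.
            phase = max(phase - 4 * dt, level)
            return low + (high - low) * phase
        case .loop:
            // Wraparound cycle: the signal drives how fast the cycle advances.
            phase = (phase + level * dt).truncatingRemainder(dividingBy: 1)
            return low + (high - low) * phase
        case .pingPong:
            // Bounce between the ends of the range instead of wrapping.
            phase = (phase + level * dt).truncatingRemainder(dividingBy: 2)
            return low + (high - low) * (phase <= 1 ? phase : 2 - phase)
        }
    }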

The 1.5 Control Contract

The most important 1.5 audio work was making the system performable. A motion-rate slider at zero should mean stopped.

Earlier audio defaults could break that expectation when Fade was used on speed, flow, scroll, or rotation controls. Fade replaces the slider value while active, so incoming audio could keep motion alive even after the performer put the visible slider at zero.

The 1.5 pass fixed that contract: motion-rate sliders stay slider-authoritative by default. When a rate needs to become smooth visual motion, RenderWave uses IntegratedUniform destinations, usually u.param10 through u.param12, so the CPU integrates the rate into phase over time. Audio modulation runs before this integration step, which means an explicitly routed rate can still move with the music, but the route is visible and controllable.

When the source slider returns home, the accumulated phase decays back instead of freezing at a stale offset. That is what makes the system usable mid-set: audio response is powerful, but it does not quietly override a control the performer has turned off.
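
A rough sketch of that integrate-then-decay behavior for one motion-rate control. The struct, the decay constant, and the uniform mapping are illustrative assumptions; only the u.param10 through u.param12 destinations come from the text above.

    // Illustrative CPU-side integration for one motion-rate control.
    // The shader reads the accumulated phase, not the raw rate.
    struct IntegratedRate {
        var phase: Float = 0

        // `sliderRate` is the manual slider value; `audioRate` is an optional,
        // explicitly routed audio contribution applied before integration.
        mutating func step(sliderRate: Float, audioRate: Float, dt: Float) -> Float {
            let rate = sliderRate + audioRate
            if rate > 0 {
                phase += rate * dt            // the rate becomes motion over time
            } else {
                phase *= max(0, 1 - 2 * dt)   // slider back home: decay, don't freeze
            }
            return phase                      // written to e.g. u.param10
        }
    }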

Tempo Is A Separate Layer

Audio energy and musical time are not the same thing. RenderWave 1.5 handles both.

Tempo can come from Ableton Link, confident audio BPM detection, tap tempo, or manual BPM. Shaders can receive u.beatPhase, u.barPhase, and u.bpm when tempo uniforms are active.

That means a kick can drive a Bass Hit flash while barPhase keeps a scanner aligned to a four-beat phrase. One is amplitude. The other is musical position. Live visuals need both.
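
For intuition, beat and bar phase can be derived from tempo roughly like this, assuming a four-beat bar. The uniform names come from the text above; the function itself is an illustrative sketch, not RenderWave's code.

    // Illustrative tempo uniforms: phase in 0...1 within the beat and within
    // an assumed four-beat bar.
    func tempoUniforms(bpm: Double, songTimeSeconds: Double)
        -> (beatPhase: Float, barPhase: Float, bpm: Float) {
        let beats = songTimeSeconds * bpm / 60.0
        let beatPhase = beats.truncatingRemainder(dividingBy: 1)
        let barPhase = (beats / 4.0).truncatingRemainder(dividingBy: 1)
        return (Float(beatPhase), Float(barPhase), Float(bpm))
    }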

How This Changes A Set

A strong audio-reactive setup does not move every parameter all the time. It routes the right signal to the right kind of control, as in the sketch after this list:

  • Structure: bass or bass presence on scale, density, tunnel size, grid spacing, or warp
  • Light: bass hit or onset on glow, bloom, beam width, flash, or exposure
  • Color: overall, mid, or loop mode on hue and palette motion
  • Rhythm: beat, bass hit, or tempo phase on strobes, scanner motion, and pulse accents
  • Stability: smoothing and role limits so a hot room signal does not blow the scene apart
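
Put together, a set-level routing table might look something like this. The parameter indices, names, and ranges are made up for illustration and reuse the hypothetical types sketched earlier.

    // Hypothetical routing choices following the structure/light/color split.
    let routes: [AudioRoute] = [
        AudioRoute(parameterIndex: 0, parameterName: "tunnelDensity",
                   low: 0.4, high: 0.9, signal: .bassPresence, mode: .fade, smoothing: 0.8),
        AudioRoute(parameterIndex: 3, parameterName: "glow",
                   low: 0.2, high: 1.0, signal: .bassHit, mode: .hit, smoothing: 0.3),
        AudioRoute(parameterIndex: 7, parameterName: "hueShift",
                   low: 0.0, high: 1.0, signal: .overallEnergy, mode: .loop, smoothing: 0.6),
    ]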

That is the RenderWave 1.5 model: audio as a performance routing system. You can use it subtly, drive a whole room with it, map it to MIDI, stack it with the FX rack, or turn it off and run manually. The system is built around the live-control contract first, because that is what matters when the output is on the wall.
