Visualizer Eden

Audio tool. Same-origin app route: upload a WAV file and drive the full control surface.

Browser-only pipeline: decoded audio feeds an FFT, scalar features feed shader uniforms, and a custom GLSL vertex stage displaces a high-poly mesh every frame.

Why I built it

I wanted mixes to show up as motion, not only as a waveform strip. The same FFT that powers meters in a DAW is enough to steer a 3D look if you compress it into a few stable scalars and keep the CPU work off the hot path.

Web Audio: graph, analyser, and features

Playback goes through AudioContext, then createMediaElementSource on the HTMLMediaElement so the file you load is the same signal you hear. An AnalyserNode sits in-line before destination: fftSize = 512, smoothingTimeConstant around 0.3, and decibel bounds set so the byte spectrum is usable without pegging.
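A minimal sketch of that wiring, assuming a helper named setupAudioGraph and exact decibel bounds that the text does not specify (only the fftSize and smoothing values come from the description above):

```javascript
// Analyser settings from the text: fftSize 512, smoothing ~0.3.
// The decibel bounds are an assumption; the point is keeping the
// byte spectrum usable without pegging at 0 or 255.
const ANALYSER_CONFIG = {
  fftSize: 512,
  smoothingTimeConstant: 0.3,
  minDecibels: -90,
  maxDecibels: -10,
};

// Wire media element -> analyser -> destination so the analysed
// signal is the same one you hear (browser-only Web Audio APIs).
function setupAudioGraph(mediaElement, audioContext) {
  const source = audioContext.createMediaElementSource(mediaElement);
  const analyser = audioContext.createAnalyser();
  analyser.fftSize = ANALYSER_CONFIG.fftSize;
  analyser.smoothingTimeConstant = ANALYSER_CONFIG.smoothingTimeConstant;
  analyser.minDecibels = ANALYSER_CONFIG.minDecibels;
  analyser.maxDecibels = ANALYSER_CONFIG.maxDecibels;
  source.connect(analyser);
  analyser.connect(audioContext.destination); // in-line: playback passes through
  return analyser; // analyser.frequencyBinCount === fftSize / 2 === 256
}
```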

Each frame we call getByteFrequencyData into a Uint8Array sized to frequencyBinCount, then reduce the bins into three bands (roughly the lowest 10% as bass, the next 40% as mid, the remainder as high) plus a whole-spectrum volume term. Those four numbers are normalized from 0–255 into 0–1 and published through React context as the single value the canvas reads. The analysis loop is throttled to about 30 fps so main-thread time is not burned on work the eye cannot resolve anyway.
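The band reduction can be sketched as a pure function over the analyser's byte array (the function name and exact band edges are illustrative; the 10%/40% split follows the description above):

```javascript
// Reduce frequency bins (0-255 bytes) into bass/mid/high/volume,
// each normalized to 0-1. Splits: lowest ~10% bass, next ~40% mid,
// the remainder high, plus a whole-spectrum average for volume.
function reduceBands(bins) {
  const n = bins.length; // frequencyBinCount, 256 for fftSize 512
  const bassEnd = Math.floor(n * 0.1);
  const midEnd = Math.floor(n * 0.5);
  const avg = (from, to) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += bins[i];
    return to > from ? sum / (to - from) / 255 : 0;
  };
  return {
    bass: avg(0, bassEnd),
    mid: avg(bassEnd, midEnd),
    high: avg(midEnd, n),
    volume: avg(0, n),
  };
}
```

Feeding it the Uint8Array filled by getByteFrequencyData each throttled tick yields the four scalars the React context carries.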

On top of that, there is light temporal logic: rolling volume history, peak tracking, and a bass-threshold beat detector that looks at spacing between hits to infer tempo-ish behavior for features that care about rhythm, not just level.
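A sketch of that beat logic, assuming a rising-edge threshold detector with a refractory gap and a median-interval tempo estimate (names, threshold, and gap values are all illustrative, not the project's real parameters):

```javascript
// Mark a beat when bass crosses a threshold on a rising edge (with a
// minimum gap so one kick isn't counted twice), then infer a tempo-ish
// BPM from the median spacing between hits.
function detectBeats(frames, { threshold = 0.6, minGapMs = 200 } = {}) {
  const beats = [];
  let prev = 0;
  for (const { t, bass } of frames) {
    const rising = bass >= threshold && prev < threshold;
    const last = beats[beats.length - 1];
    if (rising && (last === undefined || t - last >= minGapMs)) beats.push(t);
    prev = bass;
  }
  if (beats.length < 2) return { beats, bpm: null };
  const gaps = beats.slice(1).map((t, i) => t - beats[i]).sort((a, b) => a - b);
  const median = gaps[Math.floor(gaps.length / 2)];
  return { beats, bpm: 60000 / median };
}
```

Rhythm-driven features read the inferred spacing rather than the raw level, which is what lets them react to kicks instead of sustained bass.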

React Three Fiber and the render loop

The scene is a @react-three/fiber Canvas. The blob is a Three.js mesh with a ShaderMaterial, not a stack of built-in materials: everything interesting happens in strings you own. Each animation tick, useFrame copies fresh audio scalars and UI control values into material uniforms (time, play state, band levels, reactivity gain, palette colors, and dozens of “physics” and mode toggles). That keeps a single source of truth: the React tree for controls, the GPU for look.
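The per-tick copy can be sketched as a plain function over the material's uniforms, which in three.js are plain `{ value }` slots (the uniform names here are illustrative, not the project's real ones):

```javascript
// Per-tick sketch: copy fresh audio scalars and UI control values into
// the ShaderMaterial's uniforms. No allocation, no React state writes;
// just mutating { value } slots the GPU reads next draw.
function syncUniforms(uniforms, audio, controls, elapsed) {
  uniforms.uTime.value = elapsed;
  uniforms.uPlaying.value = audio.playing ? 1 : 0;
  uniforms.uBass.value = audio.bass;
  uniforms.uMid.value = audio.mid;
  uniforms.uHigh.value = audio.high;
  uniforms.uVolume.value = audio.volume;
  uniforms.uReactivity.value = controls.reactivity;
}
```

Inside the R3F loop this would run as something like `useFrame((state) => syncUniforms(material.uniforms, audioRef.current, controlsRef.current, state.clock.elapsedTime))`, keeping the React tree as the source of truth for controls while the GPU owns the look.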

GLSL: what the vertex shader actually does

The vertex stage builds an audioIntensity term from volume plus weighted bass, mid, and high. That scalar modulates how hard procedural motion runs: multi-octave value noise (fbm), sine wave stacks for “liquid” motion, surface-tension ripples, elasticity-style bounce, optional puddle flattening, goopiness and liquidity channels, split and tentacle modes, and a base fbm layer scaled by user noise parameters. When audio is playing, an extra normal-aligned displacement term scales with audioReactivity and per-band weights so kicks read differently from hiss.
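A CPU-side mirror of that intensity term, to make the shape of the math concrete (the per-band weights are assumptions; in the shader they arrive as uniforms):

```javascript
// Mirror of the vertex shader's audioIntensity: volume plus weighted
// bass/mid/high. Heavier bass weight is why kicks read differently
// from hiss once this scalar drives the displacement amplitude.
function audioIntensity({ volume, bass, mid, high }, weights = { bass: 1.0, mid: 0.6, high: 0.4 }) {
  return volume + bass * weights.bass + mid * weights.mid + high * weights.high;
}
```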

Fragment-side, the shader handles multi-color blending, metallic and contrast controls, and the surface look that sells each preset (glass vs goo vs pearl is mostly uniform math, not separate scenes). The point is not photoreal PBR: it is a controllable, expressive surface that stays within a predictable cost because you wrote the math.
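The "mostly uniform math" point can be illustrated with the simplest piece of that blending, a palette lerp; this helper and its name are hypothetical, a JS stand-in for what the fragment shader does with mix():

```javascript
// Blend across a small RGB palette by a 0-1 parameter: pick the
// segment, then linearly interpolate within it. Changing palette and
// blend parameters (not the shader) is what moves between presets.
function blendPalette(colors, t) {
  const clamped = Math.min(Math.max(t, 0), 1);
  const scaled = clamped * (colors.length - 1);
  const i = Math.min(Math.floor(scaled), colors.length - 2);
  const f = scaled - i;
  return colors[i].map((c, k) => c + (colors[i + 1][k] - c) * f);
}
```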

What this project was for

End-to-end ownership of one pipeline: decode audio in the browser, derive stable features, push them across the JS/WebGL boundary every frame, and keep the mesh readable on real laptops. The payoff is fluency: Web Audio, R3F's render loop, and a large custom GLSL stage in one codebase, with a clear split between "analysis" and "look."