Assignment 7

Concept
After our visit to the teamLab exhibition, one installation stopped me cold: the room-scale laser piece — hundreds of beams of colored light radiating inward from sources arranged in a wide circle around the perimeter, all converging at a central point where they interfered and stacked into a glowing, slowly morphing orb. The shape at the center wasn’t static. It pulsed. It shifted — sometimes a perfect sphere, sometimes something more irregular — as if the light itself were breathing. The whole installation cycled through color states: deep emerald green, then teal, then a full-spectrum rainbow, each transition preceded by a complete blackout before the next color bloomed back in. A low ambient drone played underneath it all, its pitch and texture shifting subtly with the light.
What struck me most was the sense of three-dimensionality. The beams weren’t converging on a flat point — they were targeting different parts of a rotating surface, and as that surface turned, the beams swept up, down, and sideways, making the central structure feel genuinely volumetric. It looked less like a projection and more like a physical object built entirely out of light. I wanted to recreate that.

Inspirations

Code Highlight
The piece I’m most proud of is the beam tapering system combined with depth-based brightness. Together, these two effects give the sketch its sense of physical space.
Real laser beams in a foggy room are dim at the source and brighten as they converge — the light accumulates. I replicate this by splitting each beam into three segments and drawing each progressively brighter:

// Tapered beam: dim at source, bright at convergence
let mx = lerp(b.sx, tgt.x, 0.5),  my = lerp(b.sy, tgt.y, 0.5);
let qx = lerp(b.sx, tgt.x, 0.78), qy = lerp(b.sy, tgt.y, 0.78);

strokeWeight(0.8);
stroke(hue, 88, 100, bA * 0.30 * depth);
line(b.sx, b.sy, mx, my);       // outer — dim

strokeWeight(1.0);
stroke(hue, 72, 100, bA * 0.50 * depth);
line(mx, my, qx, qy);           // mid

strokeWeight(1.3);
stroke(hue, 38, 100, bA * 0.88 * depth);
line(qx, qy, tgt.x, tgt.y);    // inner — brightest

On top of that, depth is derived from the sphere point’s perspective scale — points on the near side of the rotating sphere get a higher depth value and therefore brighter beams, while the far side is dimmer. This is what makes the 3D rotation actually readable:

const minS = FOV / (FOV + SPHERE_R * 1.85);    // smallest scale: far side of the sphere
const maxS = FOV / (FOV - SPHERE_R * 1.85);    // largest scale: near side
let depth = map(tgt.s, minS, maxS, 0.5, 1.0);  // far beams render at half brightness
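The mapping above assumes tgt.s, the perspective scale of the beam’s target point. As a rough illustration of where such a value could come from (the point parameterization, the FOV value, and the function name below are my assumptions, not the sketch’s actual code), a simple perspective divide on a rotating sphere point looks like:

```javascript
const FOV = 600;        // assumed focal length, not the sketch's actual value
const SPHERE_R = 120;   // assumed sphere radius

// Hypothetical helper: place a point on the sphere (spherical angles theta,
// phi), rotate it about the vertical axis, and perspective-project it.
function projectPoint(theta, phi, rotY) {
  const x = SPHERE_R * Math.sin(phi) * Math.cos(theta + rotY);
  const y = SPHERE_R * Math.cos(phi);
  const z = SPHERE_R * Math.sin(phi) * Math.sin(theta + rotY);
  const s = FOV / (FOV + z);  // near side (z < 0) gives s > 1, hence brighter
  return { x: x * s, y: y * s, s };
}
```

Points swinging toward the viewer get a larger s, which the map() call above converts into brighter beams.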

Embedded Sketch

Controls: move your mouse to tilt the sphere in 3D · click to toggle sound

Milestones and Challenges
Getting the direction right. My first build had the sources at the bottom arcing upward, like stage lights — which looked completely wrong. The actual installation has sources in a full circle, all shooting inward. Once I switched to a full 360° ring of sources the whole thing immediately felt closer to the reference.
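A minimal sketch of that 360° ring, assuming hypothetical constants NUM_BEAMS and RING_R (the real sketch’s names and counts may differ):

```javascript
const NUM_BEAMS = 300;  // assumed beam count
const RING_R = 420;     // assumed ring radius in pixels

// Evenly space beam sources around a full circle; each source later
// fires inward at a point on the rotating sphere.
function makeSources(cx, cy) {
  const sources = [];
  for (let i = 0; i < NUM_BEAMS; i++) {
    const a = (i / NUM_BEAMS) * Math.PI * 2;
    sources.push({
      sx: cx + RING_R * Math.cos(a),
      sy: cy + RING_R * Math.sin(a),
    });
  }
  return sources;
}
```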
Making the 3D readable. For a long time the sphere rotation was happening mathematically but you couldn’t see it — the beams just seemed to flicker randomly. The fix was depth-based opacity. Once beams targeting the near side of the sphere were noticeably brighter than those going to the far side, the rotation immediately read as three-dimensional. A small change with a huge perceptual payoff.
Fixing the orb banding. The central glow had visible rings — discrete concentric circles always show banding unless the steps are small enough to fall below the perceptual threshold. I switched to drawing 200 circles in 1-pixel steps using an exponential falloff curve:

let falloff = exp(-t * 3.8) - exp(-3.8);  // exponential decay, shifted to reach 0 at the edge
falloff = falloff / (1 - exp(-3.8));      // renormalized so the core is exactly 1

This makes the transition from bright core to outer haze completely continuous. It costs more per frame but is imperceptible on modern hardware at pixelDensity(1).
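For illustration, the falloff can be factored into a pure helper and applied across the 200 one-pixel steps; the loop in the comment uses p5 drawing calls with assumed names (hue, orbAlpha, cx, cy):

```javascript
// Normalized exponential falloff: 1 at the core (t = 0), 0 at the edge (t = 1).
function glowFalloff(t) {
  const k = 3.8;  // decay rate from the snippet above
  return (Math.exp(-k * t) - Math.exp(-k)) / (1 - Math.exp(-k));
}

// Inside the p5 draw loop (browser only), roughly:
// noFill();
// for (let r = 1; r <= 200; r++) {                   // 1-px radius steps
//   stroke(hue, 30, 100, orbAlpha * glowFalloff(r / 200));
//   circle(cx, cy, r * 2);                           // diameter = 2 * radius
// }
```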
Syncing audio to visuals. Browsers block audio without a user gesture, so the first click starts the audio context. After that, the master gain and filter cutoff update every frame, tracking globalAlpha directly — the sound fades out during blackout transitions and breathes with the pulse automatically, with no separate audio state machine.
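A hedged sketch of that coupling, with the alpha-to-parameter mappings pulled out as plain functions (the function names, the 0.25 peak gain, and the 200 Hz to 2 kHz cutoff range are illustrative assumptions, not the sketch’s actual values):

```javascript
// Master gain tracks the visual alpha directly, so blackouts mute the drone.
function mapAlphaToGain(globalAlpha) {
  return 0.25 * globalAlpha;  // 0.25 = assumed peak loudness
}

// Brighter light means a brighter timbre: cutoff sweeps 200 Hz to 2 kHz.
function mapAlphaToCutoff(globalAlpha) {
  return 200 + 1800 * globalAlpha;
}

// In the p5 sketch (browser only), roughly:
// function mousePressed() { userStartAudio(); }  // unlock audio on gesture
// function draw() {
//   masterGain.gain.value = mapAlphaToGain(globalAlpha);
//   filter.frequency.value = mapAlphaToCutoff(globalAlpha);
// }
```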

Reflection and Ideas for Future Work
The process taught me how much perceptual rendering depends on implied physics. The beams don’t actually accumulate light — I’m just drawing lines with varying opacity — but tapering them correctly is enough to convince the eye that something real is happening. The same is true of the depth gradient. These aren’t physically accurate simulations; they’re visual shortcuts that exploit how we expect light to behave.

For future improvements I’d want to explore:
Shape morphing at the convergence point. The sphere is a clean starting point but the installation showed the central structure shifting form during my visit — flattening into a disc, stretching vertically. Morphing the convergence surface between geometries over time would get much closer to that full visual range.

Reactive audio. Right now the audio is a static generative drone. Mapping the rotation speed or morphing intensity to filter resonance or oscillator detune would make the sound feel more alive and more directly tied to what the light is doing.
