Assignment 9

Concept:

For this assignment, I was influenced by the glitch-aesthetic of Ryoichi Kurokawa and the organic precision of Robert Hodgin. The project tells a visual story of a digital organism struggling to maintain its structural integrity, moving through a scripted cycle of Crystalline Order, Kinetic Chaos, and Digital Decay. I was able to do this by layering standard flocking rules with a custom “Glitch” variable.

There are three modes you can switch through by pressing the spacebar, which I called Order (Cohesion high), Chaos (Wander & Flee high), and Decay (Fading trails).
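One way to wire that spacebar cycle is to treat each mode as a named weight preset and step through them. This is only a hedged sketch of the idea: the mode names mirror the post, but the weight numbers are illustrative, not the sketch's actual tuning, and in p5.js `advanceMode()` would be called from `keyPressed()` when `key === ' '`.

```javascript
// Hypothetical mode presets (numbers are illustrative, not the real values).
const MODES = [
  { name: 'Order', cohesionW: 2.5, wanderW: 0.0, separationW: 1.0 },
  { name: 'Chaos', cohesionW: 0.2, wanderW: 3.0, separationW: 1.5 },
  { name: 'Decay', cohesionW: 0.1, wanderW: 0.5, separationW: 5.0 },
];

let mode = 0; // start in Order

// In p5.js this would run inside keyPressed() when the spacebar is hit.
function advanceMode() {
  mode = (mode + 1) % MODES.length; // wrap back to Order after Decay
  return MODES[mode].name;
}
```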

Code Highlight:

I am particularly proud of the State-Dependent Steering Logic within the flock() method. This snippet acts as the nervous system of the simulation, allowing the agents to instantly reconfigure their behavior based on the current global state. By using vectors and dynamically shifting the weights of Cohesion, Wander, and Separation, I can transition the entire system.

// coh, sep, and glitch are the steering vectors computed earlier in flock()
if (state === 0) { // ORDER: High Cohesion to center
  let center = createVector(width/2, height/2);
  coh = this.seek(center).mult(2.5); // Force into a rigid cluster
  this.maxSpeed = 1.8;
} else if (state === 1) { // CHAOS: High Velocity & Randomness
  glitch = p5.Vector.random2D().mult(6.0); // Introduce erratic energy
  this.maxSpeed = 7;
  this.maxForce = 0.6;
} else { // DECAY: High Separation & Drifting
  sep.mult(5.0); // Force agents apart
  this.maxSpeed = 0.8;
}

Milestone 1:

This milestone focused on the mathematical accuracy of the core steering behaviors. At this stage, there was no tension and release or interactivity. The agents simply moved in a continuous, unchanging loop. The visual was kept basic to ensure the Separation, Alignment, and Cohesion logic was solid.
Milestone 2:

In this milestone, I shifted from representational triangles to the Kurokawa-inspired line aesthetic. I introduced the “Nearest Neighbor” logic, where agents “reach out” to one another to create a web-like structure. I also added low-alpha background clearing to create the smoky history trails seen in Robert Hodgin’s work.
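The "smoky history trails" come from not fully clearing the canvas: each frame blends every pixel a small fraction toward the background color, so old strokes decay exponentially instead of vanishing. In p5.js this is just a translucent `background(0, 15)` at the top of `draw()`; here is a plain-JS sketch of the decay math, with the alpha value as an illustrative assumption rather than the sketch's actual setting.

```javascript
// Hedged sketch of low-alpha background clearing (alpha is illustrative).
const FADE_ALPha_BYTE = 15;                 // out of 255, like background(0, 15)
const FADE_ALPHA = FADE_ALPha_BYTE / 255;

// One channel of one trail pixel, blended toward black each frame.
function fadeStep(brightness) {
  return brightness * (1 - FADE_ALPHA);
}

// A white trail pixel after 60 frames (about one second at 60 fps):
let trail = 255;
for (let i = 0; i < 60; i++) trail = fadeStep(trail);
// trail is now dim but nonzero, which is exactly what produces the lingering haze
```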

Final Sketch:

Reflection and ideas for future work or improvements:

This project taught me that compelling generative art often emerges from the disruption of rules. The shift from representational triangles to the abstract line aesthetic, combined with a low-alpha background, transformed a simple steering simulation into a smoky, ethereal system that bridges the gap between predictable math and the emotional tension seen in the works of Kurokawa and Hodgin. Moving forward, I plan to integrate p5.sound to map the “Chaos” phase to granular synthesis, and to implement Obstacle Avoidance using invisible voids to force agents into even more intricate woven patterns, possibly within a 3D WebGL environment.

Yash – Assignment 9

Ephemeral Flocks: Painting with Boids and Live Video

Concept & Inspiration

For this project, I wanted to explore the intersection of organic, emergent systems and digital surveillance/capture. The concept revolves around using a simulated flocking system (boids) not just as moving entities, but as autonomous painters that “decode” and reconstruct reality.

The sketch operates in three distinct phases, creating a natural cycle of tension and release:

  1. The Live Feed (Reality): The user sees a standard, real-time webcam feed.

  2. The Freeze & Draw (Tension/Emergence): Upon clicking, time stops. A snapshot is captured, and suddenly hundreds of boids swarm the canvas. Instead of clearing the background, they leave continuous trails, acting as a generative brush. They read the brightness of the frozen pixels beneath them, mapping the light and shadow of the captured moment through their chaotic flight paths.

  3. The Dissolve (Release): After fifteen seconds of frantic drawing, the image slowly dissolves back into the live video feed, erasing the boids’ hard work and resetting the cycle.

Visually and conceptually, this was heavily inspired by the generative artwork of Ryoichi Kurokawa and Robert Hodgin, who both excel at blending chaotic particle systems with structured, recognizable forms, making the digital feel tactile and natural. The specific mechanic of using boids as a “brightness brush” was directly inspired by Valerio Viperino’s brilliant “Drawing with boids” experiment.

Code Highlight: The Autonomous Brush

The part of the code I am most proud of is within the Boid class’s show() method. Rather than telling the boids what to draw, I simply tell them how to see.

show() {
  // Assumes snap.loadPixels() was called once after the snapshot was taken
  // Constrain coordinates to prevent array out-of-bounds errors
  let px = constrain(floor(this.pos.x), 0, snap.width - 1);
  let py = constrain(floor(this.pos.y), 0, snap.height - 1);

  // Calculate 1D pixel array index
  let index = (px + py * snap.width) * 4;
  
  // Extract RGB and calculate rough brightness
  let r = snap.pixels[index];
  let g = snap.pixels[index + 1];
  let b = snap.pixels[index + 2];
  let brightness = (r + g + b) / 3;

  // Draw the trail mapped to the pixel brightness
  stroke(brightness, 150); 
  strokeWeight(1);
  line(this.prevPos.x, this.prevPos.y, this.pos.x, this.pos.y);
}

This snippet is the bridge between the physical world (the camera pixel array) and the simulated world (the boids’ coordinates). By tying the stroke color to the underlying image brightness and lowering the opacity, the boids slowly layer their trails to create an etching-like quality.

Video Documentation:

Embedded Sketch [PLEASE OPEN IN WEB EDITOR AND GIVE WEBCAM PERMISSIONS]

 

Milestones & Challenges

Milestone 1: Establishing the Trails Before integrating the camera, the first major hurdle was getting the boids to leave a continuous trail without the sketch crashing or looking like complete static. I had to modify the standard Craig Reynolds boid model to track each boid’s previous position (prevPos), so that every frame draws a short line segment rather than a point.

Milestone 2: Reading the Environment The next challenge was getting the boids to “read” data. Before complicating things with a live video feed, I created a hidden canvas with a basic geometric shape. I programmed the boids to change their stroke color based on whether they were flying over the shape or the background. This confirmed the pixel-array math was working.

Challenge: Managing States Integrating the webcam introduced a massive flow challenge. I had to implement a state machine (LIVE, DRAWING, FADING) utilizing millis() to handle the timing. Ensuring the snapshot (snap.get()) only triggered exactly when the state shifted was tricky but crucial for performance.
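A minimal version of that LIVE, DRAWING, FADING machine can be sketched as a function of elapsed time. This is a hedged illustration, not the project's code: the 15-second drawing phase comes from the post, but the fade duration is an assumption, and in p5.js `now` would be `millis()` while the snapshot capture the post mentions would fire exactly once at the LIVE-to-DRAWING transition.

```javascript
// Hedged sketch of the state machine (FADE_MS is an assumed value).
const DRAW_MS = 15000; // the post's 15-second drawing phase
const FADE_MS = 3000;  // assumption: how long the dissolve takes

function makeStateMachine() {
  let state = 'LIVE';
  let stateStart = 0;
  return {
    // Called on mousePressed(); the snapshot would be captured here, once.
    click(now) {
      if (state === 'LIVE') { state = 'DRAWING'; stateStart = now; }
    },
    // Called every frame with the current time in ms.
    update(now) {
      if (state === 'DRAWING' && now - stateStart > DRAW_MS) {
        state = 'FADING'; stateStart = now;
      } else if (state === 'FADING' && now - stateStart > FADE_MS) {
        state = 'LIVE';
      }
      return state;
    },
  };
}
```

Keeping the capture inside `click()` is what guarantees it only triggers on the transition, rather than every frame.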

Reflection & Future Work

This project pushed me to think about interactive media not just as tools that react instantly to a user, but as living systems that take time to develop. The 15-second drawing phase forces the user to pause and watch the algorithm work, highlighting the beauty of creative coding.

For future iterations, I would love to experiment with color data instead of just brightness, perhaps mapping the RGB values to the boids’ strokes to create a pointillist, impressionist painting. Additionally, mapping the flocking variables (like separation or speed) to audio input could make the drawing process even more dynamic and expressive.

Amal – Assignment 9

Concept

This project explores a flocking system that evolves over time, not just in movement but in behavior and structure. Instead of keeping the system stable, I wanted to push it toward moments of tension, where the agents begin to compress, lose balance, and then break apart before reorganizing again.

The core idea is to treat flocking not as a natural simulation, but as a system under pressure. At certain moments, the swarm pulls inward and becomes dense, almost like it is being compressed into a single point. At other moments, that pressure releases and the system fractures outward.

In the prototype, I approached flocking as a kind of signal network, where agents connect through proximity and form temporary constellations. In the final version, I pushed this further into a more aggressive visual language, where the agents behave like fragments moving through cycles of compression and release.

Inspiration

In unfold by Ryoichi Kurokawa, visual structures emerge from data and gradually distort, fragment, and reorganize. This sense of systems building toward instability and then collapsing influenced how I approached the tension and release within my flocking system.

https://www.ryoichikurokawa.com/project/unfold.html

Prototype 1


In the first prototype, I focused on building a flocking system that reads as a network rather than a group of individual agents.

Each agent connects to nearby agents, creating temporary lines that appear and disappear as the system moves. This produces a constantly shifting constellation-like structure. The motion is relatively stable, but the visual output is already starting to move away from traditional flocking representations. However, there is no strong sense of progression yet. The system exists in a continuous state without clear tension or release.
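The constellation effect boils down to a pairwise proximity pass: every pair of agents closer than some connection radius gets a temporary line. A minimal plain-JS sketch of that pass, with the radius as an illustrative assumption:

```javascript
// Hedged sketch of the network pass (the 60 px radius is an assumption).
const CONNECT_RADIUS = 60;

// Returns index pairs of agents close enough to connect; in p5.js each
// pair would become a line(a.x, a.y, b.x, b.y) call.
function connections(agents) {
  const pairs = [];
  const r2 = CONNECT_RADIUS * CONNECT_RADIUS;
  for (let i = 0; i < agents.length; i++) {
    for (let j = i + 1; j < agents.length; j++) {
      const dx = agents[i].x - agents[j].x;
      const dy = agents[i].y - agents[j].y;
      if (dx * dx + dy * dy < r2) pairs.push([i, j]);
    }
  }
  return pairs;
}
```

Because the pairs are recomputed every frame, lines appear and disappear as agents drift in and out of range, which is what makes the structure feel temporary.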

Final Sketch

In the final version, I introduced a time-based system that pushes the flock through cycles of compression and release.

The agents are no longer rendered as points, but as elongated shards, which changes how motion is perceived. Instead of reading as individuals, they begin to feel like fragments of a larger structure.

The system moves through phases:

  • Compression: agents are pulled toward the center, increasing density and tension
  • Instability: movement becomes more chaotic and tightly packed
  • Fracture: agents are pushed outward, breaking the structure apart
  • Reformation: the system reorganizes and the cycle repeats

These transitions are continuous rather than abrupt, allowing the system to feel more natural and performative over time.

Code Highlight

One part of the system I focused on was controlling the transition between compression and explosion:

let cycle = (sin(t * 0.6) + 1) * 0.5;

let compress = pow(sin(cycle * PI), 2.2);
let explode = pow(sin((cycle + 0.5) % 1.0 * PI), 2.2);

This allows the system to move through phases smoothly instead of switching behaviors on and off. The forces applied to each agent are then adjusted based on these values:

let alignW = lerp(1.2, 0.35, compress);
let cohesionW = lerp(0.7, 1.6, compress);
let separationW = lerp(0.9, 2.5, explode);

By shifting these weights over time, the same flocking rules produce very different behaviors, which creates the sense of development.
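The two envelopes above are offset by half a cycle, so compression peaks exactly when the explosion force is quiet, and vice versa. A quick plain-JS check of that phase relationship, mirroring the p5 snippet:

```javascript
// Plain-JS mirror of the compress/explode envelopes from the snippet above.
function envelopes(cycle) { // cycle in [0, 1]
  const compress = Math.pow(Math.sin(cycle * Math.PI), 2.2);
  const explode = Math.pow(Math.sin(((cycle + 0.5) % 1.0) * Math.PI), 2.2);
  return { compress, explode };
}

const mid = envelopes(0.5);  // middle of the cycle: full compression
const edge = envelopes(0.0); // edge of the cycle: full explosion
// mid.compress is ~1 while mid.explode is ~0, and edge is the reverse,
// so the two forces trade off smoothly instead of fighting each other.
```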

Milestones and Challenges

1. Breaking away from the “boids look”
At first, everything still looked like a standard flocking simulation. Changing the rendering from points or triangles into lines and shards made a big difference in how the system is perceived.

2. Creating actual tension
Early versions just moved smoothly without any variation. Introducing compression toward a central point made the system feel more unstable and intentional.

3. Balancing forces
When separation was too strong, everything scattered too quickly. When cohesion was too strong, the system became static. The challenge was finding a balance that allowed both buildup and release.

4. Making transitions feel continuous
Abrupt changes felt artificial. Using sine-based modulation allowed the system to evolve gradually, which made the motion feel more cohesive.

Reflection + Future Improvements

This project shifted how I think about generative systems. Instead of focusing only on movement, I started thinking about how a system can develop over time and create a sense of rhythm.

The most important change was moving from a stable simulation to a system that goes through cycles of tension and release.

If I were to continue developing this, I would:

  • Introduce sound so the system reacts to audio input
  • Explore depth by moving into a 3D space
  • Add trails to visualize past movement and build memory into the system
  • Refine the pacing so that each phase feels more intentional

Right now, the system loops continuously, but it could be pushed further into a more defined narrative structure.

Assignment 7

Concept
After our visit to the teamLab exhibition, one installation stopped me cold. It was the room with the large-scale laser piece — hundreds of beams of colored light radiating inward from sources arranged in a wide circle around the perimeter, all converging at a central point where they interfered and stacked into a glowing, slowly morphing orb. The shape at the center wasn’t static. It pulsed. It shifted — sometimes a perfect sphere, sometimes something more irregular — as if the light itself was breathing. The whole thing cycled through color states: a deep emerald green, then teal, then a full-spectrum rainbow, each transition preceded by a complete blackout before the next color bloomed back in. A low ambient drone played underneath it all, its pitch and texture shifting subtly with the light.
What struck me most was the sense of three-dimensionality. The beams weren’t converging on a flat point — they were targeting different parts of a rotating surface, and as that surface turned, the beams swept up, down, and sideways, making the central structure feel genuinely volumetric. It looked less like a projection and more like a physical object built entirely out of light. I wanted to recreate that.

Inspirations

Code Highlight
The piece I’m most proud of is the beam tapering system combined with depth-based brightness. These two things working together are what give the sketch its sense of physical space.
Real laser beams in a foggy room are dim at the source and brighten as they converge — the light accumulates. I replicate this by splitting each beam into three segments and drawing each progressively brighter:

// Tapered beam: dim at source, bright at convergence
let mx = lerp(b.sx, tgt.x, 0.5),  my = lerp(b.sy, tgt.y, 0.5);
let qx = lerp(b.sx, tgt.x, 0.78), qy = lerp(b.sy, tgt.y, 0.78);

strokeWeight(0.8);
stroke(hue, 88, 100, bA * 0.30 * depth);
line(b.sx, b.sy, mx, my);       // outer — dim

strokeWeight(1.0);
stroke(hue, 72, 100, bA * 0.50 * depth);
line(mx, my, qx, qy);           // mid

strokeWeight(1.3);
stroke(hue, 38, 100, bA * 0.88 * depth);
line(qx, qy, tgt.x, tgt.y);    // inner — brightest

On top of that, depth is derived from the sphere point’s perspective scale — points on the near side of the rotating sphere get a higher depth value and therefore brighter beams, while the far side is dimmer. This is what makes the 3D rotation actually readable:

const minS = FOV / (FOV + SPHERE_R * 1.85);
const maxS = FOV / (FOV - SPHERE_R * 1.85);
let depth = map(tgt.s, minS, maxS, 0.5, 1.0);

Embedded Sketch

Controls: move your mouse to tilt the sphere in 3D · click to toggle sound

Milestones and Challenges
Getting the direction right. My first build had the sources at the bottom arcing upward, like stage lights — which looked completely wrong. The actual installation has sources in a full circle, all shooting inward. Once I switched to a full 360° ring of sources the whole thing immediately felt closer to the reference.
Making the 3D readable. For a long time the sphere rotation was happening mathematically but you couldn’t see it — the beams just seemed to flicker randomly. The fix was depth-based opacity. Once beams targeting the near side of the sphere were noticeably brighter than those going to the far side, the rotation immediately read as three-dimensional. A small change with a huge perceptual payoff.
Fixing the orb banding. The central glow had visible rings — discrete concentric circles always show banding unless the steps are small enough to fall below the perceptual threshold. I switched to drawing 200 circles in 1-pixel steps using an exponential falloff curve:

let falloff = exp(-t * 3.8) - exp(-3.8);
falloff = falloff / (1 - exp(-3.8));

This makes the transition from bright core to outer haze completely continuous. It costs more per frame but is imperceptible on modern hardware at pixelDensity(1).
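The normalization in that falloff pins the curve to 1 at the core and 0 at the outer edge, with a smooth exponential in between. A plain-JS check of the same formula:

```javascript
// Plain-JS mirror of the normalized exponential falloff above:
// t = 0 (bright core) maps to exactly 1, t = 1 (outer haze edge) to exactly 0.
function glowFalloff(t) {
  const falloff = Math.exp(-t * 3.8) - Math.exp(-3.8);
  return falloff / (1 - Math.exp(-3.8));
}
// Because the curve is continuous, stepping it in 1-pixel circles leaves
// no visible ring boundaries.
```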
Syncing audio to visuals. Browsers block audio without a user gesture, so the first click starts the audio context. After that, the master gain and filter cutoff update every frame tracking globalAlpha directly — so the sound fades out during blackout transitions and breathes with the pulse automatically, without any separate audio state machine.

Reflection and Ideas for Future Work
The process taught me how much perceptual rendering depends on implied physics. The beams don’t actually accumulate light — I’m just drawing lines with varying opacity — but tapering them correctly is enough to convince the eye that something real is happening. The same is true of the depth gradient. These aren’t physically accurate simulations; they’re visual shortcuts that exploit how we expect light to behave.

For future improvements I’d want to explore:
Shape morphing at the convergence point. The sphere is a clean starting point but the installation showed the central structure shifting form during my visit — flattening into a disc, stretching vertically. Morphing the convergence surface between geometries over time would get much closer to that full visual range.

Reactive audio. Right now the audio is a static generative drone. Mapping the rotation speed or morphing intensity to filter resonance or oscillator detune would make the sound feel more alive and more directly tied to what the light is doing.

Saeed Lootah – Assignment 8

Concept and Inspiration

My concept started from noticing that steering behavior looked very similar to moving ants. That gave me the idea to build an ant colony cross-section simulation, like a terrarium, where ants travel between underground burrows and the surface. The simulation uses seek behavior and path following to create ant-like movement patterns.

Code Highlight

The part I am most proud of is the state machine in my Ant class. It coordinates each ant going to the surface, roaming, returning to the path, going back to the burrow, waiting, and repeating. Adding the returnToPath state was especially important because it fixed the issue where ants were cutting through the ground instead of reconnecting to the path at the surface first.

if (this.state === "roamSurface") {
  if (!this.roamTarget || p5.Vector.dist(this.pos, this.roamTarget) < 16) {
    this.pickNewRoamTarget();
  }
  this.seek(this.roamTarget);
  if (millis() >= this.roamUntil) {
    this.state = "returnToPath";
  }
  return;
}
if (this.state === "returnToPath") {
  const overgroundJoin = this.path.getEnd();
  this.seek(overgroundJoin);
  if (dist(this.pos.x, this.pos.y, overgroundJoin.x, overgroundJoin.y) < 20) {
    this.state = "returnToBurrow";
  }
  return;
}
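The seek() calls that drive each state presumably follow Craig Reynolds' standard formulation: desired velocity points at the target at full speed, and the steering force is desired minus current velocity, clamped. A minimal standalone sketch in plain JS, with illustrative maxSpeed/maxForce values (not the sketch's actual tuning):

```javascript
// Hedged sketch of Reynolds-style seek (constants are illustrative).
const MAX_SPEED = 3;
const MAX_FORCE = 0.2;

function setMag(v, m) {
  const len = Math.hypot(v.x, v.y);
  return len === 0 ? { x: 0, y: 0 } : { x: (v.x / len) * m, y: (v.y / len) * m };
}

function limit(v, max) {
  return Math.hypot(v.x, v.y) > max ? setMag(v, max) : v;
}

// Steering force toward `target` for an agent at `pos` with velocity `vel`.
function seek(pos, vel, target) {
  const desired = setMag({ x: target.x - pos.x, y: target.y - pos.y }, MAX_SPEED);
  return limit({ x: desired.x - vel.x, y: desired.y - vel.y }, MAX_FORCE);
}
```

Clamping the force (rather than the position) is what gives the ants their gradual, turning approach instead of snapping straight to each waypoint.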

Embedded Sketch

Milestones and Challenges

Stage 1: One path, one ant

 I started with one ant following one path to verify that seek and basic path following were working correctly.

 

Stage 2: One path, multiple ants


After confirming the basic behavior, I added multiple ants on the same path to test how the movement looked in a colony-like flow.

 

Stage 3: Multiple paths, multiple ants per path


I expanded the colony to multiple burrows and assigned ants to specific paths so each group had a consistent route to and from the surface.

 

Stage 4: Ground/grass visuals and roaming fix

I added the dirt and grass cross-section layout and introduced a roamSurface state for surface movement. A key challenge was that ants sometimes traveled through the ground when returning. I fixed this by adding a returnToPath state so ants first reconnect at ground level and then follow the tunnel path back down.

 

Reflection and Future Improvements

Right now, over ground, the ants still look like they are flying rather than staying fully attached to the ground surface. A future improvement would be to constrain or project overground motion to a ground contour so movement feels more realistic. I would also like to continue the ant colony idea further, or expand this into a larger ecosystem simulation with multiple interacting species and behaviors.

Assignment 8

Concept:

Inspired by Craig Reynolds’ paper Steering Behaviors for Autonomous Characters, I made an interactive exploration of steering behaviors and how they manifest as group intelligence. The project moves beyond simple animation by giving every agent a “brain” that constantly calculates its next move based on its environment. By layering Reynolds’ classic steering forces—Separation, Alignment, and Cohesion—the herd achieves a lifelike, emergent flow. When the user moves the “Lion” (mouse), the Flee steering behavior becomes the dominant force, overriding the social urge to flock. Conversely, clicking the canvas plants “Grass Patches,” which activates a Seek behavior, pulling the autonomous agents toward a new goal.

Instructions:

  • Move Mouse: Control the Lion. Watch the herd split and reform.

  • Click Canvas: Plant a Grass Patch. The herd will navigate toward it.

  • Spacebar: Clear all grass patches.

Code Highlight:

I am proud of the handleInteractions method. It creates a hierarchy of needs: Safety > Hunger > Socializing. If the predator (mouse) is close, the animal ignores the grass and the herd to save its own life.

handleInteractions(predator, foodSources) {
    let mouseDist = dist(this.pos.x, this.pos.y, predator.x, predator.y);
    
    if (mouseDist < 100) {
      // Flee from the lion
      let fleeForce = this.flee(predator).mult(5.0);
      this.applyForce(fleeForce);
      this.isPanicking = true;
    } else {
      this.isPanicking = false;
      // Seek the nearest grass
      if (foodSources.length > 0) {
        let closest = foodSources[0];
        let huntForce = this.seek(closest.pos).mult(1.5);
        this.applyForce(huntForce);
      }
    }
  }

Milestones and Challenges:

  • My first milestone was the fear state. I added a boolean isPanicking. This allows the animals to change color and increase their maxSpeed when the lion is near, which makes the flee behavior much more visible.

  • A challenge I ran into was resource management. Initially, the animals would just stay on the grass forever so I gave the grass health. As the herd grazes, the health drops until the patch disappears, forcing the herd to look for the next patch.

  • The final challenge I faced concerned how the sketch looked, shown below. After getting all the core functionality done, the sketch still felt “unalive”. Following feedback from my professor, I decided to add diversity by creating two different-looking, and differently behaving, species: Wildebeests and Gazelles. By differentiating their steering weights, I created a “Social Anchor” group that prioritizes cohesion and a “Scout” group that favors independent wandering, making the collective movement feel like a real ecosystem.
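One way to express that species split is to keep a single set of steering rules and give each species its own weight preset. This is a hedged sketch only: the weight numbers are illustrative, not the project's actual tuning.

```javascript
// Hypothetical per-species steering weights (numbers are illustrative).
const SPECIES = {
  wildebeest: { cohesionW: 1.8, alignW: 1.2, separationW: 1.0, wanderW: 0.2 }, // "Social Anchor"
  gazelle:    { cohesionW: 0.6, alignW: 0.8, separationW: 1.4, wanderW: 1.0 }, // "Scout"
};

// Combine precomputed behavior vectors using the species' weights.
function combinedForce(species, forces) {
  const w = SPECIES[species];
  return {
    x: forces.cohesion.x * w.cohesionW + forces.align.x * w.alignW +
       forces.separation.x * w.separationW + forces.wander.x * w.wanderW,
    y: forces.cohesion.y * w.cohesionW + forces.align.y * w.alignW +
       forces.separation.y * w.separationW + forces.wander.y * w.wanderW,
  };
}
```

Because the rules themselves never change, the two species still interact as one herd; only the balance of priorities differs.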


Reflection and Future Improvements:

This project taught me that “life” in code is about the priority of forces. The biggest challenge was balancing steering behaviors so agents wouldn’t just clump like magnets. Fine tuning the Separation weight was the breakthrough as it provided breathing room, transforming a messy cluster into a coordinated, organic herd.

Future Work:

Leader Dynamics: Adding a Leader agent with a stronger Seek force to see if the herd naturally follows its path.

Obstacles: Implementing “Rocks” or “Rivers” to force the herd to navigate around barriers using Obstacle Avoidance steering.

Vision Cycles: A Night Mode where the vision radius shrinks, forcing agents to rely more on Alignment when targets are out of sight.

Sound: Using p5.sound to map the Panic state to a rising ambient hum, making survival tension audible.

Midterm BlogPost

Project Overview
Digital Kente is a generative art system exploring Ghanaian Kente cloth through five different computational approaches. Instead of making one Kente simulation, I wanted to see what happens when you interpret the same tradition through different lenses – geometric patterns, particle movement, wave interference, flow fields, and physics-based interaction.
Each mode captures something different about Kente: the precision of woven patterns, the movement of threads, the rhythmic repetition, the flow of fabric, and the symbolic meaning of motifs.
The system runs at 800×800 pixels, 30 FPS.

Kente Inspirations

Implementation Details
The Color Palettes
I pulled colors directly from photographs of Kente cloth. Two palettes emerged:

Warm Ashanti: #FF6B2C, #4A7C2C, #FFD700, #8B3A3A, #1A1A1A
(orange, green, gold, burgundy, black)

Bold Ewe: #FFE135, #E63946, #99D98C, #F77F00, #000000
(yellow, red, light green, orange, black)

The key was not using all 5 colors everywhere. Real Kente bands have limited palettes – each band uses 2-3 colors from the full set. I implemented this with a helper function that selects colors based on the band index.
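A helper like that could look as follows. This is a hedged sketch: the palette values come from the post, but the exact selection scheme (sliding a window over the palette by band index) is an assumption, not the project's code.

```javascript
// The post's Warm Ashanti palette.
const WARM_ASHANTI = ['#FF6B2C', '#4A7C2C', '#FFD700', '#8B3A3A', '#1A1A1A'];

// Hypothetical band-palette helper: each band deterministically takes a
// small slice of the full palette based on its index, so adjacent bands
// get different 2-3 color subsets.
function bandColors(bandIndex, palette, count = 3) {
  const colors = [];
  for (let k = 0; k < count; k++) {
    colors.push(palette[(bandIndex * 2 + k) % palette.length]);
  }
  return colors;
}
```

Deriving the slice from the band index (instead of random()) keeps the banding stable across frames, which matters for a woven look.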

Mode 1: Traditional Weave
Four pattern types rotate through horizontal bands: zigzag diamonds, checkerboard, diamond motifs, and horizontal stripes. The bands have different heights to create rhythm.
The biggest addition was golden metallic accent threads that shimmer between bands:

let shimmer = sin(i * 0.5 + time * 0.3) * 0.5 + 0.5;
let alpha = map(shimmer, 0, 1, 100, 220);
stroke(255, 215, 0, alpha);

The sine wave creates a traveling wave of brightness, like light catching metallic thread in real Kente.
For the zigzag pattern, I used floor(abs(sin((i + time * 0.1) * 0.3)) * 4) to create the animated offset while keeping it stepped and geometric. Diamond patterns use Manhattan distance – distX + distY – which creates proper diamond shapes with only horizontal and vertical math.
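A quick illustration of why Manhattan distance yields diamonds:

```javascript
// The set of cells with |dx| + |dy| <= r is a square rotated 45 degrees,
// i.e. a diamond, using only horizontal and vertical arithmetic.
function inDiamond(x, y, cx, cy, r) {
  return Math.abs(x - cx) + Math.abs(y - cy) <= r;
}
// The diamond's tips sit on the axes at distance r, while the corners of
// the bounding square (e.g. x = y = r) fall outside the shape.
```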
Mode 2: Particle Weaving
300 particles moving in cardinal directions only, leaving 15-frame trails. They’re grid-constrained because real weaving threads can only move in specific directions.
When particles turn, they pulse briefly – size goes from 1.0 to 1.2 then decays. I originally had intense color blending where trails overlapped, but it was too much. The subtle pulse works better.

Mode 3: Harmonic Kente
Three sine waves combine to create interference patterns:

let wave1 = sin((i * 0.3 + time * freq) + phase);
let wave2 = sin((j * 0.5 + time * freq * 1.5) + phase * 1.3);
let wave3 = sin(((i + j) * 0.2 + time * freq * 0.8));
let combined = (wave1 + wave2 + wave3) / 3;

Each band has a different frequency so they oscillate at different rates. The combined value picks the color and controls cell size (pulses between 85% and 115%).
I added shimmer glow on wave peaks and resonance symbols where waves align perfectly. These are small golden circles that mark moments of harmony.
Bug I fixed: The canvas wasn’t filling completely – changed floor(rows / bandHeight) to ceil(rows / bandHeight) and added bounds checking for the last band.
Mode 4: Flow Cloth
Perlin noise flow field, but quantized to 8 directions to keep it geometric. I combine two noise octaves at different scales:

let noiseVal1 = noise(i * 0.08, j * 0.08, time * 0.008);
let noiseVal2 = noise(i * 0.15, j * 0.15, time * 0.012);
let combinedNoise = (noiseVal1 * 0.7 + noiseVal2 * 0.3);

The flow shows as elongated rectangles rotated to match direction. Added depth shadows (2px offset) and circular knot formations where noise exceeds 0.92 – these represent thread bunching.
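The 8-direction quantization mentioned above amounts to snapping a continuous noise value to one of eight headings. A minimal sketch of that step (the mapping from noise to angle is the standard approach, but the exact form here is an assumption):

```javascript
// Hedged sketch of the 8-way quantization: a noise value in [0, 1) snaps
// to one of eight 45-degree headings, keeping the flow geometric rather
// than fluid.
function quantizedAngle(noiseVal) {
  const step = (Math.PI * 2) / 8; // 45-degree increments
  return Math.floor(noiseVal * 8) * step;
}
```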

Mode 5: Symbolic Kente – Physics and Sound
This took the most work. 8×8 grid of symbols connected by springs.
The Physics:
Each symbol springs back to its home position and connects to its 4 neighbors. Hooke’s Law implementation:

let force = p5.Vector.sub(this.home, this.pos);
force.mult(this.k);  // k = 0.04

For neighbors, I calculate rest distance vs current distance and apply spring force based on the difference. This creates cloth behavior – disturb one symbol and the whole grid ripples.

Bug I had to fix: My first version exploded. The problem was this.acc = force instead of this.acc.add(force). When you have multiple forces, you need to accumulate them. Also forgot to reset acceleration to zero each frame.
Spring constants took trial and error. Too stiff and symbols barely move. Too loose and they oscillate wildly. Neighbor springs need to be weaker than home springs or the grid collapses. Settled on k = 0.04 for home, neighborK = 0.015 for neighbors, damping = 0.90.
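Putting the bug fixes and the settled constants together, the per-frame update can be sketched like this. The constants are the post's values; the structure (accumulate forces, damp velocity, reset acceleration) is the corrected pattern described above, though the exact code is a reconstruction, not the project's.

```javascript
// Hedged sketch of the corrected integration step. Key fixes from the post:
// forces are ACCUMULATED into acceleration (add, not assign), and
// acceleration resets to zero every frame.
const K = 0.04, NEIGHBOR_K = 0.015, DAMPING = 0.90;

function step(node, neighborForces) {
  // Home spring: Hooke's Law pulls the symbol back to its rest position.
  node.acc.x += (node.home.x - node.pos.x) * K;
  node.acc.y += (node.home.y - node.pos.y) * K;
  // Weaker neighbor springs (precomputed displacement vectors passed in).
  for (const f of neighborForces) {
    node.acc.x += f.x * NEIGHBOR_K;
    node.acc.y += f.y * NEIGHBOR_K;
  }
  node.vel.x = (node.vel.x + node.acc.x) * DAMPING;
  node.vel.y = (node.vel.y + node.acc.y) * DAMPING;
  node.pos.x += node.vel.x;
  node.pos.y += node.vel.y;
  node.acc.x = 0; node.acc.y = 0; // forgetting this was the second bug
}
```

With these values, a displaced node rings back toward its home position and settles rather than exploding, which is the behavior the grid needs for cloth-like ripples.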

The Sound System:
First attempt used p5.PolySynth but it threw “oldestNote is not defined” errors. Switched to individual oscillators:

let osc = new p5.Oscillator('sine');
let env = new p5.Envelope();
env.setADSR(0.01, 0.15, 0.1, 0.2);
env.setRange(0.3, 0);
osc.freq(frequency);
osc.amp(env);
osc.start();
env.play(osc);

setTimeout(() => {
  osc.stop();  // Cleanup to prevent memory leaks
}, 300);

Each symbol type plays a different note from a pentatonic scale:

Diamond: C4 (261.63 Hz)
Cross: D4 (293.66 Hz)
Square: F4 (349.23 Hz)
Triangle: G4 (392.00 Hz)

Pitch shifts based on Y position – higher symbols play higher notes. Notes are rate-limited to 100ms per symbol to prevent spam.
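The Y-to-pitch mapping can be sketched as a semitone shift derived from canvas position. This is a hedged reconstruction: the 800 px height matches the post, but the plus-or-minus one semitone range is an assumption for illustration.

```javascript
// Hypothetical Y-position pitch shift: symbols higher on the canvas play
// higher notes (range of +/- 1 semitone is an assumed value).
const CANVAS_H = 800;

function pitchFor(baseFreq, y) {
  const shift = 1 - (2 * y) / CANVAS_H;       // +1 at the top, -1 at the bottom
  return baseFreq * Math.pow(2, shift / 12);  // shift measured in semitones
}
// A Diamond (C4, 261.63 Hz) at mid-canvas plays its base pitch; near the
// top it is sharpened slightly, near the bottom flattened.
```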
Another bug: Sound only played once per symbol. Fixed by removing the state transition check – now it plays whenever the symbol is disturbed (with rate limiting).
Visual ripples: When a note plays, I create expanding circles that fade out. Gives visual feedback for the audio.
The threads connecting symbols visualize tension – they turn red and get thicker when stretched.

The Sketch

 

Video Documentation

The video above demonstrates:

  • All five modes in action
  • Switching between modes (1-5 keys)
  • Palette toggle (spacebar)
  • Mode 5 interaction: clicking/dragging, sound triggering, ripples expanding, thread tension
  • Reset and pause functionality

Reflection
What Worked
Having five separate modes instead of one catch-all simulation let me explore different aspects without compromise. Mode 1 is about static precision, Mode 2 about movement, Mode 3 about rhythm, Mode 4 about flow, Mode 5 about interaction and meaning.
The constraint system – limiting everything to grids and angles – forced creativity within bounds. This mirrors how actual Kente weavers work within the loom’s constraints.
The physics in Mode 5 feels satisfying once I fixed the bugs. The neighbor connections create realistic wave propagation.
Sound integration made Mode 5 much more engaging. The pentatonic scale means any combination sounds musical.

What Could Be Better
Sound is basic – single notes work but chords would be richer. Also thinking about mapping thread tension to pitch or volume.
Mode 2’s pulse is subtle, maybe too subtle. I backed off from intense blending but there might be a middle ground.
No parameter controls – would be useful to adjust spring constants, damping, particle count in real-time.

User Experience
Controls are straightforward but there’s no tutorial. New users might not discover everything. The UI overlay helps but could be clearer.
Mode 5 is the most interactive but also most complex. The sound toggle (M key) is essential because audio can get overwhelming.
Canvas size works for web but feels small for detail. Fullscreen mode would help.

Future Plans
If I keep working on this:

Parameter controls with sliders
More sophisticated sound (chords, tension-to-audio mapping)
Additional modes exploring Adinkra symbols or actual loom mechanics
High-resolution export capability
Mobile/touch support (currently mouse-only)

References
Cultural Research

National Museum of African Art (Smithsonian) – Kente cloth collection
“African Textiles: Color and Creativity Across a Continent” by John Gillow
Museum exhibitions on Ashanti and Ewe weaving

Technical Resources

p5.js Reference
p5.sound Library
Daniel Shiffman’s “The Nature of Code” – Chapters on oscillation and physics
Ken Perlin’s work on noise functions

AI Disclosure
I used Claude (Anthropic) for:

Debugging the physics force accumulation bug
Troubleshooting p5.sound after PolySynth failed
Code organization suggestions
Syntax checking

All creative decisions, design choices, cultural research, and core algorithms were my own. The AI helped a bit with implementation details and debugging, not concept or design.

Controls:

1-5: Switch modes
SPACE: Toggle palettes
R: Reset
P: Pause/resume
S: Save frame
M: Toggle sound (Mode 5)
Click & Drag: Interact with Mode 5

DRIFT – Assignment 8

There’s something genuinely strange about watching a crowd of autonomous agents share a canvas.

That strangeness is the question behind DRIFT: what happens when you put three radically different vehicle archetypes into the same space, give each one its own agenda, and just let physics run?

The starting point was Craig Reynolds’ foundational work on steering behaviors: seek, flee, separation, alignment, cohesion. Those behaviors are well-documented and well-taught. The challenge I set for myself was to use them as raw material for something that reads more like an ecosystem than a demo.

I ended up with three vehicle types:

Seekers chase a moving attractor, a signal that traces a path across the canvas. They leave luminous trails and pulse as they move.

Drifters ignore the signal entirely. They flock through alignment, cohesion, and separation, and wander using noise.

Ghosts flee. They push away from the signal and from the combined mass of every other vehicle in the scene. They end up haunting the edges of the canvas.

The signal itself moves on a parametric Lissajous curve, so it sweeps the canvas continuously without any user input required.

The Ghost’s `applyBehaviors` method is the piece I find most satisfying. The rule sounds simple — flee everything — but the implementation has a specific texture to it.

applyBehaviors(signal, allVehicles) {
  let fleeSignal = this.flee(signal, 220);
  let fleeCrowd = createVector(0, 0);

  for (let v of allVehicles) {
    fleeCrowd.add(this.flee(v.pos, 90));
  }

  let w = this.wander(1.2);

  fleeSignal.mult(2.0);
  fleeCrowd.mult(0.8);
  w.mult(0.9);

  this.applyForce(fleeSignal);
  this.applyForce(fleeCrowd);
  this.applyForce(w);
}

What I like here is that `fleeCrowd` is an accumulated vector. For every seeker and drifter on the canvas, the ghost computes a flee force and adds them all together. The result is that the ghost reads the density of the crowd. A ghost near a tight cluster of drifters gets a much stronger push than one near a single seeker. It behaves like a pressure system.

The wander force on top of that means no two ghosts trace the same path even under identical starting conditions. The noise field shifts slowly over time, so the wandering feels natural.

The wander method from the base `Vehicle` class handles this:

wander(strength) {
  let angle = noise(
    this.pos.x * 0.003,
    this.pos.y * 0.003,
    driftT * 0.4
  ) * TWO_PI * 2;

  return p5.Vector.fromAngle(angle).mult(strength * this.maxForce);
}

The hardest part was getting the ghost behavior to feel ghostly rather than glitchy. The first version gave the ghosts a flee radius that was too small, so they’d enter the crowd and then snap violently outward. Increasing the signal flee radius to 220 pixels and smoothing the crowd flee with accumulated vectors fixed the snapping.

The other tricky piece was the Lissajous signal path. My first instinct was to use `mouseX` and `mouseY` as the attractor, which is the standard approach for seek demos. The problem is that a static mouse produces boring convergence: everyone piles up on the target and sits there. A Lissajous curve gave the signal genuine sweep across the canvas, which keeps seekers in motion even after they’ve converged. The math is minimal:

function getSignalPos(t) {
  let cx = width * 0.5;
  let cy = height * 0.5;
  let rx = width * 0.32;
  let ry = height * 0.28;
  return createVector(
    cx + rx * sin(t * 0.41 + 0.6),
    cy + ry * sin(t * 0.27)
  );
}

The frequency ratio 0.41 : 0.27 reduces to 41 : 27, so the path does technically close, but only after a very long common period, which means the sketch keeps shifting over long observation periods.
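Strictly speaking, 0.41 : 0.27 is the rational ratio 41 : 27, so the path does eventually close, after T = 2π · 100 ≈ 628 time units; over any normal viewing session it reads as never repeating. A quick p5-free check of that period:

```javascript
// The two sine components share a common period T when 0.41*T and
// 0.27*T are both whole multiples of 2*PI. With 0.41 = 41/100 and
// 0.27 = 27/100 (and gcd(41, 27) = 1), the smallest such T is
// 2*PI * 100: 41 full x-cycles and 27 full y-cycles.
const TWO_PI = 2 * Math.PI;
const T = TWO_PI * 100;

const sigX = (t) => Math.sin(t * 0.41 + 0.6);
const sigY = (t) => Math.sin(t * 0.27);

// The signal returns to its starting point after T:
console.log(Math.abs(sigX(0) - sigX(T)) < 1e-9); // true
console.log(Math.abs(sigY(0) - sigY(T)) < 1e-9); // true
```

At 60 fps with a typical time step, a ~628-unit period is far longer than anyone watches the sketch, which is why it feels non-repeating.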

The three archetypes don’t interact across types in any interesting way. Seekers don’t react to drifters. Drifters don’t notice ghosts. The only cross-archetype behavior is the ghost’s crowd flee, which reads seeker and drifter positions as obstacles. A next version could introduce:

– Seekers that are temporarily distracted by passing drifter clusters, pulled off their trajectory before resuming the chase.
– The signal occasionally splitting into two attractors, creating competing factions among the seekers.

Visually, the grid underneath the simulation was meant to read as a city viewed from above, but it’s almost invisible after the first frame. Rendering it to a persistent background layer would strengthen that spatial metaphor.

Assignment 8

Concept

For this assignment, I wanted to explore Autonomous Agents. I took inspiration from Craig Reynolds’ Steering Behaviors, specifically the idea that a vehicle can “perceive” its environment and make its own decisions.

My goal was to create a Living City. I designed a system of commuters, ordinary vehicles that dutifully follow a looping traffic path, and a police car (the interceptor) controlled by the mouse. The commuters get out of the police car’s way as it approaches. The project explores the tension between order (Path Following) and chaos (Fleeing from danger).

Sketch

Process and Milestones

I started by building a multi-segment path. The biggest challenge here was the logic required to make the vehicles drive in a continuous loop. I used the Modulo operator (%) in my segment loop so that when a vehicle reaches the final point of the rectangle, its “next target” automatically resets to the first point.
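The wrap-around described above can be sketched like this. This is a minimal, hypothetical reconstruction (the actual class carries much more state), showing just how the modulo closes the loop:

```javascript
// Hypothetical sketch of the loop-closing logic: the modulo operator
// wraps the waypoint index, so the last corner's "next target" is the
// first corner again and the commute never ends.
class LoopPath {
  constructor(points) {
    this.points = points; // rectangle corners, in driving order
  }
  nextIndex(i) {
    return (i + 1) % this.points.length; // 3 -> 0 on a 4-point path
  }
}

const road = new LoopPath([
  { x: 100, y: 100 }, { x: 500, y: 100 },
  { x: 500, y: 300 }, { x: 100, y: 300 },
]);
console.log(road.nextIndex(2)); // 3
console.log(road.nextIndex(3)); // 0 (back to the start)
```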

At first, my vehicles were “shaking” as they tried to stay on the path. I realized they were reacting to their current position, which is always slightly off the line. I implemented Future Perception: the vehicle now calculates a “predict” vector 25 pixels ahead of its current position. By steering based on where it will be, the movement became much smoother and more lifelike.

The most interesting part of the process was coding the transition between behaviors. I wrote a conditional check: if the distance to the Interceptor (mouse) is less than 120 pixels, the Commuter completely abandons its followPath logic and switches to flee. I also increased their maxSpeed during this state to simulate “panic.” Once the danger passes, they naturally drift back toward the road and resume their commute.
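The switch itself can be sketched without p5. The 120-pixel radius comes from the write-up, but the speed values below are illustrative assumptions, not the project’s actual numbers:

```javascript
// Hedged sketch of the order/panic transition described above.
// Within 120 px of the interceptor, the commuter abandons the path
// and flees at a higher "panic" speed; otherwise it cruises normally.
function chooseBehavior(commuterPos, interceptorPos) {
  const d = Math.hypot(
    commuterPos.x - interceptorPos.x,
    commuterPos.y - interceptorPos.y
  );
  if (d < 120) {
    return { mode: "flee", maxSpeed: 6 }; // panic: faster, away from danger
  }
  return { mode: "followPath", maxSpeed: 3 }; // calm: resume the commute
}

console.log(chooseBehavior({ x: 0, y: 0 }, { x: 50, y: 0 }).mode);  // "flee"
console.log(chooseBehavior({ x: 0, y: 0 }, { x: 400, y: 0 }).mode); // "followPath"
```

Because the check runs every frame, the commuter drifts back to the road automatically once the distance exceeds the threshold again.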

Code I’m Proud Of

I am particularly proud of the logic that allows the vehicle to choose the correct path segment. It doesn’t just look at one line; it scans every segment of the “city” to find the closest one, then projects a target slightly ahead of its “normal point” to ensure it keeps moving forward.

// Predictive Path Following logic
followPath(path) {
  // look into the future
  let predict = this.vel.copy().setMag(25);
  let futurePos = p5.Vector.add(this.pos, predict);

  let target = null;
  let worldRecord = 1000000;

  // scan all road segments for the closest point
  for (let i = 0; i < path.points.length; i++) {
    let a = path.points[i];
    let b = path.points[(i + 1) % path.points.length];
    let normalPoint = getNormalPoint(futurePos, a, b);

    // ... (boundary checks) ...

    let distance = p5.Vector.dist(futurePos, normalPoint);
    if (distance < worldRecord) {
      worldRecord = distance;
      // look 15 pixels ahead on the segment to stay in motion
      let dir = p5.Vector.sub(b, a).setMag(15);
      target = p5.Vector.add(normalPoint, dir);
    }
  }

  // steer only if we've drifted outside the lane
  if (worldRecord > path.radius) {
    this.seek(target);
  }
}

Reflection

This project shifted my perspective on coding movement. In previous assignments, we moved objects by changing their position; here, we’re moving them by changing their desire. It feels much more like biological programming than math.

I also noticed that instead of commuters giving way to a police car, the scene reads more like race cars on a track fleeing from the police as it approaches. I’ve left the final interpretation up to the reader’s imagination…

Future Ideas

  • I want to add a separation force so that commuters don’t overlap with each other, creating more realistic traffic jams.
  • Allowing the user to click and drag the path points in real-time, watching the agents struggle to adapt to the new road layout.
  • Integrating the p5.sound library to make the interceptor’s siren get louder as it gets closer to the vehicles (Doppler effect).
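The distance-to-loudness mapping from the last idea could be prototyped with a simple clamp before wiring it to p5.sound’s volume controls (e.g. `setVolume()` on a `p5.SoundFile`). The 400-pixel falloff radius here is an illustrative assumption:

```javascript
// Hedged sketch of the siren idea: loudness rises linearly as the
// interceptor approaches, clamped to [0, 1]. The 400 px falloff
// distance is an assumption, not a value from the project.
function sirenVolume(distToInterceptor, falloff = 400) {
  return Math.max(0, Math.min(1, 1 - distToInterceptor / falloff));
}

console.log(sirenVolume(0));   // 1   (right on top of the siren)
console.log(sirenVolume(200)); // 0.5 (halfway out)
console.log(sirenVolume(900)); // 0   (out of earshot)
```

A true Doppler effect would additionally shift pitch based on relative velocity, but volume alone already sells most of the illusion.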

Amal – Assignment 8

Concept

For this assignment, I created a system of multiple vehicles that interact through different steering behaviors. The goal was to move away from simple motion and build something that feels alive, where each agent reacts to both the environment and the other agents around it.

The system is inspired by flocking behavior and Craig Reynolds’ steering behaviors, especially seek, flee, separation, alignment, cohesion, and wander. What interested me most is how a few simple rules can create complex and unpredictable motion when combined.

Instead of representing real-world objects like birds or cars, I kept the visuals abstract. This shifts the focus to movement itself. The vehicles behave like particles with intention, constantly adjusting their direction based on nearby agents and the mouse. The result is a system that feels dynamic and slightly unpredictable.

First Prototype

The first prototype focused only on seek behavior. Each vehicle moved toward the mouse using velocity, acceleration, and steering force.

At this stage, the system worked technically, but it felt very predictable. All vehicles behaved the same way and moved toward the same point, so there was no interaction between them. The motion looked flat and repetitive.

This version helped me understand:

  • how to structure the Vehicle class
  • how steering works using desired velocity minus current velocity
  • how to limit speed and force for smoother motion

It was a necessary step, but it did not yet feel like a system.
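The “desired velocity minus current velocity” idea from the bullets can be written out in a few lines. This is a p5-free sketch using plain `{x, y}` objects rather than `p5.Vector`, so the helper name and parameters are mine:

```javascript
// Reynolds-style seek steering, sketched without p5.Vector:
// steering = (desired velocity) - (current velocity), clamped to maxForce.
function seekSteer(pos, vel, target, maxSpeed, maxForce) {
  // Desired: aim straight at the target at full speed.
  let dx = target.x - pos.x;
  let dy = target.y - pos.y;
  const d = Math.hypot(dx, dy) || 1; // avoid division by zero
  const desired = { x: (dx / d) * maxSpeed, y: (dy / d) * maxSpeed };

  // Steering force, limited so turns stay smooth.
  let sx = desired.x - vel.x;
  let sy = desired.y - vel.y;
  const mag = Math.hypot(sx, sy);
  if (mag > maxForce) {
    sx = (sx / mag) * maxForce;
    sy = (sy / mag) * maxForce;
  }
  return { x: sx, y: sy };
}

// A stationary vehicle at the origin, target to the right:
console.log(seekSteer({ x: 0, y: 0 }, { x: 0, y: 0 }, { x: 10, y: 0 }, 4, 0.5));
// { x: 0.5, y: 0 }
```

Clamping to `maxForce` is what gives the motion its character: the vehicle cannot teleport its velocity toward the target, it can only bend it a little each frame.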

Final Sketch

In the final version, I combined multiple steering behaviors. The vehicles no longer only seek the mouse. They also separate from each other to avoid crowding, loosely align with neighbors, move toward a local center through cohesion, and wander slightly to avoid rigid motion.

The mouse interaction also became more dynamic. Vehicles are attracted to the mouse from a distance, but when they get too close, they flee. This creates a push and pull effect that keeps the system constantly shifting.

Because each vehicle balances multiple forces at once, the motion feels more organic and emergent.

Code Highlight
applyBehaviors(vehicles, mouse) {
  let sep = this.separate(vehicles);
  let ali = this.align(vehicles);
  let coh = this.cohesion(vehicles);
  let wan = this.wander();

  let mouseDist = dist(this.pos.x, this.pos.y, mouse.x, mouse.y);
  let mouseForce;

  if (mouseDist < 90) {
    mouseForce = this.flee(mouse);
    mouseForce.mult(2.2);
  } else {
    mouseForce = this.seek(mouse);
    mouseForce.mult(0.35);
  }

  sep.mult(1.8);
  ali.mult(0.8);
  coh.mult(0.7);
  wan.mult(1.1);

  this.applyForce(sep);
  this.applyForce(ali);
  this.applyForce(coh);
  this.applyForce(wan);
  this.applyForce(mouseForce);
}

This part of the code is the core of the system. Each behavior is calculated separately and treated as a force, then scaled using multipliers before being applied. This allows the system to balance multiple influences at once.

From a technical perspective, each behavior returns a steering vector based on desired velocity minus current velocity. These vectors are then combined through applyForce(). Adjusting the weights changes how dominant each behavior is, which directly affects how the system feels visually.

Milestones and Challenges

The first milestone was getting the basic vehicle system working with position, velocity, and acceleration. After that, I implemented seek behavior so agents could move toward the mouse.

The next challenge was that everything felt too uniform. All vehicles behaved the same way, which made the system predictable. I fixed this by adding separation, which prevented the vehicles from collapsing into a single cluster.

After introducing alignment and cohesion, the system became more structured, but also too rigid. To fix that, I added wander, which introduced small random changes and made the motion feel more natural.

Balancing the behaviors was the biggest challenge. If separation was too strong, the system spread out too much. If cohesion was too strong, everything clustered. If wander was too strong, the system became chaotic. A lot of the process involved fine-tuning these weights.

Reflection + Future Improvements

This project showed me how complex behavior can emerge from simple rules. The final motion is not directly designed. It comes from the interaction between behaviors, which makes the system feel more alive.

What worked well is the layering of multiple forces. Each vehicle is simple on its own, but together they create a dynamic system that constantly changes.

For future improvements, I would introduce different types of agents with different behaviors, such as leaders or more reactive agents. I would also add environmental constraints like obstacles so the system reacts to space in a more complex way.

Another direction would be to develop the visual side further, such as adding trails, color variation, or changes based on speed and proximity.