Assignment 8 : Yash

The Light Weaver

Concept

For this project, I realized that steering algorithms aren’t just for biology; they are also powerful tools for generative art. My concept is called “The Light Weaver.” It acts like a simulated long-exposure camera: the “vehicles” are actually photons of light, and instead of just moving around, their movement draws the final piece.

I was inspired by long-exposure light painting photography and the pure geometric abstraction of artists who use lasers and neon. The system runs autonomously to draw a glowing hexagon, but the user’s mouse acts as a “magnetic distortion field” to fray the light lines (using flee). Clicking drops a prism, and the photons use the arrive behavior to pack densely into a glowing singularity.

A highlight of some code I’m particularly proud of

I am really proud of getting the multi-segment path following to work, specifically inside the follow() function.

for (let i = 0; i < path.points.length - 1; i++) {
  let a = path.points[i];
  let b = path.points[i + 1];
  let normalPoint = getNormalPoint(predictLoc, a, b);

  // Check if the normal point actually lies on the line segment
  let da = p5.Vector.dist(a, normalPoint);
  let db = p5.Vector.dist(b, normalPoint);
  let lineLen = p5.Vector.dist(a, b);

  if (da + db > lineLen + 1) {
    normalPoint = b.copy(); // clamp it to the segment's end vertex!
  }
  // ... the loop then keeps the closest normalPoint as the seek target
}
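The snippet relies on a getNormalPoint() helper that isn’t shown above. A minimal version (written here with plain objects instead of p5.Vector, so it’s an illustrative sketch rather than the sketch’s actual code) is the standard scalar projection of a point onto a line:

```javascript
// Project point p onto the infinite line through a and b.
// The clamping in the loop above is what turns this infinite-line
// result into a point on the actual segment.
function getNormalPoint(p, a, b) {
  const ap = { x: p.x - a.x, y: p.y - a.y };   // vector from a to p
  const ab = { x: b.x - a.x, y: b.y - a.y };   // vector from a to b
  // scalar projection of ap onto ab, as a fraction of ab's length
  const t = (ap.x * ab.x + ap.y * ab.y) / (ab.x * ab.x + ab.y * ab.y);
  return { x: a.x + ab.x * t, y: a.y + ab.y * t };
}
```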

 

Embedded sketch

 

Milestones and challenges in my process

  • Milestone 1: Getting a single vehicle to follow a straight line.

  • Challenge 1: Upgrading from a single line to a multi-segment Hexagon. At first, my vehicles kept flying off the corners because they were calculating normal points on infinite lines instead of clamping to the vertices.

  • Milestone 2: Implementing the long-exposure visual.

  • Challenge 2: I struggled to make it look like light instead of solid shapes. I realized that by changing show() to draw lines from prevPos to pos, using blendMode(ADD), and drawing the background with an alpha value of 12 (background(5, 5, 5, 12)), I could get that perfect glowing trail effect.
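As a rough sanity check on that alpha value (the numbers below are illustrative, not code from the sketch): each background(5, 5, 5, 12) call blends 12/255 of the dark background over the previous frame, so an old trail sample fades geometrically:

```javascript
// Each frame, the translucent background covers 12/255 of the old pixels,
// so a trail pixel's remaining brightness decays by (1 - 12/255) per frame.
const coverage = 12 / 255;
function trailStrength(framesAgo) {
  return Math.pow(1 - coverage, framesAgo);
}
// At 60 fps, a one-second-old trail has faded to roughly 5% brightness,
// which is why the trails read as glowing streaks rather than clutter.
```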

 

Reflection and ideas for future work or improvements

This project completely changed how I look at steering behaviors. I realized that the math of “intention” and “steering” can be applied to abstract drawing tools, not just physical simulations.

For future improvements, I’d love to make the geometric path dynamic: maybe the hexagon slowly rotates over time, or pressing the spacebar adds more vertices to make the polygon more complex. I would also love to try tying the maximum speed or the path radius to an audio input, so the light trails dance to music.

Buernortey – Assignment 8: The Shoal

Concept

Growing up in a fishing community in Ghana, I watched fishermen read the water. Not just the tides, but the fish themselves. A school of fish moves like a single organism, splitting around rocks, scattering from shadows, reforming behind the boat. No one fish is giving orders. The pattern emerges from each individual following a few simple rules about its neighbors.

That memory became the concept for this sketch. The Shoal is a system of 60 fish navigating an ocean current, staying together as a group, and reacting to a predator that tracks your mouse cursor. The behaviors are inspired directly by Craig Reynolds’ steering model: each fish knows nothing about the whole. It only senses what is close to it. Yet the group produces complex, lifelike motion.

The color palette is drawn from the Ghanaian flag (red, gold, and green), and each color encodes a live behavioral state. Green means a fish is flocking normally. Red means it is fleeing the predator. Gold means it has drifted away from the group and is wandering on its own.

A Highlight of Code I’m Proud Of

The piece of code I kept returning to is the flock() method, the behavior composer that decides frame by frame what each fish should be doing. What I love about it is how it uses the same underlying steering primitives in completely different combinations depending on context.

flock(others, pred, flow) {
  let d = dist(this.pos.x, this.pos.y, pred.pos.x, pred.pos.y);
  let fleeing = d < FLEE_RADIUS;

  if (fleeing) {
    // Run directly away from the predator at boosted speed
    let flee = this._flee(pred.pos);
    flee.mult(2.5);
    this.applyForce(flee);
    this.maxSpeed = FISH_MAX_SPEED * 1.5;
    this.bodyColor = PAL.red;
  } else {
    // Normal flocking: separation keeps spacing, alignment matches heading,
    // cohesion pulls toward the group center using arrive()
    let sep = this._separation(others);
    let ali = this._alignment(others);
    let coh = this._cohesion(others);

    sep.mult(1.8);
    ali.mult(1.0);
    coh.mult(1.2);

    this.applyForce(sep);
    this.applyForce(ali);
    this.applyForce(coh);

    // Flow field gives the fish a subtle current to drift with
    let flowForce = flow.lookup(this.pos);
    flowForce.setMag(this.maxSpeed * 0.6);
    let flowSteer = p5.Vector.sub(flowForce, this.vel);
    flowSteer.limit(this.maxForce * 0.5);
    this.applyForce(flowSteer);

    // Isolated fish switch to wander
    let neighbors = this._countNeighbors(others, COH_RADIUS);
    if (neighbors < 3) {
      this.bodyColor = PAL.gold;
      this.applyForce(this._wander());
    } else {
      this.bodyColor = PAL.green;
    }
  }
}

 

What made this click for me is that _arrive() is called inside _cohesion(), so even the group behavior uses the same arrive logic I first learned as a single vehicle seeking a target. One method, three behavioral contexts: cohesion toward the group center, the predator arriving at the cursor, and the wander behavior projecting a circle ahead of itself. Reusing the same primitive in different combinations was the most satisfying part of this assignment.
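As a tiny illustration of that reuse (plain objects and hypothetical names, not the project’s actual code), cohesion is just arrive pointed at the neighbors’ center of mass:

```javascript
// Average the neighbor positions to get the group's center of mass.
function groupCenter(neighbors) {
  const c = { x: 0, y: 0 };
  for (const n of neighbors) { c.x += n.x; c.y += n.y; }
  return { x: c.x / neighbors.length, y: c.y / neighbors.length };
}
// Cohesion then reduces to: arrive(fish, groupCenter(neighbors)) —
// the same arrive primitive used for a single vehicle seeking a target.
```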

Embedded Sketch


Move your cursor to control the predator. Watch the shoal react. Fish nearest the predator turn red and scatter, isolated fish turn gold and wander, and the rest stay green and hold formation together.

Milestones and Process

Phase 1 — One fish, seek and arrive

I started with a single fish vehicle following the mouse. The goal was to make sure the arrive behavior felt right before scaling up, because arrive is the foundation everything else builds on. At this stage it was just one ellipse with a triangle tail, but the deceleration as it approached the cursor already felt organic.

The main challenge here was getting the arrive slowing to feel natural rather than mechanical. My first attempt applied the speed reduction too early, so the fish would crawl painfully slowly from far away before even reaching the target zone. I had to narrow the slowdown window to only the final 100 pixels, leaving full speed everywhere outside that range, before the motion started to feel right.
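That slowdown window can be written as a simple piecewise speed function (an illustrative sketch with assumed values, maxSpeed 4 and a 100-pixel window, matching the description above):

```javascript
// Full speed outside the window, linear ramp down to zero inside it.
function arriveSpeed(distance, maxSpeed = 4, slowRadius = 100) {
  if (distance >= slowRadius) return maxSpeed;
  return maxSpeed * (distance / slowRadius);
}
```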

Phase 2 — Scaling to a shoal, adding flocking

I scaled from one fish to 40 using a loop and implemented separation, alignment, and cohesion as three separate steering forces composed together.

Tuning the force multipliers was the hardest part of this phase and took the most iteration. My first set of values had separation too weak, so fish constantly clipped through each other and the group looked like a blob rather than a school. Raising it too far in the other direction made the shoal explode outward and never reform. I also had alignment weighted too heavily early on, which made every fish lock into the same heading so rigidly that the group moved like a marching band rather than a living thing. Getting to sep × 1.8, ali × 1.0, coh × 1.2 took many small adjustments and a lot of watching the sketch run.

Phase 3 — Predator, flee, wander, and color states

The predator changed the whole feel of the sketch. Once a dark fish that tracked the mouse was introduced, the shoal became reactive rather than passive. Flee was implemented as a reversed seek: instead of steering toward the target, the fish steers directly away from it with a force multiplied at 2.5× for urgency.

The flee force caused a problem I did not anticipate. With the multiplier too high, fish near the predator would accelerate so violently that they shot across the entire canvas in a single frame and wrapped around to the other side, which looked completely wrong. I had to pair the force multiplier with a capped maxSpeed boost rather than an uncapped acceleration, so the urgency comes through in the speed increase without the motion becoming physically implausible. Getting the flee radius right also took several attempts. Too large and fish were permanently panicking even when the predator was nowhere near them. Too small and the reaction looked delayed and unconvincing.
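The fix described above boils down to clamping velocity magnitude rather than letting the boosted force accumulate unbounded. A minimal sketch of that clamp (plain objects standing in for the p5.Vector.limit() call):

```javascript
// Scale a velocity down to maxSpeed if it exceeds it; leave it otherwise.
function clampSpeed(vel, maxSpeed) {
  const speed = Math.hypot(vel.x, vel.y);
  if (speed <= maxSpeed) return vel;
  const k = maxSpeed / speed;
  return { x: vel.x * k, y: vel.y * k };
}
```

With the ceiling set to FISH_MAX_SPEED * 1.5, a fleeing fish moves urgently but can never jump across the canvas in a single frame.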

Phase 4 — Flow field, ocean background, full fish anatomy

The final layer was a Perlin noise flow field that gives the whole canvas a gentle ocean current. Each fish looks up the flow vector at its position and applies it as a weak additional force.

The challenge here was finding the right weight for the flow force relative to the flocking forces. In early versions I had it too strong, and it completely overrode the cohesion and alignment behavior. Fish stopped schooling and just drifted in the same direction like debris, which defeated the whole point. Pulling it back to maxSpeed × 0.6 with a force limit at half of maxForce made it feel like an environmental influence rather than a controlling force — the fish push through it, but you can see them drifting slightly when nothing else is competing for their attention.

Reflection and Ideas for Future Work

The biggest surprise was how little code produces this result. The entire behavioral system, 60 fish with four distinct modes, comes down to three vector additions per frame per fish, each about five lines long. What Reynolds figured out in the 1980s still feels almost unreasonably powerful.

The Ghanaian flag palette was a natural choice for me. I had already used it in my midterm project, and it works well here because each color carries cultural weight while also reading clearly as a data signal. You understand the system’s state at a glance just from the color distribution across the canvas.

Ideas for future work include adding a real food source, a glowing anchor point the shoal seeks when the predator is far away, which would complete the full foraging cycle. Two competing shoals in different colors racing for the same food would also be compelling. Letting individual fish leave a faint pheromone trail that gradually fades would make the paths the school carves through the water visible over time. And sound would add another dimension: low ambient ocean audio that pitches up slightly when the shoal is stressed and fleeing.

References and inspiration:

  • The Nature of Code, Chapter 5 (Autonomous Agents) by Daniel Shiffman, which provided the steering force formula the entire system is built on
  • Craig Reynolds, Steering Behaviors for Autonomous Characters, the original separation, alignment, and cohesion framework
  • Braitenberg Vehicles, and the idea that complex behavior can emerge from extremely simple rules
  • Personal memory of fishing communities in the Central Region of Ghana

Salem Al Shamsi – Assignment 8

The Trail

Concept

I wanted to build something that felt genuinely alive, not just shapes moving around a screen, but a system with real logic behind it. I kept coming back to one question: how do ants form those perfect lines without anyone telling them where to go?

The answer is surprisingly simple. One scout finds a path and leaves a chemical trail called a pheromone. Workers smell it and follow. The more ants walk the same path, the stronger the smell, the more ants follow. A highway emerges from nothing, no leader, no plan, just chemistry.

That became the concept. You are the scout. Move your mouse, and the colony follows the exact path you traced. Stop moving, and the scout breaks away to investigate the environment on its own, sniffing nearby rocks while the workers wait in place.

Code I’m Proud Of

The moment that made everything click was the pebble investigation system. When the mouse stops, the scout doesn’t just freeze, it actively searches for the nearest rock within 120 pixels and slowly approaches it, like an ant sniffing an obstacle. When it gets close enough it clears that target and finds the next one.

// ── LEADER ──
if (mouseStill) {
  // slow crawl speed while exploring
  leader.maxSpeed = 0.5;
  // clear pebble target once scout is close enough
  if (targetPebble) {
    let d = dist(leader.pos.x, leader.pos.y, targetPebble.x, targetPebble.y);
    if (d < SEEK_STOP) targetPebble = null;
  }
  // find a new pebble if not already investigating one
  if (!targetPebble) targetPebble = findNearestPebble();
  // arrive at pebble slowly, or wander if none nearby
  if (targetPebble) leader.arrive(createVector(targetPebble.x, targetPebble.y));
  else              leader.wander();
}

What I love about this is that it uses the same arrive() behavior the whole project is built on, just at a slower speed. One behavior, three different contexts: following the mouse, following history points, investigating a rock. That reuse felt elegant.

Embedded Sketch

 

Milestones and Process

Phase 1 — Getting the chain to work

Started with two plain circles. One follows the mouse using arrive. Every frame, its position gets saved into a history array. The second circle targets history[40], where the first was 40 frames ago. That delay creates the following effect. The main challenge was stopping them from overlapping when the mouse stopped, fixed by only recording history when the leader actually moved.
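The chain logic from this phase can be sketched in a few lines (hypothetical names; the buffer length is arbitrary, the 40-frame delay is the one described above):

```javascript
const history = [];          // leader positions, newest first

// Record only when the leader actually moved, so followers
// don't pile onto a stationary leader.
function recordLeader(pos, moved) {
  if (!moved) return;
  history.unshift({ x: pos.x, y: pos.y });
  if (history.length > 300) history.pop();
}

// Follower i targets where the leader was 40 * (i + 1) frames ago.
function targetFor(followerIndex) {
  const idx = Math.min(40 * (followerIndex + 1), history.length - 1);
  return history[idx];
}
```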

Phase 2 — Scaling to a colony

Scaled from one follower to seven using a simple loop. Each ant targets a different point further back in history. Added real ant bodies, the sandy background, and separation logic between every pair of ants so they never overlap.

Phase 3 — The Final Sketch

Added the fading pheromone trail, pebble investigation when the mouse stops, workers freezing while the scout explores, and a custom glowing cursor. The final version can be found embedded above.

Reflection and Future Ideas

The biggest surprise was how little code produces this behavior. A five-line loop creates the chain. One Perlin noise value creates the wander. Simple rules, complex result, exactly what the ant research described.

Future ideas:

  • Add a real food source that the scout can find and bring workers to complete the full foraging loop
  • Two competing colonies with different trail colors racing to the same food
  • Make the trail actually fade so workers that fall too far behind lose the scent and wander off alone

 

 

 

 

Haris – Assignment 8

Concept

When thinking about “autonomous agents” in nature, one of the first things that came to mind was birds. Most of the time they move around on their own, choosing directions and destinations in ways that look random to us, but in a flock, when they follow a leader, they suddenly come together and move in large groups. This is something I wanted to recreate in my project.

The user can click on the screen to spawn the leader bird, which attracts the rest of the flock. After the birds have gathered around it, they start orbiting it instead of piling up in one place. The user can then click on another point on the screen and watch the birds move towards it, each going its own way while following the flow and the flock.

Process

I started by just having lines instead of birds as I just wanted to get the autonomous movement going:

After this was done it was time to add the birds. I decided to create them as pretty simple models, but I also wanted to add some flapping of the wings to make the visuals more appealing.

This was done with the following:

let flap = sin(frameCount * 0.2 + this.pos.x * 0.05) * 0.4;

I used the sin function to make the flapping smooth and natural, and I offset the wings so they don’t move to the same position at the same time: when one moves up, the other moves down, and vice versa.

push();
rotate(-flap);
ellipse(
  -this.size * 0.1,
  -this.size * 0.22,
  this.size * 0.9,
  this.size * 0.28
);
pop();

push();
rotate(flap);
ellipse(
  -this.size * 0.1,
  this.size * 0.22,
  this.size * 0.9,
  this.size * 0.28
);
pop();

Once I was happy with the look I decided to make the leader also a bird and allow the user to click to move it. I also decided to lower the alpha of the background to make the birds leave streaks behind so it gives it more of an art aesthetic.

Code Highlight

separate(vehicles) {
  let perceptionRadius = 25;
  let steering = createVector();
  let total = 0;

  for (let other of vehicles) {
    let d = dist(this.pos.x, this.pos.y, other.pos.x, other.pos.y);

    if (other !== this && d < perceptionRadius && d > 0) {
      let diff = p5.Vector.sub(this.pos, other.pos);
      diff.div(d * d);
      steering.add(diff);
      total++;
    }
  }

  if (total > 0) {
    steering.div(total);
    steering.setMag(this.maxSpeed);
    steering.sub(this.vel);
    steering.limit(this.maxForce * 1.2);
    this.applyForce(steering);
  }
}

One important behavior in this system is separation, which keeps birds from overlapping and maintains the flock’s structure. Each bird looks at its local neighbors and applies a repulsive force that increases with proximity; because the force is divided by the square of the distance, it grows sharply as birds get close. This produces a natural spacing effect, so the flock stays cohesive without becoming overly dense.

I would also like to highlight this part of the code:

fill(112, 231, 255, 40);
noStroke();
rect(0, 0, width, height);

Instead of clearing the screen and redrawing the background each frame, I draw a low-alpha rectangle over the whole canvas. This gradually fades the previous frames instead of clearing them instantly, so each bird’s previous positions stay visible for a short time. This motion blur makes the birds’ movement easier to follow, and it also lets the user treat the canvas as a painting surface: moving the leader makes the birds paint as they travel toward it.

Future improvements

I am really proud of how the project turned out in the end. If I were to add more features, it would probably be different colors for the birds and their trails, and maybe experimenting with sound, as that is something I haven’t worked with much before. Overall I am happy with the end result and have learned more about p5 while working on it.

Assignment 8 – Afra Binjerais

Concept

My concept is inspired by fireworks. I wanted to explore how this week’s vehicle demonstrations could be transformed into something more artistic, which I initially found challenging. When I first think of vehicles, I usually imagine something literal like cars, so it took some time to shift my thinking toward a more abstract approach. I used the flow field example from class as a starting point and built on it to create a system that feels more expressive.

A highlight of some code 

behaviors(flow) {
  // explode outward first
  this.seek(this.target);

  // soften near destination
  this.arrive(this.target);

  // drift like smoke/wind after burst
  if (this.life < 55) {
    this.follow(flow);
  }
}

seek(t) {
  let desired = p5.Vector.sub(t, this.pos);
  desired.setMag(this.maxSpeed);

  let steering = p5.Vector.sub(desired, this.vel);
  steering.limit(this.maxForce);

  this.applyForce(steering);
}

arrive(t) {
  let desired = p5.Vector.sub(t, this.pos);
  let d = desired.mag();

  if (d < 60) {
    let m = map(d, 0, 60, 0, this.maxSpeed);
    desired.setMag(m);
  } else {
    desired.setMag(this.maxSpeed);
  }

  let steering = p5.Vector.sub(desired, this.vel);
  steering.limit(this.maxForce * 0.8);

  this.applyForce(steering);
}

follow(flow) {
  let desired = flow.lookup(this.pos);
  desired.setMag(this.maxSpeed * 0.7);

  let steering = p5.Vector.sub(desired, this.vel);
  steering.limit(this.maxForce * 0.6);

  this.applyForce(steering);
}

This is the part I’m most proud of because it is where the vehicles stop being just particles and start behaving like fireworks. Instead of using only one steering behavior, I combined three different ones so each particle goes through stages of motion.

First, seek() pushes each vehicle outward toward its explosion target, which creates the initial burst. Then arrive() slows the particle as it gets closer to that target, so the explosion doesn’t feel too mechanical or constant. After that, once the particle’s life drops below a certain point, follow() lets it drift with the flow field, which makes it look more like smoke or sparks being carried by air.

Sketch


Milestones 

  • I combined the flow field with a basic explosion system. Vehicles are created at a point and use seek to move outward, while still being influenced by the flow field. The visualization is simple to focus on the behavior before adding artistic styling. https://youtu.be/0gokb2LBqo0
  • I added arrival behavior, particle lifespan, and fading trails so the explosion becomes more natural and starts to resemble fireworks instead of just moving points.

https://youtube.com/shorts/n8gWaY6BZCA?feature=share

Reflection and ideas for future work or improvements

This assignment made me think about vehicles in a different way, because when I first hear “vehicle,” I usually imagine something literal like cars and that’s why it was hard at first to think of a concept. I used vehicles as abstract moving particles to create a firework system, which helped me see them more as agents following rules rather than physical objects. I was able to build motion that feels more expressive and dynamic. In the future, I would like to push this idea further by adding more variation in the explosions and refining the overall visual style.

Midterm Project – Saeed

Project Overview

For my project I wanted to replicate the movement of birds between trees and their pathing around obstacles, in this case mountains. To best visualize this I started with a top-down view and then to make the sketch more visually pleasing I used contour lines to replicate a topographic map.

I was first inspired to do a simulation with birds by the garden outside my house in Dubai. Put simply, there are a lot of birds; as far as I can tell, many are pigeons and sometimes crows. From what I’ve seen, most birds travel in packs and follow a similar direction to each other. At our house they often travel from tree to tree within the garden, or to the trees lining my neighbours’ walls, then stay at a tree for a while before moving again, always following a similar path in the air. Similarly, I often see birds stop at our pool, all together, to drink water. I wanted to replicate this behaviour in my project.

In addition, after deciding to do the simulation from a top-down view, I decided to add contour lines and make it appear to be a topographic map, because it’s an idea I have been exploring for a while. I first learned how to make basic contour lines in Photoshop using noise, and I have since wanted to find places to use the technique, since I feel it doesn’t get as much use as it should.

Implementation Details

This simulation is built around trees as emitters, birds as moving agents, a flow field for natural motion, mountains as obstacles that affect the vector field, contour lines for terrain, and a UI panel that lets you change parameters in real time.

Bird class

The Bird class represents one moving agent that travels between trees. Each bird stores position, destination, destination tree index, arrival/wait state, size, trail history, and a day-only behavior flag.

  • A destination tree is chosen randomly.
  • The bird travels to its destination.
  • When it arrives, it waits for a random duration.
  • After waiting, it picks a new destination tree at random.
  • Its movement is not purely direct-to-target; a vector field is used for more natural motion.
class Bird {
  constructor(originX, originY, destinationX, destinationY, destTreeIndex, diameter = 10) {
    this.pos = createVector(originX, originY);
    this.des = createVector(destinationX, destinationY);
    this.destTreeIndex = destTreeIndex;
    this.arrived = false;
    this.trail = [];
  }
}

 

Tree class

The Tree class is both a visual node and an emitter for birds. Each tree has x and y coordinates, a diameter, and its own bird array.

  • A tree can initialize a number of birds.
  • Each spawned bird gets this tree as origin and a different tree as destination.
  • The tree updates and draws all birds in its own array.
class Tree {
  constructor(x, y, diameter = 40) {
    this.x = x;
    this.y = y;
    this.diameter = diameter;
    this.birdArray = [];
  }
}

The pathing starts with a direct vector from the bird’s position to its destination. The code then samples the flow field at the bird’s position and, using vector addition, adjusts the bird’s heading.

 

let toDest = p5.Vector.sub(this.des, this.pos);
let desired = toDest.copy().setMag(birdSpeed);
let flow = getFlowAt(this.pos.x, this.pos.y);

let steer = desired.add(flow.mult(0.6));
steer.limit(birdSpeed + 0.5);
this.pos.add(steer);

function updateFlowField() {
  let noiseScale = 0.02;
  let time = frameCount * 0.005;

  for (let y = 0; y < flowFieldRows; y++) {
    for (let x = 0; x < flowFieldCols; x++) {
      let angle = noise(x * noiseScale, y * noiseScale, time) * TWO_PI * 2;
      flowField[y][x] = p5.Vector.fromAngle(angle).mult(0.8);
    }
  }
}
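The per-bird lookup that pairs with this grid can be sketched as follows (the cell-size parameter, clamping, and names are assumptions for illustration, not the project’s actual code):

```javascript
// Map a pixel position to its flow-field cell, clamped to the grid.
function lookupFlow(field, x, y, cellSize, cols, rows) {
  const col = Math.min(cols - 1, Math.max(0, Math.floor(x / cellSize)));
  const row = Math.min(rows - 1, Math.max(0, Math.floor(y / cellSize)));
  return field[row][col];
}
```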

Mountain generation

Mountains are generated by sampling terrain elevation candidates from noise (to make spawning more natural as opposed to complete randomness), sorting candidates by highest elevation first, then placing mountains with spacing constraints.

const candidates = buildMountainCandidates();

for (let i = 0; i < candidates.length && mountains.length < quantity; i++) {
  const c = candidates[i];
  const baseRadius = map(c.elevation, 0, 1, mountainMinRadius, mountainMaxRadius, true);
  const radius = baseRadius * random(0.9, 1.12);
  const x = c.x;
  const y = c.y;

  if (isMountainPlacementValid(x, y, radius, 1, true)) {
    mountains.push({ x, y, radius });
  }
}

Mountains do not rewrite the global flow field grid directly. Instead, an additional tangential force is added to the bird’s steering direction, not to the vector field.

const repelStrength = map(edgeDistance, 0, 140, 2.8, 0.2, true);
const repel = away.copy().mult(repelStrength);
const tangent = createVector(-away.y, away.x);

if (tangent.dot(desiredDirection) < 0) {
  tangent.mult(-1);
}
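Put together, the tangent selection reduces to one small function (a sketch with plain objects rather than p5.Vector):

```javascript
// Rotate `away` 90° to get a tangent, then flip it if it opposes
// the bird's desired direction, so the bird slides around the
// mountain instead of being pushed straight back.
function steeringTangent(away, desired) {
  let t = { x: -away.y, y: away.x };
  if (t.x * desired.x + t.y * desired.y < 0) {
    t = { x: -t.x, y: -t.y };
  }
  return t;
}
```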

 

Contour lines and how they work

Contours are generated from a hybrid elevation function that combines base noise terrain with mountain influence.

  • Build contour levels between minimum and maximum elevation.
  • For each cell, compute corner elevations.
  • Build a case mask and map it to edge segments.
  • Interpolate exact crossing points on edges.
  • Store line segments and then stitch/smooth them into polylines.
  • Draw major and minor contours with different stroke weights and alpha.
if (cell.tl.v >= iso) mask |= 8;
if (cell.tr.v >= iso) mask |= 4;
if (cell.br.v >= iso) mask |= 2;
if (cell.bl.v >= iso) mask |= 1;

const segments = caseToSegments[mask];

const a = getCellEdgePoint(edgeA, cell, iso);
const b = getCellEdgePoint(edgeB, cell, iso);
contourSegmentsByLevel[levelIndex].push({ a, b });

 

How the UI works

The UI is created in setupUIControls() as a fixed bottom panel.

  • Scene and simulation control (refresh noise seed, pause/resume).
  • Canvas preview and scale.
  • Mountains enabled and mountain count.
  • Tree count and birds per tree.
  • Bird speed and trail length.
  • Day/night cycle toggle, manual time selection, and day duration.
  • Contour smoothing iterations.

 

When values change, callback handlers rebuild affected systems. For example:

  • Tree count change regenerates trees and birds, then mountains.
  • Birds per tree change regenerates birds only.
  • Mountain toggle/count updates mountains and rebuilds contours.
  • Contour smooth input rebuilds contour polylines.

 

Example:

ui.birdCountInput.changed(() => {
  birdsPerTree = parseBoundedNumber(ui.birdCountInput.value(), birdsPerTree, 1, 20);
  ui.birdCountInput.value(String(birdsPerTree));
  regenerateBirdsForTrees();
});

States

Last but not least, the program has multiple states: day and night, and with or without mountains. I decided to keep the states simple for this project. Day and night are visual changes, while mountains affect the movement of the birds.

Milestones

Version 1

This was one of the first versions of my sketch. It consisted of trees randomly spawned around the canvas (later I would change how they were placed) and birds that travelled in between, plus the vector field, although it is hard to notice in this image. I implemented some small details as well: birds stop at the edge of a tree, and I lowered the opacity of the background so I could see the birds’ paths more clearly.

Version 2

At this point I created the Mountain class. Mountains came in different sizes, spawned randomly around the canvas, and I could change how many would spawn. I had not yet implemented avoidance, as shown by the trail of the one bird at the bottom that phased through a mountain.

Version 3

From my perspective, at this point I had overcome the most important technical hurdles. I now had birds that travelled between trees, moved organically, and could avoid mountains. I was happy with the effect, but while I knew there was potential to make it more aesthetically pleasing, I didn’t know what my next step would be.

Version 4 (Daytime and Nighttime)

At this point I had implemented two simple states: day and night. Later I added birds that only move during the day and stay at their trees during the night, along with a transition effect from day to night.

Version 5 (User Interface)

I took all the features I had before and added a user interface in the form of sliders at the bottom of the canvas. Because it changed the canvas in real time, it let me see different variations, which led to the idea of spawning mountains using noise, but only at points of ‘highest elevation’. That in turn led to the idea of the topographic map.

Version 6

I started with topographic lines from the noise function alone (not taking the placement of mountains into account). After tweaking the strokeWeight to make the lines more visible, I added the mountains in the form of contour lines. Then I added colors based on elevation: a sand color for most of the terrain, a blue color for areas of low elevation to represent water, and a gray color for the mountains. I later tweaked the gray and sand colors to make them more prominent.

Version 7

This is the latest version of my sketch. Since the last version I added a few more elements that can be changed from the UI (not shown in this screenshot).

(Looks better when viewing it on a larger canvas)

Video Documentation

Reflection

I’m happy with the movement of the birds, especially the avoidance of the mountains. However, I wasn’t able to get an A3-sized screenshot of the simulation because it runs too slowly when scaled to A3 size. I haven’t taken the time to sit down and figure out exactly why, but I suspect it’s because of the size of the vector field and all the vector calculations.

I would also want to make improvements on the aesthetics. I thought about shadows and maybe having more detailed models for the birds and trees but I wasn’t sure how to without going against the topographic map aesthetic.

References

  • p5.js library — p5js.org
  • Daniel Shiffman, The Nature of Code: vector movement and flow fields (natureofcode.com)
  • AI disclosure: AI assistance was used for the mountain repulsion force implementation and for the contour line generation using the Marching Squares algorithm

Assignment 8 – Steering Behaviors

Concept + References / Inspiration

This sketch visualizes thoughts as a living system, behaving somewhere between a school of fish and coral formations. I was especially drawn to corals as a reference, not just visually but structurally. Corals are collective, slow, reactive organisms that grow through accumulation and interaction rather than control. That felt very aligned with how thoughts operate.

Instead of treating thoughts as isolated units, I approached them as a soft ecosystem. They cluster, avoid, drift, and react to an external stimulus, which in this case is the mouse acting as a “focus point” or “idea.” When the interaction is calm, thoughts gather and stabilize. When it becomes too fast or erratic, they scatter, almost like a disturbance in water.

There’s also influence from generative systems like flocking and steering behaviors, but I wanted to move away from purely mechanical logic and make it feel more emotional. The system shifts between states: wandering, focusing, and panicking. That transition is what makes it feel alive to me.

Code Highlight (something I’m proud of)

One part I’m particularly proud of is the behavior system tied to mouse speed, especially how it shifts between different “mental states”:

if (mouseSpeed > 40) {
  let panic = this.flee(mouse);
  panic.mult(3.5);
  this.applyForce(panic);
  this.maxspeed = 10;
} else if (mouseX !== 0 && mouseY !== 0) {
  let arriveForce = this.arrive(mouse);
  let separateForce = this.separate(allThoughts);
  arriveForce.mult(1.0);
  separateForce.mult(1.5);
  this.applyForce(arriveForce);
  this.applyForce(separateForce);
  this.maxspeed = 4;
} else {
  let wanderForce = this.wander();
  wanderForce.mult(0.5);
  this.applyForce(wanderForce);
  this.maxspeed = 2;
}

I like this because it’s simple but expressive. It turns a technical input (mouse velocity) into something that feels psychological. The system isn’t just moving, it’s reacting, almost like attention and overwhelm are being simulated.
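
For context, here is a framework-free sketch of how a per-frame mouse speed could be derived and mapped onto the three states. The helper names are illustrative and not from the original sketch; only the threshold of 40 comes from the snippet above.

```javascript
// Hedged sketch: mouse speed is just the distance moved since last frame.
function mouseSpeedFrom(prevX, prevY, x, y) {
  const dx = x - prevX;
  const dy = y - prevY;
  return Math.sqrt(dx * dx + dy * dy); // pixels moved this frame
}

// Map a speed reading onto the three "mental states" used above.
// (hasMouse stands in for the mouseX/mouseY !== 0 check.)
function mentalState(speed, hasMouse) {
  if (speed > 40) return "panic";
  if (hasMouse) return "focus";
  return "wander";
}

console.log(mentalState(mouseSpeedFrom(0, 0, 30, 40), true)); // distance 50 -> "panic"
```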

Embedded Sketch

Milestones 

✦ milestone 1  baseline dots

it started as just points on a screen, nothing intelligent, nothing reactive. just scattered presence. like the baseline state of a mind before anything kicks in, just existing, quiet but full.

✦ milestone 2  random motion

then they started moving, but without direction. drifting, floating, no purpose. it felt like background thoughts, the kind that just pass through you when you’re not paying attention to anything specific.

✦ milestone 3  steering toward focus

this is where intention entered. everything began to move toward the mouse, like trying to focus on something. it’s not perfect, it overshoots, it adjusts, but there’s a clear pull. like when you’re trying to gather your thoughts into one place.

✦ milestone 4  separation

once everything started gathering, it became too much, everything overlapping, collapsing into one point. so separation was introduced. thoughts began to keep distance from each other, like needing space to think clearly. it stopped being chaos and started feeling structured.

✦ milestone 5  trails (memory)

then memory appeared. each thought started leaving a trace behind it. nothing fully disappears anymore. even when it moves on, something lingers. this is where the system stopped feeling like motion and started feeling like time.
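
the trail effect here typically comes from redrawing a translucent background each frame instead of clearing the canvas. a small framework-free sketch of why that produces a gradual fade (the alpha value is illustrative, not necessarily what my sketch uses):

```javascript
// Hedged sketch: each frame, a translucent background() call blends
// existing pixels toward the background color by alpha/255, so a
// trail's brightness decays geometrically over time.
function trailBrightness(initial, alpha, frames) {
  const keep = 1 - alpha / 255; // fraction of the pixel surviving each frame
  return initial * Math.pow(keep, frames);
}
```

a low alpha (say 15–25) leaves long, soft trails; a high alpha erases them almost immediately.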

✦ milestone 6  color and mood

movement started affecting visuals. faster thoughts became brighter, more intense. slower ones stayed softer. the system began expressing mood, not just behavior. it became less about where things are and more about how they feel.

Reflection + Future Work

This project made me think a lot about how behavior systems can move beyond being purely functional and start becoming expressive. Small parameter changes completely shift the emotional tone of the piece, which I found really interesting.

If I were to develop this further, I would push it in a few directions. I want to explore more coral-like growth over time, where thoughts don’t just move but also accumulate or leave behind structure. Right now, everything is transient, but memory could become spatial.

I’m also interested in introducing clusters or hierarchies, where some thoughts carry more weight or influence others, instead of everything behaving equally. That could create moments of tension or dominance within the system.

Visually, I would refine the trails to feel even more organic, maybe closer to underwater motion or bioluminescence. Sound could also be interesting, where movement generates subtle audio feedback tied to speed or density.

This feels like a starting point for thinking about systems as emotional landscapes, not just simulations.

Mustafa Bakir Assignment 7 – Light Vortex

This sketch is laggy in the p5 web editor, so I included a video of it running in VS Code.

 

This sketch is inspired by teamLab Phenomena’s Light Vortex.

 

This sketch creates a generative laser show where beams of light emerge from the screen edges and converge to form shifting geometric patterns. The visual experience centers on the idea of a “central attractor” shape. Every beam starts at a fixed position on the edge of the canvas. The end of each beam connects to a point on a central shape. These shapes cycle through circles, squares, triangles, spirals, waves, and stars. When the user triggers a transition by pressing the Space key, the endpoints of the beams slide smoothly from one shape’s perimeter to the next.

I began with a prototype and then improved my sketch.

 

 

The logic requires several fixed values to maintain performance and visual density. I chose 144 beams total. This provides 36 beams per side of the screen.

const NUM_BEAMS         = 144;  
const BEAMS_PER_EDGE    = 36;
const TRANSITION_FRAMES = 120;
const MODES             = ['circle', 'square', 'triangle', 'spiral', 'wave', 'star'];

The state variables manage the current shape and the animation progress. transitionT tracks the normalized time (0 to 1) of the current morph.
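
A minimal sketch of that bookkeeping, assuming the morph progress is derived from a frame counter (the frame-counting approach is my assumption; only the names transitionT and TRANSITION_FRAMES come from the sketch):

```javascript
// Hedged sketch: normalized morph progress from a frame count,
// clamped at 1 once the transition is complete.
const TRANSITION_FRAMES = 120;

function transitionT(frame) {
  return Math.min(1, frame / TRANSITION_FRAMES);
}
```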

Each laser is an instance of a Beam class. This class stores the origin point on the screen edge and handles the color logic. The recalcOrigin method assigns each beam to one of the four sides of the rectangle.

recalcOrigin() {
    const e = this.edge;
    const k = this.index % BEAMS_PER_EDGE;  
    if (e === 0)      { this.ox = W * (k + 0.5) / BEAMS_PER_EDGE; this.oy = 0; }
    else if (e === 1) { this.ox = W; this.oy = H * (k + 0.5) / BEAMS_PER_EDGE; }
    // ... logic for other two edges
}

To create a “glow” effect, I draw the same line three times with different weights and transparencies. The bottom layer is wide and faint. The top layer is thin and bright white. This builds on the gradient concept Professor Jack showed us when he created a gradient on a circle.

The most technical part of the code involves the getShapePoint function. Every shape needs to map a value t (from 0 to 1) to a coordinate (x, y).

The circle uses basic trigonometry. The square divides t into four segments.
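
For the circle case, a minimal framework-free sketch (assuming a circle of radius r centered at the origin; the original may offset the starting angle):

```javascript
// Hedged sketch: map t in [0, 1] to a point on a circle of radius r.
function circlePoint(t, r) {
  const angle = t * Math.PI * 2;
  return { x: r * Math.cos(angle), y: r * Math.sin(angle) };
}
```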

function squarePoint(t, r) {
  const seg  = t * 4;
  const side = Math.floor(seg) % 4;
  const frac = seg - Math.floor(seg);
  // the remaining sides follow the same pattern, walking clockwise
  switch (side) {
    case 0:  return { x: -r + frac * 2 * r, y: -r };  // top
    case 1:  return { x: r, y: -r + frac * 2 * r };   // right
    case 2:  return { x: r - frac * 2 * r, y: r };    // bottom
    default: return { x: -r, y: r - frac * 2 * r };   // left
  }
}

For polygons, the code treats each side as a separate linear path. For the square, the path is divided into four equal segments. If normT is between 0 and 0.25, the point is on the top edge. If it is between 0.25 and 0.50, it moves to the right edge.

function calcSquareVert(normT, maxRadius) {
  const totalSegs = normT * 4;
  const activeEdge = Math.floor(totalSegs) % 4;
  const edgeLerpFrac = totalSegs - Math.floor(totalSegs);
  // map sub-coords based on current active edge
}

 

When a transition occurs, the code calculates the point for the current shape and the point for the next shape. I use a linear interpolation (lerp) between these two positions. I then apply an easeOutCubic function to make the movement feel more organic and less mechanical. The visual depth increases significantly because of the intersection points. When two beams cross, the code renders a glowing “node.”
I used a standard line-line intersection algorithm. This calculates the exact x and y where two segments meet.
x = x1 + t * (x2 - x1), and the same for the y coordinate.
Drawing every single intersection would ruin the frame rate. I implemented two filters. First, the code only draws a maximum of 800 intersections per frame. Second, I created an intersectionAlpha function. This function checks how close an intersection is to the central shape. Nodes far away from the core are transparent. Nodes near the core glow brightly.
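
A framework-free sketch of both pieces of math described above (the function names are illustrative; the original sketch’s internals may differ):

```javascript
// Ease-out cubic: fast start, gentle landing.
function easeOutCubic(t) {
  return 1 - Math.pow(1 - t, 3);
}

// Standard segment-segment intersection. Solves for the parameter t in
// x = x1 + t*(x2 - x1); returns null for parallel or non-crossing pairs.
function segIntersect(x1, y1, x2, y2, x3, y3, x4, y4) {
  const den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
  if (den === 0) return null; // parallel lines
  const t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den;
  const u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den;
  if (t < 0 || t > 1 || u < 0 || u > 1) return null; // segments miss
  return { x: x1 + t * (x2 - x1), y: y1 + t * (y2 - y1) };
}
```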

function intersectionAlpha(ix, iy) {
  const threshold = min(W, H) * 0.12;
  let minD = Infinity;
  for (let k = 0; k < SHAPE_SAMPLE_N; k++) {
    const d = Math.sqrt((ix - shapeSamples[k].x) ** 2 + (iy - shapeSamples[k].y) ** 2);
    if (d < minD) minD = d;
  }
  return constrain(Math.exp(-3.5 * minD / threshold), 0.03, 1.0);
}

The atmosphere relies on blendMode(ADD). This mode makes colors brighten as they overlap.

Then I wanted to add my special touch for the glow. As a video editor and motion designer, I use this special overlay effect a lot called Light Leaks. Here’s an example if you do not know what light leaks are.

Here’s a video of how it looked before the light leaks. It was so flat, so the light leaks were definitely a good addition.

I added a renderLightLeaks function. It uses p5’s noise() to move large, soft radial gradients around the background. These gradients use a low opacity to simulate lens flare or atmospheric haze.

function renderLightLeaks() {
  blendMode(ADD);
  const activeLeaks = 2;

  const leakColors = [
    'rgba(40, 120, 255, 0.25)', // elec blue
    'rgba(120, 40, 255, 0.20)', // deep violet
    'rgba(255, 175, 45, 0.15)'  // warm gold
  ];

  // iter to gen noise-driven radial gradients
  for (let iterIdx = 0; iterIdx < activeLeaks; iterIdx++) {
    let noiseX = noise(iterIdx * 10, frameCount * 0.002) * canvasWidth;
    let noiseY = noise(iterIdx * 20 + 100, frameCount * 0.002) * canvasHeight;
    let radiusVal = (0.5 + noise(iterIdx * 30 + 200, frameCount * 0.0015)) * max(canvasWidth, canvasHeight) * 0.8;

    let radGrad = drawingContext.createRadialGradient(noiseX, noiseY, 0, noiseX, noiseY, radiusVal);

    // bind colors and fade out alpha
    radGrad.addColorStop(0, leakColors[iterIdx % leakColors.length]);
    radGrad.addColorStop(1, 'rgba(0, 0, 0, 0)');

    drawingContext.fillStyle = radGrad;
    drawingContext.fillRect(noiseX - radiusVal, noiseY - radiusVal, radiusVal * 2, radiusVal * 2);
  }
  blendMode(BLEND);
}

As always with my sketches, I faced a performance issue. Thankfully, after running into performance issues a billion times in my life, I’ve gotten better at knowing how to resolve them. Calculating 144 origins and 150 shape samples every frame is cheap; calculating thousands of potential intersections is expensive, and drawing every intersection would create too much visual noise. The intersectionAlpha function calculates the distance between an intersection point and the nearest point on the central shape, so distant nodes fade out while nodes near the core stay bright.

  • updateCachedEnds: This function runs once at the start of the draw() loop. It stores the end position of every beam. This prevents the intersection loop from recalculating the morphing math thousands of times.

  • updateShapeCache: This pre-calculates the geometry of the central shape. The intersection alpha function uses this cache to quickly check distances without running the shape math again.

  • Collision Cap: I set MAX_LINE_INTERSECTS to 800. This ensures the computer never tries to render too many glowing dots at once.

For future improvements, I tried making the light beams draw the illusion of a 3D shape while still on a 2D canvas. This kind of worked and kind of didn’t: I think I would need to implement dynamic scaling, because the canvas looked overwhelming even though you can still see the object. I decided to scrap this idea and remove it from the code. Here’s how it looked.

Salem Al Shamsi – Assignment 7

TeamLab Recreation: Floating Microcosms

Inspiration Video

I visited teamLab Phenomena Abu Dhabi and chose the artwork called “Floating Microcosms.” Here is the official video from teamLab showing the installation:

https://youtu.be/Vy6JQM9V9FQ

It is a dark room with shallow water on the floor. Egg-shaped glass sculptures called “ovoids” float on the water. When you push one, it tilts, glows with color, and makes a sound. Then the eggs around it respond one after another, a wave of light that spreads across the room. The ceiling is a mirror, so everything reflects upward.

Here are some photos I took when I visited:

Why I Chose This

The interaction is so simple: push an egg, but what happens is beautiful. One touch creates a chain reaction of light across the whole room. It makes you feel like you are part of the artwork. I also liked how the eggs look alive: they float, they glow from inside, and the colors keep changing.

Code I Am Proud Of

The chain reaction that fades with each hop. When you click one ovoid, it tells its neighbors to light up, but weaker. Those neighbors tell their neighbors even weaker. After 2-3 hops, it dies out, so not every egg lights up. Just the nearby ones.

let nextStrength = strength * 0.4;
if (nextStrength > 0.15) {
  for (let other of ovoids) {
    if (other === this) continue;
    let d = dist(this.x, this.y, other.x, other.y);
    if (d < 160) {
      let delay = floor(map(d, 0, 160, 20, 50));
      other.chainTimer = delay;
      other.chainStrength = nextStrength;
    }
  }
}

Direct click = strength 1.0. First neighbors get 0.4. Their neighbors get 0.16, which is too weak to keep going. This makes the wave fade naturally, just like the real installation.
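
The decay can be sketched in isolation (a framework-free illustration of the 0.4 multiplier and 0.15 cutoff from the snippet above; the helper name is mine):

```javascript
// Hedged sketch: list the strength at each hop until the *next* hop
// would fall below the cutoff and propagation stops.
function hopStrengths(initial, factor = 0.4, cutoff = 0.15) {
  const out = [initial];
  while (out[out.length - 1] * factor > cutoff) {
    out.push(out[out.length - 1] * factor);
  }
  return out;
}
```

hopStrengths(1.0) yields three rings (1.0, 0.4, ~0.16) before the wave dies out, matching the 2–3 hop behavior described above.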

Creative twist: “Water Memory.” In the real artwork, the water goes dark after each interaction. In my version, every touch leaves glowing color marks on the water that slowly fade. After many clicks, the water becomes a colorful painting of everywhere you have touched.

Embedded Sketches

Phase 1 — The Scene

Dark room, visible water, ovoids floating with internal swirl patterns, and a ceiling mirror. No interaction yet.

Phase 2 — Interaction

Click to glow, chain reaction that fades after 2-3 hops, ripples, mouse push, ceiling mirror lights up.

 

Final — Water Memory

Same as Phase 2 plus the creative twist: colored marks stay on the water after each touch.

Milestones and Challenges

Chain reaction spreading everywhere. My first version activated all neighbors at full strength; every egg in the room lit up from one click. The fix was to make each hop weaker (strength × 0.4 each time), so it dies out after 2-3 rings.

Water not visible. My first water was too dark, almost the same color as the ceiling. I made it much lighter (dark blue-purple) and added a bright shimmer line at the surface edge.

Reflection and Future Work

My sketch captures the core of “Floating Microcosms”: push an egg, watch light spread. But the real artwork has things I cannot recreate: the physical feeling of pushing a real sculpture, the sound each egg makes, warm water on your feet, and the infinite mirror ceiling.

Ideas for the future:

  • Add sound so each ovoid plays a tone when it glows
  • Use WebGL for real 3D translucent eggs
  • Make ovoids bump into each other when they collide

References

 

Midterm — “Shifting Grounds”

What Is This Project?

I built a desert that builds itself.

It is not a painting or an image I drew. It is a program that creates a new desert landscape every time you run it. The dunes grow, move, shake, get rained on, and then settle into a final peaceful scene, all on their own, with no human drawing anything.

The system moves through four stages, like a story:

  1. Wind — pushes the sand around and builds up the dunes
  2. Tremor — the ground shakes, and the dunes collapse and spread out
  3. Rain — water smooths everything down and darkens the sand
  4. Stillness — everything stops. The desert rests

Every time you press “New Seed,” you get a completely different desert. Same rules, different result. That is what makes it generative art: the system creates the art, not me.

Why a Desert?

I am from the UAE. I grew up around the desert. Most people think sand dunes just sit there, but they actually move and change shape constantly. Wind pushes sand from one side to another. After a storm, the dunes look completely different. When it rains (which is rare), the sand turns dark and the surface becomes smooth.

I wanted to recreate that in code. Not a realistic photograph, but the feeling of how a desert changes over time.

How It Works — The Big Idea

Layers Create Depth

The desert you see on screen is made of seven layers stacked on top of each other, like layers of paper.  The layers in the back are pale and barely move. The layers in the front are golden, tall, and move faster. This creates a feeling of distance and depth, even though everything is flat.

Each layer has its own terrain, a line of hills and valleys that represents the top of the sand. This terrain is stored as a list of numbers. Each number says, “how tall is the sand at this spot?” When the program draws the layer, it connects all those heights with smooth curves, fills everything below with color, and that is your dune.
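
A framework-free sketch of that representation: one array of heights per layer. Here a pair of sine octaves stands in for p5’s noise() so the snippet runs on its own; the real sketch uses Perlin noise, and the constants are illustrative.

```javascript
// Hedged sketch: a layer's terrain is just an array of normalized
// heights, one per horizontal sample point.
const NUM_POINTS = 150;

function makeTerrain(seed) {
  const h = new Array(NUM_POINTS);
  for (let i = 0; i < NUM_POINTS; i++) {
    // smooth pseudo-terrain: two sine "octaves" offset by the seed
    h[i] = 0.5 +
      0.3 * Math.sin(i * 0.08 + seed) +
      0.1 * Math.sin(i * 0.23 + seed * 2);
  }
  return h;
}
```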

The Sky Changes With Each Phase

The sky is not just a static background. It changes color depending on which phase is active:

  • Wind has a warm golden sunset sky
  • Tremor has a dark, heavy, ominous sky
  • Rain has a cool grey-blue sky
  • Stillness has a peaceful, warm dawn

The sky smoothly fades from one palette to the next when the phase changes. This makes the transitions feel natural instead of sudden.

The Four Phases — What Each One Does

Wind — Building the Dunes

This is the first and most important phase. The wind is what gives the dunes their shape.

Here is how it works in simple terms: the program looks at each point on the terrain and asks, “how strong is the wind here?” The wind strength is not random; it uses something called Perlin noise, which creates smooth, flowing patterns (think of it like a weather map where nearby areas have similar wind). Where the wind is strong, it picks up sand from that spot and drops it a little further along. Over many frames, this creates realistic dune shapes, ridges, valleys, and peaks.

But there is a problem: if sand just piles up forever, you get impossibly steep spikes. That does not happen in real life because sand slides when it gets too steep. So the program checks every point: “is the slope here steeper than sand can actually hold?” If yes, the excess sand slides down to the neighbors. This rule is called the angle of repose and it is from real physics.

There is also a safety check: the total amount of sand never changes. Sand is not created or destroyed, only moved from one place to another. This keeps the terrain looking realistic.

// Wind force from Perlin noise
let windNoise = noise(i * 0.05, t + layer.seedOffset * 0.001);
let windForce = map(windNoise, 0, 1, -0.4, 0.4 + WIND_BIAS) * spd;

let amount = windForce * windStrength;

// Move sand in wind direction
let target = windForce > 0 ? i + 1 : i - 1;
target = constrain(target, 1, NUM_POINTS - 2);

h[i] -= amount;
h[target] += amount;
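
The slope-stability rule described above can be sketched like this (an illustrative pass assuming a maximum allowed height difference maxDiff between neighbors; this is not the original implementation):

```javascript
// Hedged sketch of the angle-of-repose rule: if two neighbors differ
// by more than maxDiff, half the excess slides downhill. Sand is only
// transferred, never created or destroyed, so the total is conserved.
function relaxSlopes(h, maxDiff) {
  for (let i = 0; i < h.length - 1; i++) {
    const diff = h[i] - h[i + 1];
    if (Math.abs(diff) > maxDiff) {
      const excess = (Math.abs(diff) - maxDiff) / 2;
      if (diff > 0) { h[i] -= excess; h[i + 1] += excess; }
      else          { h[i] += excess; h[i + 1] -= excess; }
    }
  }
  return h;
}
```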

Tremor — Shaking Things Up

After the wind has built up nice, tall dunes, the ground shakes.

The tremor does not just wobble the screen. It actually changes the terrain. Here is what happens:

  1. Tall dunes collapse. The program finds every point that is above average height and pulls it downward. The sand that falls off the top gets spread to nearby points. So tall, sharp dunes become wider and flatter, just like real sand behaves during a sandstorm.
  2. The layers shake. Each layer moves up and down by a small random amount every frame. The front layers shake a lot, the back layers barely move. This creates a convincing sandstorm effect.
  3. Dust rises. Small brown particles spawn from the tops of the front dunes and float upward, like dust being kicked up by the vibration.

The tremor starts gently and builds up over time (a “cold start”), which makes it feel like a real sandstorm building in intensity.

let diff = h[i] - avg;

if (diff > 0.01) {
  let fall = diff * TREMOR_EROSION * spd * (0.2 + power);
  h[i] -= fall;

  // Sand spreads to 3 neighbors on each side
  h[i - 1] += fall * 0.22;
  h[i - 2] += fall * 0.15;
  h[i - 3] += fall * 0.08;
  h[i + 1] += fall * 0.22;
  h[i + 2] += fall * 0.15;
  h[i + 3] += fall * 0.08;
}

Rain — Smoothing Everything

Rain does two things to the terrain:

  1. Splash erosion. When rain hits sand, it smooths it out. In the code, each point’s height gets averaged with its two neighbors. High points go down a little, low points come up a little. Over time, this erases sharp edges and makes everything gentler.
  2. Water flows downhill. Wherever one point is higher than the next, some sand flows from the high side to the low side, like water carrying sediment. This flattens the terrain even further.
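
The splash-erosion step can be sketched as a neighbor average (an illustrative version; the name and the strength parameter are mine):

```javascript
// Hedged sketch of splash erosion: blend each interior point toward
// the average of itself and its two neighbors. strength in [0, 1]
// controls how fast sharp edges are erased.
function splashErode(h, strength) {
  const out = h.slice();
  for (let i = 1; i < h.length - 1; i++) {
    const avg = (h[i - 1] + h[i] + h[i + 1]) / 3;
    out[i] = h[i] + (avg - h[i]) * strength;
  }
  return out;
}
```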

You can see raindrops falling on the screen as small white streaks. When a drop hits the front dune, it kicks up 2-3 tiny sand splash particles that fly upward, a small detail that makes it feel alive.

The coolest visual effect in this phase is the wet sand. When it rains, the sand slowly darkens. Each layer has two colors: a dry color (warm golden) and a wet color (dark brown). As rain continues, the colors blend toward the wet version. The back layers get very dark grey-brown, and the front layers get rich brown. This creates a strong sense of depth when everything is wet; you can clearly see each layer separated by color.

// Blend between dry and wet color based on wetness
let r = lerp(this.dryColor[0], this.wetColor[0], this.wetness);
let g = lerp(this.dryColor[1], this.wetColor[1], this.wetness);
let b = lerp(this.dryColor[2], this.wetColor[2], this.wetness);
fill(r, g, b);

Stillness — The Quiet Ending

Nothing moves. The terrain is frozen exactly as the rain left it. The sand slowly dries back to its original golden color. Any remaining shake from the tremor settles to zero. The sky fades to a warm, peaceful dawn.

This is the “take a photo” moment. The desert has been through wind, tremor, and rain, and now it rests.

The Auto Timeline

The sketch runs all four phases automatically in sequence. You just press play and watch:

  • Wind runs for 10 seconds
  • Tremor runs for 8 seconds
  • Rain runs for 9 seconds
  • Stillness stays forever
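
The schedule above can be sketched as a lookup from elapsed seconds to the active phase (a minimal illustration using the durations listed; the function name is mine):

```javascript
// Hedged sketch of the auto timeline: pick the active phase from
// elapsed seconds. Wind 10 s, Tremor 8 s, Rain 9 s, then Stillness.
function phaseAt(seconds) {
  if (seconds < 10) return "wind";      // 0-10 s
  if (seconds < 18) return "tremor";    // next 8 s
  if (seconds < 27) return "rain";      // next 9 s
  return "stillness";                   // forever
}
```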

Press Space to start over with a brand new landscape.

The Controls

There is a thin bar at the top of the screen with simple controls:

  • Wind / Tremor / Rain / Still — click any phase to jump to it manually
  • When you select a phase, a slider appears that lets you adjust that phase’s strength (how strong the wind blows, how powerful the tremor is, how fast the rain erodes)
  • Auto — toggles the automatic timeline on/off
  • New Seed — generates a completely new desert

The UI was designed to be minimal and not distract from the artwork. It uses warm gold text on a dark transparent bar, matching the desert color palette.

Techniques From Class

This project uses several techniques we learned in class:

Perlin Noise — I use noise in two ways. First, to generate the initial terrain shape for each dune layer (the hills and valleys). Second, to create the wind field, noise gives me smooth, flowing wind patterns where nearby points have similar wind strength, just like real weather.

Forces — Wind pushes sand along the terrain. Gravity pulls raindrops and dust particles downward. The angle of repose redistributes sand when slopes are too steep. These are all force-based systems.

Classes — I built three classes to organize the code:

  • DuneLayer handles everything about one layer of dunes (its height, color, position, drawing)
  • Sky manages the gradient background and phase transitions
  • ParticleSystem handles all the floating particles (dust, rain, splashes)

Arrays — Each dune layer stores its terrain as an array of 150 height values. All the physics (wind, tremor, rain) works by reading and modifying these arrays every frame.

Oscillation — During the tremor phase, each layer shakes up and down in a jittery motion. Front layers shake more, back layers shake less, creating a convincing depth effect.

Color Lerping — The lerp() function blends between two values smoothly. I use it everywhere: blending sky colors between phases, blending sand between dry and wet colors, fading particle transparency, and fading the phase label text.

The Three Prints

I captured three high-resolution images from the system, one from each active phase. Together, they tell the story of one desert going through three stages of change.

Print 1 — Wind

Warm golden sky. Sharp dune ridges carved by wind. The sand is dry and bright. This is the desert being actively shaped, the most dynamic moment.

Print 2 — Tremor

Dark, heavy sky. The tall dunes from the wind phase have collapsed and spread out. Dust particles hang in the air. The landscape has been shaken apart.

Print 3 — Rain

Cool grey-blue sky. White rain streaks fall across the scene. The sand has turned dark brown from moisture. The terrain is smoother, peaks are lower, and sharp edges are gone. A quiet, moody moment.

Video

A walkthrough of the full system: the auto timeline playing through all four phases, followed by manual switching between modes and adjusting the strength sliders.

How I Built It — The Process

I did not build everything at once. The project was developed in six phases, each one adding a new feature on top of the last. I tested each phase and made sure it worked before moving on:

Phase 1 — Foundation. I built the seven-layer dune system, the sky gradients, and the smooth curve rendering. No physics yet, just the visual base. The big decision here was making each layer auto-calculate all its properties (color, height, position, speed) from just its index number. This meant I could change the number of layers without rewriting anything.

Phase 2 — Wind. Added wind transport and slope stability. This was the hardest part of the whole project. If the wind is too strong, everything flattens instantly. If the slope rules are too strict, nothing interesting happens. Finding the right balance took a lot of trial and error. I also tried adding floating wind particles at first, but they looked messy and disconnected from the terrain. I removed them; the dune movement itself shows the wind better than any particle could.

 

Phase 3 — Tremor. Added the tremor effect with peak erosion, per-layer shaking, and dust particles. To activate tremor mode, press “T.” The sky transition was tricky; my first wind and tremor palettes looked too similar, so the change was not noticeable. I made the palettes more distinct and sped up the transition. I also experimented with a dust haze overlay, but it looked like a flat layer on top of the terrain, so I removed it.

Phase 4 — Rain. Added splash erosion, runoff, raindrops, and the wet sand color system. To activate rain mode, press “R.” The rain went through many iterations. At first, it came in bursts instead of a steady drizzle, so I adjusted the spawn rate and distributed drops across the full screen height. The wet sand color also needed depth-aware tuning: initially, all layers darkened to the same tone, which made the back layers hard to see. Assigning each layer a different wet color (darker for the back, warmer for the front) fixed this issue.

Phase 5 — Stillness + Timeline. Added the fourth phase, where everything stops, and the auto timeline that advances through all four phases automatically. A small phase label fades in at the bottom-left when each phase starts.

Phase 6 — UI. Added the top bar with phase buttons, contextual sliders, auto toggle, and a new seed button. The first version was a big panel on the right side, but it took too much space and did not feel right for a class project. I simplified it to a thin bar at the top.

What I Learned and What I Would Change

What works well:

  • The seven layers create a real feeling of depth and distance
  • The phase transitions feel smooth and natural, sky and terrain change together
  • The wet sand darkening during rain is subtle but makes a big difference
  • The auto timeline tells a complete story without any user input

What I would do differently next time:

  • Add a 3D perspective view instead of the flat side view, this would make the prints more dramatic
  • Add sound, wind howling, rain pattering, ground rumbling during tremor
  • Make the timeline longer with slower, more gradual transitions
  • Add mouse interaction, drag to create wind, click to trigger tremors
  • Try different environments, snow, ocean waves, volcanic landscapes using the same system

References

  • R.A. Bagnold — The Physics of Blown Sand and Desert Dunes (1941). The classic science book about how wind moves sand and shapes dunes. This is where I learned about saltation (how wind picks up and drops sand grains).
  • Angle of Repose — A concept from granular mechanics (the science of how piles of material behave). It is the steepest angle a pile of sand can have before it slides. This rule is what keeps my dunes looking realistic.
  • Ken Perlin — Perlin Noise (1983). The algorithm I use to generate smooth, natural-looking randomness for both terrain and wind patterns.
  • Soil Liquefaction — A real phenomenon where vibration makes sand temporarily act like liquid. This is the idea behind my tremor phase.
  • Daniel Shiffman — The Nature of Code. The textbook for this course. Used as a general reference for forces, noise, and particle systems in p5.js.