Afra Binjerais – Final Project

Interactive Sadu Weaving Environment

Project Overview

My final project explores how traditional UAE heritage can be translated into a contemporary interactive digital environment. The work is inspired by Al Sadu, a traditional Bedouin weaving practice known for its geometric motifs, repetition, rhythm, and handcrafted structure.

The project imagines the user as a weaver. Through hand gestures captured by a webcam, the user interacts with a woven digital textile in real time. Their movement shapes patterns, places motifs, and disturbs the surface of the cloth, creating a continuously evolving visual field.

The artistic intention is to explore how cultural heritage can exist dynamically through technology. Instead of presenting tradition as something static, the project allows it to be experienced as something responsive, alive, and constantly transforming.

The key themes explored are:

  • Handcraft vs algorithm
  • Tradition translated through code

Implementation Details

The project was developed using p5.js and ml5.js HandPose tracking. The webcam detects the user’s hand, and the index finger is used as the primary interaction point. This position is mapped from camera space to canvas space and used to interact with a grid-based woven system.
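As a rough sketch of how that mapping can look (this assumes the current ml5.js handPose API and mirrors the camera horizontally; the variable names are mine, not necessarily the project's exact code):

let video, handPose, hands = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // run detection continuously; results land in the callback
  handPose.detectStart(video, results => hands = results);
}

function draw() {
  background(20);
  if (hands.length > 0) {
    // keypoint 8 is the index fingertip in the hand landmark model
    let tip = hands[0].keypoints[8];
    // map camera space to canvas space, mirrored like a webcam
    let x = map(tip.x, 0, video.width, width, 0);
    let y = map(tip.y, 0, video.height, 0, height);
    circle(x, y, 12);
  }
}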

The project evolved through four main milestones:

Milestone 1: Basic Interactive Weaving System

The first version established the core interaction. A grid of cells represented a simplified woven surface, where each cell could either display a background block or a basic cross motif.

Using HandPose, the position of the user’s index finger was tracked and mapped onto the canvas. When the finger moved across the grid, nearby cells flipped states, creating the effect of disturbing or weaving the pattern.

Outcome:
This stage confirmed that the core interaction worked. However, the visual system was limited, and the interaction felt more like toggling pixels rather than weaving a rich textile.

Milestone 2: Motif Expansion and Generative Behavior

In the second version, the visual complexity increased significantly. Multiple Sadu-inspired motifs were introduced, including cross, diamond, stripe, and block-based patterns.

A cellular automata system was added, allowing the weave to evolve over time. Cells could grow, spread, or disappear depending on their neighbors, making the textile feel alive even without user input. Hand speed was also incorporated as a parameter. Faster movement resulted in a larger brush radius, allowing the user to affect a wider area of the weave.
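The speed-to-radius mapping is simple in principle; a minimal sketch of the idea (the thresholds and names are my placeholders):

let prevX = 0, prevY = 0;

function brushRadius(fingerX, fingerY) {
  // pixels moved since the last frame = hand speed
  let speed = dist(fingerX, fingerY, prevX, prevY);
  prevX = fingerX;
  prevY = fingerY;
  // faster movement -> wider brush, capped at 5 grid cells
  return floor(map(constrain(speed, 0, 40), 0, 40, 1, 5));
}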

Key additions:

  • Multiple motif types
  • Sadu-inspired color palette
  • Cellular automata system
  • Hand speed controlling brush size
  • Keyboard-based motif switching

Outcome:
The system became more generative and dynamic. The textile was no longer static but continuously evolving. However, interaction was still partially dependent on the keyboard, which interrupted the flow of the experience.

Milestone 3: Interface Design and User Experience

This stage focused on improving usability and visual clarity. Motif selection was moved into visible buttons, making the system easier to understand. The interaction area was restricted to the woven field, preventing accidental input in the interface zones.

Outcome:
The interface became more intuitive and readable. However, motif selection still relied on mouse interaction, which created a disconnect from the hand-based system. This led to the next stage, where I wanted the hand itself to hover over the buttons.

Milestone 4: Gesture-Based Interaction and Flow Field System

The final version fully integrates gesture-based interaction across the system. Users can select motifs by pointing at buttons and holding their hand steady for three seconds. A visual progress bar provides feedback during this interaction. The most significant development in this stage is the transformation of Motif 4 into a flow field system.
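A minimal sketch of that dwell interaction, the three-second hold with a progress read-out (the button fields and names are my assumptions):

let dwellStart = null;
const DWELL_MS = 3000;

// returns 0..1 progress for the feedback bar; fires btn.select() on completion
function updateDwell(fingerX, fingerY, btn) {
  let over = fingerX > btn.x && fingerX < btn.x + btn.w &&
             fingerY > btn.y && fingerY < btn.y + btn.h;
  if (!over) { dwellStart = null; return 0; }
  if (dwellStart === null) dwellStart = millis();
  let progress = (millis() - dwellStart) / DWELL_MS;
  if (progress >= 1) { btn.select(); dwellStart = null; return 1; }
  return progress;
}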

Unlike the other motifs, which are governed by cellular automata, Motif 4 operates as a dynamic flow field. Each cell contains directional movement, causing the internal elements of the motif to continuously shift and distort over time. The user’s hand acts as a force within this system. 
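A sketch of what one flow-field cell might look like, with the hand bending nearby directions (all names and constants here are illustrative, not the project's actual code):

class FlowCell {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.angle = random(TWO_PI);
  }
  update(handX, handY) {
    // directions drift slowly via noise, so the motif never settles
    this.angle += map(noise(this.x * 0.05, this.y * 0.05, frameCount * 0.01), 0, 1, -0.05, 0.05);
    if (dist(handX, handY, this.x, this.y) < 80) {
      // the hand acts as a force, steering the cell toward it
      let toHand = atan2(handY - this.y, handX - this.x);
      this.angle += 0.1 * sin(toHand - this.angle);  // wrap-safe nudge
    }
  }
}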

This creates a layered interaction:

  • Motifs 1–3 behave as structured, generative weaving systems
  • Motif 4 behaves as a fluid, responsive flow field shaped by gesture

Motif 4 is also excluded from the cellular automata system, meaning it does not grow or decay automatically. It remains as a direct trace of the user’s interaction.

Key additions:

  • Gesture-based UI (hover + dwell interaction)
  • Flow field system for Motif 4
  • Hand-influenced distortion behavior
  • Separation of algorithmic and gesture-driven layers

Outcome:
This version successfully merges interaction, generative systems, and visual expression. The system now reflects both structured weaving and fluid transformation, aligning more closely with the conceptual goals of the project.

Technical Process

The project is built around a grid system where each cell represents a unit of the woven textile. Each cell stores a value corresponding to a motif type. Hand tracking data from the webcam is mapped to the canvas, allowing the user to “paint” motifs onto the grid. Hand speed determines brush size, creating variation in interaction.

A cellular automata system controls the growth and decay of Motifs 1–3 based on neighboring cells, introducing generative behavior.

Motif 4 operates differently. It uses a flow field where each cell has a directional vector that affects how its internal elements are drawn. This produces continuous motion and distortion.

Video Documentation

Reflection

My project translates elements of traditional Sadu weaving into an interactive digital form. The use of hand tracking allows the user to engage with the system in a physical and intuitive way, reinforcing the idea of weaving as a gesture-based practice.

One of the most important developments was introducing the flow field in Motif 4. This created a contrast between structured, rule-based generation and fluid, responsive movement. It reflects the tension between tradition and transformation, which is central to the project’s concept.

A key challenge was balancing user control with generative behavior. Early versions either felt too static or too unpredictable. Separating Motif 4 from the cellular automata helped resolve this by giving the user a more direct and lasting impact on the system. I was also struggling with how to work the Decoding Nature code concepts into this idea, and I felt that Motif 4 helped with that.

For future improvements, I would like to:

  • Add sound elements that respond to interaction
  • Develop more motifs based on authentic Sadu patterns
  • Introduce gesture variations (e.g., open palm, multiple hands)
  • Allow users to save or export their woven compositions (something I wanted to do, but shelved due to time constraints)

F1 Track: Multi-Vehicle Steering Behaviors – Assignment 8

The Concept

So after doing the F1 attractor simulation in assignment 3, I kept thinking about it. It worked, it looked cool, but something about it always felt a bit… mechanical. The car was basically just getting yanked from point to point by invisible gravity wells. There was no real intelligence there, it was just physics doing all the work.

When we started going through the steering behaviors in class, path following, separation, the flocking stuff, I immediately thought: this is how I can rebuild that track properly. Not with attractors pulling a car around, but with a car that actually decides where to go, that reads the path and steers toward it, and knows to stay away from other cars around it. That difference matters to me. One feels like a simulation, the other feels like behavior.

So the idea for this assignment was to reimplement my F1 track from assignment 3 but change the physics underneath. Instead of attractors, I define an actual closed Path using the same centerline points. Then I put five cars on it, each with their own random top speed, and let them figure it out. Path following gets them around the track, separation keeps them from piling up on each other.

The thing I really wanted to see was whether giving each car a slightly different speed would naturally create that staggered grid effect you see in real racing where fast cars pull away and slow ones fall behind. It absolutely does and I love how it turned out.

The Physics Behind It

The two behaviors powering everything here come straight from what we did in class.

Path Following works by looking ahead: each car projects a “future position” 30px in front of itself based on its current velocity. It then finds the closest point on the path (the normal point) to that future position. If the future position has drifted more than the path radius away from the centerline, the car steers toward a target 25px ahead on that segment. If it’s still within the band, it does nothing and just keeps going.
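In code, the look-ahead logic is roughly this (following the structure of the class example; closestNormal and seek are assumed helpers):

follow(path) {
  // project a future position 30px ahead along the current velocity
  let future = this.vel.copy().setMag(30).add(this.pos);
  // closest normal point on the path, plus that segment's direction
  let { point, dir } = path.closestNormal(future);
  // only correct course once we've drifted outside the path band
  if (p5.Vector.dist(future, point) > path.radius) {
    let target = p5.Vector.add(point, dir.copy().setMag(25));
    return this.seek(target);
  }
  return createVector(0, 0);
}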

Separation is the same weighted-distance pattern from the flocking exercise. Each car checks all neighbors within a 55px radius, and for each one it builds a repulsion vector pointing away, scaled by 1/d so closer cars get pushed harder. That sum gets turned into a steering force. I weighted separation at 1.8× and path following at 1.0×, so when two cars are about to collide, staying apart wins over staying on the line. In practice this means they’ll briefly drift wide in a corner to avoid each other, then snap back, which honestly looks exactly like real racing.
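The separation force, in the same weighted-distance shape as the flocking exercise (a sketch, assuming topSpeed and maxForce fields on each car):

separate(cars) {
  let steer = createVector(0, 0);
  let count = 0;
  for (let other of cars) {
    let d = p5.Vector.dist(this.pos, other.pos);
    if (other !== this && d > 0 && d < 55) {
      // point away from the neighbor, weighted by 1/d
      let diff = p5.Vector.sub(this.pos, other.pos).normalize().div(d);
      steer.add(diff);
      count++;
    }
  }
  if (count > 0) {
    steer.div(count).setMag(this.topSpeed).sub(this.vel).limit(this.maxForce);
  }
  return steer;   // applied as this.applyForce(sep.mult(1.8))
}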

Each car also gets a random top speed assigned at setup, between 4.0 and 7.0 units per frame. That’s the one thing I really wanted to explore and it gives the whole thing this organic feel where cars naturally pull away from each other over time instead of bunching up into a permanent traffic jam.

Building It Up: Milestones & Challenges

Milestone 1: One Car, One Path

I started simple, just get a single car to follow the path before worrying about anything else. I defined the Path class using the same centerline coordinates I had mapped out in assignment 3, set the radius to 15px so there’s a comfortable band to work in, and got one car steering around the track.

Here’s Milestone 1:

This was actually more annoying to get right than I expected. The path is closed, which means I’m looping through all segments including the one that wraps from the last point back to the first. For a while I forgot to do (i + 1) % pts.length on the segment index so the car would follow the track fine for 12 segments and then just fly off the canvas when it hit the end. Once I fixed the wraparound it went smoothly.

I also had to think about what happens when the normal point falls outside a segment like when the car is near a corner and the normal projects past the endpoint. I clamped it using a dot product check, same way the vehicle path sketch from class handles it. Once the clamping was in, corners became much smoother.
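Both fixes together look roughly like this (a sketch of the idea, not my line-for-line code):

function getNormalPoint(p, a, b) {
  let ap = p5.Vector.sub(p, a);
  let ab = p5.Vector.sub(b, a);
  ab.normalize();
  // dot product = scalar projection of ap onto the segment direction
  let d = ap.dot(ab);
  // clamp so the normal point never falls past either endpoint
  d = constrain(d, 0, p5.Vector.dist(a, b));
  return p5.Vector.add(a, ab.mult(d));
}

// loop over ALL segments, including the one wrapping back to the start
for (let i = 0; i < pts.length; i++) {
  let a = pts[i];
  let b = pts[(i + 1) % pts.length];   // the wraparound fix
  let normal = getNormalPoint(future, a, b);
  // ...keep whichever normal point is closest to the future position
}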

Milestone 2: Adding the Other Four Cars

Once one car worked I duplicated it out to five, gave each one a random speed, and staggered their starting positions around the track so they wouldn’t spawn on top of each other. I used five positions that are roughly equally spaced: one at the top of the track, two in the straight sections, and two in the lower curves.

The first run with five cars immediately showed me the separation wasn’t strong enough. They’d follow the path totally fine individually but the moment two of them got close they’d just kind of phase through each other because the separation force wasn’t overriding the path force. I bumped the separation weight from 1.0 to 1.8 and that was enough; they now visibly push each other apart without losing the track completely.

Challenge: Minimum Speed vs. Separation Pushing Cars Off Track

The hardest thing to balance was what happens when separation is pushing a car sideways and the car’s velocity drops below the minimum threshold at the same time. My minimum speed enforcement was vel.setMag(topSpeed * 0.45) which always points the velocity in the direction it’s already going — but if separation had rotated that direction sideways or slightly off-track, locking in the minimum speed in that direction would send the car drifting into the infield.

The fix was ordering things correctly: I apply all behavior forces first, then let vel.add(acc) happen naturally, then apply vel.limit(topSpeed) for the max, then the minimum check. That way the minimum speed respects whatever combined steering direction the car has already settled on for that frame, rather than fighting the other forces. Once I got the order right the cars stopped randomly deciding to drive through the grass.
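The resulting per-frame order, sketched out (field names are mine):

update() {
  this.vel.add(this.acc);            // 1. integrate all steering forces
  this.vel.limit(this.topSpeed);     // 2. enforce the maximum speed
  if (this.vel.mag() < this.topSpeed * 0.45) {
    // 3. minimum speed LAST, so it respects the combined steering direction
    this.vel.setMag(this.topSpeed * 0.45);
  }
  this.pos.add(this.vel);
  this.acc.mult(0);                  // clear accumulated forces for next frame
}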

Milestone 3: Colored Cars and Trail Polish

Once the behavior was solid I went back to visuals. I gave each car its own color based on real F1 team colors: Ferrari red, Williams blue, McLaren orange, Aston green, a purple one. The trail draws with per-car color and fades using the same opacity and stroke weight gradient from assignment 3.

I also added a showPath toggle on the P key so you can see the path band overlaid on the track. This was really just for debugging but I kept it in because it’s actually interesting to see how the path sits right on the centerline and how the cars drift around within the band.

The Final Result

Five cars, each with a different top speed, following a closed path around the same F1 track from assignment 3. Faster cars pull ahead and lap slower ones. When two cars get close their separation forces kick in and they move apart like they’re actually racing side by side instead of overlapping.

Controls:

  • P — toggle path debug overlay
  • R — reset cars with new random speeds
  • S — save frame

 

Reflection & Future Work

What I find genuinely interesting about this compared to assignment 3 is how different the result feels even though the track looks identical. The attractor version felt deterministic; the car went where the physics sent it. This version feels like the cars have preferences. They want to stay on the path, but they also want space. Watching two cars negotiate a corner together, with one briefly drifting wide to give the other room, is really satisfying.

What I learned:

  • Path following with look-ahead handles closed loops way more gracefully than I thought it would. The math is simple but it generalizes well.
  • Steering behavior weights matter a lot, changing separation from 1.0 to 1.8 was the difference between cars clipping through each other and cars that actually race properly.
  • Force application order is not trivial. You have to think about what each operation is doing to the velocity vector before the next one sees it.
  • The same visual output can feel completely different depending on the system underneath it.

What I’d add next:

  • Lap times: record how long each car takes per lap and display a live leaderboard
  • Slipstream effect: if you’re directly behind a car you should go slightly faster (draft effect)
  • Pit stops: a car exits the path, slows down in a pit lane area, then re-enters at a fresh speed

Final Project update

This project ended up being inspired by bukhoor, but not in a literal way. I was more interested in how it feels than how it looks. Bukhoor is something small and everyday, but it carries a lot, like memory, comfort, and presence, mostly through the smoke. The smoke became the main thing for me because it is always moving and never fixed.

While building my p5.js sketch, I started focusing less on making a realistic object and more on building a system. The piece is made of different parts that work together: the embers, the smoke, and the environment. The smoke especially became central because it reacts, grows, fades, and shifts over time. It is not something you can fully control, which felt important to keep.

After user testing, I realized the project was stuck between being realistic and abstract in a way that did not feel intentional. That made me rethink my direction. I considered pushing it to be fully stylized and less real, but instead I worked on balancing it better. I kept the expressive and generative aspects, but made parts of the bukhoor, especially the madhkhanah, feel more grounded.

I also started working more on the background. I added very subtle Islamic geometric patterns using a fractal system. They are not meant to stand out, but to sit behind everything and give context. It was important for me that they do not overpower the smoke or the interaction.

In the end, the project is not about showing bukhoor as an object. It is more about building an atmosphere that you experience over time. It sits somewhere between something you recognize and something that is constantly changing.

Final Project Progress

The Milestone

Over the last week, I made massive headway on my project. The biggest hurdle was getting the complex ml5.js Handpose computer vision model to successfully talk to my custom physics and flocking simulation.

I’ve managed to get the core interaction loop fully functional! I wrote a custom pose-classification function that measures the distance between the palm and the fingertips to figure out what gesture the user is making.
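The classifier boils down to comparing fingertip-to-palm distances against a threshold; a hedged sketch of the idea (the landmark indices follow the standard hand model, and the threshold is a placeholder, not my tuned value):

// hypothetical pose classifier: small average fingertip-to-wrist
// distance reads as a fist, large as an open hand
function classifyGesture(hand) {
  let palm = hand.keypoints[0];           // wrist / palm base landmark
  let tips = [8, 12, 16, 20];             // index..pinky fingertips
  let avg = 0;
  for (let i of tips) {
    avg += dist(palm.x, palm.y, hand.keypoints[i].x, hand.keypoints[i].y);
  }
  avg /= tips.length;
  return avg < 60 ? 'fist' : 'open';      // threshold tuned by eye
}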

The Working Prototype

(Make sure you are in a well-lit room and give the model a few seconds to load. Try making a tight fist, and then suddenly opening your hand!)

Technical Hurdles & Fixes

Getting the interaction to work was only half the battle; getting it to run smoothly is the real challenge. Combining an $N^2$ flocking simulation (where every agent checks every other agent) with a live neural network absolutely tanked my framerate at first.

To get this ready for user testing, I had to optimize heavily:

  • Performance Tuning: I tried lowering the hidden webcam capture resolution so the machine learning model had less data to crunch, but that caused some issues with hand detection in low lighting. I can also optimize the boids’ visual rendering, stripping out some of the heavier additive blending layers that were killing the GPU, and slightly reduce the total population, but I don’t know what I’ll actually do.

  • Jitter Smoothing: Raw webcam data is incredibly noisy. If I mapped the flock’s target directly to the raw hand coordinates, everything vibrated uncontrollably. I implemented vector smoothing (lerp) so the digital orb that tracks your hand glides smoothly across the screen.
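A minimal version of that smoothing (the 0.15 factor is illustrative):

let smoothPos = null;

function smoothHand(rawX, rawY) {
  if (!smoothPos) smoothPos = createVector(rawX, rawY);
  // lower factors glide more but lag more; higher factors track faster
  smoothPos.lerp(createVector(rawX, rawY), 0.15);
  return smoothPos;
}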

Next Steps

The sketch is finally in a place where I can put it in front of people. For user testing, my main goal is to see if the gestures (fist vs. open hand) feel intuitive, and if the visual feedback of the glowing orb clearly communicates what the user is doing to the swarm.

Honestly, I still think the direction of this project could do a 180 any time.

Final Project Proposal

Concept & Artistic Intention

For my final project, I am building an interactive digital environment that revolves around a flock of autonomous agents.

The artistic intention is to explore the tension between curiosity and fear in nature. The user plays the role of a foreign, glowing entity intruding on this dark abyss. I want the environment to feel organic, slightly eerie, and highly responsive to physical presence, stepping away from standard mouse-and-keyboard inputs.

Interaction Methodology

To achieve an unconventional interface, I will use ml5.js (Handpose) to track the user’s hand via webcam. The user’s hand will act as a massive physical force field within the simulation.

The interaction is mapped to specific hand gestures:

  • Neutral / No Hand: The boids exhibit standard, calm flocking behavior (wandering, aligning).

  • Closed Fist: The sketch interprets this as a small, dense, magnetic energy source. It triggers a Curious Attraction force. The boids tighten their formation and slowly swarm toward the hand.

  • Open Hand (Fingers Spread): The sketch interprets this as a sudden, bright flash of energy or a predator. It triggers a violent repulsion force. The flock’s cohesion drops to zero, their speed spikes, and they scatter away from the hand in a panic.

Canvas Design & User Experience

The visual aesthetic will rely heavily on blendMode(ADD) to create glowing, stacking neon colors against a near-black “abyssal” background.

The webcam feed will be horizontally flipped (so it acts like a mirror) but heavily tinted and darkened so it barely registers in the background.

To give the user immediate visual feedback of where their hand is in the digital space, a glowing orb will track their palm. The orb will change color and size based on the detected gesture (e.g., a tight cyan core for a fist, a large pulsing magenta explosion for an open hand).

Initial Explorations & Technical Plan

While I am not including the code in this proposal, I have already begun prototyping the physics. I might reuse code from the boids assignment initially to get an idea.

The biggest technical challenge I anticipate is performance. Running an $N^2$ flocking simulation (where every boid checks every other boid) at the same time as a neural network (ml5.js) is heavy on the browser.

My technical roadmap involves:

  • Optimizing the boid math by limiting interaction radii.

  • Lowering the background webcam capture resolution to speed up the ML model.

  • Refining the heuristic math that determines what constitutes a “fist” versus an “open hand” by calculating the distance between the fingertip landmarks and the palm base.

Assignment 11

Concept

For this assignment, I wanted to explore something that felt truly organic. My sketch is built on a mathematical model called Reaction-Diffusion (specifically the Gray-Scott model).

The concept mimics how two virtual liquids – Chemical A (the environment) and Chemical B (the organism) – interact over time. Chemical B eats Chemical A to reproduce, while also slowly dying off. This eternal tug-of-war is actually the exact same math that dictates how real-life animals get their spots and stripes, or how corals branch out! That was what inspired me to recreate this in a sketch.
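For reference, the standard Gray-Scott update per cell looks like this, where the A × B² term is Chemical B “eating” Chemical A:

A' = A + (dA × ∇²A − A × B² + f × (1 − A))
B' = B + (dB × ∇²B + A × B² − (k + f) × B)

Where dA and dB are diffusion rates, ∇² is the Laplacian over neighboring cells, f is the Feed Rate, and k is the Kill Rate.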

Visually, I wanted the sketch to feel like you were peering into a dark ocean trench and watching neon coral grow in real-time. Just something about oceans.

Sketch

Milestones and Challenges

Reaction-Diffusion is notoriously heavy. It requires calculating complex math for every single pixel, multiple times per frame. My initial versions of this sketch were incredibly slow and completely hung my browser.

I had to rethink how the data was stored. I moved away from standard 2D arrays and rewrote the grid using 1D Float32Arrays. This stores the data in a flat, highly optimized memory space. I also added bitwise operations for fast multiplication to keep the framerate high enough to actually watch the coral grow.
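A sketch of what that flat-array update looks like (assuming feed and kill globals; the Laplacian weights are the standard ones for this model):

// flat 1D layout: cell (x, y) lives at index x + y * w
function stepGrid(a, b, nextA, nextB, w, h) {
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let i = x + y * w;
      // 3x3 Laplacian: center -1, orthogonals 0.2, diagonals 0.05
      let lapA = -a[i] + 0.2 * (a[i-1] + a[i+1] + a[i-w] + a[i+w])
                       + 0.05 * (a[i-w-1] + a[i-w+1] + a[i+w-1] + a[i+w+1]);
      let lapB = -b[i] + 0.2 * (b[i-1] + b[i+1] + b[i-w] + b[i+w])
                       + 0.05 * (b[i-w-1] + b[i-w+1] + b[i+w-1] + b[i+w+1]);
      let abb = a[i] * b[i] * b[i];      // the reaction term
      nextA[i] = a[i] + (lapA - abb + feed * (1 - a[i]));
      nextB[i] = b[i] + (0.5 * lapB + abb - (kill + feed) * b[i]);
    }
  }
}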

Getting the bioluminescent aesthetic right was also trickier than I expected. When I first tried to separate the glowing coral from a solid dark background, I used a hard cutoff (e.g. if the chemical value is above X, paint the background). Because the simulation uses continuous floating-point math, this resulted in ugly, pixelated ghost borders where the shapes used to be.

I went back to basics and removed the hard if/else statements. Instead, I used mathematical ratios to smoothly blend the colors based purely on the exact concentration of the chemicals.
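Conceptually the blend is just a concentration ratio driving a color interpolation (the color names here are placeholders):

function cellColor(a, b, deepBlue, neonCyan) {
  // B's share of the total concentration, guarded against divide-by-zero
  let t = b / (a + b + 0.0001);
  return lerpColor(deepBlue, neonCyan, t);  // smooth blend, no ghost borders
}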

The patterns are completely driven by two parameters: the Feed Rate (how fast Chemical A is added) and the Kill Rate (how fast Chemical B dies). Experimenting with these numbers yields wildly different shapes. I eventually curated two distinct modes for the final sketch: a classic branching “Coral” mode and a struggling, isolated “Dots” mode.

Reflection & Future Work

This project pushed my understanding of performance optimization in JavaScript. Moving from simple binary states (1s and 0s) to a Continuous Cellular Automata (floating-point numbers) completely changes how you have to handle memory and rendering in p5.js.

If I were to take this further, the next logical step would be moving the math out of the CPU entirely and rewriting it in WebGL (Shaders). That would allow the simulation to run at fullscreen resolution instantly. I’d also love to introduce an interactive element where the mouse acts as a “repellent” to the coral, forcing it to grow around your cursor.

Final Project Progress – Terra

I got inspired to make this project by looking at the world map and imagining it made with cellular automata. So after brainstorming, I decided to make an interactive canvas for drawing maps and terraforming them with painting, erasing, and natural disasters.

Draft 1 code (not interactive)

The way I executed this code is by uploading a map PNG. initFromImage() samples the source image pixel by pixel and converts it into a binary 2D grid, where each cell maps to a 4×4 block on the canvas.

Map Image

The rule is simple: if a pixel is opaque enough (alpha above 128) and dark enough (red channel below 80), it becomes a wall, marked as 1. Everything else becomes open space, marked as 0. The result is a grid that already carries the silhouette and rough geography of the original image, before a single CA rule has fired.
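A sketch of that sampling step (assuming a loaded p5.Image and the 4×4 block size; the structure is mine, not the exact draft code):

function initFromImage(img, cols, rows) {
  img.loadPixels();
  let grid = [];
  for (let y = 0; y < rows; y++) {
    grid[y] = [];
    for (let x = 0; x < cols; x++) {
      // sample the source pixel at the top-left of each 4x4 block
      let p = 4 * (x * 4 + y * 4 * img.width);
      let red = img.pixels[p];
      let alpha = img.pixels[p + 3];
      // opaque enough AND dark enough -> wall (1), else open space (0)
      grid[y][x] = (alpha > 128 && red < 80) ? 1 : 0;
    }
  }
  return grid;
}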

From there, generations of a cave-generation ruleset called B5678/S45678 reshape the terrain.

  • Birth (B5678): A floor cell turns into a wall if it has 5, 6, 7, or 8 neighboring wall cells.
  • Survival (S45678): A wall cell remains a wall if it has 4, 5, 6, 7, or 8 neighboring wall cells.

Each cell checks its eight Moore neighbors, and the rules are biased heavily toward consolidation: a dead cell comes alive if five or more neighbors are walls, and a living cell stays alive as long as four or more neighbors are walls. Cells at the border of the canvas are treated as walls unconditionally, which keeps the edges solid and prevents the map from fraying outward.  Isolated specks get absorbed into larger masses, jagged edges smooth into cave-like contours, and the map starts to feel less like a traced image and more like something that grew.
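One generation of that ruleset, sketched out:

// one CA generation with B5678/S45678; border cells stay walls
function stepCA(grid, cols, rows) {
  let next = [];
  for (let y = 0; y < rows; y++) {
    next[y] = [];
    for (let x = 0; x < cols; x++) {
      if (x === 0 || y === 0 || x === cols - 1 || y === rows - 1) {
        next[y][x] = 1;                       // unconditional border walls
        continue;
      }
      let walls = 0;                          // count the 8 Moore neighbors
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++)
          if (dx !== 0 || dy !== 0) walls += grid[y + dy][x + dx];
      // survival on 4+ neighbors, birth on 5+
      next[y][x] = grid[y][x] === 1 ? (walls >= 4 ? 1 : 0)
                                    : (walls >= 5 ? 1 : 0);
    }
  }
  return next;
}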

Here’s how the generation looks

Generation GIF

So then I added interactivity. The idea was simple: click on the canvas to paint land, press A to cycle between modes like paint, erase, earthquake, and tsunami, and use those modes to terraform the map in real time. It did not work. Pressing A did nothing. The canvas was registering mouse clicks, but not actually gaining keyboard focus in the browser sense, so every keypress was going nowhere. I spent an ungodly amount of time on this. I tried canvas.focus(), I tried tabIndex, I tried clicking the element programmatically. Nothing stuck. The browser just refused to route keyboard events to the canvas the way I needed it to. I also didn’t want to add ugly UI buttons that ruin the aesthetics.

So I scrapped the whole clicking mechanism. The fix was to stop relying on canvas focus entirely and attach the key listeners to document instead. That meant rethinking the interaction model from scratch. Clicking to paint was gone. Instead, you hold Space to apply whatever mode is active, and press A to cycle through the modes: paint, erase, earthquake, tsunami, volcano. It is honestly a better interaction than what I had before. Holding Space to draw land feels more deliberate, like you are actively shaping the terrain rather than just clicking around. And cycling modes with A while holding Space to apply gives you a kind of two-handed control that actually makes sense for something like terraforming.
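The document-level listeners that replaced the canvas-focus approach look roughly like this:

// keys are handled at the document level, so canvas focus never matters
const MODES = ['paint', 'erase', 'earthquake', 'tsunami', 'volcano'];
let modeIndex = 0;
let spaceHeld = false;    // checked each frame to apply the active mode

document.addEventListener('keydown', e => {
  if (e.key === 'a' || e.key === 'A') {
    modeIndex = (modeIndex + 1) % MODES.length;   // A cycles modes
  }
  if (e.code === 'Space') spaceHeld = true;        // hold Space to apply
});

document.addEventListener('keyup', e => {
  if (e.code === 'Space') spaceHeld = false;
});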

Painting Terrain

The modes themselves are where the real fun is. Paint and erase are straightforward, a circular brush radius of 3 cells that stamps land or water wherever the cursor sits. Earthquake cracks the terrain open along four random fault lines radiating from the cursor, each one carving through cells and kicking up particle debris.

Earthquake demo

Tsunami sends five expanding ring waves outward from the click origin, erasing wall cells on contact and spawning blue water particles as they break through. Volcano is the most involved: it blasts the center open into a crater, sprays upward lava particles in an arc, and slowly grows a lava field outward that has a 6% chance per cell per frame of solidifying into new land. The cellular automata rules are brilliant to watch here, working together with that 6% to consolidate the new land. The eruption runs for 180 frames and dies down gradually, with spark count and lava radius both scaling with the remaining timer, so the whole thing feels like it has weight and momentum.

Volcano

 

Here’s Draft 2 so you can TERRAform as you like.

 

Dancing circles (Harmonic Motion) – Assignment 4

The Concept

After exploring Memo Akten’s work, I got obsessed with how he uses mathematical functions to create these organic, almost living visuals. His pieces feel like they’re breathing, expanding and contracting in this hypnotic rhythm.

I wanted to create something that captures that same feeling using Simple Harmonic Motion. Instead of pendulums, I thought: what if I used the sine wave to control the size, position, and color of circles? Like watching something breathe or pulse to an invisible heartbeat.

The idea was to start with one breathing circle, then expand it into grids and layers, creating interference patterns that feel natural and meditative. Think of it like ripples in a pond, but frozen in time and space, constantly shifting.

The Physics Behind It

Simple Harmonic Motion shows up everywhere in nature – springs, sound waves, light waves, even the motion of atoms. At its core, it’s just the sine function:

position = amplitude × sin(frequency × time + phase)

Where:

  • Amplitude controls how far it moves
  • Frequency controls how fast it oscillates
  • Phase offsets the starting point

The beautiful thing about sine waves is that when you combine multiple ones with different parameters, you get these complex, organic patterns. It’s the foundation of how we understand waves in general.

Building It Up: Milestones & Challenges

Milestone 1: Single Breathing Circle

I started with the most basic concept – a single circle that grows and shrinks using a sine wave. This was about getting the rhythm right and understanding how amplitude and frequency affect the motion.

Here’s Milestone 1:

 

This proved the concept – a circle that breathes in and out smoothly. The challenge was finding the right frequency. Too fast, and it looks jittery. Too slow, and it’s boring. I settled on 0.02, which gives it that calm, meditative breathing pace.

Milestone 2: Grid of Oscillating Circles

Next, I wanted to fill the whole canvas with breathing circles. I created a grid where each circle’s phase is determined by its distance from the center, creating a ripple effect that propagates outward.

Here’s Milestone 2:

The wave propagates from the center outward! Each circle’s phase is determined by its distance from the center, creating this mesmerizing ripple effect. You can see waves of expansion and contraction flowing across the grid.
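The distance-based phase is the whole trick; the grid loop is roughly this (names like spacing, baseSize, amplitude, freq, and time come from my sketch):

for (let x = spacing / 2; x < width; x += spacing) {
  for (let y = spacing / 2; y < height; y += spacing) {
    // phase grows with distance from the center -> outward ripple
    let phase = dist(x, y, width / 2, height / 2) * 0.05;
    let size = baseSize + amplitude * sin(time * freq + phase);
    circle(x, y, size);
  }
}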

Milestone 3: Multi-Layer Concentric System

This is where it got really interesting. I went back to a single point but added multiple concentric layers, each oscillating at different frequencies. The code I’m most proud of is the layering system:

for (let layer = 0; layer < 3; layer++) {
  // each layer oscillates a bit faster than the one below it
  let layerFreq = 0.02 + layer * 0.015;
  // offset layers by 120° so they never sync up
  let layerPhase = layer * TWO_PI / 3;
  
  for (let i = numCircles - 1; i >= 0; i--) {
    // per-ring phase creates the outward ripple within a layer
    let phase = i * PI / 8 + layerPhase;
    let size = (baseSize + i * 45) + amplitude * sin(time * layerFreq + phase);
    // ... draw circle
  }
}

By offsetting each layer’s phase by 120 degrees (TWO_PI / 3), they create this three-part harmony. When one layer is expanding, another is contracting, creating constant motion and depth.

Here’s Milestone 3:

 

The three-layer system creates this incredible depth where you can see different rhythms happening simultaneously. It’s almost musical – like hearing three different instruments playing in harmony. The circles breathe in and out of sync, creating these beautiful interference patterns.

Milestone 4: Combining Grid + Multi-Layer (The Final Form)

For the final version, I combined everything – the grid layout from Milestone 2 with the multi-layer system from Milestone 3. Each point on the grid now has its own concentric breathing system, and they all ripple together based on distance from the center.

This is where the magic happens. You get the propagating wave effect from the grid, but with the depth and complexity of the multi-layer system. It’s like watching a field of flowers breathing together in the wind.

Here’s the final version:

The final version creates this hypnotic field of breathing circles. Each cluster has its own internal rhythm (the three layers), but they’re all synchronized by the wave propagating from the center. Sometimes they all sync up for a moment, then slowly drift apart again into complex interference patterns.

I added keyboard controls to adjust the frequency in real-time so you can find your own favorite rhythm. Press ‘H’ to hide the UI for a cleaner view, and ‘S’ to save a frame.

Reflection & Future Work

This project really opened my eyes to how much beauty you can create with just the sine function. By layering multiple oscillations with different frequencies, phases, and amplitudes, you get these rich, complex patterns that feel alive and organic.

What I learned:

  • The sine wave is amazing for creating organic motion
  • Layering multiple frequencies creates visual richness and depth that a single oscillation can’t achieve
  • Phase offsets are crucial – they prevent everything from syncing up and create that wave propagation effect
  • Combining grid layouts with complex per-point systems creates the most interesting results
  • Even simple mathematical rules can create patterns that feel natural and alive

What I’d add next:

  • Audio reactivity – make it respond to music, with frequencies mapped to sound frequencies
  • 3D version – spheres breathing in 3D space with depth and perspective
  • Mouse interaction – let users disturb the field and watch the waves respond
  • Different grid patterns – hexagonal grids, Voronoi cells (I learned about this in parametric design lab class with prof Aya), or organic spacing
  • Color schemes – different palettes for different moods
  • More control parameters – adjust layer count, circle count, amplitude separately
  • Recording mode – export as video to create seamless loops (I bet this could go viral on Instagram Reels)

The most hypnotic part is just letting it run and watching the patterns emerge. The waves flow across the grid, the layers breathe in and out of sync, and sometimes everything aligns for just a moment before drifting apart again. It’s meditative – I’ve caught myself just staring at it, watching the patterns shift and evolve.

Simulated F1 Track using Attractors – Assignment 3

The Concept

I wanted to create an F1 race car simulation using pure physics and particle systems. The idea was to use gravitational attractors positioned around a track like invisible “apex guides” that would pull the car through racing lines, just like how planets use gravity assists in space. I also thought that playing with attractors would give the car freedom, a factor of random drifting, just like what happens in real life when a driver takes a turn at the wrong speed; and it turned out as I expected.

The big challenge was making the car follow a racing line without getting trapped by the attractors or flying off into oblivion.

The Physics Behind It

The core of this simulation uses Newton’s law of universal gravitation as we did in class: F = G × (m₁ × m₂) / r²

Each attractor pulls on the car with a force that depends on:

  • The masses of both objects
  • The distance between them (squared)
  • A gravitational constant G that I tuned to 8000 (after trial and error with the numbers)

The tricky part was constraining the distance to prevent extreme forces when the car gets too close or too far.
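The attractor force, in the shape we used in class, with that distance constraint added (a sketch; the constraint bounds are illustrative, not my tuned values):

attract(car) {
  let force = p5.Vector.sub(this.pos, car.pos);
  // constrain distance so the force neither explodes up close
  // nor fades to nothing far away
  let d = constrain(force.mag(), 20, 300);
  let strength = (G * this.mass * car.mass) / (d * d);   // G tuned to 8000
  return force.setMag(strength);
}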

Smart Attractor Activation

My first huge challenge was that the car would just get stuck orbiting the first attractor like a satellite. Whatever I tried, the car either got trapped around one attractor or skipped all of them and got lost. Nothing worked until I explored the idea of turning attractors on and off dynamically, following their order around the track.

This was my breakthrough moment. Instead of having all attractors active at once, I created a workflow where only two are active at any time, and they activate/deactivate based on the car’s distance and velocity direction.

The code I’m most proud of uses the dot product to detect when the car is moving away from an attractor. When the dot product is negative, it means the car has passed the attractor and is heading away, so it’s safe to deactivate it and move to the next one. This prevents the car from getting pulled back!
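The core of the hand-off is just one dot product (a sketch of the idea):

// vector from the car to the attractor it is currently chasing
let toAttractor = p5.Vector.sub(attractor.pos, car.pos);
// negative dot product: the velocity points away, so the car has passed
if (car.vel.dot(toAttractor) < 0) {
  attractor.active = false;
  attractors[(index + 1) % attractors.length].active = true;  // wake the next
}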

Yellow attractors are active and pulling the car, while green ones are waiting their turn. Watch how they light up as the car approaches and turn off after it passes!

Here is the initial sketch I built while experimenting, trying to figure out the physics details:


 

This proof-of-concept showed me the path was working. You can see the overlapping circles creating the racing line as the car laps around the track.

Building It Up: Milestones & Challenges

Milestone 1: Speed Management

Even with the activation system working, the car was either crawling or shooting off into space. I needed consistent speed for realistic racing. I added speed clamping that keeps the car between 4-9 units per frame. If it goes too fast, it gets clamped down. If it’s too slow, it gets boosted up. This gives it that consistent racing feel where you can actually follow the motion.
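The clamping itself is only a couple of lines (limits as described above):

// keep the speed between 4 and 9 units per frame
vel.limit(9);                       // too fast -> clamp down
if (vel.mag() < 4) vel.setMag(4);   // too slow -> boost up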

Milestone 2: Positioning the Attractors

Designing the track layout took forever. I had to position 9 attractors perfectly so they’d create smooth curves without sharp angles or weird wobbles. Each attractor has:

  • A specific mass (controls pull strength)
  • An attraction radius (how far out it affects the car)
  • A position that creates the racing line

The key insight was positioning them inside the curves. The car gets pulled toward the inside of the corner, creating this kinda perfect racing line, then slingshots out on the exit.

I spent a long time tweaking these positions, running the sketch, adjusting by a few pixels, running again… over and over until the car flowed smoothly through every turn.

Milestone 3: Visual Polish

Once the physics worked perfectly, I went all-in on the visuals. This is where it transformed from a proof-of-concept into something that actually looks like a racing game.

I added:

  • A proper asphalt track
  • Red and white rumble strips on the edges
  • A grass infield and grass surroundings
  • White racing line markings
  • A detailed F1 car with cockpit, front wing, rear wing, and wheels
  • Drift smoke trails that fade out gradually
  • A checkered start/finish line positioned horizontally across the track

The car rotates based on its velocity heading using vel.heading(), so it naturally points in the direction it’s moving. As another visual trick, I save the last 80 positions and draw them with fading opacity and decreasing stroke weight for that realistic drift smoke effect.
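A sketch of that trail trick (the array name is mine):

// remember the last 80 positions
trail.push(pos.copy());
if (trail.length > 80) trail.shift();

// newer segments draw more opaque and thicker, like fresh smoke
for (let i = 1; i < trail.length; i++) {
  let t = i / trail.length;            // 0 = oldest, 1 = newest
  stroke(200, 120 * t);
  strokeWeight(6 * t);
  line(trail[i - 1].x, trail[i - 1].y, trail[i].x, trail[i].y);
}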

Milestone 4: Interactive Features

I added keyboard controls to make it more interactive:

  • Press ‘A’: Toggle attractor visibility so you can see the physics at work or hide them for a cleaner look
  • Press ‘R’: Reset the car to the start position for another lap
  • Press 1-9: Manually toggle individual attractors – this is great for experimenting with different configurations and seeing how each attractor affects the car’s path

The Final Result

Reflection & Future Work

This project taught me SO much about physics simulation and the importance of tuning parameters. The gravitational constant G, the masses, the attraction radii, the speed limits – they all needed to be just right to work together. Change one value and the whole thing falls apart!

What I learned:

  • Vector math is incredibly powerful for physics simulations
  • Small tweaks to physics parameters can have massive effects
  • Visual polish takes just as much time as getting the physics right
  • Breaking down complex problems (like “make a car race around a track”) into smaller pieces (activation system, speed management, visual layers) makes them manageable

What I’d add next:

  • Multiple cars racing against each other with different colors
  • Collision detection between cars
  • Lap counter and timing system to track best times
  • Different track layouts – maybe even let users draw their own tracks? I think that is a bit challenging
  • Damage system – if you hit the walls too hard, you slow down
  • Pit stops – strategic element where you can reset speed but lose time

Yash – Final Project Proposal

Final Project Proposal: Boids, Audio, and Hand Gestures

Concept and Artistic Intention

For my final project, I want to build on my ninth assignment, Ephemeral Flocks, and turn it into something much more interactive. In that project, I worked with boids, which connects directly to the idea of autonomous agents from The Nature of Code.

Visually, it was really interesting to watch the flock respond to the webcam feed, it felt like they were “decoding” reality in their own way. But at the same time, the experience felt a bit passive. The user basically just clicked and watched, and I think that limited the potential of the piece.

This time, I want to bring the human back into the system in a more active way. The idea is for the user to feel like they’re conducting the flock, almost like an orchestra conductor. Instead of freezing the screen, the boids will constantly move over a live video feed, painting it with this messy, expressive texture inspired by Van Gogh or Studio Ghibli.

But the key difference is that now, the system will respond to the user’s body and voice, it will listen and react, not just exist.

Interaction Methodology

I’m planning to use two main types of input to control the behavior of the boids:

  • Hand gestures (via ml5)
  • Microphone input (via p5.sound)

The hand will act as a kind of steering force. Wherever the user moves their hand on the screen, it will create an attraction vector that pulls the flock toward that position. So you’re literally guiding them through space.

The microphone will control how chaotic the system becomes. If the user is quiet, the boids will behave more calmly, they’ll stay cohesive, aligned, and move smoothly as a group. But if the user makes noise (like clapping or speaking loudly), that volume will increase the separation force, causing the boids to scatter and behave more unpredictably.
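A sketch of how the two inputs could feed the flock (assuming mic is a p5.AudioIn started in setup(), handPos is the tracked hand position, and the boid methods come from the flocking assignment):

// louder input -> more separation, less cohesion
let level = mic.getLevel();                    // roughly 0..1, usually small
let chaos = map(level, 0, 0.3, 0, 1, true);    // normalize and clamp

for (let boid of flock) {
  boid.applyForce(boid.separate(flock).mult(1 + 3 * chaos));
  boid.applyForce(boid.cohesion(flock).mult(1 - chaos));
  // the hand position acts as a steering attraction
  boid.applyForce(boid.seek(handPos).mult(0.5));
}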

So in a way, the user is constantly balancing control and chaos through movement and sound. This directly ties back to the forces and vector systems we’ve been working with in class, but turns them into something you can physically feel and experiment with.

Initial p5.js Sketch [Please Open Web Editor, give webcam permissions and wave your hand]

Design of the Canvas

I want the visual experience to feel minimal and immersive, almost like an installation rather than a typical interactive app. That means no buttons, no heavy UI, just the system itself.

Layout (rough idea):

[Image generated using Gemini]

  • A fullscreen web canvas
  • The live webcam feed sits in the background
  • Hundreds of boids move across the screen, leaving painterly trails over the video
  • In the top-right corner, there’s a small line of text:
    “wave your hands and make some noise”

Once the system detects movement or sound, that text fades away so it doesn’t distract from the visuals.

The important part is that all control comes from the user’s body and voice. There are no sliders or settings, just interaction. The goal is for users to slowly discover how their gestures and sounds shape the digital painting over time, without being explicitly told how it works.