MIDTERM PROGRESS

Concept

For my midterm project, I’m creating “Digital Kente,” a generative art system inspired by Ghana’s traditional Kente cloth. Kente originates from the Ashanti Kingdom and is deeply symbolic – each color represents something (gold for royalty, green for growth, red for passion) and the geometric patterns tell stories. Instead of creating organic, flowing generative art, I’m constraining the system to produce structured, geometric patterns that echo the woven textile tradition. The challenge is translating the craft of weaving into code while maintaining cultural authenticity.

My system uses horizontal bands with different geometric motifs – zigzag diamonds, checkerboard patterns, diamond shapes, and horizontal stripes – all arranged like traditional Kente strips. Each band uses specific color combinations from authentic Kente palettes extracted from reference images. The patterns are grid-based and angular, mimicking how warp and weft threads create precise geometric designs through repetition and intersection.

Cultural Context
Kente isn’t just decorative – it carries meaning. The geometric patterns I’m implementing (zigzag diamonds, checkerboards) are traditional motifs with cultural significance. By bringing Kente into generative code, I’m exploring how traditional craft techniques can inform computational creativity while respecting the cultural heritage.

Inspiration

- Traditional Ghanaian Kente weaving patterns
- The geometric precision and bold color blocking of woven textiles
- How simple thread intersections create complex visual patterns
- Memo Akten’s approach to mathematical constraints creating variety

Some Kente Samples

Current Sketch

Milestones and Challenges
Milestones

- Researched traditional Kente patterns and extracted authentic color palettes
- Implemented grid-based system for geometric precision (20px cells)
- Created 4 distinct pattern types: zigzag diamonds, checkerboard, diamond motifs, and horizontal stripes
- Developed band system where each horizontal strip uses different patterns
- Added palette switching between Ashanti (warm) and Ewe (bold) color schemes
- Implemented texture lines to simulate woven thread appearance

Challenge 1: Maintaining Cultural Authenticity While Being Generative
My biggest struggle has been balancing algorithmic freedom with cultural respect. Early versions used smooth Perlin noise and organic curves – it looked generative but didn’t feel like Kente at all. Real Kente is precise, geometric, and structured. The breakthrough was realizing I needed to constrain the generative system rather than make it more random. By limiting patterns to grid-aligned shapes, sharp angles, and bold color blocks, the output finally started resembling actual woven cloth. The lesson: sometimes creative constraints (cultural traditions) produce better results than total freedom.

Challenge 2: Color Distribution Balance
Kente cloth doesn’t use colors randomly – certain colors dominate while others accent. My first attempts assigned random colors to each cell, which created visual noise rather than the bold color blocking you see in real Kente. I solved this by creating “band colors” – each horizontal band gets a curated subset of 2-3 colors from the full palette, not all 5. This mirrors how traditional weavers select specific thread colors for each strip. Now band 1 might use gold/orange/black, while band 2 uses green/red. This creates visual rhythm and hierarchy instead of chaos.
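A minimal sketch of that band-color rule (the helper name and palette values here are mine, not the actual code):

```javascript
// Hypothetical sketch of the "band colors" idea: each horizontal band
// gets a 2-3 color subset of the full palette rather than all 5 colors.
// Palette entries are placeholders, not the actual extracted colors.
const ashantiPalette = ["gold", "orange", "black", "green", "red"];

function bandColors(palette, bandIndex, count = 3) {
  // Offset each band into the palette so neighboring bands
  // overlap on at most one color, giving rhythm without chaos.
  const subset = [];
  for (let i = 0; i < count; i++) {
    subset.push(palette[(bandIndex * 2 + i) % palette.length]);
  }
  return subset;
}
```

With this rotation, band 0 gets gold/orange/black and band 1 gets black/green/red, close to the pairing described above.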

Current System Architecture
The system is organized into layers:
1. Color Palettes:

Extracted from actual Kente cloth samples
Two palettes: Warm Ashanti and Bold Ewe
Each band selects a 2-3 color subset for cohesion

2. Band System:

Canvas divided into horizontal bands (8 cells high)
Each band assigned one of 4 pattern types
Patterns cycle predictably: zigzag → checker → diamond → stripes → repeat

3. Pattern Functions:

drawZigzagBand() – Creates diagonal zigzag forming diamonds
drawCheckerBand() – 2×2 checkerboard with alternating colors
drawDiamondBand() – Concentric diamond shapes
drawStripeBand() – Horizontal color stripes with vertical texture

4. Animation:

Subtle time-based offsets in pattern calculations
Creates gentle “breathing” effect without losing structure
Can pause/resume with mouse click
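The band system and its predictable pattern cycle might be sketched like this (the cycle order matches the one described; the loop shape and `BAND_H` constant are my assumptions about the structure):

```javascript
// Sketch of the band dispatch: bands cycle predictably through the
// four pattern types. The draw*Band() calls are shown commented out
// since their bodies live in the real sketch.
const PATTERNS = ["zigzag", "checker", "diamond", "stripes"];

function patternForBand(bandIndex) {
  // zigzag → checker → diamond → stripes → repeat
  return PATTERNS[bandIndex % PATTERNS.length];
}

// In draw(), with 20px cells and 8-cell-high bands:
// const BAND_H = 8 * 20;
// for (let b = 0; b * BAND_H < height; b++) {
//   const y = b * BAND_H;
//   if (patternForBand(b) === "zigzag")  drawZigzagBand(y);
//   if (patternForBand(b) === "checker") drawCheckerBand(y);
//   if (patternForBand(b) === "diamond") drawDiamondBand(y);
//   if (patternForBand(b) === "stripes") drawStripeBand(y);
// }
```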

 

Next Steps
Moving forward, I plan to:

- Implement multiple operational modes (particle weaving, harmonic oscillation, flow field variants)
- Add resolution scaling for A3 print size (2480 × 3508 px)
- Integrate traditional Ghanaian sounds (weaving sounds, drumming) for cultural immersion

Reflection So Far
The most valuable lesson from this project is understanding that constraint breeds creativity. By limiting myself to geometric shapes, grid alignment, and traditional color combinations, I’m forced to be more thoughtful about every design decision. This is similar to how real Kente weavers work within the constraints of their looms and thread colors yet produce infinite variety.
Working with cultural source material has changed my approach to generative art. Every design choice now asks: “Does this honor the tradition?” rather than “Does this look computationally interesting?” The planned audio integration will take this further – transforming the project from a purely visual experience into something that engages multiple senses with traditional weaving sounds and Ghanaian drumming.
The system already produces distinct looks depending on which patterns align, how colors distribute, and where the animation phase is captured. Once I add the additional modes and layer in traditional sounds, the generative space will expand while maintaining Kente’s visual and cultural language.

 

Amal – Midterm Progress: Proliferate

Concept

Here's how to make your own bacteria handprint | Vox

This project explores bacterial life inside a petri dish through the logic of binary fission. I was drawn to the simplicity of exponential growth. One cell becomes two. Two become four. Four become eight. That pattern is mathematically precise, yet visually it can feel organic and alive.

The petri dish acts as both a laboratory setting and a contained world. From the perspective of the bacteria, this circular boundary is their entire universe. Growth is not aggressive. It is natural, repetitive, and inevitable.

Through this project I am trying to visualize how simple systems can produce complex spatial transformation. The tension between geometric symmetry and organic movement is central to the concept.

System Design

The system is built around exponential growth using 2ⁿ logic.

It starts with one cell at the center.
Each click doubles the generation.
Each generation forms a new ring.

Everything is placed radially so the growth feels intentional and structured. At the same time, each cell has slight motion driven by noise so it does not feel like a static diagram.
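The radial 2ⁿ placement can be sketched as a small helper (the function name and spacing value are mine; the real sketch adds noise-driven wiggle on top of these base positions):

```javascript
// Sketch of the binary-fission ring layout: generation n places 2^n
// cells evenly around a ring whose radius grows with the generation.
function ringPositions(generation, spacing, cx = 0, cy = 0) {
  const count = Math.pow(2, generation); // 1, 2, 4, 8, ...
  const radius = generation * spacing;   // each generation forms a new ring
  const cells = [];
  for (let i = 0; i < count; i++) {
    const angle = (2 * Math.PI * i) / count;
    cells.push({
      x: cx + radius * Math.cos(angle),
      y: cy + radius * Math.sin(angle),
    });
  }
  return cells;
}
```

Each click bumps `generation` by one, doubling the cell count and adding a new ring.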

I also introduced generational color shifts and soft background fades so the system leaves trails, almost like activity inside a petri dish.

The interaction is simple. Click and it divides. The simplicity is important to me. I did not want complicated controls. I wanted the act of division to feel immediate.

Prototype 1

The first prototype was very minimal.

It only tested the binary fission logic and radial placement. No sound. No complex motion. Just structure.

Visually it looked clean but almost too perfect. It felt more like a scientific chart than something alive. But it helped me understand how strong the doubling pattern actually is. Even in its simplest form, it already had presence.

That prototype gave me confidence that the system itself was strong enough to build on.

Current Version

The current version feels more alive.

Each cell now wiggles slightly within a constrained space. The trails create a sense of time passing. Multiple colonies can grow at once, which makes the space feel like an ecosystem rather than a single event.

The aesthetic has shifted away from realistic biology and more toward a luminous, speculative petri dish. It feels less clinical and more atmospheric.

I am excited about how something so rule based can still feel organic.

Concerns

I am still figuring out how to balance symmetry and irregularity.

Sometimes the radial placement feels too perfect. Real biological systems are not perfectly spaced. I may experiment with slight offsets to make the growth feel less mechanical.

Performance also becomes a concern at higher generations. The exponential logic is beautiful, but it scales quickly.

I also want the motion to feel softer and less jittery. I want it to feel intentional, not random.

Improvements for Final Midterm

For the final submission, I want to:

Refine the motion so it feels more like soft bodies.
Experiment with subtle asymmetry.
Introduce multiple modes, possibly a strict geometric mode and a more organic mode.
Test higher resolution rendering for A3 printing.

I feel like the core idea is strong. Now it is about refinement and sensitivity.

AI Disclosure

I used AI briefly to help refine parts of the exponential logic and tune motion parameters while troubleshooting performance.

Midterm progress – Mustafa Bakir

My main inspiration came from this video: Instagram

 

For this project, I chased the feeling of something between a pulsar and a deep-sea creature. The working title became Melancholic Bioluminescence, which sounds like a Spotify playlist, but the fun thing about creating projects is having full authority and ownership over everything. It’s my sketch, so I’ll name it that.

The core interaction is simple and satisfying to say out loud: hold your mouse down, energy accumulates, release it to detonate. Hold longer, get a bigger explosion. That’s the entire loop. What makes it interesting is the texture of the accumulation. Particles spiral inward like a black hole, and the glow resembles energy accumulating within that black hole.

Before writing a single line, I sketched the architecture on paper. The system has three major layers of responsibility: the global state (are we holding? are we exploding? what’s the charge level?), the particle population (a pool of objects that each manage their own physics), and the VFX (trails, embers, glow pulses – short-lived visual elements that don’t need the full particle class). I accumulated my notes and compiled them into pseudocode that I can follow. This is me applying what I learned in Data Structures, and honestly, designing the system beforehand works really well for me.

Please check the PDF for the pseudocode, because there’s this ANNOYING issue where no matter the scale of the screenshot I upload, it’s always blurry and small.

decoding_naturfe (1)

I also want to disclose that AI helped me with many of the mathematical sections in this sketch; I don’t think I would be able to understand the math or get around it on my own. But I promise my usage is not excessive or dependent, and I actually use it to learn, haha.

Anyway, I started writing attributes for the particle class, and oh boy, they added up QUICKLY. Here’s a snippet. I tried assigning random values manually, but it was very hard to find the sweet spot for everything, so I used some help from AI to assign proper values to those attributes, tweaked them a little, and got really good results.

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = 0; this.vy = 0;
    this.nox = random(10000);
    this.noy = random(10000);
    this.ns  = random(0.0015, 0.004);
    this.driftSpd = random(0.5, 1.2);

    this.baseSize = random(1.8, 4);
    this.size     = this.baseSize;

    this.baseHue = random(228, 288);
    this.hue     = this.baseHue;
    this.sat     = random(55, 85);
    this.bri     = random(75, 100);
    this.alpha   = random(40, 70);
    this.maxAlpha = this.alpha;

    this.life    = 1;
    this.dead    = false;
    this.wobAmp  = random(0.3, 0.9);
    this.wobFreq = random(2, 4.5);
    this.orbSpd  = random(0.015, 0.04) * (random() > 0.5 ? 1 : -1);
    this.drag    = random(0.93, 0.97);
    this.explSpd = random(0.6, 1.4);
    this.rotDrift = random(-0.35, 0.35);
    this.absorbed = false;
    this.trailTimer       = 0;
    this.suctionTrailTimer = 0;

    this.behavior     = BEHAVE_RADIAL;
    this.spiralDir    = random() > 0.5 ? 1 : -1;
    this.spiralTight  = random(0.03, 0.09);
    this.boomerangTimer = 0;
    this.boomerangPeak  = random(0.3, 0.5);
    this.flutterFreqX = random(5, 12);
    this.flutterFreqY = random(5, 12);
    this.flutterAmp   = random(2, 6);
    this.cometTrailRate = 0;
    this.explodeOriginX = 0;
    this.explodeOriginY = 0;
  }
}

A useful frame for interactive generative art is the state machine. This sketch has three primary states that produce visually distinct experiences, and the transitions between them are where most of the design work happened.

Idle state: No mouse interaction. 80 particles drift across the canvas on Perlin noise. Each particle has its own noise offset, frequency, and speed. The result is slow, organic, slightly hypnotic. The palette sits in the 228-288 HSB hue range (blue through violet) and particles breathe gently at a rate of 2 cycles per second. This is the sketch’s resting face, and it needs to be beautiful enough to watch on its own.

Charging state: Mouse held. New particles spawn at the edge of the screen and get pulled toward the cursor which acts as an attractor. Spawn rate accelerates from 1/frame to 18/frame as charge approaches maximum. The vortex arms appear past 8% charge: three logarithmic spirals that rotate faster as charge builds, drawn with beginShape()/vertex() and per-vertex stroke colors that fade toward the outer edge. The glow orb grows around the cursor. Screen rumble starts at 60% charge. Particles near the cursor compress and brighten. The hue of nearby particles shifts toward 305, a hot magenta-violet. Every visual element does the same narrative work: energy is accumulating.

Explosion state: Mouse released. This is tiered across four discrete levels (0 through 3) based on charge thresholds at 0.25, 0.55, and 0.85. Tier 0 is a gentle push. Tier 3 is a white-flash, screen-shake, 800-pixel-radius detonation that spawns up to 70 child particles from split candidates nearest the blast origin. Each particle in blast range gets a force vector calculated from distance falloff (pow(1 - d/blastRadius, 2)), a random rotation drift, and a behavior assignment weighted by proximity to center and charge level. The explosion duration scales with charge, from 1.4 seconds to 4 seconds. The slowdown at high charge gives full-tier explosions a cinematic quality: the cloud expands, holds, then dissipates.
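The distance-falloff force described above can be isolated as a pure function (constants are illustrative; the real sketch scales these per tier):

```javascript
// Sketch of the blast force: quadratic falloff pow(1 - d/blastRadius, 2),
// full strength near the origin, zero at the blast edge. Direction is
// undefined at d = 0, so no force is applied exactly at the origin.
function blastForce(dx, dy, blastRadius, strength) {
  const d = Math.hypot(dx, dy);
  if (d >= blastRadius) return { fx: 0, fy: 0 };
  const falloff = Math.pow(1 - d / blastRadius, 2);
  const inv = d > 0 ? 1 / d : 0; // normalize direction away from the blast
  return {
    fx: dx * inv * strength * falloff,
    fy: dy * inv * strength * falloff,
  };
}
```

A particle halfway to the edge of an 800-pixel blast gets a quarter of the full push, which is what makes tier-3 detonations read as a dense core with a soft rim.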

The variation space this produces is wide. A quick series of light taps creates a dotted constellation. Holding in one place while moving slightly creates smeared, comet-like trails. A patient full charge, held long enough to feel the rumble, produces a different kind of satisfaction.

The scariest things about this project were two problems braided together: performance under additive blending with 700+ particles, and making the multi-behavior explosion feel coherent rather than random.

Additive blending (blendMode(ADD)) is visually spectacular. Overlapping particles bloom into white rather than muddy brown. The cost is real though: each ellipse composites against everything underneath it. With three ellipses per particle (the outer glow halo, the mid-glow body, and the bright core), plus trail objects, plus embers, a naive implementation at 700 particles hits framerate problems fast. The risk was a beautiful system running at 20fps. I ran many optimization passes, then migrated to VS Code, which was MUCH smoother. But I don’t know how smart that is, because in the end I’ll have to embed the sketch in the p5.js web editor, so it won’t matter that it runs smoothly on my device if it’s laggy on the website.

The mitigation strategy relies on hard caps with graceful degradation. Particles cap at 600 during charging and 700 overall. Trails cap at 1200 objects. Embers die slowly at life -= 0.003 per frame, about 333 frames of life. The three-ellipse draw call per particle uses deliberately low-resolution sizes: the outer halo is s*4, the body s*2, the core s*0.6, where s is typically 1.8-4 pixels. The glow effect comes from accumulation of tiny translucent shapes. The drawGlow() function uses 50 layered ellipses for the cursor glow, each with an alpha under 4, nearly invisible on their own.
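The layered-glow idea can be sketched as a helper that emits layer specs (the function name and the exact radius/alpha curve are my assumptions; only the layer count and the alpha-under-4 cap come from the description above):

```javascript
// Sketch of drawGlow()'s layering: many translucent ellipses whose
// alphas are individually tiny (< 4) but accumulate under additive
// blending into a bright glow.
function glowLayers(maxRadius, layers = 50, maxAlpha = 4) {
  const specs = [];
  for (let i = 0; i < layers; i++) {
    const t = i / (layers - 1); // 0 (outermost) → 1 (innermost)
    specs.push({
      r: maxRadius * (1 - t),          // shrink toward the core
      alpha: maxAlpha * 0.95 * t + 0.1, // brighter inward, always under 4
    });
  }
  return specs;
}

// In p5, each spec becomes one ellipse:
// glowLayers(80).forEach(({ r, alpha }) => {
//   fill(hue, sat, bri, alpha);
//   ellipse(x, y, r * 2);
// });
```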

For the behavior system, the risk was that five different particle behaviors during explosion would read as a mess of conflicting physics. To test this, I implemented behaviors one at a time and ran the explosion at full charge with only that behavior active, watching whether each produced a readable visual signature. BEHAVE_COMET needed the highest speed and the lowest drag (0.99 vs the standard 0.93-0.97) to produce visible streaks. BEHAVE_BOOMERANG needed the timer offset: if the return force kicked in immediately, particles just wobbled. They needed to actually leave the origin first. BEHAVE_FLUTTER was the most unpredictable and required the dampening multiplier (vx *= 0.985) to prevent runaway acceleration from the oscillating force.

The assignBehavior() method’s probability table weights behavior by charge level and proximity to blast center. Close-in particles at high charge get COMET and SPIRAL; far particles get RADIAL and FLUTTER. This creates a natural visual structure: a dense bright core of fast-moving comets surrounded by a slowly oscillating outer cloud. The explosion has a center and a periphery, which reads as physically plausible even though the physics are entirely invented.

The remaining uncertainty heading into the midterm is whether the vortex spiral arm rendering, which uses nested beginShape()/endShape() with per-vertex stroke calls, holds up on lower-end hardware. The core mechanic, the charge-and-release loop, and the explosion tier system all feel solid. The scary part is mostly tamed.

But there comes another problem. I don’t like the black background so I decided to create a galaxy background. I had a rough idea how to make it but I had to do some research.

Guess what – all of those links were useless. I found nothing helpful, but I didn’t want to give up. So I found this reddit post, and then this page, and I followed the principles and methods in the article to create something cool.

I tried to upload a gif of the animation but got this error, so unfortunately I will just upload a screenshot.

I really don’t know why the resolution is so low – my monitor is 4K – and honestly it’s too late for me to worry about this. Anyway, I would love to go with the galaxy design, but unfortunately it lags like HELL even in VS Code. Maybe I’ll book office hours and see how I can troubleshoot this.

My next steps are to figure out the background and to try to replicate the main inspiration video, because right now everything feels flat and I am starting to hate it.

 

Salem Al Shamsi – Assignment 5

Midterm Progress — “Shifting Grounds”

Concept — What Am I Making?

I’m building a generative desert landscape. Not a drawing of a desert, a system that grows one. Dunes form, shift, break apart, and settle, all on their own, driven by wind, tremors, and rain.

The title is “Shifting Grounds.”

The system moves through four phases like a timeline:

Wind → Ground Tremor → Rain → Stillness

  • Wind pushes sand around and builds dunes
  • Ground Tremor shakes things up, dunes collapse and spread out
  • Rain smooths the surface through erosion
  • Stillness: everything settles. The terrain stops changing. This is the frame I export for my A3 prints

The simulation runs once and ends. It doesn’t loop.

Why a Desert?

I’m from the UAE. The desert isn’t just a landscape to me, it’s something I grew up around. I’ve always noticed how dunes shift and reshape themselves. Sand looks still but it never really is.

I want to explore that through code. How do dunes actually get their shape? Why are some tall and some flat? What happens when wind hits a ridge? This project is my way of digging into those questions and turning them into generative art.

Design — How Will It Work?
The Height Map

Instead of simulating thousands of sand particles (which is way too complex and slow), I’m using a height map. It’s just an array:

heightMap[0] = 45
heightMap[1] = 47
heightMap[2] = 52

Each index is a column on the canvas. The value is how tall the sand is at that spot. When I draw it, I fill from the bottom up to the height. That gives me a terrain profile, like a side view of the desert.

Why this approach:

  • Way more stable than particles
  • Easier to control
  • Better for high-res A3 output
  • Clean and simple
Rendering

The look is minimal. Warm sandy monochrome palette. Shading is based on slope:

fill(baseColor - slope * contrast)

Steep slopes = darker. Flat areas = lighter. This fakes sunlight hitting the dunes from the side and creates a 3D look using just math. No textures, no images, just numbers and color.
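That shading rule can be written as a small pure function (the central-difference slope estimate and the default constants are my sketch of the described behavior):

```javascript
// Sketch of slope-based shading: estimate the local slope of the
// height map at column x and darken steep columns, clamped to a
// valid brightness range.
function slopeShade(heightMap, x, baseColor = 200, contrast = 60) {
  const left = heightMap[Math.max(x - 1, 0)];
  const right = heightMap[Math.min(x + 1, heightMap.length - 1)];
  const slope = Math.abs(right - left) / 2; // central difference
  // Steep = darker, flat = lighter; fake side-lighting with pure math.
  return Math.max(0, Math.min(255, baseColor - slope * contrast));
}
```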

Code Design — Functions, Classes, Structure

Here’s how I’m planning to organize the code:

  • windPhase() — moves sand using Perlin noise for wind direction and strength. After moving sand, it checks slope stability (angle of repose) so dunes don’t become unrealistically steep
  • tremorPhase() — lowers the stability threshold temporarily so dunes collapse and spread. Adds small random jitter to simulate vibration
  • rainPhase() — averages each column’s height with its neighbors. This is what erosion does, peaks go down, valleys fill up, everything smooths out
  • renderTerrain() — draws the height map with slope-based shading
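The rainPhase() averaging step might look like this (the function body is my sketch of the described behavior; the `strength` knob is an assumption):

```javascript
// Sketch of rainPhase()'s erosion: each column moves toward the mean
// of itself and its two neighbors, so peaks sink and valleys fill.
function rainStep(heightMap, strength = 0.5) {
  const out = heightMap.slice();
  for (let x = 1; x < heightMap.length - 1; x++) {
    const avg = (heightMap[x - 1] + heightMap[x] + heightMap[x + 1]) / 3;
    // strength = 0 leaves terrain untouched; 1 fully averages in one step.
    out[x] = heightMap[x] + (avg - heightMap[x]) * strength;
  }
  return out;
}
```

Running this once per frame during the rain phase gives gradual smoothing rather than an instant blur.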
State Management

A variable like currentPhase controls which phase is active. Each phase runs for a set number of frames, then transitions to the next:

WIND → TREMOR → RAIN → STILLNESS

In stillness, the draw loop still runs but nothing changes. The terrain is frozen in its final form.
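The phase timeline can be sketched as a tiny state machine (the frame counts are placeholders, not the actual tuning):

```javascript
// Minimal sketch of the WIND → TREMOR → RAIN → STILLNESS timeline.
// Stillness never expires: the draw loop keeps running but the
// terrain stays frozen.
const PHASES = ["WIND", "TREMOR", "RAIN", "STILLNESS"];
const PHASE_LENGTH = { WIND: 600, TREMOR: 200, RAIN: 300, STILLNESS: Infinity };

let currentPhase = "WIND";
let phaseTimer = 0;

function tickPhase() {
  phaseTimer++;
  if (phaseTimer >= PHASE_LENGTH[currentPhase]) {
    const next = PHASES.indexOf(currentPhase) + 1;
    if (next < PHASES.length) {
      currentPhase = PHASES[next];
      phaseTimer = 0;
    }
  }
  return currentPhase;
}
```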

Key Variables

  • heightMap[] — the core data. One value per pixel column
  • windStrength — controlled by Perlin noise, varies across space
  • maxSlope — the angle of repose. How steep a sand pile can be before it slides
  • currentPhase — which phase the system is in
  • phaseTimer — counts frames in each phase

Interactivity (Planned)

  • Different noise seeds = different landscapes each run
  • Keyboard controls to adjust wind strength or skip phases
  • Parameter presets for different “climates” (strong wind, heavy rain, etc.)
States and Variations

Each run of the sketch will look different because of Perlin noise; different seeds create completely different dune formations from the same rules. That’s what makes it generative. I don’t place the dunes. The algorithm does.

For my three A3 prints, I plan to create variation by:

  • Changing the noise seed (different dune shapes)
  • Adjusting wind strength and direction (some runs make tall, sharp dunes, others make gentle rolling ones)
  • Varying how long each phase lasts (more wind = more dramatic terrain, more rain = smoother result)

The final stillness frame from each run becomes a unique print.

The Scariest Part

The most frightening part of this project is the wind simulation.

If sand transport is too strong, everything flattens instantly, and no dunes form. If the slope stability rules are too strict, the terrain freezes before anything interesting happens. The whole project depends on finding the right balance between these forces.

What I Did to Reduce This Risk

I wrote a basic prototype that tests the two core mechanics together: wind transport and slope stability.

This isn’t the final system. It only has the wind phase. But it confirms that the core mechanic, the hardest part, actually works. The tremor, rain, and stillness phases will be simpler to add on top of this foundation.

Other Risks I’m Watching

The final print might look too simple on A3 paper. Since it is just a 1D height map, it could feel flat. I need to test it early. If it looks too basic, I might add more depth, like a fake 3D effect or layered lines. I will decide after the full system is working.

References
  • R.A. Bagnold — The Physics of Blown Sand and Desert Dunes (1941).
  • Angle of Repose — From granular mechanics. The maximum steepness a pile of sand can have before it slides.
  • Ken Perlin — Perlin Noise (1983).
  • Soil Liquefaction — When vibration makes sand temporarily act like liquid.
  • Aeolian Transport — The geological process of wind moving sand.

 

Haris – Midterm Progress

Concept

For the midterm I decided to go back to the space theme, like I did in assignment 3. I really liked the design of the planets I came up with and want to explore it further by adding particles. I again want to use the planets as attractors, but this time I would like the particle systems to act as little particles in space that orbit these planets and form different constellations, like Saturn’s rings.

Implementation

  • Background atmosphere

    • Stars placed in the background randomly.

    • A starfield with subtle twinkling for texture and scale.

  • Planet body

    • A solid planet drawn with shadows and rings to give depth.

  • Ring particle system

    • Each particle is assigned a target ring radius (three radius “bands”).

    • Motion is driven by a combination of forces:

      • Inward pull (gravity-like) to keep particles bound to the planet.

      • Tangential swirl to create orbit motion.

      • A spring force that pulls particles back toward their target radius so the ring stays structured.

      • Light damping/drag to keep it smooth and stable.

States/variation

At this point in time I still don’t have multiple states in the sketch; I am still thinking about what exactly I could vary. Some ideas were placing different kinds of planets with varying gravitational pull, removing the planets to create galaxy-like objects, or giving an option to add multiple planets.

Scary parts

The scariest part so far was actually making the particles go in a circle around the planet. I achieved this by doing the following:

First I calculate the vector pointing from a particle to the planet

let toCenter = p5.Vector.sub(planet.pos, p.pos);
let d = max(toCenter.mag(), 1);
toCenter.normalize();

After which I create a tangential force which actually gives the particles the circular motion

let tangential = createVector(-toCenter.y, toCenter.x);
tangential.mult(0.22 * planet.spin);
p.applyForce(tangential);

I also added some pull to keep the particles from going away

let grav = toCenter.copy().mult(0.14 * planet.massScale / (1 + d * 0.006));
p.applyForce(grav);

And that’s how I created the ring around the planet. I am just a little worried that the galaxy-style sketch would bring more complexity to the design, and that I might need to use something different for the particle movement.
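The three snippets above can be combined into one standalone helper, using plain math instead of p5.Vector so it runs anywhere (the constants come straight from the snippets; the function name and planet object shape are mine):

```javascript
// Combined ring forces: tangential swirl for orbit motion plus a
// gravity-like pull that weakens with distance from the planet.
function ringForces(px, py, planet) {
  let tx = planet.x - px;
  let ty = planet.y - py;
  const d = Math.max(Math.hypot(tx, ty), 1);
  tx /= d; ty /= d; // unit vector pointing toward the planet
  // Tangential swirl: perpendicular to the center direction.
  const swirl = 0.22 * planet.spin;
  // Inward pull, falling off with distance.
  const g = 0.14 * planet.massScale / (1 + d * 0.006);
  return { fx: -ty * swirl + tx * g, fy: tx * swirl + ty * g };
}
```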

 

Midterm Progress 1: Yash

Concept & Vision

A Painting That Breathes
Most generative art announces itself by motion or glitch. This project moves the other way. It aims for quiet conviction, a hand painted lotus pond rendered in code that rewards patient looking. Water moves almost not at all, petals sway just enough to suggest breath, leaves hold rain in waxy cups.

The aesthetic is botanical realism, not photorealism. Think of careful natural history studies and Japanese ink observation. Every element is built with gradients, procedural noise, and layered compositing. Stems read cylindrical, leaves show radial veins in perspective, petals catch light and fade to pink. The goal is to invite the eye into believing the scene exists, not to fool it.

System Design

Architecture of the Garden
Rendering follows a strict depth order from background to foreground. The design uses layered abstraction:

Layer 1 Ground
Perspective leaves lie behind everything. Foreshortened ellipses suggest a low water view.

Layer 2 Structure
Stems are quadratic Bezier curves shaded as cylindrical forms with node rings at internodes.

Layer 3 Flower
The lotus bloom is built from sepals and two petal layers with a central receptacle. Petals contain dense micro vein texture.

Layer 4 Water (planned)
A subtle water surface will use Perlin noise for displacement, specular highlights, and interaction ripples that distort reflections.

Core Functions
Five focused drawing functions keep the code compact. They draw leaves, stems, the assembled lotus, individual petals with micro veins, and sepals with tonal transition. A hybrid of p5 and Canvas2D enables Path2D clipping, radial gradients, and precise shadow compositing.

Interactivity Design (planned)
Interaction is minimal:
Hover makes nearby petals lean inward as if breathed on
Click drops a ripple that spreads and shears the reflected scene

Variations and States
Seeded randomness creates variation in petal size, vein curvature, leaf edges, bloom scale, light angle, and water tone.

Current Progress

What Exists Now
A static 600 by 800 canvas renders a complete scene with three perspective leaves, a tall central bloom, and a smaller secondary bloom. The scene draws once with no animation loop, showing the visual vocabulary.

Key achievements include foreshortened leaf geometry with accurate vein placement, three pass cylindrical stem shading with sampled node rings, and petals rendered with ninety micro vein strokes and soft gradients. The hybrid canvas approach confines detail through save clip restore patterns.

Risk Identification & Mitigation

The Most Frightening Part
Animating a convincing water surface with coherent reflections is the largest risk. A single sine wave looks fake, and per pixel displacement at full resolution is expensive. The aesthetic risk is motion that reads as mechanical.

What I did to reduce this risk
I isolated a water proof of concept using two octave Perlin noise summed across horizontal strips. The approach proved tunable and performant. Reflections will be rendered to an off screen buffer and displaced with the same noise field. Interaction ripples will be an additive decaying sine term centered at the click point.
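The two-octave displacement sum might be sketched like this, with the noise source passed in (p5’s noise() in the sketch; any field returning values in [0, 1] works here, and the frequencies and amplitude are placeholders):

```javascript
// Sketch of the water proof-of-concept: two octaves of noise summed
// per horizontal strip. The second octave is twice the frequency at
// half the amplitude, which breaks up the "single sine wave" look.
function waterOffset(x, t, noiseFn, amp = 6) {
  const o1 = noiseFn(x * 0.01, t) - 0.5;
  const o2 = (noiseFn(x * 0.02, t * 2) - 0.5) * 0.5;
  return (o1 + o2) * amp;
}
```

The same offset field can later displace the off-screen reflection buffer, so the water and its reflections move coherently.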

Looking Forward

The Next Passes
Integrate animated water and place botanicals above the water line. Add off screen reflection rendering, petal Perlin oscillation seeded by index, mouse proximity response, and click ripples. Final work will be patient tuning of motion and color until the scene reads as living.

Assignment 5

Overview and Concept 

For the midterm I want to make a generative art system that explores the intersection of networks and organic motion. The core concept is based on The Neural Kingdom: a simulation of synaptic connectivity where autonomous “cells” traverse a digital petri dish, leaving behind a complex web of connections.

The design goal was to create a system that feels “alive” rather than programmed. Instead of drawing shapes directly, I designed a set of biological rules—movement, proximity, and reproduction—and allowed the artwork to emerge from the interaction of these rules over time. The result is a series of intricate structures that resemble neural pathways.

The generator also allows cycling between different color modes by pressing ‘c’ and saving a PNG with ‘s’.

The Technical Logic

The system is built on three primary programming pillars:

Autonomous Agents: Each cell is an object with its own mass, velocity, and “personality” (a unique noise seed). To simulate chemotaxis (movement in response to chemical gradients), I used Perlin Noise. This ensures the cells move in smooth, curving paths rather than random lines.

Proximity-Based Synapses: The “art” is generated by the space between the movers. Every frame, each cell looks for neighbors within a radius. If another cell is close enough, a line is drawn between them. I mapped the transparency of these lines to the distance, creating a sense of three-dimensional depth and tension.
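The distance-to-transparency mapping can be isolated as a pure helper (the radius and alpha range are illustrative, not the sketch’s actual values):

```javascript
// Sketch of the proximity rule: alpha fades linearly with distance,
// so near cells draw bold synapse lines and far ones barely register.
function synapseAlpha(d, radius = 120, maxAlpha = 180) {
  if (d >= radius) return 0; // too far: no line at all
  return maxAlpha * (1 - d / radius);
}
```

In the draw loop, each pair of cells within `radius` would get a line stroked at `synapseAlpha(d)`, which is what produces the sense of depth and tension between movers.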

Mitosis Growth Simulation: Rather than filling the screen at once, the sketch begins with a small “seed” cluster. New cells are added to the system via a mitosis timer, spawning near existing parent cells. This allows the colony to expand organically from a central core.
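A minimal sketch of the mitosis timer, assuming a frame counter, a cap on colony size, and a 10 px spawn radius (all illustrative values):

```javascript
// Mitosis timer: every `interval` frames, one random parent buds a child
// nearby, until the colony reaches `maxCells`.
function mitosisStep(cells, frame, interval, maxCells, rand = Math.random) {
  if (frame % interval !== 0 || cells.length >= maxCells) return cells;
  const parent = cells[Math.floor(rand() * cells.length)];
  const angle = rand() * Math.PI * 2; // random direction from the parent
  const spawnDist = 10;               // illustrative spawn radius
  cells.push({
    x: parent.x + Math.cos(angle) * spawnDist,
    y: parent.y + Math.sin(angle) * spawnDist,
  });
  return cells;
}
```

Spawning children next to parents is what keeps growth anchored to the central core instead of scattering across the canvas.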

Current Milestone

Future Improvements:

I want to allow for different drawing modes by letting the user press different number keys. I also want the user to be able to influence the art through external forces. This could also help with the issue of the movers not covering large parts of the sketch, which I also intend to fix.

Assignment 5 – Afra Binjerais

Midterm progress/ proposal

Concept & Design

For this project, I decided to continue exploring the idea of a moving Starry Night. In Week 3, I was inspired by Van Gogh’s The Starry Night and experimented with recreating some of its swirling motion using movers and attractors. I was also influenced by Batool’s work that week, since she implemented what I had envisioned. This time, however, I want to approach the concept differently: instead of movers and attractors, I am experimenting with Perlin noise to create organic, natural-looking flow and sine functions to introduce rhythmic oscillation and temporal movement.

Designing the Code (Functions, Interactivity, Structure)

For this week, I experimented with using Perlin noise to determine the directional flow of the strokes and sine functions to introduce subtle rhythmic movement over time. There are no classes yet, since I decided not to use movers or attractors in this version of the system. For interactivity, I am planning to implement a slider to control movement strength or direction, possibly another slider to adjust speed or oscillation intensity, and a slider that changes the color palette. I am still deciding what additional controls would be meaningful without overcomplicating the system. 
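One possible shape for the per-stroke direction, with `noise2D` standing in for p5's `noise(x, y)` and the swirl/oscillation parameters mapped to the planned sliders (all values illustrative):

```javascript
// Per-stroke direction: a smooth spatial field plus a sine wobble in time.
// `noise2D` stands in for p5's noise(x, y); swirl, oscAmp, and oscSpeed
// would be driven by the planned sliders.
function strokeAngle(x, y, t, noise2D, swirl = 1, oscAmp = 0.3, oscSpeed = 2) {
  const base = noise2D(x * 0.01, y * 0.01) * Math.PI * 2 * swirl; // flow field
  return base + oscAmp * Math.sin(t * oscSpeed); // rhythmic oscillation
}
```

Keeping the noise term and the sine term separate means each slider controls exactly one quality of the motion: swirl shape versus rhythmic pulse.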

States & Variations

As mentioned, I will have sliders that will help me produce different variations/states of my system. I am thinking of creating variations such as:

  • Different movement speeds
  • Opposite swirl directions
  • Stronger vs softer oscillation
  • More chaotic vs more calm motion

Also, for the printing part, I’m thinking of having an icon at the top, so when I’m satisfied with how the piece looks, I can click the icon and it will download as a PNG to print later.

Most Uncertain Part

The most complex and uncertain part for me was deciding whether to use movers and attractors again. In Weeks 3 and 4, I used attractors and movers; however, I remember that when I tried to recreate the motion of Starry Night using movers, it became very complex very quickly. Controlling behavior while maintaining aesthetic control was difficult with attractors and movers, at least for me. My plan to minimize risk is either to find an alternative (which is where I’m headed) or to learn new ways to use them.

I provided below the current starting point. It is still a skeleton version with no interactions yet, but it’s a vision of where I’m headed.

Assignment 5

Concept

While brainstorming for my midterm project, I thought of mimicking nature with my sketches, as I’ve done for a few of my sketches so far. I was intrigued by particle systems, which I felt I had actually unknowingly used in my ocean sketch. I wanted to do something with particle systems again, pairing them up with another concept we’ve covered in class. I debated between two ideas: a mycelium system or a cosmic dust cloud. I settled on the cosmic dust idea because I felt the prints or images coming out of it could be very painterly.

So the idea is a generative system that simulates the birth, motion, and destruction of cosmic dust. The goal is to create a painterly vacuum: a space that feels filled with fluid-like gas clouds. By combining Particle Systems with Newtonian Forces (Gravity/Drag) and Periodic Oscillation, I want to produce high-resolution prints that capture the shimmer of deep-space photography.

Sketch

Implementation

  • The Particle Engine: I’ve developed a Dust class that handles movement and lifespan. Each particle is aware of its age, allowing the system to constantly recycle itself and stay generative.

  • The Different States: I’ve established three distinct states: NURSERY (Noise-driven movement), SINGULARITY (Point-based gravity), and SUPERNOVA (Radial repulsion).

  • Oscillation: I integrated Sine waves into the strokeWeight of the particles to create a subtle “twinkling” or shimmering effect that mimics starlight.
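A compact sketch of the lifespan and twinkle ideas from the list above. The numbers are illustrative, and the real `Dust` class also carries velocity and applied forces.

```javascript
// Sketch of Dust's lifespan bookkeeping and sine "twinkle".
class Dust {
  constructor(x, y) {
    this.reset(x, y);
    this.phase = Math.random() * Math.PI * 2; // desynchronize the shimmer
  }
  reset(x, y) { // recycling a dead particle keeps the system generative
    this.x = x;
    this.y = y;
    this.age = 0;
    this.lifespan = 200;
  }
  update() { this.age++; }
  isDead() { return this.age >= this.lifespan; }
  // Stroke weight pulses around a base size => starlight shimmer.
  weight(t, base = 2, amp = 0.8, speed = 0.15) {
    return base + amp * Math.sin(t * speed + this.phase);
  }
}
```

The random phase per particle is what makes the field twinkle instead of pulsing in unison.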

The Frightening Part & Risk Reduction 

One of the core requirements for the midterm is having multiple distinct operating modes. I handled this by implementing a State Machine – a logic structure that allows the entire physics engine of the sketch to change based on a single variable (state).

In my draw() loop, the particles check the current state of the “universe” every frame, instead of moving randomly. Using an if/else if structure tied to the keyPressed() function, I can switch between NURSERY, SINGULARITY, and SUPERNOVA instantaneously. This was a challenge at first because I had to ensure that the particles didn’t just break when the forces changed; by using p5.Vector and applyForce(), the transition between modes feels fluid, as the particles’ existing momentum carries over into the new force field.
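The state-to-force dispatch can be reduced to something like this; the magnitudes are placeholders, and in the real sketch the returned vector would feed each particle's `applyForce()`.

```javascript
// One place where the whole physics changes: map the current state to a
// force on particle p. Magnitudes are illustrative placeholders.
function forceFor(state, p, center) {
  switch (state) {
    case "SINGULARITY": { // point gravity: pull toward the center
      const dx = center.x - p.x, dy = center.y - p.y;
      const d = Math.max(Math.hypot(dx, dy), 1); // clamp to avoid blow-ups
      const g = 50 / (d * d);                    // inverse-square strength
      return { x: (dx / d) * g, y: (dy / d) * g };
    }
    case "SUPERNOVA": { // radial repulsion: push away from the center
      const dx = p.x - center.x, dy = p.y - center.y;
      const d = Math.max(Math.hypot(dx, dy), 1);
      return { x: (dx / d) * 0.5, y: (dy / d) * 0.5 };
    }
    default: // NURSERY: no force here; caller applies noise-driven drift
      return { x: 0, y: 0 };
  }
}
```

Because only the force changes, each particle's existing velocity survives a state switch, which is what makes the transitions feel fluid.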

Additionally, one of the most uncertain parts of this project is ensuring that the complex, high-particle-count visuals can be exported at A3 resolution without crashing the browser. If I simply used saveCanvas(), the print would be blurry and pixelated.

I settled on implementing a p5.Graphics buffer (canvasBuffer). This allows me to develop the logic on a small, fast 800 * 600 canvas while maintaining a hidden, high-resolution print-ready canvas in the background. My current save logic triggers on the ‘S’ key:

// Render onto the hidden high-resolution buffer, then export it as a file.
function saveHighRes() {
  canvasBuffer.background(240, 80, 5); // repaint the deep-space background
  // Future implementation: re-render every particle at print resolution
  // for (let p of particles) { p.drawToBuffer(canvasBuffer); }
  saveCanvas(canvasBuffer, "nebula_output", "png"); // export the buffer, not the screen
}

Instead of saving the current frame, I plan to loop over the particles one final time and re-render them onto this larger 1600 * 1131 canvas. This upscaling is a critical risk-reduction step that gives me peace of mind for the final physical print.

Another frightening part is thinking about how I’m going to make this sketch interesting enough to be a final midterm piece. Right now, it’s a skeleton. However, the basic physics – the drag (friction) that makes particles feel like they are moving through thick gas, and the Oscillation (shimmer) that makes them feel like stars – are already providing a strong foundation. I’m trusting the process: once the math is solid, the beauty usually follows in the polishing phase where I’ll play with color gradients and force magnitudes.

Saeed Lootah – Assignment 4

Concept

For this assignment, I was unsure where to start for a long time. I knew I was going to use sound in some way and that, of course, there would be harmonic motion. I also wanted to center my animation around a circle in some way.

Whilst reading through Memo Akten’s work on Simple Harmonic Motion, I noticed that at the end he mentioned “the Fourier series”. For a physics experiment back in high school, we used a technique called the Fast Fourier Transform (FFT) to analyze audio by breaking it up into individual sine waves of multiple frequencies which, when added together, reproduce the sound we hear. After some searching I realized that p5.js has FFT built in (through p5.sound), so I decided to use it.
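For context (this is background, not code from my sketch), what the FFT computes can be illustrated with a naive discrete Fourier transform: each output bin measures how strongly one frequency is present in the signal.

```javascript
// Naive discrete Fourier transform (what the FFT computes efficiently):
// bin k measures how much of frequency k is present in the signal.
function dftMagnitudes(signal) {
  const N = signal.length, mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const w = (2 * Math.PI * k * n) / N; // correlate with a k-cycle sinusoid
      re += signal[n] * Math.cos(w);
      im -= signal[n] * Math.sin(w);
    }
    mags.push(Math.hypot(re, im) / N); // normalized magnitude of bin k
  }
  return mags;
}
```

The FFT produces the same bins in O(N log N) instead of O(N²), which is why p5.sound can run this analysis every frame.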

Sketch

Unfortunately there are some small issues: the microphone doesn’t work unless you grant permission through your browser, and because I had to compress the sound to under 5 MB, it doesn’t have the full range of frequencies it’s supposed to.

Highlight
// Analyze the current audio frame, then split it into `resolution` bands
fft.analyze();
fftArray = fft.linAverages(resolution);

let x = width / 2;
let y = height / 2;
let radius = 180;

for (let index = 0; index < resolution; index++) {
  // Place each band evenly around the circle
  let progress = index / resolution;
  let theta = TWO_PI * progress;

  push();
  let selected = frequencyArray[index];
  translate(x, y);
  rotate(theta);
  noStroke();
  // fill(0, 0, 255);
  selected.show(radius, 0); // draw this band's rectangle
  selected.updateFFT(fftArray[index]); // feed it the latest amplitude
  pop();
}

This is my personal favorite part of the code. Each frame it analyzes the audio, splits it into frequency bands, and then, for each band, rotates to that band’s position around the circle, draws the corresponding FreqLine, and updates it with the latest FFT amplitude.

Milestones

This was my initial experiment with using harmonic motion for a rectangle. I created a class called FreqLine (which I ended up using at the end), and all it did was apply a sine wave that changed the height of the rectangle accordingly.

This is my second milestone. After getting the rectangles to work, I made multiple of them and arranged them into a circle shape. Each rectangle followed a different frequency, which created an interesting effect as the rectangles would start in phase with each other, drift out of phase, and come back.

At this point I experimented with different rectMode() options, trying CENTER, TOP, and BOTTOM. TOP and BOTTOM gave the same result (which is what you see in the screenshot), whereas CENTER wasn’t as interesting in my opinion.

In the end I decided to use TOP, but I made the widths negative so that the rectangles would grow toward the center, as I enjoyed that look the most.

Reflection

I wish I hadn’t stopped myself from starting. In the beginning I wasn’t sure what to do, and I felt that if I didn’t have an idea that was good enough, I shouldn’t start. Looking back, for upcoming assignments I plan on starting with any idea and being willing to do multiple sketches if I have to. It wasn’t until I had started with the circle idea that I thought to use the FFT, and it wasn’t until I started the FFT that I decided to include the option to alternate between a song and the microphone. While these seem simple, I think with the midterm coming up it’s important that I start experimenting.