I chose this specific artwork because it’s the one I felt the most confused about. I really liked how small the room was compared to the other rooms at teamLab. I decided to recreate it, but with a twist: the sky is dark instead, and the rippling effect is circular. In the installation itself, the ripple (as I remember it) comes from waving, but in my sketch it is simply generated every few seconds. I decided to keep it simple, and I really like how it turned out.
A highlight of some code:
let dCenter = dist(s.x, s.y, width / 2, height / 2);
let dWave = abs(dCenter - waveRadius);
if (dWave < 55) {
  glowBoost = map(dWave, 0, 55, 2.6, 1);
}
I’m most proud of this part of the code because it controls how the glowing wave interacts with the stars in a natural and visually pleasing way. Instead of simply turning stars on and off, I calculate the distance between each star and the expanding wave, which allows the glow to change smoothly as the wave passes. This creates a soft ripple effect rather than something harsh or mechanical.
Sketch
Milestones and challenges in process:
I started with a very simple star field where all the stars were randomly placed across the canvas. While this worked, it felt messy and unintentional, so I moved to a more structured layout using a grid. After that, I introduced slight randomness in position, size, and glow to make the stars feel more natural instead of perfectly uniform. A key milestone was adding the wave interaction, where every few seconds a ripple passes through the stars and makes them glow. This helped bring the piece to life and added a sense of timing and rhythm. One of the main challenges I faced was making the wave feel smooth and natural instead of harsh or mechanical. I had to experiment with distance calculations and mapping values so that the glow would transition gradually rather than suddenly.
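The grid-plus-jitter layout can be sketched roughly like this (the spacing and jitter numbers here are illustrative, not the exact values from my sketch):

```javascript
// Stars sit on a regular grid, then each one is nudged slightly in
// position, size, and glow so the field reads as natural rather than
// mechanical. The rand parameter exists only so the sketch is testable.
function makeStars(width, height, spacing, jitter, rand = Math.random) {
  const stars = [];
  for (let x = spacing / 2; x < width; x += spacing) {
    for (let y = spacing / 2; y < height; y += spacing) {
      stars.push({
        x: x + (rand() * 2 - 1) * jitter, // small positional offset
        y: y + (rand() * 2 - 1) * jitter,
        size: 1 + rand() * 2,             // slight size variation
        glow: 0.5 + rand() * 0.5,         // slight base-glow variation
      });
    }
  }
  return stars;
}
```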
This assignment didn’t have many milestones, but this is the previous experimentation sketch:
Reflection and ideas for future work or improvements
Overall, the sketch creates a calm and atmospheric experience, especially with the subtle glow and wave effect. I like how the stars feel balanced but still slightly natural rather than perfectly uniform. In the future, I would like to add more variation, such as a few brighter or larger stars, and experiment with different types of waves or subtle color changes to make the piece more dynamic and immersive.
The inspiration for this project comes from teamLab’s Graffiti Nature and Beating Earth at teamLab Phenomena Abu Dhabi. I was inspired by the fact that the installation uses digital ecosystems to create something that feels alive, immersive, and responsive to human presence. The creatures within the installation are soft, glowing, and constantly moving, while the environment does not feel like an animation so much as something alive that responds to what happens within it.
This does not simply show moving creatures and plants within an environment; rather, it feels like an entire ecosystem that incorporates life, growth, and death. This became the basis of my project and the reason I chose this visual.
My twist
My initial twist was to introduce the idea of healing versus pollution. While the original project centers on a living, breathing ecosystem, I wanted to take it a step further to highlight the effect of human intervention on such an ecosystem.
In my sketch, I wanted to give the user some power. Glowing flower-like seeds spawn around the canvas and serve as an energy source for the fish, enabling them to thrive and reproduce. The user can also spawn seeds on click, which creates a constant choice between helping and damaging the environment. On the other hand, I gave the user a second power: introducing pollution blobs into the environment. The fish start avoiding polluted areas, simulating real life, where our pollution and wrongdoing drive creatures (in this example, fish) away from their natural habitat. This twist gave the project more depth. Instead of simply recreating an animated ecosystem, I transformed it into a system where the user is responsible for shaping a world.
Process
At first I started with the simplest step: the background. I noticed that in the teamLab space the background had subtle lines that gave the space more depth and perhaps simulated being underwater.
This was done with simple lines and by playing with color and placement. Once I was happy with the background, it was time to move on to the more exciting part: adding the fish. At first I was a bit scared of tackling this task, as I was afraid a simple design would make the project look too bland, so I added a glow effect and transparency, as well as a subtle trail behind the fish, to make the project look like a real-time piece of art.
After the fish were done, it was time for the final steps: seed spawning and the ability for the user to spawn pollution.
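The avoidance behavior boils down to a repulsion force, which could look something like this (the function name, sensing radius, and strength are illustrative, not my actual code):

```javascript
// Each fish accumulates a push away from every pollution blob within
// its sensing radius; the push grows as the fish gets closer. The
// resulting force would be added to the fish's acceleration each frame.
function pollutionForce(fish, blobs, radius = 120, strength = 0.3) {
  let fx = 0, fy = 0;
  for (const b of blobs) {
    const dx = fish.x - b.x;
    const dy = fish.y - b.y;
    const d = Math.hypot(dx, dy);
    if (d > 0 && d < radius) {
      const push = strength * (1 - d / radius); // stronger when closer
      fx += (dx / d) * push;
      fy += (dy / d) * push;
    }
  }
  return { x: fx, y: fy };
}
```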
Code I am proud of
One part of the code I am particularly proud of is the logic that allows the fish to detect and move toward the nearest seed. I like this section because it makes the creatures feel more alive and intentional. Instead of moving randomly across the screen, the fish respond to the environment by seeking out glowing seeds, which helps create the feeling of a living digital ecosystem.
// attracted to healthy seeds
let closestSeed = null;
let closestSeedDist = Infinity;
for (let s of seeds) {
  if (!s.healing) continue;
  let d = dist(this.pos.x, this.pos.y, s.pos.x, s.pos.y);
  if (d < closestSeedDist) {
    closestSeedDist = d;
    closestSeed = s;
  }
}
if (closestSeed && closestSeedDist < 180) {
  let desired = p5.Vector.sub(closestSeed.pos, this.pos);
  desired.setMag(0.12);
  this.applyForce(desired);
}
This simple implementation makes the world feel much more alive and brings my recreation much closer to the original teamLab project.
Future improvements
I am already really happy with the final result, but if I were to add anything new, it would probably be the explosion mechanism that teamLab uses. This would be a fun implementation, and I could probably use the particle system we learned to use in class, but because of the timeframe I decided not to include it in the current state of the project. Overall I am happy with the final result and have learned that recreating someone else’s work with a twist is actually an amazing way to learn and practice p5.
This project was inspired by The Starry Night and the visual qualities of the night sky, but reimagined in a different emotional context. Instead of a calm and dreamy atmosphere, I explored how the sky might look if it expressed anger. The project merges environment and emotion into a single visual system.
The design and intention developed throughout the process rather than being fully planned from the beginning. As I experimented, the idea of intensity became central, which led me to use a red color palette and a more minimal, abstract visual style.
Milestones:
This was my initial experiment from the proposal stage. It was simple, but it marked the starting point of my idea:
Here, I was experimenting with creating a background similar to The Starry Night, but I wasn’t satisfied with the result:
In this video, I started exploring intensity and motion more deeply. This is where the main direction of my project began to form. I really liked the movement here and considered it close to my final outcome, but I felt that something was missing; which was color:
Final outcome: In the final version, I introduced color to represent intensity and emotion. I chose red because it strongly represents anger and energy, which aligns with the concept of the project.
I am most proud of the part of my code where I calculate the final angle of each line using a combination of noise, oscillation, swirl, and subtle randomness. In this section, I am not relying on a single technique, but instead layering multiple systems together to create more complex and expressive motion. The noise provides an organic flow, the sine wave adds rhythmic movement, the swirl introduces a sense of direction around the center, and the jitter prevents the motion from feeling too predictable. By blending all of these into one angle, the lines feel alive and continuously evolving rather than static or repetitive. This part of the code reflects my understanding of key concepts from the course, especially noise, vectors, and oscillation, and shows how I can combine them creatively to produce a visually rich and emotionally responsive result.
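In simplified form, the blend looks something like this (the weights, and the stand-in noise function, are placeholders; the sketch itself uses p5’s noise()):

```javascript
// Stand-in for p5's noise(): a deterministic pseudo-noise in [0, 1]
// so this snippet runs on its own.
const noise2D = (x, y) => 0.5 + 0.5 * Math.sin(x * 12.9898 + y * 78.233);

// Four layered systems blended into a single angle per line:
// organic flow (noise), rhythm (sine), direction around the center
// (swirl), and a little unpredictability (jitter).
function lineAngle(x, y, cx, cy, t, jitterFn = Math.random) {
  const flow = noise2D(x * 0.01, y * 0.01) * Math.PI * 2; // organic flow
  const wave = Math.sin(t * 0.05 + x * 0.02) * 0.5;       // oscillation
  const swirl = Math.atan2(y - cy, x - cx) + Math.PI / 2; // tangent to center
  const jitter = (jitterFn() - 0.5) * 0.2;                // slight randomness
  return flow * 0.5 + wave + swirl * 0.5 + jitter;        // weighted blend
}
```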
Canva designing:
I used Canva to design the button because I wanted a customized visual element. This process was relatively simple, and I exported the design as a PNG and uploaded it into p5.js.
Before the war, I contacted the inkjet printing team to test print my work and ensure the resolution was correct. I was able to partially test the print, and below is an image (not the best quality) of how it would have looked when printed.
Challenges:
One of the main challenges I faced was controlling the balance between calm and chaotic motion in the flow field. At first, small changes in parameters like speed, noise scale, and oscillation made the animation feel too chaotic too quickly, which made it difficult to achieve a smooth transition using the slider. I had to experiment with different ranges and easing functions to slow down the beginning and make the buildup feel more gradual and intentional.
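One easing approach that produces this kind of slow start is a simple power curve (the exponent here is illustrative, not necessarily what the final sketch uses):

```javascript
// Raising the normalized slider value to a power > 1 keeps the motion
// calm through the early range and lets the intensity ramp up late,
// instead of the animation turning chaotic immediately.
function easedIntensity(sliderValue, minSpeed, maxSpeed, exponent = 2.5) {
  const t = Math.min(Math.max(sliderValue, 0), 1); // clamp to [0, 1]
  const eased = Math.pow(t, exponent);             // slow start, fast finish
  return minSpeed + (maxSpeed - minSpeed) * eased;
}
```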
Reflection:
Overall, I think the user experience of my project feels smooth and engaging, especially because the slider makes it easy to see how your input affects the visuals in real time. I like that the transition from calm to intense doesn’t feel sudden, but instead builds gradually, which makes it more satisfying to interact with. The motion and fading trails also help make the piece feel more alive and less static. At the same time, I feel like there’s still room to improve. Right now, the interaction is limited to just one slider, so in the future I’d like to add more controls, like changing colors, line thickness, or different motion styles, to make it more playful and customizable.
Link to drive for high resolution images: Afra Binjerais
Awakening Padmini: A Digital Triptych of the Lotus
Core Concept and Design
Inspired by Raja Ravi Varma’s masterpiece, Padmini, the Lotus Lady, this project seeks to transcend the static nature of a two-dimensional canvas. Varma had a profound ability to breathe warmth, vitality, and soul into the lotus, making it a character as alive as the lady holding it. This interactive installation translates that historical mastery into the realm of creative coding.
Instead of a single fixed image, the lotus is reborn as a living, responsive digital entity. The core design philosophy revolves around metamorphosis, observing the same botanical subject through three distinct computational lenses. By moving from a hyper-stylized natural environment to autonomous agent simulations and finally into the raw data of kinetic typography, the installation explores the tension between biological reality and digital representation.
The Three Modes: A Metamorphosis
The installation is divided into three interactive states, each representing a different philosophical interpretation of life and code.
1. Sajīva (सजीव) — The Breathing Canvas
Sajīva translates to “endowed with life” or “living.” This mode is the most direct homage to traditional painting, heavily inspired by the atmospheric depth of Studio Ghibli. The lotus exists in a hazy, serene aquatic environment. It doesn’t just sit on the water; it breathes. Generative physics drive the gentle sway of the stems, the drifting of Ghibli-style clouds, and the delicate, random detachment of falling petals that float upon interacting with the water’s surface.
2. Prāṇa (प्राण) — The Ethereal Threads
Prāṇa represents the “vital life force” or “breath.” In this mode, the physical form of the lotus dissolves into pure energy. Using a swarm of 2,500 autonomous agents (boids), the sketch actively seeks out and traces the high-contrast edges of the previous scene. The boids act as digital spirits, constantly building and rebuilding the outline of the lotus in real-time. Eventually, the swarm scatters, representing the ephemeral and fleeting nature of organic life.
3. Māyā (माया) — The Digital Echo
Māyā translates to “illusion,” pointing to the concept that the physical world is a veil over deeper truths. Here, the visual reality of the lotus is stripped away entirely, replaced by kinetic typography. The image is reconstructed using the sheer brightness values (luma) of the original scene to dynamically scale the word “LOTUS.” It ebbs and flows on a sine wave, representing the underlying matrix of data that constitutes all digital art, a reminder that in this space, life is just an illusion painted by mathematics.
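The luma-to-text mapping can be sketched roughly as follows (grid step, size range, and wave speed are assumed values; in the actual sketch this runs inside draw() with textSize() and text()):

```javascript
// For each grid cell, sample the scene's brightness (0-255) and combine
// it with a travelling sine wave to get the size at which a character of
// "LOTUS" would be drawn there: bright regions breathe large, dark ones
// stay near the minimum.
function lotusTextSizes(brightnessAt, cols, rows, t, minSize = 4, maxSize = 18) {
  const sizes = [];
  for (let gy = 0; gy < rows; gy++) {
    for (let gx = 0; gx < cols; gx++) {
      const luma = brightnessAt(gx, gy) / 255;                  // 0..1
      const wave = 0.5 + 0.5 * Math.sin(t * 0.05 + gx * 0.3);   // ebb and flow
      sizes.push(minSize + (maxSize - minSize) * luma * wave);
    }
  }
  return sizes; // one text size per grid cell, row by row
}
```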
Implementation Details & Creative Process
The journey from a blank canvas to a complex, multi-state system required several distinct milestones, blending mathematical precision with artistic intuition.
Milestone 1: The Geometry of the Petal
The anatomy of the lotus was constructed entirely through code, avoiding external image files. This required a deep dive into bezierCurveTo() to sculpt the organic teardrop shapes of the petals.
Initial Draft: The first iteration focused purely on overlapping geometry and basic opacity, establishing the layered scale of the bloom.
Introducing Texture: To mimic natural biology, micro-veins were generated using radial loops, drawing harsh, distinct striations across the petal surfaces.
Refining Luminescence: The final petal geometry balanced the harsh lines with a soft, glowing base gradient (cBaseGlow = '#eaf0c0') and deep magenta tips, achieving the flush of life seen in traditional oil paintings.
Milestone 2: Performance Through Off-Screen Buffers
To create Sajīva, an entire ecosystem needed to be rendered without tanking the frame rate. The solution was architectural: rendering complex assets (like the fractal noise clouds and radial gradients) into hidden, static createGraphics() buffers during the setup() phase. In the draw() loop, these pre-rendered sprites are simply mapped and manipulated, allowing the CPU to focus entirely on the generative falling petals and water ripples.
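The caching pattern, stripped of the p5 specifics, looks like this (in the sketch itself the factory would be createGraphics() and the render step would paint the clouds and gradients once, in setup()):

```javascript
// Build an expensive asset exactly once, then reuse it on every frame.
// cache:   plain object holding finished buffers by name
// factory: allocates the off-screen buffer (e.g. createGraphics in p5)
// render:  draws into the buffer; called only on the first request
function makeSprite(cache, key, factory, render) {
  if (!cache[key]) {
    cache[key] = factory(); // expensive: allocate once
    render(cache[key]);     // expensive: draw once
  }
  return cache[key];        // cheap: every later frame reuses it
}
```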
Milestone 3: The Boid Edge-Detection Algorithm
For Prāṇa, the challenge was teaching the boids where the lotus actually was. The system captures a hidden, high-resolution snapshot of the scene, converts it to grayscale, and runs a custom density-mapping algorithm to detect sharp contrast boundaries.
// A snippet of the edge-detection logic allowing boids to "see" the lotus.
// val is the grayscale value at (x, y); valR and valD are the values of
// the pixels to its right and directly below it.
let diff = abs(val - valR) + abs(val - valD);
if (diff > 15 || val > 200) {
  edgeValues[y * w + x] = 255; // Solid trackable line for boids
}
Physical Realization: CAT Lab Inkjet Prints
While the project thrives as a kinetic, interactive digital installation, exploring the theme of “Decoding Nature” required bringing the digital back into the tangible world. I had the opportunity to run high-resolution exports of the three modes through the inkjet printers at the CAT lab.
Translating the light-emitting RGB screen into physical CMYK ink drastically altered the texture of the work. The sweeping threads of the Prāṇa boid simulation translated beautifully onto the paper, looking akin to an intricate silver-point etching, while the rich magentas of the Sajīva lotus gained a velvet-like matte quality that echoed the traditional canvas of Ravi Varma.
Video Documentation
The video documentation captures the seamless transition between the three states of the triptych. It highlights the generative nature of the falling petals in Sajīva, the mesmerizing, real-time flocking assembly and dissolution of the Prāṇa mode, and the rhythmic, breathing wave of the ASCII characters in Māyā.
P5.js Sketch :
Reflection and Future Improvements
The current user experience thrives on the element of surprise—the spacebar transforms the world instantly, forcing the viewer to re-contextualize what they are looking at. The technical optimization (using off-screen buffers) was highly successful, allowing thousands of agents to run smoothly in the browser.
Future Iterations:
Audio Reactivity: Integrating the p5.sound library so that the boids in Prāṇa and the text waves in Māyā react to ambient noise or a live microphone input.
Interactive Fluid Dynamics: Allowing the user’s mouse to disrupt the water surface in Sajīva, creating custom ripples that the falling petals physically react to.
Physical Computing: Utilizing a microcontroller (like an Arduino) to switch scenes based on physical proximity sensors, making the installation truly immersive in a gallery space.
References
Varma, Raja Ravi. Padmini, the Lotus Lady. (The primary artistic and thematic inspiration.)
McCarthy, Lauren, et al. p5.js. (The core creative coding framework.)
Perlin, Ken. Perlin Noise. (Used heavily for the organic generation of clouds and the natural sway of the lotus stems.)
I chose to recreate the butterfly installation because of the profound way it handles the cycle of life. In the exhibition, the butterflies seem to move slowly, almost suspended in the moment, but before you know it, they are gone. It is a reminder of how fleeting time is.
Interestingly, the butterflies in the exhibit are affected by human interaction. As you approach them, they seem to diffuse, and if you touch the surface they are projected onto, they fall, as if you had killed them. I incorporated this interaction into my sketch as well.
My twist on the original visual is that the end of one life becomes the catalyst for another. When a butterfly “dies” (is clicked), instead of just disappearing, it falls to the earth and seeds a new, different, but equally beautiful life: a flower. To me, this represents the idea that contentment comes from accepting the changing nature of things. Even when a specific chapter ends, it provides the nutrients for something new to bloom.
Sketch
Butterflies emerge from the forest floor and drift upward.
Touch: Click a butterfly to end its flight. Watch it fall and reincarnate as a swaying flower (you might need to scroll to see the ground in this blogpost).
Interact: Move your mouse near the butterflies or flowers to see them glow brighter. The butterflies will gently shy away from your cursor.
Milestones & Challenges
The first goal was to get the butterflies moving from the bottom to the top. I used Perlin Noise to give them a natural, fluttery motion. However, I immediately hit a snag: the butterflies started accelerating aggressively toward the left or right edges of the canvas instead of staying on an upward path.
The Fix: I implemented a velocity limit and a constant “lift” force. This kept their speed under control while ensuring their primary direction remained vertical.
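In essence, the fix looks like this (the limit and lift values here are illustrative):

```javascript
// Each frame: apply a constant upward lift (y grows downward on the
// canvas, so lift subtracts from y), then cap the velocity magnitude,
// the same effect as p5's vel.limit(maxSpeed).
function applyLiftAndLimit(vel, maxSpeed = 2, lift = 0.05) {
  vel.y -= lift; // constant upward force
  const speed = Math.hypot(vel.x, vel.y);
  if (speed > maxSpeed) {
    vel.x = (vel.x / speed) * maxSpeed; // rescale to the cap
    vel.y = (vel.y / speed) * maxSpeed;
  }
  return vel;
}
```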
Next, I had to handle the transition from flying to falling. This required a State Machine within the Butterfly class. I added a state variable (FLYING or FALLING). When the mouse is clicked near a butterfly, its state flips, gravity is applied, and the “wing flap” oscillation slows down to simulate a loss of vitality.
The final stage was the “twist.” I created a Flower class that triggers at the exact x-coordinate where a butterfly hits the ground. I also added Sensory Logic:
Fleeing: Butterflies now calculate a repulsion vector from the mouse.
Proximity Glow: Using the dist() function, both butterflies and flowers “sense” the cursor and increase their transparency mapping (alpha) to glow brighter as you get closer.
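Both behaviors reduce to a distance check, roughly like this (the sensing radius, alpha range, and flee strength are illustrative):

```javascript
// One distance computation drives both senses: alpha rises as the
// cursor approaches (like p5's map(d, 0, senseRadius, 255, 80)), and
// the flee vector points away from the mouse, fading with distance.
function senseCursor(pos, mouse, senseRadius = 150, fleeStrength = 0.2) {
  const dx = pos.x - mouse.x;
  const dy = pos.y - mouse.y;
  const d = Math.hypot(dx, dy);
  const alpha = d >= senseRadius ? 80 : 255 - (255 - 80) * (d / senseRadius);
  let flee = { x: 0, y: 0 };
  if (d > 0 && d < senseRadius) {
    const push = fleeStrength * (1 - d / senseRadius);
    flee = { x: (dx / d) * push, y: (dy / d) * push };
  }
  return { alpha, flee };
}
```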
Highlight
I am particularly proud of how the butterfly “seeds” the flower. To maintain the visual connection, I pass the specific hue of the butterfly to the new Flower object. This ensures that the life cycle feels continuous; the “soul” of the butterfly determines the color of the bloom.
// Inside the main draw loop
// Inside the main draw loop
if (b.state === "FALLING" && b.pos.y > height - 25) {
  // We pass the butterfly's unique hue to the new flower
  flowers.push(new Flower(b.pos.x, b.hue));
  butterflies.splice(i, 1); // The butterfly is removed
}
Reflection
This assignment taught me how powerful Additive Color Mixing (layering semi-transparent shapes) is for recreating the feel of a light projection. By using the HSB color space, I was able to match the neon, ethereal palette of the teamLab forest. Some potential improvements:
Life Span: I’d like to make the flowers eventually fade away as well, completing the cycle and allowing the ground to remain “clean” for new growth.
Collision Detection: It would be interesting if butterflies had to navigate around the stems of the flowers that the user has created.
Soundscape: Adding a soft, shimmering sound effect when a butterfly transforms into a flower would deepen the emotional impact of the interaction.
I chose a connecting corridor inside a teamLab installation. It is a transitional space, not a main artwork, which is exactly why it stood out to me. When you walk through it or touch the walls, the light disappears around you. It does not explode or react loudly. It just empties.
Why I Chose This Visual
What stayed with me was not the visuals themselves, but the behavior.
Most interactive works reward you. They give more when you touch them. More color, more movement, more feedback. This one does the opposite. It takes away. The space clears around your presence, almost like it is making room for you, or avoiding you.
That felt different. It felt quieter and slightly uncomfortable. You are not adding to the system, you are interrupting it.
I wanted to work with that idea. Not interaction as spectacle, but interaction as subtraction.
Code Production
I tried to recreate this logic using a particle system. The particles move continuously using noise, forming a kind of ambient field. When the user interacts, instead of attracting particles or creating brightness, the system creates a “void” that pushes particles away.
The goal was not to replicate the exact visual of teamLab, but to capture the feeling of space being cleared in response to presence.
One important part of the process was fixing how the system transitions between states. At first, the scene would abruptly reset when interaction stopped, which broke the illusion. It felt mechanical. I adjusted this by removing hard resets and introducing gradual transitions, so the system responds more like a continuous environment rather than a switch.
I also slightly increased the background fade during interaction so the space actually feels like it is dimming, not just rearranging.
My Twist
The original corridor is very minimal and almost silent in behavior.
My version exaggerates the system slightly. Instead of a clean empty space, the particles resist the user and leave behind traces of motion. The void is not perfectly clean. It is unstable and constantly reforming.
I also introduced layered particles with different sizes and brightness levels. This creates a glow effect that lingers, so the absence is never total. There is always a memory of what was there.
So instead of pure emptiness, my version becomes something closer to a shifting field that reacts, clears, and then slowly rebuilds itself.
Code I’m Proud Of
What I am most satisfied with is how I handled the transition between interaction and stillness.
Originally, I used a direct background reset:
background(0);
This caused the scene to snap instantly, which felt harsh and disconnected from the rest of the motion.
This small change made a big difference. The system now dims and recovers smoothly, which aligns better with the idea of a responsive environment rather than a binary state.
Embedded Sketch
Please view the sketch in the p5 editor itself; it renders oddly when embedded in this blog post.
Milestones and Challenges
The first step was simply getting something to move on the screen. I started by creating a “Particle” class. At this stage, I wasn’t worried about colors or glows; I just wanted to see if I could make 5,000 objects move without crashing the browser. I used a simple Particle object with a position and a velocity.
Now that the system was working, it needed to react to me. This milestone was about the “Control” half of my theme. I created a “void” around the mouse. I wrote a mathematical check: If the distance between a particle and my mouse is small, push the particle away.
The final version was about the transition. In the beginning, the lights snapped back instantly when I moved the mouse, which felt mechanical. I wanted it to feel like the system was “exhaling.” The Logic: I introduced lerp() (Linear Interpolation) to every state change.
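The core of it is just p5’s lerp() applied every frame (the recovery rate here is an assumed value):

```javascript
// p5 provides lerp() globally; defined here so the snippet runs on its own.
const lerp = (a, b, t) => a + (b - a) * t;

// Each frame the value covers a small fraction of the remaining distance
// to its target, so every state change becomes a smooth approach, the
// "exhale", instead of an instant snap.
function easeToward(current, target, rate = 0.05) {
  return lerp(current, target, rate);
}
```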
Reflection
This project made me think more about interaction as something subtle. Not everything needs to respond loudly. There is something more interesting in systems that shift quietly or even withdraw.
I also realized that small technical decisions, like how a background is drawn, can completely change how the work feels. The difference between a reset and a fade is not just visual, it changes how the system is perceived.
Future Work
If I continue this, I want to push the idea of absence further.
One direction is to make the void linger longer, so the space remembers where the user was. Another is to introduce multiple interaction points, allowing different “voids” to overlap and interfere with each other.
I am also interested in connecting this to sound or vibration, so the clearing of space is not only visual but also sensory.
Right now, the system reacts. In the future, I want it to feel like it anticipates or resists.
For this midterm I wanted to approach Islamic geometric ornament as a system rather than a style. Instead of drawing an 8-fold star, I reconstructed the {8,8,4} tiling that produces it. The star is not designed first. It emerges from a checkerboard of octagons and squares.
I was interested in exposing the structure behind something we often read as decorative. Girih patterns are precise, proportional, and rule-based. They are algorithmic long before computers existed. After reconstructing the grid mathematically, I introduced controlled oscillation. The geometry breathes. The valleys of the star expand and contract subtly, but the proportional relationships remain intact.
This project investigates:
• Ornament as system • Pattern as consequence • Tradition as computation • Geometry as inheritance
The woven strapwork illusion is achieved through layered strokes only. There is no shading or depth simulation. The complexity comes from repetition and constraint.
Oscillation
The motion in the system is driven by a minimal oscillator that functions as a time engine. Rather than animating positions directly, I use a simple class that increments a time variable at a steady speed. This time value feeds into a sine function, which subtly modulates the inward valley radius of each tile.
Instead of having every tile move in perfect synchronization, I introduce phase offsets based on distance from the center. This causes the oscillation to ripple outward across the field. The pattern breathes, but it does not collapse. The proportional relationships remain intact. The system moves without losing structural stability.
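Sketched out, the time engine and phase-offset modulation look roughly like this (speed, amplitude, and the phase scale are illustrative values):

```javascript
// A minimal oscillator: the only thing it does is advance time.
class Oscillator {
  constructor(speed = 0.02) {
    this.t = 0;
    this.speed = speed;
  }
  update() {
    this.t += this.speed;
  }
}

// The inward valley radius of a tile, modulated by a sine whose phase
// is offset by the tile's distance from the center, so the oscillation
// ripples outward instead of moving in lockstep.
function valleyRadius(baseRadius, amplitude, osc, distFromCenter, phaseScale = 0.01) {
  const phase = distFromCenter * phaseScale;
  return baseRadius + amplitude * Math.sin(osc.t - phase);
}
```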
The {8,8,4} Grid
The foundation of the project is the alternating tiling of octagons and squares, known as the {8,8,4} tiling. The grid is constructed as a checkerboard: when the sum of the tile indices is even, an octagon is placed; when it is odd, a square is placed.
The spacing of the grid is determined by the apothems of the octagon and square. These radii define the structural rhythm of the tiling and ensure that the shapes interlock precisely. Every star, intersection, and strapwork path derives from these underlying geometric relationships.
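The construction can be sketched like this (the helper name is my shorthand; the essential parts are the parity test and the apothem-derived step):

```javascript
// Checkerboard of the {8,8,4} tiling: an even index sum places an
// octagon, an odd sum a square. Adjacent centers sit one octagon
// apothem plus one square apothem apart, so the edges meet exactly.
function buildTiling(cols, rows, octApothem, sqApothem) {
  const step = octApothem + sqApothem; // center-to-center spacing
  const tiles = [];
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      tiles.push({
        x: i * step,
        y: j * step,
        kind: (i + j) % 2 === 0 ? "octagon" : "square",
      });
    }
  }
  return tiles;
}
```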
I included a toggle that reveals this hidden construction grid. Conceptually, this was important. I did not want the ornament to detach from its mathematical logic. The beauty of the pattern comes from its structure, and I wanted that structure to remain visible.
Strapwork Construction
Each tile generates strapwork by alternating between two radii: the midpoint radius and the inward valley radius.
The process is repetitive and rule-based. First, the path crosses the midpoint of a hidden polygon edge. Then it rotates halfway between edges and moves inward to form a valley. This sequence repeats around the polygon.
The 8-fold star is not explicitly drawn. It emerges from this alternating rhythm. The star is a consequence of structure, not a predefined graphic element.
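In skeleton form, the alternating-radius path looks like this (an illustrative helper, with the midpoint and valley radii as parameters):

```javascript
// Walk around an n-sided tile in half-edge steps: even steps cross an
// edge midpoint at the mid radius, odd steps dip inward to the valley
// radius halfway between edges. Connecting the points in order (and
// closing the path) traces the star; for n = 8 the 8-fold star emerges.
function strapworkVertices(cx, cy, n, midRadius, valleyRadius) {
  const pts = [];
  for (let k = 0; k < 2 * n; k++) {
    const angle = (Math.PI / n) * k;                  // half-edge step
    const r = k % 2 === 0 ? midRadius : valleyRadius; // alternate radii
    pts.push({ x: cx + r * Math.cos(angle), y: cy + r * Math.sin(angle) });
  }
  return pts;
}
```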
The Weave Illusion
The woven ribbon effect is created through two drawing passes of the exact same geometry.
The first pass uses a thick black stroke to establish the structural band. The second pass uses a thinner white stroke on top of it. This layering creates the illusion of interlacing.
There is no masking, depth simulation, or z-index manipulation. The woven effect emerges purely from stroke layering. I wanted the illusion to remain structurally honest and consistent with the logic of the system.
Interface
The interface is intentionally minimal. It includes a morph slider, a thickness slider, a hidden grid toggle, and simple keyboard controls to pause or save the canvas.
The morph slider controls how deeply the valleys cut inward. At a value of 50, the star sits in a balanced classical configuration. Moving away from this midpoint exaggerates or compresses the geometry, revealing how sensitive the form is to proportional change.
The interface supports exploration, but it does not overpower the geometry. The system remains the focus.
Video Documentation
what I intended to print at the cat
Reflection
This project shifted how I understand Islamic ornament. The 8-fold star is not a symbol to be drawn; it is a structural outcome. Working through the math made me realize that the beauty of girih lies in constraint. The system is strict, but the visual outcomes feel expansive.
What works:
• Immediate feedback through sliders • Conceptual clarity via grid toggle • Subtle motion that remains architectural
What I would develop further:
• Introduce color logic tied to oscillation • Allow zooming and panning • Experiment with density variation • Explore extrusion into 3D space
This project feels like the beginning of a larger investigation into computational ornament and inherited geometry.
References
Conceptual Influences • Islamic girih tiling systems • Archimedean {8,8,4} tiling • Khatam geometric construction
Technical Resources • p5.js documentation • The Nature of Code, Daniel Shiffman • Trigonometric construction of regular polygons
AI Disclosure: AI tools were used to refine some code when I encountered errors. All geometric logic and implementation were developed independently.
For the midterm, I created a generative art system that explores the intersection of networks and organic motion. The core concept is artificial life simulation inspired by Craig Reynolds’ Boids, but also using lines between nodes as in Network Theory to generate unique paintbrush patterns.
The design goal was to make a system that feels alive rather than programmed. Rather than directly drawing static shapes, I implemented a set of biologically inspired rules, such as movement, proximity-based interactions, growth, and reproduction, allowing the artwork to emerge from these behaviors over time. The result is a visually rich, evolving network that evokes mycelium-like pathways, flocking organisms, or orbital motion, depending on the selected mode.
The system also includes interactive features that allow the user to manipulate and explore the simulation in real-time:
Movement Modes: Wander, Flock, and Orbit, toggled with M, creating different emergent behaviors.
Color Palettes: Cycle through multiple palettes with C to alter the visual mood.
Growth Control: Nodes reproduce over time, with growth rate adjustable via a slider.
Movement Parameters: Sliders for speed, steering strength, and connection distance allow nuanced control over node behavior.
Mouse Interaction: Click to attract all nodes toward the cursor; press R to repel them.
Fullscreen and Export: Press F to toggle fullscreen, and S to save a PNG snapshot of the current artwork.
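The key controls above can be sketched as a small dispatcher. This is an illustrative reconstruction, not the sketch's actual handler (names like handleKey and paletteCount are assumptions); writing it as a pure function just makes the mode logic easy to follow:

```javascript
// Hypothetical sketch of the key controls described above.
// The real keyPressed() handler in the sketch may be organized differently.
function handleKey(key, state) {
  const next = { ...state };
  if (key === 'M') {
    // cycle through the three movement modes
    next.mode = { wander: 'flock', flock: 'orbit', orbit: 'wander' }[state.mode];
  }
  if (key === 'C') {
    // cycle through the color palettes
    next.palette = (state.palette + 1) % state.paletteCount;
  }
  if (key === 'R') {
    // toggle between attract and repel for the mouse force
    next.repel = !state.repel;
  }
  return next;
}
```

Keeping the state transition separate from drawing makes each toggle trivially testable outside p5.js.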
Implementation Details
Nodes Behavior:
Wander Mode: Nodes move using Perlin noise to simulate organic wandering.
Flock Mode: Nodes align and cohere with nearby nodes, creating emergent flocking patterns.
Orbit Mode: Nodes orbit around the canvas center with circular movement.
Mouse Influence: Nodes respond to attraction or repulsion forces for interactivity.
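For a rough idea of what Flock Mode's align-and-cohere rules look like, here is a minimal sketch using plain objects instead of p5.Vector; the real BioNode class and its weights almost certainly differ:

```javascript
// Illustrative flocking force: alignment (match neighbors' average velocity)
// plus cohesion (steer toward neighbors' center of mass). The 0.05 cohesion
// weight is an assumed value, not the sketch's actual tuning.
function flockForce(node, neighbors, maxForce) {
  if (neighbors.length === 0) return { x: 0, y: 0 };
  let avgVx = 0, avgVy = 0, cx = 0, cy = 0;
  for (const n of neighbors) {
    avgVx += n.vx; avgVy += n.vy; // alignment accumulator
    cx += n.x; cy += n.y;         // cohesion accumulator
  }
  const k = neighbors.length;
  const align = { x: avgVx / k - node.vx, y: avgVy / k - node.vy };
  const cohere = { x: cx / k - node.x, y: cy / k - node.y };
  // blend the two rules and scale by the steering strength
  return {
    x: (align.x + cohere.x * 0.05) * maxForce,
    y: (align.y + cohere.y * 0.05) * maxForce,
  };
}
```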
if (organisms.length < 400 && frameCount % growthSlider.value() === 0) {
  let parent = random(organisms);
  organisms.push(
    new BioNode(
      parent.pos.x + random(-20, 20),
      parent.pos.y + random(-20, 20)
    )
  );
}
I made it so that the number of organisms in the sketch grows to 400 (capped so the sketch doesn't lag) at a rate determined by the user via the slider.
Midterm Progress Milestone
At this point I was facing a problem where, for some reason, all the nodes would cluster at the top-right corner, which left huge parts of the canvas uncovered. I discussed this with Professor Jack, and we agreed it would probably be a good idea to allow the user to influence the sketch with the mouse to get better patterns, which is what I did below.
wander() {
  let n = noise(this.pos.x * 0.01, this.pos.y * 0.01, frameCount * 0.01);
  let angle = map(n, 0, 1, 0, TWO_PI * 2);
  let steer = p5.Vector.fromAngle(angle).mult(steerSlider.value());
  this.applyForce(steer);
}
mouseInfluence() {
  if (mouseIsPressed) {
    let mouse = createVector(mouseX, mouseY);
    let dir = p5.Vector.sub(mouse, this.pos);
    dir.normalize();
    let strength = repelActive ? -0.3 : 0.3;
    this.applyForce(dir.mult(strength));
  }
}
This is how I made the nodes feel alive. These snippets show how I used noise to smoothly randomize movement and steering, but also allow for the user to intervene with external forces to influence the final piece.
for (let i = organisms.length - 1; i >= 0; i--) {
  let o = organisms[i];
  o.update();
  o.display();
  for (let j = i - 1; j >= 0; j--) {
    let other = organisms[j];
    let d = dist(o.pos.x, o.pos.y, other.pos.x, other.pos.y);
    if (d < connectSlider.value()) {
      stroke(o.color, map(d, 0, connectSlider.value(), 100, 0));
      strokeWeight(0.5);
      line(o.pos.x, o.pos.y, other.pos.x, other.pos.y);
    }
  }
}
Because I wanted to try the node-to-node connections I had seen in my Computer Science classes, I used this nested loop to draw a line between nodes that are within a certain distance of each other, depending on the user's choice, which resulted in interesting paintbrush patterns.
orbit() {
  let center = createVector(width / 2, height / 2);
  let dir = p5.Vector.sub(center, this.pos);
  let tangent = createVector(-dir.y, dir.x); // perpendicular to center vector
  tangent.setMag(steerSlider.value());
  this.applyForce(tangent);
}
This is the function the sketch switches to in one of the available modes, which I learned from The Nature of Code book. To orbit, you steer perpendicular to the center: subtract the position vector from the center vector to get the inward direction, then take the perpendicular of that vector as the force to apply.
The interactive simulation provides an engaging experience where users can explore emergent behaviors visually and through direct manipulation. Observing how small changes in parameters affect the system fosters a sense of discovery.
Future Improvements:
Smoother transitions between modes.
Adjustable node size and trail persistence for more visual variety.
Inspirations:
Biological growth patterns, flocking birds, and orbital mechanics.
Creative coding projects on interactive generative art platforms.
AI Disclosure: I, of course, used AI to help me better understand and debug the complex parts of the code, especially flocking. I also asked ChatGPT to suggest the color palettes the user can cycle between by pressing C, since I am not very good at choosing colors. ChatGPT also wrote the HTML parts, such as the sliders and labels, as well as the fullscreen feature.
As crazy as it sounds, a big inspiration for this sketch is a song by a not-very-well-known band called fairtrade narcotics, especially the part that starts around 4:10, as well as this video: Instagram
To toggle modes, press Space to change to Galaxy and press Enter to change to Membrane.
GALACTIC is an interactive particle system built around a single mechanic: pressure. Pressure from the mouse, pressure of formation, and the pressure of holding it together. This sketch is built around the state of mind I had when I first discovered the song in 2022. I played it on repeat during very dark times, and it mended my soul. After every successful moment I had at that time, during my college application period, I would play that specific part of the song and it would lift me to galactic levels. The sketch has three modes. The charging mode resembles when I put a lot of effort into something and it eventually works out, which is represented by the explosion. The second state illustrates discipline by forming the particles into a galaxy. The last is Membrane, which represents warmth and support from all my loved ones.
Before writing a single line, I sketched the architecture on paper. The system has three major layers of responsibility: the global state (are we holding? are we exploding? what's the charge level?), the particle population (a pool of objects that each manage their own physics), and the VFX (trails, embers, and glow pulses: short-lived visual elements that don't need the full particle class). I accumulated my notes and compiled them into a beautiful pseudocode that I could follow. This is me abusing what I learned taking Data Structures, and honestly, designing the system beforehand works really well for me.
Please check the PDF for the pseudocode, because there's this ANNOYING issue where no matter what the scale of the screenshot I upload is, it's always blurry and small.
I also want to disclose that AI helped me with many mathematical sections within this sketch; I don't think I would have been able to understand the math or get around it on my own. But I promise my usage is not excessive or dependent, and I actually use it to learn, haha.
Anyway, I started writing attributes for the particle class and oh boy, they added up QUICKLY. Here's a snippet. I tried assigning random values manually, but it was very hard to find the sweet spot for everything, so I used some help from AI to assign proper values to those attributes; I tweaked them a little and got really good results.
A useful frame for interactive generative art is the state machine. This sketch has three primary states that produce visually distinct experiences, and the transitions between them are where most of the design work happened.
Idle state: No mouse interaction. 80 particles drift across the canvas on Perlin noise. Each particle has its own noise offset, frequency, and speed. The result is slow, organic, slightly hypnotic. The palette sits in the 228-288 HSB hue range (blue through violet) and particles breathe gently at a rate of 2 cycles per second. This is the sketch’s resting face, and it needs to be beautiful enough to watch on its own.
Charging state: Mouse held. New particles spawn at the edge of the screen and get pulled toward the cursor which acts as an attractor. Spawn rate accelerates from 1/frame to 18/frame as charge approaches maximum. The vortex arms appear past 8% charge: three logarithmic spirals that rotate faster as charge builds, drawn with beginShape()/vertex() and per-vertex stroke colors that fade toward the outer edge. The glow orb grows around the cursor. Screen rumble starts at 60% charge. Particles near the cursor compress and brighten. The hue of nearby particles shifts toward 305, a hot magenta-violet. Every visual element does the same narrative work: energy is accumulating.
Explosion state: Mouse released. This is tiered across four discrete levels (0 through 3) based on charge thresholds at 0.25, 0.55, and 0.85. Tier 0 is a gentle push. Tier 3 is a white-flash, screen-shake, 800-pixel-radius detonation that spawns up to 70 child particles from split candidates nearest the blast origin. Each particle in blast range gets a force vector calculated from distance falloff (pow(1 - d/blastRadius, 2)), a random rotation drift, and a behavior assignment weighted by proximity to center and charge level. The explosion duration scales with charge, from 1.4 seconds to 4 seconds. The slowdown at high charge gives full-tier explosions a cinematic quality: the cloud expands, holds, then dissipates.
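The distance falloff mentioned above, pow(1 - d/blastRadius, 2), can be isolated as a tiny helper. The function name and the strength parameter are illustrative, not the sketch's actual code:

```javascript
// Force magnitude for a particle at distance d from the blast origin.
// Quadratic falloff: full strength at the center, zero at the blast edge.
function blastForce(d, blastRadius, strength) {
  if (d >= blastRadius) return 0; // outside the blast, no force at all
  const falloff = Math.pow(1 - d / blastRadius, 2); // 1 at center -> 0 at edge
  return strength * falloff;
}
```

The quadratic curve is what makes the explosion feel like it has a hot core: particles near the origin get pushed far harder than a linear falloff would push them.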
The variation space this produces is wide. A quick series of light taps creates a dotted constellation. Holding in one place while moving slightly creates smeared, comet-like trails. A patient full charge, held long enough to feel the rumble, produces a different kind of satisfaction.
The scariest part of this project was two things braided together: performance under additive blending with 700+ particles, and making the multi-behavior explosion feel coherent rather than random.
Additive blending (blendMode(ADD)) is visually spectacular. Overlapping particles bloom into white rather than muddy brown. The cost is real, though: each ellipse composites against everything underneath it. With three ellipses per particle (the outer glow halo, the mid-glow body, and the bright core), plus trail objects, plus embers, a naive implementation at 700 particles hits framerate problems fast. The risk was a beautiful system running at 20fps. I ran many optimization passes, but then I migrated to VS Code, which was MUCH smoother. I don't know how smart that is going to be, though, because in the end I have to embed the sketch in the p5.js web editor, so it wouldn't make sense, or any difference, if it runs smoothly on my device but is laggy on the website.
For the behavior system, the risk was that five different particle behaviors during explosion would read as a mess of conflicting physics. To test this, I implemented behaviors one at a time and ran the explosion at full charge with only that behavior active, watching whether each produced a readable visual signature. BEHAVE_COMET needed the highest speed and the lowest drag (0.99 vs the standard 0.93-0.97) to produce visible streaks. BEHAVE_BOOMERANG needed the timer offset: if the return force kicked in immediately, particles just wobbled. They needed to actually leave the origin first. BEHAVE_FLUTTER was the most unpredictable and required the dampening multiplier (vx *= 0.985) to prevent runaway acceleration from the oscillating force.
The assignBehavior() method’s probability table weights behavior by charge level and proximity to blast center. Close-in particles at high charge get COMET and SPIRAL; far particles get RADIAL and FLUTTER. This creates a natural visual structure: a dense bright core of fast-moving comets surrounded by a slowly oscillating outer cloud. The explosion has a center and a periphery, which reads as physically plausible even though the physics are entirely invented.
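A weighted table like the one described might look roughly like this. The behavior names come from the post, but the weights and the function internals are invented for illustration; the real assignBehavior() differs:

```javascript
// Illustrative proximity/charge-weighted behavior pick.
// proximity: 1 = at blast center, 0 = at blast edge; rand in [0, 1).
function assignBehavior(proximity, charge, rand) {
  const wComet   = proximity * charge;       // close + high charge -> comets
  const wSpiral  = proximity * charge * 0.8;
  const wRadial  = 1 - proximity;            // far particles -> radial push
  const wFlutter = (1 - proximity) * 0.6;
  const total = wComet + wSpiral + wRadial + wFlutter;
  // roulette-wheel selection over the weights
  let r = rand * total;
  if ((r -= wComet) < 0) return 'COMET';
  if ((r -= wSpiral) < 0) return 'SPIRAL';
  if ((r -= wRadial) < 0) return 'RADIAL';
  return 'FLUTTER';
}
```

The roulette-wheel structure is what lets the "dense comet core, fluttering periphery" pattern emerge without any hard boundary between the two regions.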
But then came another problem: I didn't like the plain black background, so I decided to create a galaxy background. I had a rough idea of how to make it, but I had to do some research.
Guess what: all of those links were useless. I found nothing of help, but I didn't want to give up. So I found this Reddit post, and then this page, and I followed the principles and methods in the article to create something cool.
I tried to upload the GIF of the animated image, but I got this error, so unfortunately I will just upload a screenshot.
This is a wall I ran into. No matter what I did with the background, it either looked ugly or became very laggy. So I went with the vibe that space is by nature a void and light in it is absent; black is the absence of light. Therefore, the background is black. I feel like a good skill for an artist is convincing people that your approach makes sense when you're forced to stick with it.
Then I moved to implementing the galaxy.
The galaxy implementation started with a question I couldn’t immediately answer: how do you turn drifting particles into something that looks like a spiral galaxy without teleporting them there? My first instinct was to assign each particle a slot on a pre-calculated spiral arm and pull it toward that slot. I wrote assignGalaxyTargets(), sorted particles by their angle from center, matched them to sorted target positions, and felt pretty good about it.
function assignGalaxyTargets() {
  let n = particles.length;
  // build target list at fc = 0 (static, for assignment geometry only)
  let targets = [];
  for (let j = 0; j < n; j++) {
    let gi = j * (GALAXY_TOTAL / n);
    let pos = galaxyOuterPos(gi, 0);
    targets.push({ gi, x: pos.x, y: pos.y,
      ang: atan2(pos.y - galaxyCY, pos.x - galaxyCX) });
  }
  // sort particles by current angle from galaxy center
  let sortedP = particles
    .map(p => ({ p, ang: atan2(p.y - galaxyCY, p.x - galaxyCX) }))
    .sort((a, b) => a.ang - b.ang);
  // sort targets by their angle
  targets.sort((a, b) => a.ang - b.ang);
  // assign in matched order → minimal travel distance
  for (let j = 0; j < n; j++) {
    sortedP[j].p.galaxyI = targets[j].gi;
  }
  galaxyAssigned = true;
}
I lied. It looked awful. Particles on the right side of the canvas were getting assigned to slots on the left and crossing the entire screen to get there. The transition looked like someone had scrambled an egg.
The fix was to delete almost all of that code. Instead of pulling particles toward external target positions, I read each particle’s current position every frame, converted it to polar coordinates relative to the galaxy center, and applied two forces directly: a tangential force that spins it into orbit, and a very weak radial spring that nudges it back if it drifts too far inward or outward. Inner particles orbit faster because the tangential speed coefficient scales inversely with radius. Nobody crosses the canvas. Every particle just starts rotating from wherever it already is.
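A minimal sketch of that per-frame polar force, with hypothetical names and coefficients (spinK, springK are stand-ins for whatever the sketch actually uses):

```javascript
// Per-frame galaxy force: a tangential push whose strength scales inversely
// with radius (inner particles orbit faster), plus a weak radial spring that
// nudges the particle back toward its current ring radius.
function galaxyForces(p, cx, cy, targetR, spinK, springK) {
  const dx = p.x - cx, dy = p.y - cy;
  const r = Math.max(Math.hypot(dx, dy), 1); // avoid divide-by-zero at center
  // unit tangent (perpendicular to the radial direction)
  const tx = -dy / r, ty = dx / r;
  const spin = spinK / r;                    // inner = faster
  // weak spring pulling back toward targetR
  const spring = springK * (targetR - r);
  return {
    fx: tx * spin + (dx / r) * spring,
    fy: ty * spin + (dy / r) * spring,
  };
}
```

Because the force is computed from wherever the particle currently is, no particle ever has to cross the canvas to reach a pre-assigned slot; it just starts orbiting in place.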
The glow for galaxy mode uses the same concentric stroke circle method from a reference I found: loop from d=0 to d=width, stroke each circle with brightness mapped from high to zero outward. The alpha uses a power curve so it falls off quickly at the edges. The trick is running galaxyGlowT on a separate lerp from galaxyT. The particles start moving into orbit immediately when you press Space, but the ambient halo breathes in much slower, at 0.0035 per frame vs 0.018 for the particle forces. You get the orbital motion first, then the light catches up.
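The outward-fading alpha described here can be sketched as a power curve. The exponent is an assumed value, not the sketch's actual constant:

```javascript
// Alpha for one concentric glow circle at distance d from the galaxy center.
// Brightness maps from peakAlpha at the center to zero at maxR, with a power
// curve so the falloff is fast near the outer edge.
function glowAlpha(d, maxR, peakAlpha, exponent = 3) {
  const t = Math.min(d / maxR, 1); // 0 at center, 1 at the outer edge
  return peakAlpha * Math.pow(1 - t, exponent);
}
```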
The galaxy center follows wherever you release the mouse. This way the galaxy forms where the explosion happens, so the particles wrap around the galaxy center much more neatly instead of the galaxy always sitting at the center of the canvas.
One line in mouseReleased():
galaxyCX = smoothX; galaxyCY = smoothY;
Like, honestly, look how cool this looks now.
The third mode came from a reference sketch by Professor Jack that drew 1024 noise-driven circles around a fixed ring. Each circle's radius came from Perlin noise sampled at a position that loops seamlessly around the ring's circumference without a visible seam (the 999 + cos(angle) * 0.5 trick). The output looks like a breathing cell membrane or a pulsar cross-section.
My first implementation was a direct port: 1024 fixed positions on the ring, circles drawn at each one. It worked but the blob had zero relationship to the particles underneath it. It just floated on top like a decal. Press Enter, blob appears. Press Enter again, blob disappears. The particles had nothing to do with any of it.
The version that actually felt right throws out the fixed ring entirely. Instead of iterating 1024 pre-calculated positions, drawMorphOverlay() iterates over the particle array. Each particle draws one circle centered at its own x, y. The noise seed comes from the particle’s live angle relative to morphCX/CY, so each particle carries a stable but slowly shifting petal radius with it as it moves.
let ang = atan2(p.y - morphCY, p.x - morphCX);
let nX = 999 + cos(ang) * 0.5 + cos(lp * TWO_PI) * 0.5;
let nY = 999 + sin(ang) * 0.5 + sin(lp * TWO_PI) * 0.5;
let r = map(noise(nX, nY, 555), 0, 1, height / 18, height / 2.2);
The rendered circle size scales by mt * p.life * proximity. Proximity is how close the particle sits to the ring. Particles clustered at the ring draw full circles. Particles still traveling inward draw small faint ones. When you activate morph mode, the blob coalesces as particles converge. When you deactivate it, the blob tears apart as particles scatter outward, circles traveling with them. The disintegration happens at the particle level, not as a fading overlay.
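The mt * p.life * proximity scaling might be written like this. Here falloffRange is an assumed parameter for how quickly proximity drops off away from the ring; the real sketch's curve may differ:

```javascript
// Rendered circle size for one particle in morph mode.
// baseR: full circle radius; mt: morph blend factor (0..1); life: particle
// life (0..1); distToRing: signed distance from the ring radius.
function morphCircleSize(baseR, mt, life, distToRing, falloffRange) {
  // proximity is 1 exactly on the ring and falls linearly to 0
  const proximity = Math.max(1 - Math.abs(distToRing) / falloffRange, 0);
  return baseR * mt * life * proximity;
}
```

This is why the blob tears apart at the particle level: a scattering particle's distToRing grows, its proximity shrinks, and its circle fades as it travels.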
The core glow stopped rendering at a fixed point too. It now computes the centroid of all particles within 2x the ring radius and renders there. The glow radius scales by count / particles.length, so a sparse ring is dim and a dense ring is bright. The light follows the mass.
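A sketch of that centroid-plus-density computation, with illustrative names:

```javascript
// Centroid of all particles within 2x the ring radius, plus a glow strength
// proportional to how many particles qualified. Names are illustrative.
function glowCenter(particles, cx, cy, ringR) {
  let sx = 0, sy = 0, count = 0;
  for (const p of particles) {
    if (Math.hypot(p.x - cx, p.y - cy) < ringR * 2) {
      sx += p.x; sy += p.y; count++;
    }
  }
  if (count === 0) return null; // no qualifying particles -> no glow
  return { x: sx / count, y: sy / count, strength: count / particles.length };
}
```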
Originally I had Space and Enter both cycling through modes in sequence: bio to galaxy to membrane and back. That made no sense for how I actually wanted to use it. Space now toggles bio and galaxy. Enter toggles bio and membrane. If you’re in galaxy and press Enter, galaxyT starts lerping back to zero while morphT starts lerping toward one simultaneously. The cross-fade between two non-bio modes works automatically because both lerps run every frame regardless of which mode is active.
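The always-running dual lerp can be sketched in a few lines; the rate here is illustrative, not the sketch's actual constants:

```javascript
// Each blend factor chases its own target every frame, regardless of which
// mode is currently active. The cross-fade between galaxy and membrane falls
// out for free: one lerp heads to 0 while the other heads to 1.
function stepTransitions(state, rate = 0.05) {
  state.galaxyT += (state.galaxyTarget - state.galaxyT) * rate;
  state.morphT += (state.morphTarget - state.morphT) * rate;
  return state;
}
```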
morphAssigned = false triggers the angle re-sort on the next frame, which maps current particle positions to evenly spaced ring angles in angular order. Same fix as the galaxy crossing problem: sort particles by angle from center, sort targets by angle, zip them in order. Nobody crosses the ring.
The sketch now has three fully functional modes with smooth bidirectional transitions. The galaxy holds its own as a visual. The membrane is the most satisfying of the three to toggle in and out of because the disintegration is legible. You can watch individual particles drag pieces of the blob away as they scatter.
I still haven’t solved the performance question on lower-end hardware. The membrane mode in particular runs 80 draw calls per particle at full opacity in additive blending, which is not nothing. My next steps are profiling this properly and figuring out whether the p5 web editor deployment is going to survive it. I’m cautiously optimistic but I’ve been cautiously optimistic before.
I faced many challenges throughout this project. I will list a couple below.
The trails of the particles
The explosion not being strong enough
The behavior of pulling particles
Performance issues
The behavior of particles explosion
The Texture of galaxy (scrapped idea)
and honestly I could go on for days.
The thing that worked best for me is that I started very early and made a lot of progress early, so I had time to play around with ideas. The galaxy texture idea from the last post, for example: I had time to implement it and also scrap it because of performance issues. I also tried to write some shader code, but honestly that went horribly, and I didn't want to learn all of it because the risk margin was high: say I did learn it and spent days trying to perfect it, only to end up scrapping the idea. I also didn't want to generate the whole shader with AI; I actually wanted to at least understand what's going on.
The most prominent issue was how the prints were going to look, as I didn't know how to integrate the trails beautifully; they looked very odd. I played around with the transparency of the background with many values until I hit the sweet spot. My initial three modes were attract, condense, and explode, but those wouldn't be conveyed well in the prints, so I switched to the modes we have right now.
Reflection
Honestly, the user experience is in a better place than I expected it to be at this stage. The core loop, hold to charge, release to detonate, turned out to be one of those interactions that people understand immediately without any instructions, but I can't say the same about pressing Enter and Space to toggle between modes, haha. I've watched a few people pick it up cold, and within thirty seconds they're already testing how long they can hold before releasing. That's a good sign. When an interaction teaches itself that quickly, you've probably found something worth keeping.
The three modes add a layer of depth that I wasn’t sure would land. Galaxy mode feels the most coherent because the visual logic is obvious: particles orbit a center, a halo breathes outward, the whole thing rotates slowly. Membrane mode is more abstract and I think some people will find it confusing on first contact. The blob emerging from particle convergence reads as intentional once you’ve seen it a few times, but the first time it happens it might just look like a bug. That’s a documentation problem as much as a design problem. A very subtle UI hint, maybe a faint key label in the corner, might do enough work there without breaking the aesthetic.
The transition speeds feel right in galaxy and a little slow in membrane. When you press Enter to leave membrane mode, the blob takes long enough to dissolve that it starts feeling like lag rather than a designed dissolve. I want to tighten the MORPH_LERP value and see if a slightly faster exit reads better while keeping the entrance speed the same. Entering slow, leaving fast, might be the right rhythm for that mode.
Performance is the thing I'm least settled about. On my machine in VS Code the sketch runs clean. The membrane mode specifically concerns me because it runs one draw call per particle per frame in additive blending, and additive blending is expensive in ways that only become obvious at 600 particles, so it's a little slower there.
The one thing I genuinely would love to add is audio. Not full sound design, something minimal. A low frequency hum that rises in pitch as charge builds, a short percussive hit on release scaled to the explosion tier. The sketch is very silent right now and I think sound would close a gap in the experience that visuals alone can’t. The charge accumulation in particular has this tension that wants a corresponding audio texture.
The naming situation I mentioned at the start, Melancholic Bioluminescence sounding like a Spotify playlist, has not resolved itself. If anything, the addition of galaxy mode and the membrane makes the name less accurate. The name now is GALACTIC.
Also, yes, AI helped with a lot of the math again: the Keplerian orbital speed scaling, the seamless noise ring sampling, the proximity weighting in the blob. I understand what all of it does now, though, which I count as a win. I use AI not to do my work, but as a mentor that helps me get to what I want. I am very satisfied with this output: I built the architecture, I built the algorithms, I designed everything beforehand, and when things felt stuck I used AI as my mentor. The part where I used AI the most was filling in a lot of values for things, because I couldn't get values that felt nice. Here's an example below.
For my midterm generative art system, I am developing three different design directions. This post focuses on Design 1, which explores Islamic geometric structure through sinusoidal motion.
This design is built around a 10-point star (decagram) arranged in a staggered grid. Instead of treating the geometry as static ornament, I animate it using a sine wave so each star opens and closes over time.
The goal is to reinterpret Islamic geometry in a contemporary, computational way. I’m not copying historical patterns directly. I’m rebuilding their mathematical logic and making them dynamic.
Core System
Each star is constructed using polar coordinates and radial symmetry. The opening movement is controlled by a sine wave:
const open01 = 0.5 + 0.5 * sin(t * 1.8 + phase);
Each grid cell has a slightly different phase offset, so the stars don’t all open at once. This creates a ripple across the surface instead of uniform motion.
I also applied easing to soften the movement so it feels less mechanical and more architectural.
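Combining the sine oscillation with easing might look like this; smoothstep is an assumed easing choice, not necessarily the one the sketch uses:

```javascript
// Star opening amount in [0, 1]: the raw sine oscillation from the post,
// softened with a smoothstep easing so the motion lingers at the extremes
// instead of reversing mechanically.
function starOpen(t, phase) {
  const open01 = 0.5 + 0.5 * Math.sin(t * 1.8 + phase); // raw oscillation
  return open01 * open01 * (3 - 2 * open01);            // smoothstep easing
}
```

Feeding each grid cell a different phase gives the ripple; the easing only reshapes how each individual star accelerates and decelerates.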
The system integrates:
Trigonometry and polar coordinates
Grid logic with staggered rows
Sinusoidal animation
Easing functions
Interactive controls (pause and save)
Pressing Space pauses the animation. Pressing S exports a high-resolution frame, which allows me to capture specific moments for print.
Visual States
Although this is one generative system, it produces multiple distinct visual states:
Fully closed stars: dense and compact
Mid-open stars: balanced and structured
Fully expanded stars: light and porous
Ripple states: different areas opening at different times
These states will help me select compositions for printing.
Challenges and Next Steps
The main challenge has been controlling the amount of spread when the stars open. Too much expansion causes the geometry to lose its structural clarity. Finding that balance has been key.
Moving forward, I plan to:
Refine line weight variations
Experiment with subtle color variations
Test alternate grid densities
Develop Design 2 and Design 3 as distinct explorations
This first design establishes the structural and mathematical foundation of the project. The next two designs will push the system in different conceptual and visual directions.