For my final project, I aimed to capture the profound interplay between energy, emotion, and human existence, drawing inspiration from Antony Gormley’s exploration of quantum physics. Gormley’s idea that we are not merely particles but waves resonates deeply with me. It reflects the fluid, ever-changing nature of our emotions and experiences, which form the basis of this project.
This project visualizes the human form and its emotional states as both wave-like entities and particles, setting the two in deliberate contrast. The waves reflect the fluidity of human emotions, while the particles emphasize the discrete, tangible aspects of existence. Together, they offer an abstract interpretation of the quantum perspective.
By embracing infinite possibilities through motion, interaction, and technology, my goal is to immerse users in a digital environment that mirrors the ebb and flow of emotions, resonating with the beauty and complexity of our quantum selves. Blending art and technology, I create a unique experience that integrates body tracking, fluid motion, and real-time interactivity. Through the medium of code, I reinterpret Gormley’s concepts, building an interactive visual world where users can perceive their presence as both waves and particles in constant motion.
Embedded Sketch:
Interaction Methodology
The interaction in this project is designed to feel intuitive and immersive, connecting the user’s movements to the behavior of particles and lines in real time. By using TensorFlow.js and ml5.js for body tracking, I’ve created a system that lets users explore and influence the visualization naturally.
Pose Detection
The program tracks key points on the body, like shoulders, elbows, wrists, hips, and knees, using TensorFlow’s MoveNet model via ml5.js. These points form a digital skeleton that moves with the user and acts as a guide for the particles. It’s simple: your body’s movements shape the world on the screen.
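For anyone curious how that setup looks in p5.js, here is a minimal sketch assuming the ml5.js v1 bodyPose API; function names like gotPoses are illustrative, not my exact project code:
let video, bodyPose;
let poses = [];
function preload() {
  bodyPose = ml5.bodyPose("MoveNet"); // load the MoveNet model
}
function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  bodyPose.detectStart(video, gotPoses); // run detection on the webcam feed continuously
}
function gotPoses(results) {
  poses = results; // each pose holds keypoints with x, y, and a confidence score
}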
Movement Capture
The webcam does all the work of capturing your motion. The system reacts in real time, mapping your movements to the particles and flow field. For example (a hypothetical version of this mapping is sketched after the list):
Raise your arm, and the particles scatter like ripples on water.
Move closer to the camera, and they gather around you like a magnetic field.
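To give a rough idea of how such a reaction can be wired up, here is an illustrative stand-alone sketch (not my project code) in which a dot is pushed away whenever the mouse, standing in for a tracked wrist, comes within range:
let p;
function setup() {
  createCanvas(640, 480);
  p = createVector(width / 2, height / 2);
}
function draw() {
  background(0);
  let wrist = createVector(mouseX, mouseY); // stand-in for a MoveNet keypoint
  let d = p5.Vector.dist(p, wrist);
  if (d < 100) {
    let push = p5.Vector.sub(p, wrist);     // direction away from the "wrist"
    push.setMag(map(d, 0, 100, 3, 0));      // stronger push the closer it gets
    p.add(push);
  }
  fill(255);
  noStroke();
  circle(p.x, p.y, 8);
}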
Timed Particle Phases
Instead of toggling modes through physical gestures like stepping closer or stepping back (which felt a bit clunky during testing), I added timed transitions: once you press “Start,” the system runs automatically, and every 10 seconds the particles change behavior (a minimal timer sketch follows the two modes below).
Wave Mode: The particles move in smooth, flowing patterns, like ripples responding to your gestures.
Particle Mode: Your body is represented as a single particle, interacting with the others around it to highlight the contrast between wave and particle states.
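Here is a minimal sketch of how such a 10-second timer can work in p5.js, assuming a single global mode flag; the names are illustrative:
let waveMode = true;
let lastSwitch = 0;
const PHASE_MS = 10000; // 10 seconds per phase
function setup() {
  createCanvas(640, 480);
}
function draw() {
  background(0);
  if (millis() - lastSwitch > PHASE_MS) {
    waveMode = !waveMode;  // alternate between wave and particle behavior
    lastSwitch = millis(); // restart the timer
  }
  fill(255);
  text(waveMode ? "Wave Mode" : "Particle Mode", 20, 20);
}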
Slider and Color Picker Interactions
Slider: Adjusts the size of the particles, letting the user influence how the particles behave (larger particles can dominate the field, smaller ones spread out).
Color Picker: Lets the user change the color of the particles, providing visual customization.
These controls allow users to fine-tune the appearance and behavior of the particle field while their movements continue to guide it.
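For reference, a bare-bones version of these two controls in p5.js could look like this; the ranges and defaults are placeholders, not necessarily the values I used:
let sizeSlider, colorPicker;
function setup() {
  createCanvas(640, 480);
  sizeSlider = createSlider(1, 10, 2);        // min, max, starting particle size
  colorPicker = createColorPicker("#ffffff"); // starting particle color
}
function draw() {
  background(0, 40); // translucent background leaves trails
  noStroke();
  fill(colorPicker.color());
  // each particle would be drawn at the slider-controlled size;
  // here a single dot following the mouse stands in for the field
  circle(mouseX, mouseY, sizeSlider.value() * 2);
}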
The Experience
It’s pretty simple:
Hit “Start,” and the body tracking kicks in. You’ll see a skeleton of your movements on the screen, surrounded by particles.
Move around, wave your arms, step closer or farther—you’ll see how your actions affect the particles in real time.
Every 10 seconds, the visualization shifts between the flowing waves and particle-based interactions, giving you a chance to experience both states without interruption.
The goal is to make the whole experience feel effortless and engaging, showing the contrast between wave-like fluidity and particle-based structure while keeping the interaction playful and accessible.
Code I’m Proud Of:
I’m proud of this code because it combines multiple elements like motion, fluid dynamics, and pose detection into a cohesive interactive experience. It integrates advanced libraries and logic, showcasing creativity in designing vibrant visual effects while ensuring smooth transitions between states. This project reflects my ability to merge technical complexity with artistic expression.
function renderFluid() {
  background(0, 40); // Dim background for a trailing effect
  fill(255, 150); // White color with slight transparency
  textSize(32); // Adjust the size to make it larger
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  textFont(customFont1);
  // Splitting the text into two lines
  let line1 = "We are not mere particles,";
  let line2 = "but whispers of the infinite, drifting through eternity.";
  let yOffset = 10; // Starting Y position
  // Draw the two lines of text
  text(line1, width / 2, yOffset);
  text(line2, width / 2, yOffset + 40);
  // Note: j, oldX, oldY, age, col1, newX, newY, n3, ang, z, rez3, len, and
  // maxAge are sketch-level globals; newPoint() relies on reassigning them.
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1); // Set the stroke color
    fill(col1); // Fill the dot with the same color
    age++;
    // Add random jitter for vibration
    let jitterX = random(-1, 1); // Small horizontal movement
    let jitterY = random(-1, 1); // Small vertical movement
    newX = oldX + jitterX;
    newY = oldY + jitterY;
    // Draw a small dot
    ellipse(newX, newY, 2, 2); // Small dot with width 2, height 2
    // Check if the particle is too old
    if (age > maxAge) {
      newPoint(); // Generate a new starting point
    }
    // Save the updated position and properties
    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew; // Swap arrays
  linesNew = [];
}
function makeLines() {
  background(0, 40);
  fill(255, 150); // White color with slight transparency
  textSize(32); // Adjust the size to make it larger
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  textFont(customFont1);
  // Splitting the text into two lines by breaking the string
  let line1 = "We are made of vibrations";
  let line2 = "and waves, resonating through space.";
  let yOffset = 10; // Starting Y position
  // Draw the two lines of text
  text(line1, width / 2, yOffset);
  text(line2, width / 2, yOffset + 40);
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1);
    age++;
    n3 = noise(oldX * rez3, oldY * rez3, z * rez3) + 0.033;
    ang = map(n3, 0.3, 0.7, 0, PI * 2);
    newX = cos(ang) * len + oldX;
    newY = sin(ang) * len + oldY;
    line(oldX, oldY, newX, newY);
    if (
      ((newX > width || newX < 0) && (newY > height || newY < 0)) ||
      age > maxAge
    ) {
      newPoint();
    }
    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew;
  linesNew = [];
  z += 2;
}
User testing:
For user testing, I asked my friend to use the sketch, and overall, it was a smooth process. I was pleasantly surprised by how intuitive the design turned out to be, as my friend was able to navigate it without much guidance. They gave me some valuable feedback on minor tweaks, but generally, the layout and flow were easy to follow. It was reassuring to see that the key features I focused on resonated well with someone else, which confirmed that the design choices were on the right track. This feedback will help refine the project further and ensure it’s as user-friendly as possible.
IM Showcase:
I absolutely loved the IM Showcase this semester, as I do every semester. It was such a joy to see my friends’ projects come to life in all their glory after witnessing the behind-the-scenes efforts. It was equally exciting to explore the incredible projects from other classes and admire the amazing creativity and talent of NYUAD students.
Challenges:
One challenge I faced was that the sketch wouldn’t function properly if the camera was directly positioned underneath. While I’m not entirely sure why this issue occurred, it might be related to how the sketch interprets depth or perspective, causing distortion in the visual or interaction logic. To address this, I decided to have the sketch automatically switch between two modes. This approach not only resolved the technical issue but also allowed the sketch to maintain a full-screen appearance that was aesthetically pleasing. Without this adjustment, the camera’s placement felt visually unbalanced and less polished.
However, I recognize that this solution reduced user interaction by automating the mode switch. Ideally, I’d like to enhance interactivity in the future by implementing a slider or another control that gives users the ability to switch modes themselves. This would provide a better balance between functionality and user engagement, and it’s something I’ll prioritize as I refine the project further.
Reflection:
Reflecting on this project, I’m incredibly proud of what I’ve created, especially in how I was able to build on Antony Gormley’s work. It was a challenge, but one that I really enjoyed taking on. I had to push myself to understand the intricate details of his style and techniques, and in doing so, I learned so much. From working through technical hurdles to fine-tuning the visual elements, every step felt like a personal achievement. I truly feel like I’ve captured the essence of his work in a way that’s both faithful to his vision and uniquely mine. This project has not only sharpened my technical skills but also given me a deeper appreciation for the craft and the creative process behind it.
For my final project, I want to replicate the feeling evoked by Antony Gormley’s work, particularly the quantum physics concept that we are not made of particles, but of waves. This idea speaks to the ebb and flow of our emotions — how we experience ups and downs, and how our feelings constantly shift and flow like waves. When I came across Gormley’s work, I knew that I wanted to replicate this dynamic energy and motion in my own way, bringing my twist to it through code. I aim to visualize the human form and emotions as fluid, wave-like entities, mirroring the infinite possibilities of quantum existence.
Embedded Sketch:
The code creates an interactive visual experience that features fluid particle movement and dynamic lines. Particles jitter and move across the screen, leaving fading trails and regenerating in new positions as they age. Lines are drawn between points that flow smoothly, and real-time body tracking is used to draw a skeleton based on detected body landmarks. This combination of moving particles, flowing lines, and live body visualization creates an ever-changing and organic display, offering a dynamic and visually engaging experience.
Interaction Methodology:
To create an interactive experience where users influence the flow field particles with their movements, I started by building a skeleton using TensorFlow.js and ml5.js. Leveraging their pre-trained models gave me a foundational body pose detection system: the skeleton exposes the key body points that the camera tracks and that the particles are drawn to. It not only follows those points in real time but also serves as a bridge for manipulating the behavior of the flow field particles based on the user’s movements.
Steps to Implement Interaction:
Pose Detection: I used the pose detection model (MoveNet) from ml5.js in combination with TensorFlow.js. This setup enables the webcam to track key body points such as shoulders, elbows, wrists, hips, and knees. These body points are crucial because they provide the coordinates for each joint, creating a skeleton representation of the user’s body. The skeleton’s structure is essential for detecting specific gestures and movements, which will then influence the flow field.
Movement Capture: The webcam continuously captures the user’s movement in real time. TensorFlow’s MoveNet model processes the webcam feed frame by frame, detecting the position of the user’s body parts and providing their precise coordinates. These coordinates are translated into interactions that affect the flow field. For example, when the user raises an arm, the corresponding body points (such as the shoulder, elbow, and wrist) will influence nearby particles, causing them to move in specific ways.
Flow Field & Particle Interaction: The interaction is centered around two distinct modes, which the user can toggle between:
Flow Field Mode:
In this mode, you control the movement of particles in the environment. Your body’s movements, such as waving your arms or shifting your position, influence how the particles move across the screen. The particles will either be attracted to you or pushed away based on where you are and how you move. The result is a dynamic, fluid motion, as if the particles are reacting to your gestures. You’re shaping the flow of the field by simply moving around in space.
Particle Mode:
In this mode, you become a particle yourself. Instead of just controlling the particles, your body is now represented as a single particle within the field. Your movements directly control the position of your particle. As you move, your particle interacts with the surrounding particles, affecting how they move and react. This mode makes you feel like you’re actually part of the field, interacting with it in a more direct and personal way.
Mode Toggle: A button will be implemented to allow the user to toggle between the two modes. When the user clicks the button, the system will switch from Flow Field Mode to Particle Mode, giving the user control over how they wish to engage with the system. In both modes, the user’s body movements drive how the particles behave, whether influencing the flow field or being represented as a particle within it.
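As a rough sketch, the toggle could be as simple as a p5.js button flipping a global flag; the names here are illustrative, not final code:
let isParticleMode = false;
function setup() {
  createCanvas(640, 480);
  let toggle = createButton("Switch Mode");
  toggle.position(10, 10);
  toggle.mousePressed(() => {
    isParticleMode = !isParticleMode; // flip between Flow Field and Particle Mode
  });
}
function draw() {
  background(0);
  fill(255);
  text(isParticleMode ? "Particle Mode" : "Flow Field Mode", 20, 50);
}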
Code I’m Proud Of:
function renderFluid() {
  background(0, 40); // Dim background for a trailing effect
  fill(255, 150); // White color with slight transparency
  textSize(16); // Adjust the size as needed
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  text("We are not mere particles, but whispers of the infinite, drifting through eternity.", width / 2, 10);
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1); // Set the stroke color
    fill(col1); // Fill the dot with the same color
    age++;
    // Add random jitter for vibration
    let jitterX = random(-1, 1); // Small horizontal movement
    let jitterY = random(-1, 1); // Small vertical movement
    newX = oldX + jitterX;
    newY = oldY + jitterY;
    // Draw a small dot
    ellipse(newX, newY, 2, 2); // Small dot with width 2, height 2
    // Check if the particle is too old
    if (age > maxAge) {
      newPoint(); // Generate a new starting point
    }
    // Save the updated position and properties
    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew; // Swap arrays
  linesNew = [];
}
function makeLines() {
  background(0, 40);
  fill(255, 150); // White color with slight transparency
  textSize(16); // Adjust the size as needed
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  text("We are made of vibrations and waves, resonating through space.", width / 2, 10);
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1);
    age++;
    n3 = noise(oldX * rez3, oldY * rez3, z * rez3) + 0.033;
    ang = map(n3, 0.3, 0.7, 0, PI * 2);
    //ang = n3*PI*2; // no mapping - flows left
    newX = cos(ang) * len + oldX;
    newY = sin(ang) * len + oldY;
    line(oldX, oldY, newX, newY);
    if (
      ((newX > width || newX < 0) && (newY > height || newY < 0)) ||
      age > maxAge
    ) {
      newPoint();
    }
    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew;
  linesNew = [];
  z += 2;
}
function newPoint() {
  openSpace = false;
  age = 0;
  count2 = 0;
  while (openSpace == false && count2 < 100) {
    newX = random(width);
    newY = random(height);
    col = cnv.get(newX, newY);
    col1 = get(newX, newY + hgt2);
    if (col[0] == 255) {
      openSpace = true;
    }
    count2++;
  }
}
function drawSkeleton() {
  cnv.background(0);
  // Draw all the tracked landmark points.
  // MoveNet keypoint order: 0 nose, 1-2 eyes, 3-4 ears, 5-6 shoulders,
  // 7-8 elbows, 9-10 wrists, 11-12 hips, 13-14 knees, 15-16 ankles.
  for (let i = 0; i < poses.length; i++) {
    pose = poses[i];
    // shoulder to wrist
    for (j = 5; j < 9; j++) {
      if (pose.keypoints[j].score > 0.1 && pose.keypoints[j + 2].score > 0.1) {
        partA = pose.keypoints[j];
        partB = pose.keypoints[j + 2];
        cnv.line(partA.x, partA.y, partB.x, partB.y);
        if (show == true) {
          line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
        }
      }
    }
    // hip to foot
    for (j = 11; j < 15; j++) {
      if (pose.keypoints[j].score > 0.1 && pose.keypoints[j + 2].score > 0.1) {
        partA = pose.keypoints[j];
        partB = pose.keypoints[j + 2];
        cnv.line(partA.x, partA.y, partB.x, partB.y);
        if (show == true) {
          line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
        }
      }
    }
    // shoulder to shoulder
    partA = pose.keypoints[5];
    partB = pose.keypoints[6];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // hip to hip
    partA = pose.keypoints[11];
    partB = pose.keypoints[12];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // shoulders to hips
    partA = pose.keypoints[5];
    partB = pose.keypoints[11];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    partA = pose.keypoints[6];
    partB = pose.keypoints[12];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // eyes, ears
    partA = pose.keypoints[1];
    partB = pose.keypoints[2];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    partA = pose.keypoints[3];
    partB = pose.keypoints[4];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // nose to mid shoulders
    partA = pose.keypoints[0];
    partB = pose.keypoints[5];
    partC = pose.keypoints[6];
    if (partA.score > 0.1 && partB.score > 0.1 && partC.score > 0.1) {
      xAvg = (partB.x + partC.x) / 2;
      yAvg = (partB.y + partC.y) / 2;
      cnv.line(partA.x, partA.y, xAvg, yAvg);
      if (show == true) {
        line(partA.x, partA.y + hgt2, xAvg, yAvg + hgt2);
      }
    }
  }
}
renderFluid():
This function creates a visual effect where particles (dots) move and vibrate on the screen. It starts by dimming the background to create a trailing effect, then displays a poetic message at the top of the screen. The main action involves iterating over previously drawn particles, moving them slightly in random directions (adding jitter for a vibrating effect), and drawing small dots at their new positions. If a particle becomes too old, it generates a new starting point. The particles’ positions and attributes (like color and age) are updated in arrays, creating an evolving, fluid motion.
makeLines():
This function generates moving lines, giving the impression of swirling or vibrating patterns. It displays another poetic message and creates lines that move based on Perlin noise (a smooth, continuous randomness). The lines change direction slightly each time, based on a calculated angle, and are drawn between old and new positions. If a line moves off-screen or exceeds its “age,” a new starting point is generated. The result is a dynamic flow of lines that appear to resonate across the screen, influenced by the noise function.
newPoint():
This function picks a new starting point for a particle or line. It samples random positions on the screen (up to 100 attempts) and accepts the first one whose pixel on the skeleton canvas is white, which ties regenerated points back to the drawn figure.
drawSkeleton():
This function visualizes a skeleton-like figure using the landmarks detected by the pose detection model from the webcam feed. It draws lines connecting key points on the body (shoulders, wrists, hips, etc.) to form the skeleton, and the positions update continuously as the pose changes. Any body part detected with a low confidence score is ignored. When the show flag is on, each line is also mirrored onto the main canvas, offset vertically by hgt2, so the figure appears within the full composition.
Future Work:
For future work, I plan to enhance the project by adding music to complement the visuals, with background tunes or sound effects triggered by movement. I also aim to refine the design, improving the layout and color scheme for a more immersive experience, and potentially adding customization options for users.
Additionally, I want to introduce a feature that lets users snap a picture of the live scene, capturing the dynamic motion of particles and body tracking. Users could save or share the image, with options to apply filters or effects before capturing, offering a more personalized touch.
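p5.js already ships with a helper that would cover the basic capture part of this feature; binding it to the “s” key is just one possible choice, not a decision I have made yet:
function keyPressed() {
  if (key === "s") {
    saveCanvas("quantum-snapshot", "png"); // downloads the current frame
  }
}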
For my final project, I want to replicate the feeling evoked by Antony Gormley’s work, particularly the quantum physics concept that we are not made of particles, but of waves. This idea speaks to the ebb and flow of our emotions — how we experience ups and downs, and how our feelings constantly shift and flow like waves. When I came across Gormley’s work, I knew that I wanted to replicate this dynamic energy and motion in my own way, bringing my twist to it through code. I aim to visualize the human form and emotions as fluid, wave-like entities, mirroring the infinite possibilities of quantum existence.
Interaction Methodology:
To create an interaction where users influence flow field particles with their movements, I will use ml5.js and TensorFlow.js for real-time machine learning. These libraries will allow the webcam to track the user’s body movement, and the detected positions (such as arms, legs, and joints) will influence how the flow field particles behave.
Steps to Implement Interaction:
Pose Detection:
Using ml5.js, I will implement pose detection models like MoveNet to track key body points (e.g., shoulders, elbows, wrists, hips) and convert them into coordinates.
Movement Capture:
The webcam will capture the user’s movement in real-time, and MoveNet will process the data frame by frame to track changes in the user’s position.
Particle Interaction:
The user’s proximity and movement will influence the particles. For example:
If the user moves closer, the particles will move toward them.
The direction of body movements (like moving an arm left or right) will control the direction of the flow field, allowing the user to “steer” the particles.
Flow Field Behavior:
The particles will change their behavior based on the user’s gestures and position. For example, raising or lowering the hands could speed up or slow down the flow, while lateral movements could push the particles in specific directions.
The goal is for the flow field to update continuously, with particles moving based on real-time data from the user’s body; a hypothetical mapping is sketched below.
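A hypothetical version of that mapping might look like the snippet below. It assumes the sketch already receives pose results in MoveNet’s keypoint order (index 9 is the left wrist), and names like flowSpeed are placeholders rather than final code:
let flowSpeed = 1;
let flowAngle = 0;
function updateFlowFromPose(pose) {
  let wrist = pose.keypoints[9]; // left wrist in MoveNet's keypoint order
  if (wrist.score > 0.1) {
    flowSpeed = map(wrist.y, height, 0, 0.5, 3);         // raised hand = faster flow
    flowAngle = map(wrist.x, 0, width, -PI / 4, PI / 4); // lateral movement = steering
  }
}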
Libraries Used:
ml5.js for pose detection and movement tracking.
TensorFlow.js for more advanced machine learning tasks, if needed.
Design of Canvas
Interaction Idea 1: Side-by-Side Camera and Sketch View
Concept: In this design, the user will see both their live webcam feed and the flow field sketch on the screen at the same time. The webcam will show their movements, and the particles in the flow field will react in real time to these movements. This approach highlights the connection between the user’s actions and how they influence the flow field, making the interaction more intuitive and visually engaging (a minimal layout sketch follows the lists below).
User Experience Flow:
Webcam Feed: The camera will be shown on one side of the screen (either the left or top half).
Flow Field Display: The flow field, containing the particles, will occupy the other side (right or bottom half).
As the user moves, they can immediately see how their body affects the movement of the particles in the flow field. For example, particles may gather around them, follow their gestures, or change direction based on their movements.
Interaction Design:
The user will control the flow field by using their body, which will be visible in the webcam feed.
The particles will react to the movement of specific body parts, such as arms or legs.
The user can influence the flow by moving closer to or further away from the camera or by making different gestures, which will change the pattern or direction of the wave-like particles.
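To make the split-screen idea concrete, here is a minimal p5.js layout sketch, assuming a 1280×480 canvas split down the middle; drawFlowField() is a hypothetical placeholder, not actual project code:
let video;
function setup() {
  createCanvas(1280, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
}
function draw() {
  background(0);
  image(video, 0, 0, width / 2, height); // webcam feed fills the left half
  push();
  translate(width / 2, 0);               // flow field occupies the right half
  // drawFlowField();                    // hypothetical flow field renderer
  pop();
}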
Interaction Idea 2: Dark Screen with Movement-Based Particle Control
Concept: In this design, the user’s movements will be the primary focus, with no webcam feed visible at first. The screen will be dark, and as the user begins to move, they will start influencing the flow field. This approach keeps the user’s attention solely on how their actions shape the environment, with no visual distractions from their own body.
User Experience Flow:
Initial Dark Screen: The screen starts out black, with no indication of the user’s presence.
Movement Trigger: Once the user starts to move, the flow field will emerge, and the particles will begin to react to the user’s gestures and position.
As the user moves, they’ll feel more engaged, knowing that their actions are directly influencing the particles, but without seeing themselves.
Interaction Design:
The user will only see the flow field, which will respond dynamically to their movement.
The particles will react to the user’s proximity and gestures, such as raising a hand, making the flow field change accordingly.
Base Sketch:
Currently, I have implemented the basic framework for MoveNet, and it’s working really well. To ensure stability and avoid potential issues with updates, I included the machine learning library and the compressed TensorFlow files directly in the project. This way, the setup is self-contained, and I don’t have to rely on external links in the index.html file. The sketch is already capable of detecting body movements, serving as the foundation for allowing users to influence the flow field with their motions.
For this week’s assignment, I was inspired by The Coding Train’s fire and falling sand simulations. I wanted to combine these ideas to create a fluid simulation where each particle behaves individually, like the falling sand, and interacts in a way that mimics fluid dynamics, similar to water ripples. I aimed to create an interactive experience where the particles respond to changes in velocity and density. As I started working on the project, the particles began to resemble smoke, with their movement and fading effect, which was visually interesting. Once I added color, the effect took on a glowing, infrared-like appearance, which inspired the project’s name.
Embedded Sketch:
Code I’m Proud Of:
One of the pieces of code I’m most proud of handles the particle fading effect and the particles’ interaction with one another. I implemented a system where particles gradually fade as they move, similar to smoke dissipating in the air. I also combined elements of fluid dynamics (from The Coding Train) and the falling sand simulation to make the particles interact naturally with each other. The movement of each particle is influenced by its neighbors, which makes the simulation more lifelike. The integration of color that simulates infrared light added a unique visual element that I found rewarding to develop.
// Requires the Fluid class (and the SCALE grid constant it uses) from
// The Coding Train's fluid simulation coding challenge.
let fluid;
let t = 0; // noise offset for the wandering velocity direction
function setup() {
  let canvas = createCanvas(600, 600);
  canvas.style('filter', 'blur(0.5px)');
  frameRate(22);
  fluid = new Fluid(0.2, 0, 0.0000001);
}
function draw() {
  stroke(51);
  strokeWeight(2);
  let cx = int((0.5 * width) / SCALE);
  let cy = int((0.5 * height) / SCALE);
  for (let i = -1; i <= 1; i++) {
    for (let j = -1; j <= 1; j++) {
      fluid.addDensity(cx + i, cy + j, random(50, 150));
    }
  }
  for (let i = 0; i < 2; i++) {
    let angle = noise(t) * TWO_PI * 2;
    let v = p5.Vector.fromAngle(angle);
    v.mult(0.2);
    t += 0.01;
    fluid.addVelocity(cx, cy, v.x, v.y);
  }
  fluid.step();
  fluid.renderD();
}
function mousePressed() {
  let mx = int(mouseX / SCALE);
  let my = int(mouseY / SCALE);
  // Add density where the mouse is clicked
  fluid.addDensity(mx, my, random(100, 200));
  // Add a velocity to the clicked point to make the fluid move in a direction
  let angle = random(TWO_PI);
  let v = p5.Vector.fromAngle(angle);
  v.mult(0.5); // Adjust the force of the movement
  fluid.addVelocity(mx, my, v.x, v.y);
}
Key Features:
Some of the key features of the project include:
Interactive Particle Control: The user can influence the simulation by adding density and velocity to particles using the mouse.
Particle Interaction: Each particle responds to the others, creating a realistic fluid dynamic effect.
Fading Effect: Particles gradually fade out, simulating the natural behavior of smoke or fluid dissipating over time.
Color Simulation: The particles change color as they move, resembling infrared light or heat signatures, adding an extra layer of visual interest.
Reflection/Future Work:
I’m really happy with how the simulation turned out, especially the fluidity of the particles and their fading behavior. It was exciting to replicate the behavior of smoke and heat visually. Moving forward, I’d like to refine the particle interaction further, perhaps adding more complexity to the way particles influence each other. I’d also like to explore different types of particles or reactions, and experiment with creating more detailed textures or patterns within the simulation. Another area for improvement could be performance optimization, as simulations with many particles can become resource-intensive.
Resources:
The most helpful resources for this project were The Coding Train’s tutorials on fire and sand simulations, which served as a foundation for understanding how to simulate particle behavior. These resources helped me understand the technical aspects of simulating fluid-like behavior in a creative and interactive way.
For my Week 10 assignment, I decided to create a game inspired by football (soccer) and Daniel Shiffman’s Angry Birds slingshot mechanic. I came across a matter.js cloth sketch that moved like a football net, which sparked the idea to bring that into my sketch. I wanted to add a twist: instead of just launching the ball like in Angry Birds, the ball would interact with the goal net, creating a realistic effect. When the ball hits the net, it doesn’t just pass through—it actually impacts the net, which bends and moves as if hit by a real object. This reaction gives the game extra depth and makes scoring feel even more satisfying. Players use the slingshot to aim and score while racing against a 10-second timer. A goal counter keeps track of the number of goals scored, so you can try to beat your best score.
Embedded Sketch
Key Features
Slingshot Mechanic: You use the slingshot to aim and launch the ball, trying to score goals by hitting the target.
Interactive Goal Net: The net moves and reacts when the ball hits it, making the game feel more dynamic and realistic.
Scoring: Each time you score, your goal count increases, and you get a satisfying visual of your progress.
Physics-Based Collision: The ball interacts with both the goal and the net particles, which makes for a more immersive experience.
Code I’m Proud Of:
The piece of code I’m really proud of in this project is the way the ball interacts with the net. I wanted to make it so that when the ball hits the net, the net doesn’t just stay rigid or let the ball pass through like it’s not even there. Instead, it reacts and moves as though it’s actually impacted, which makes the game feel more real and satisfying.
Here’s the simplified idea: whenever the ball hits a certain area, which I’ve defined as the “goal box,” the code checks if the ball is near any of the net’s “particles.” These particles represent different points on the net that can move when hit by the ball. If the ball is close enough, a force is applied to these particles, which makes them move in response, simulating a realistic impact on the net. The ball’s velocity is then reduced to simulate a “catch,” which gives a nice effect of the ball being absorbed by the net.
// Excerpt from the ball-update logic. ball wraps a matter.js body, particles
// is the toxiclibs (Vec2D) net grid, and goalBoxWidth, goalBoxHeight, cols,
// rows, and w are sketch-level globals defined elsewhere.
// Define the area for the goal box (where the net is located)
let goalBoxX = width / 4 - 40;
let goalBoxY = 20;
let goalBoxW = goalBoxWidth;
let goalBoxH = goalBoxHeight;
if (
  ball.body.position.x > goalBoxX &&
  ball.body.position.x < goalBoxX + goalBoxW &&
  ball.body.position.y > goalBoxY &&
  ball.body.position.y < goalBoxY + goalBoxH
) {
  // If the ball hasn't already scored, increase the goal count
  if (!ball.scored) {
    goalCount++;
    ball.scored = true;
  }
  // Check for a "collision" with the net particles
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      let particle = particles[i][j];
      let distance = dist(
        ball.body.position.x,
        ball.body.position.y,
        particle.x,
        particle.y
      );
      if (distance < ball.r + w / 2) {
        // Apply force to the particle to simulate movement
        let forceX = (particle.x - ball.body.position.x) * 2;
        let forceY = (particle.y - ball.body.position.y) * 2;
        particle.addForce(new Vec2D(forceX, forceY));
        // Slow the ball down to simulate a "catch"
        Matter.Body.setVelocity(ball.body, { x: 0, y: 0 });
      }
    }
  }
}
Future Work / Reflections
Creating this game was both fun and challenging, especially figuring out how to make the net realistically react to the ball. Seeing it finally work as I imagined was super rewarding! Moving forward, I’d like to add more levels and maybe obstacles to make scoring harder. I’d also like to play with the positioning of the ball when it regenerates, and maybe try to randomize that; overall, though, I am really proud of what I created.
For my Week 9 assignment, I developed a dynamic flocking system that illustrates the principles of autonomous agents through morphing shapes. This project was inspired by fond memories of teaching my younger siblings about shapes, using games where they would fit various geometric shapes into corresponding holes, like cubes with different cutouts. It’s interesting how my journey started with flocking and boids, but the morphing aspect allowed me to create something truly engaging. While it’s playful, it’s also educational, as I was able to design fundamental shapes like circles, squares, rectangles, and triangles, making it a fun way to learn about geometry!
Embedded Sketch
Key Features
Engaging Flocking Behavior: The particles demonstrate natural flocking behavior, moving cohesively as they navigate towards their target shapes, which creates an immersive experience that captivates the audience.
Dynamic Shape Morphing: As the particles morph into various geometric formations—such as circles, squares, rectangles, and triangles—they provide a visually stunning display that keeps viewers intrigued and encourages exploration of fundamental shapes.
Interactive Learning Experience: Each shape transition introduces an element of surprise, making learning about geometry enjoyable and interactive, as viewers can appreciate the beauty of shapes while observing the playful interactions of the particles.
Piece of Code That I’m Proud Of
While the concept is simple, implementing this code has been challenging due to the complex interactions between particles that mimic flocking behavior, requiring a solid understanding of vector mathematics. It’s tricky to ensure that the particles move toward their target points and morph seamlessly between shapes, and managing the timing for shape transitions while keeping everything smooth, especially with many particles, adds another layer of complexity.
One aspect I’m particularly proud of is the code for the shape transformations, which handles transitions smoothly and captivates my siblings’ attention. This showcases my ability to blend creativity with programming logic, making it a standout feature of my project. Their joy in watching the shapes dance across the screen highlights the impact of this code and its educational value.
let particles = [];
let numParticles = 100;
let shapeIndex = 0; // Current shape index
let shapePoints = []; // Array to hold shape points
let morphingDuration = 3000; // 3 seconds for morphing
let lastUpdateTime = 0; // Last time the shape was updated
function setup() {
  createCanvas(windowWidth, windowHeight);
  // Initialize particles
  for (let i = 0; i < numParticles; i++) {
    particles.push(new Particle(random(width), random(height)));
  }
  // Define shape points for morphing shapes
  shapePoints.push(getCirclePoints());
  shapePoints.push(getSquarePoints());
  shapePoints.push(getTrianglePoints());
  shapePoints.push(getRectanglePoints());
}
function draw() {
  background(51);
  // Get the target shape points based on the shapeIndex
  let targetPoints = shapePoints[shapeIndex];
  // Update each particle to move toward the target points with flocking behavior
  for (let i = 0; i < particles.length; i++) {
    particles[i].update(targetPoints[i]);
    particles[i].show();
  }
  // Check if the time elapsed is greater than the morphing duration
  if (millis() - lastUpdateTime > morphingDuration) {
    if (areParticlesClose(targetPoints)) {
      shapeIndex = (shapeIndex + 1) % shapePoints.length; // Cycle through shapes
      lastUpdateTime = millis(); // Reset the timer
    }
  }
}
function areParticlesClose(targetPoints) {
  for (let i = 0; i < particles.length; i++) {
    let d = dist(
      particles[i].position.x,
      particles[i].position.y,
      targetPoints[i].x,
      targetPoints[i].y
    );
    if (d > 10) return false; // Allow some tolerance
  }
  return true; // All particles are close to their target points
}
// Functions to generate shape points omitted for brevity...
class Particle {
  constructor(x, y) {
    this.position = createVector(x, y);
    this.velocity = createVector(random(-1, 1), random(-1, 1));
    this.acceleration = createVector();
    this.size = 8;
    this.colors = [
      color(255, 0, 0),
      color(0, 255, 0),
      color(0, 0, 255),
      color(255, 255, 0),
    ]; // Different colors
    this.color = this.colors[shapeIndex]; // Set color based on shape index
  }
  update(target) {
    // Calculate flocking forces
    let cohesionForce = this.cohesion();
    let separationForce = this.separation();
    this.acceleration.add(cohesionForce);
    this.acceleration.add(separationForce);
    // Move towards target shape point
    let desired = p5.Vector.sub(target, this.position);
    desired.setMag(2); // Set maximum speed
    let steering = p5.Vector.sub(desired, this.velocity);
    steering.limit(0.5); // Increase steering limit for faster response
    this.acceleration.add(steering);
    // Update velocity and position
    this.velocity.add(this.acceleration);
    this.velocity.limit(4); // Limit maximum velocity
    this.position.add(this.velocity);
    this.acceleration.mult(0); // Reset acceleration
    // Update color based on shape index
    this.color = this.colors[shapeIndex];
  }
  show() {
    fill(this.color);
    noStroke();
    ellipse(this.position.x, this.position.y, this.size);
  }
  // Cohesion and separation functions omitted for brevity...
}
Future Work and Improvements
Moving forward, I plan to introduce more complex geometric shapes to deepen the learning experience and challenge understanding. Additionally, I aim to enhance interactivity by incorporating elements that allow users to manipulate shapes and observe immediate changes, fostering a hands-on learning approach. To further reinforce their learning in an engaging manner, I will implement educational features such as quizzes or prompts that encourage users to identify shapes or predict transformations.
For my week 8 project, I drew inspiration from the iconic “Jaws” movie. The basic idea was to recreate a virtual ocean where sharks roam and fish (or particles) swim around, constantly trying to avoid becoming shark food! The shark wanders around the canvas, much like how predators move unpredictably in real life, while the particles represent smaller fish, using a simple fleeing mechanism to stay out of the shark’s way.
I wanted to build a simple but dynamic scene that felt alive, where the fish-like particles would gracefully move around but change their behavior when a shark approached. The scene gives a sense of interaction between predator and prey, bringing the ocean to life.
Embedded Sketch:
Key Features:
Wandering Sharks: The shark doesn’t just move in straight lines. Instead, it “wanders” around the screen, changing direction smoothly using noise-based movement. This gives the shark a more natural and unpredictable feel, like a real predator hunting its prey.
Fleeing Particles (Fish): The particles (representing fish) swim around randomly, but when the shark gets too close, they immediately “flee.” I programmed them to avoid the shark by detecting when it comes within a certain distance and steering away, adding a subtle but clear predator-prey relationship.
Flowing Ocean Background: To immerse the viewer in an underwater world, I added a flowing ocean background that serves as the setting for this shark-fish interaction. The ocean gives a calming effect while the shark-fish chase adds the excitement.
Code Highlight:
One piece of code I’m especially proud of is the fleeing mechanism for the particles. It’s simple but effective, and it adds a lot of realism to the scene. Here’s a snippet of how it works:
// Note: sharkImage and noiseScale are sketch-level globals loaded and
// defined elsewhere (e.g., in preload and setup).
// Ball class to define wandering shark behavior
class Ball {
  constructor() {
    this.position = createVector(random(width), random(height)); // Random starting position
    this.angle = random(TWO_PI); // Random initial angle for wandering
    this.speed = 2; // Speed of the shark's movement
    this.noiseOffset = random(1000); // Noise offset for smooth wandering
  }
  update() {
    // Use noise for smooth direction change
    this.angle += map(noise(this.noiseOffset), 0, 1, -0.1, 0.1); // Gradual direction change
    this.velocity = createVector(cos(this.angle), sin(this.angle)).mult(this.speed); // Velocity based on angle
    this.position.add(this.velocity); // Move the shark based on velocity
    // Keep the shark within the canvas, reverse direction if it hits edges
    if (this.position.x > width || this.position.x < 0) {
      this.position.x = constrain(this.position.x, 0, width);
      this.angle += PI; // Flip direction
    }
    if (this.position.y > height || this.position.y < 0) {
      this.position.y = constrain(this.position.y, 0, height);
      this.angle += PI; // Flip direction
    }
    this.noiseOffset += 0.01; // Increment noise for smooth wandering motion
  }
  display() {
    // Draw the shark image instead of a shape
    imageMode(CENTER); // Center the shark image on the position
    image(sharkImage, this.position.x, this.position.y, 100, 100); // Adjust size of the shark image
  }
}
// Point class for the particles (fish)
class Point {
  constructor(xTemp, yTemp) {
    this.position = createVector(xTemp, yTemp); // Random initial position for particles
    this.velocity = createVector(0, 0); // Initial velocity
    this.color = color('blue'); // Color of the particles
  }
  // Display the particle
  display() {
    strokeWeight(2); // Set particle thickness
    stroke(this.color); // Set particle color
    point(this.position.x, this.position.y); // Draw particle at its current position
  }
  // Update particle's position based on noise
  update() {
    // Use Perlin noise for smooth, organic movement
    let n = noise(
      this.position.x * noiseScale,
      this.position.y * noiseScale,
      frameCount * noiseScale * noiseScale
    );
    let a = TAU * n; // Convert noise value to an angle
    this.velocity.x = cos(a); // Calculate new velocity based on angle
    this.velocity.y = sin(a); // Calculate new velocity based on angle
    this.position.add(this.velocity); // Update the particle's position
    // Reset particle to a random position if it goes off the screen
    if (!this.onScreen()) {
      this.position.x = random(width);
      this.position.y = random(height);
    }
  }
  // Check if the particle is still on the screen
  onScreen() {
    return (
      this.position.x >= 0 && this.position.x <= width &&
      this.position.y >= 0 && this.position.y <= height
    );
  }
  // Flee behavior: particles avoid the shark (ball)
  avoid(ball) {
    let distance = dist(this.position.x, this.position.y, ball.position.x, ball.position.y); // Calculate distance to the shark
    let radius = 50; // Define the radius within which particles start fleeing
    if (distance < radius) {
      // Calculate direction to flee from the shark
      let flee = p5.Vector.sub(this.position, ball.position);
      flee.normalize(); // Normalize to ensure consistent speed
      flee.mult(2); // Scale the fleeing force
      this.position.add(flee); // Move the particle away from the shark
    }
  }
}
This code makes each particle detect how close the shark is and react accordingly. When the shark enters a certain range, the particles move away, giving the illusion that they are fleeing in fear. It’s simple but adds so much character to the overall interaction.
Future Improvements:
While I’m happy with how the project turned out, there are always ways to make it better. Here are some ideas for future improvements:
More Realistic Shark Movements: While the wandering behavior works, I’d love to add more realism to the shark’s movements—perhaps making it faster when it spots fish or slowing down after a “hunt.”
Fish Grouping Behavior: Right now, the particles move independently, but it would be cool to introduce a “schooling” behavior, where fish move in groups. This could make the escape from the shark even more dynamic.
Improved Visual Effects: Adding more ocean elements like bubbles, light rays, or even underwater sounds would elevate the experience and make it feel more like a deep-sea dive.
When Aaron and Kiori talked about MUJO, I found the whole concept really intriguing. The idea of using the desert as a stage, with visuals projected onto the dunes, felt so different from anything I’d ever seen. It’s based on the Japanese idea of “impermanence,” how everything in life is always changing, just like sand being shaped by the wind. The dancers move on these dunes, and the visuals mimic that constant building up and falling apart, like sand or even thoughts in our minds.
The way they described the performance made me think about how we try to build things in life, but nothing really stays the same. The sound, the movement, and the visuals all come together to show that struggle, which I think makes it even more powerful. The installation part of it, with videos and sound, takes that same feeling and turns it into something you can experience in a gallery.
Even though I haven’t seen it live, I really connected with the concept. It seems like MUJO would be more than just a performance—it would leave you reflecting on how temporary everything is, but in a beautiful way.
The concept of Sankofa, derived from the Akan people of Ghana, embodies the idea that one should remember the past to foster positive progress in the future. The Akan tribe, part of the larger Ashanti (or Asante) group, encapsulates a rich cultural heritage that emphasizes the importance of history and self-identity. The word “Sankofa” translates to “to retrieve,” embodying the proverb, “Se wo were fi na wosankofa a yenkyi,” meaning “it is not taboo to go back and get what you forgot.” This principle highlights that understanding our history is crucial for personal growth and cultural awareness.
This philosophy has greatly inspired my project. Adinkra symbols, with their deep historical roots and intricate patterns, serve as a central element of my work. These symbols carry meanings that far surpass my personal experiences, urging me to look back at my heritage. I aim to recreate these age-old symbols in a modern, interactive format that pays homage to their origins. It’s my way of going back into the past to get what is good and moving forward with it.
Embedded Sketch
Images
Coding Translation and Logic
The core of my sketch is a dynamic grid-based visualization that reflects Adinkra symbols, infused with movement and interaction through music. Here’s how I approached this creative endeavor:
Creating a Grid Layout
I divided the canvas into a grid structure, with each cell serving as a small canvas for a unique Adinkra symbol. I utilized a 2D array to manage the placement of these symbols efficiently.
I defined variables for columns and rows to control the grid structure.
I calculated cellSize for evenly spaced symbols.
let x = currentCol * cellSize;
let y = currentRow * cellSize;
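Putting those pieces together, the grid walk could look roughly like this; cols, rows, and the drawPattern() dispatcher are stand-ins for my actual globals, and the rect() is only a placeholder so the sketch shows something on its own:
let cols = 8;
let rows = 8;
let cellSize;
function setup() {
  createCanvas(640, 640);
  cellSize = width / cols; // evenly spaced cells
}
function draw() {
  background(0);
  stroke(255);
  noFill();
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      push();
      translate(c * cellSize + cellSize / 2, r * cellSize + cellSize / 2); // cell center
      rect(-cellSize / 2, -cellSize / 2, cellSize, cellSize); // placeholder cell outline
      // drawPattern(0, 0, cellSize); // hypothetical per-cell symbol renderer
      pop();
    }
  }
}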
Pattern Assignment
I created an array of Adinkra patterns, randomly assigning them to each grid cell for a vibrant, ever-evolving display.
I looped through the grid, calling drawPattern() to render each symbol.
function initializePatterns() {
  patterns = [
    drawThickVerticalLines,
    drawNestedTriangles,
    drawSymbols,
    drawZebraPrint,
    drawDiamondsInDiamond,
    drawCurves,
    drawThickHorizontalLines,
    drawSquareSpiral,
    drawSpiralTriangles,
    thinLines,
    verticalLines,
    drawXWithDots,
  ];
}
let colorfulPalette = [
  "#fcf3cf", // Light cream
  "#DAF7A6", // Light green
  "#FFC300", // Bright yellow
  "#FF5733", // Bright red
  "#C70039", // Dark red
  "#900C3F", // Dark magenta
];
function initializeColors() {
  colors = [
    color(255, 132, 0), // Vibrant Orange
    color(230, 115, 0), // Darker Orange
    color(191, 87, 0), // Earthy Brownish Orange
    color(140, 70, 20), // Dark Brown
    color(87, 53, 19), // Rich Brown
    color(255, 183, 77), // Light Golden Orange
  ];
}
function drawSpiralTriangles(x, y, size) {
  strokeWeight(2);
  // Check the mode to set the stroke accordingly
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  noFill();
  // Adjust the initial size to ensure the triangle fits inside the cell
  let adjustedSize = size * 0.9; // Reduce size slightly for padding
  // Draw the recursive triangles centered in the cell
  recursiveTriangle(
    x - adjustedSize / 2,
    y - adjustedSize / 2,
    adjustedSize,
    5
  );
}
function recursiveTriangle(x, y, size, depth) {
  if (depth == 0) return;
  // Draw the outer triangle
  let half = size / 2;
  triangle(x, y, x + size, y, x + half, y + size);
  // Recursively draw smaller triangles inside
  recursiveTriangle(x, y, size / 2, depth - 1); // Top-left
  recursiveTriangle(x + half / 2, y + size / 2, size / 2, depth - 1); // Center
  recursiveTriangle(x + half, y, size / 2, depth - 1); // Top-right
}
function drawZigZagPattern(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  noFill();
  let amplitude = size / 4;
  let frequency = size / 5;
  // Draw zigzag shape and add dots
  beginShape();
  for (let i = 0; i <= size; i += frequency) {
    let yOffset = (i / frequency) % 2 == 0 ? -amplitude : amplitude; // Create zigzag pattern
    let currentX = x - size / 2 + i; // Current X position
    let currentY = y + yOffset; // Current Y position
    vertex(currentX, currentY);
    // Calculate the vertices of the triangle formed by the current peak
    // and the two neighboring points at the previous height
    if (i > 0) {
      let previousY = y + ((i / frequency) % 2 == 0 ? amplitude : -amplitude);
      let triangleVertices = [
        createVector(currentX, currentY), // Current peak
        createVector(currentX - frequency / 2, previousY), // Left point
        createVector(currentX + frequency / 2, previousY), // Right point
      ];
      // Calculate the centroid of the triangle
      let centroidX =
        (triangleVertices[0].x + triangleVertices[1].x + triangleVertices[2].x) / 3;
      let centroidY =
        (triangleVertices[0].y + triangleVertices[1].y + triangleVertices[2].y) / 3;
      // Draw a dot at the centroid
      strokeWeight(5); // Set stroke weight for dots
      point(centroidX, centroidY); // Draw the dot
    }
  }
  endShape();
}
function drawXWithDots(x, y, size) {
  noFill();
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  // Draw the two diagonal lines to form the "X"
  line(x - size / 2, y - size / 2, x + size / 2, y + size / 2); // Line from top-left to bottom-right
  line(x - size / 2, y + size / 2, x + size / 2, y - size / 2); // Line from bottom-left to top-right
  // Re-pick the stroke color for the dots (the ellipses are unfilled,
  // so the stroke is what colors them)
  if (currentMode === 0) {
    stroke(random(colors));
  } else if (currentMode === 1) {
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    stroke(random(colors));
  }
  let dotSize = 10; // Size of the dots
  // Calculate positions for the dots in each triangle formed by the "X"
  ellipse(x - size / 4, y - size / 4, dotSize, dotSize); // Top-left triangle
  ellipse(x + size / 4, y - size / 4, dotSize, dotSize); // Top-right triangle
  ellipse(x - size / 4, y + size / 4, dotSize, dotSize); // Bottom-left triangle
  ellipse(x + size / 4, y + size / 4, dotSize, dotSize); // Bottom-right triangle
}
// Thin Vertical Lines
function verticalLines(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(2); // Thin line weight
  let gap = size / 5;
  for (let i = 0; i < 6; i++) {
    line(-size / 2 + gap * i, -size / 2, -size / 2 + gap * i, size / 2);
  }
}
// Thick Vertical Lines
function drawThickVerticalLines(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(10); // Thick line weight
  let gap = size / 5; // 6 evenly spaced lines
  for (let i = 0; i < 6; i++) {
    line(-size / 2 + gap * i, -size / 2, -size / 2 + gap * i, size / 2);
  }
}
// Thick Horizontal Lines
function drawThickHorizontalLines(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(10); // Thick line weight
  let gap = size / 6; // 6 evenly spaced lines
  for (let i = 0; i < 6; i++) {
    line(
      -size / 2,
      -size / 2 + gap * (i + 1),
      size / 2,
      -size / 2 + gap * (i + 1)
    );
  }
}
// Thin Horizontal Lines
function thinLines(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(2); // Thin line weight
  let gap = size / 6; // 6 evenly spaced lines
  for (let i = 0; i < 6; i++) {
    line(
      -size / 2,
      -size / 2 + gap * (i + 1),
      size / 2,
      -size / 2 + gap * (i + 1)
    );
  }
}
// Nested Triangles
function drawNestedTriangles(x, y, size) {
  let triangleSize = size;
  noFill();
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(2);
  for (let i = 0; i < 4; i++) {
    triangle(
      -triangleSize / 2,
      triangleSize / 2,
      triangleSize / 2,
      triangleSize / 2,
      0,
      -triangleSize / 2
    );
    triangleSize *= 0.7;
  }
}
// West African Symbols/Geometric Shapes
function drawSymbols(x, y, size) {
  noFill();
  let symbolSize = size * 0.6;
  strokeWeight(2);
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  // Circle with horizontal/vertical line cross
  ellipse(0, 0, symbolSize, symbolSize);
  line(-symbolSize / 2, 0, symbolSize / 2, 0);
  line(0, -symbolSize / 2, 0, symbolSize / 2);
  // Small triangles within
  for (let i = 0; i < 3; i++) {
    let triSize = symbolSize * (0.3 - i * 0.1);
    triangle(
      0,
      -triSize / 2,
      triSize / 2,
      triSize / 2,
      -triSize / 2,
      triSize / 2
    );
  }
}
// Zebra Print
function drawZebraPrint(x, y, size) {
  if (currentMode === 0) {
    // Regular mode: Use random colors from the regular palette
    stroke(random(colors));
  } else if (currentMode === 1) {
    // Colorful mode: Use colors from the colorfulPalette
    stroke(random(colorfulPalette));
  } else if (currentMode === 2) {
    // Random Size Mode: Use random colors from the regular palette
    stroke(random(colors));
  }
  strokeWeight(2);
  let stripes = 10;
  for (let i = 0; i < stripes; i++) {
    let step = i * (size / stripes);
    line(-size / 2 + step, -size / 2, size / 2 - step, size / 2);
    line(size / 2 - step, -size / 2, -size / 2 + step, size / 2);
  }
}
function drawSquareSpiral(x, y, size) {
if (currentMode === 0) {
// Regular mode: Use random colors from the regular palette
stroke(random(colors));
} else if (currentMode === 1) {
// Colorful mode: Use colors from the colorfulPalette
stroke(random(colorfulPalette));
} else if (currentMode === 2) {
// Random Size Mode: Use random colors from the regular palette
stroke(random(colors));
}
strokeWeight(4); // Set the stroke weight for the spiral
noFill(); // No fill for the square spiral
let step = size / 10; // Define the step size for each movement inward
let currentSize = size; // Start with the full square size
let startX = -currentSize / 2; // Initial X position (top-left corner)
let startY = -currentSize / 2; // Initial Y position (top-left corner)
beginShape(); // Start drawing the shape
// Draw the spiral by progressively shrinking the square and moving inward.
// The left edge of each square is deliberately omitted: the jump from one
// square's bottom-left corner to the next square's top-left corner is what
// links the loops into a continuous spiral.
while (currentSize > step) {
// Top edge
vertex(startX, startY);
vertex(startX + currentSize, startY);
// Right edge
vertex(startX + currentSize, startY + currentSize);
// Bottom edge
vertex(startX, startY + currentSize);
// Move inward for the next iteration
currentSize -= step * 2;
startX += step;
startY += step;
}
endShape();
}
// Diamonds within Diamonds
function drawDiamondsInDiamond(x, y, size) {
let dSize = size;
strokeWeight(2);
if (currentMode === 0) {
// Regular mode: Use random colors from the regular palette
stroke(random(colors));
} else if (currentMode === 1) {
// Colorful mode: Use colors from the colorfulPalette
stroke(random(colorfulPalette));
} else if (currentMode === 2) {
// Random Size Mode: Use random colors from the regular palette
stroke(random(colors));
}
noFill();
for (let i = 0; i < 5; i++) {
beginShape();
vertex(0, -dSize / 2);
vertex(dSize / 2, 0);
vertex(0, dSize / 2);
vertex(-dSize / 2, 0);
endShape(CLOSE);
dSize *= 0.7;
}
}
// Bezier Curves
function drawCurves(x, y, size) {
noFill();
if (currentMode === 0) {
// Regular mode: Use random colors from the regular palette
stroke(random(colors));
} else if (currentMode === 1) {
// Colorful mode: Use colors from the colorfulPalette
stroke(random(colorfulPalette));
} else if (currentMode === 2) {
// Random Size Mode: Use random colors from the regular palette
stroke(random(colors));
}
strokeWeight(3);
for (let i = 0; i < 6; i++) {
bezier(
-size / 2,
-size / 2,
random(-size, size),
random(-size, size),
random(-size, size),
random(-size, size),
size / 2,
size / 2
);
}
}
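All of these pattern functions are collected into the patterns array that the draw loop below indexes into. The post never shows that array, or the colors and colorfulPalette globals the functions sample from, so the following registration is my assumption, inferred from the function names above:
// Assumed globals; the post references these but doesn't define them in this listing.
let colors = ['#000000', '#8b4513', '#ffa500']; // hypothetical regular palette
let colorfulPalette = ['#e63946', '#f4a261', '#2a9d8f', '#264653']; // hypothetical colorful palette
let patterns = [
  verticalLines,
  drawThickVerticalLines,
  drawThickHorizontalLines,
  thinLines,
  drawNestedTriangles,
  drawSymbols,
  drawZebraPrint,
  drawSquareSpiral,
  drawDiamondsInDiamond,
  drawCurves,
];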
Introducing Modes
To enhance user engagement, I implemented four visual modes (Regular, Colorful, Random Size, and a monochrome Alternating Patterns mode), allowing diverse experiences based on user interaction.
I use a currentMode variable (0 through 3) to switch between these visual styles seamlessly.
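The key handler itself isn't shown in this post, so here is a minimal sketch of how the switching might look, assuming keys 1 through 4 select the four modes and that changing modes restarts the grid walk:
function keyPressed() {
  if (key >= '1' && key <= '4') {
    currentMode = int(key) - 1; // '1' -> Regular (0) ... '4' -> Alternating (3)
    currentCol = 0; // restart the cell-by-cell grid walk
    currentRow = 0;
    loop(); // re-enable draw() in case noLoop() had already stopped it
  }
}
The main draw loop then renders the grid one cell per frame: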
function draw() {
// Set up the background for the current mode if needed
if (frameCount === 1 || (currentCol === 0 && currentRow === 0)) {
setupBackground(); // Set up the background for the current mode
}
// Analyze the frequency spectrum
spectrum = fft.analyze();
// Average the bass frequencies for a stronger response
let bass = (spectrum[0] + spectrum[1] + spectrum[2]) / 3;
// Log bass to check its values
console.log(bass);
// Map bass amplitude for size variation and oscillation
let sizeVariation = map(bass, 0, 255, 0.8, 1.2);
let amplitude = map(bass, 0, 255, 0, 1); // Normalize to [0, 1]
// Use sine wave for oscillation based on time
let time = millis() * 0.005; // Control the speed of oscillation
let oscillation = sin(time * TWO_PI) * amplitude * 50; // Scale the oscillation
// Calculate position in the grid
let x = currentCol * cellSize;
let y = currentRow * cellSize;
// Apply the logic depending on currentMode
if (currentMode === 0) {
// Regular mode
if (currentRow % 3 === 0) {
drawZigZagPattern(
x + cellSize / 2,
y + cellSize / 2 + oscillation,
cellSize
); // Draw zigzag on 3rd row with oscillation
} else {
let patternIndex = (currentCol + currentRow * cols) % patterns.length;
drawPattern(x, y + oscillation, patternIndex); // Default pattern with oscillation
}
} else if (currentMode === 1) {
// Colorful mode - only use colors from colorfulPalette
let patternIndex = (currentCol + currentRow * cols) % patterns.length;
drawColorfulPattern(x, y + oscillation, patternIndex); // Apply oscillation
} else if (currentMode === 2) {
// Random Size mode
let patternIndex = (currentCol + currentRow * cols) % patterns.length;
let randomSize = random(0.5, 1.5) * cellSize; // Random size
drawPattern(x, y + oscillation, patternIndex, randomSize); // Apply oscillation
} else if (currentMode === 3) {
// Alternating Patterns
drawAlternatingPatterns(x, y + oscillation, currentCol); // Apply oscillation
}
// Move to the next cell
currentCol++;
if (currentCol >= cols) {
currentCol = 0;
currentRow++;
}
if (currentRow >= rows) {
noLoop(); // Stop the loop when all rows are drawn
}
}
function setupBackground() {
let colorModeChoice = int(random(3)); // Randomize the choice for background color
if (currentMode === 0 || currentMode === 1 || currentMode === 2) {
// Regular, Colorful, and Random Size Modes
if (colorModeChoice === 0) {
background(255); // White background
stroke(0); // Black stroke
} else if (colorModeChoice === 1) {
background(0); // Black background
stroke(255); // White stroke
} else {
background(50, 25, 0); // Dark brown background
stroke(255, 165, 0); // Orange lines
}
} else if (currentMode === 3) {
// Alternating Patterns Mode
if (colorModeChoice === 0) {
background(255); // White background
stroke(0); // Black stroke
} else if (colorModeChoice === 1) {
background(0); // Black background
stroke(255); // White stroke
}
// If colorModeChoice is 2, neither background nor stroke is set,
// so the canvas keeps its previous contents
}
}
// Regular draw pattern function
function drawPattern(x, y, patternIndex, size = cellSize) {
if (patterns[patternIndex]) {
push();
translate(x + size / 2, y + size / 2); // Center the pattern
patterns[patternIndex](0, 0, size); // Draw the pattern using the provided size
pop();
}
}
// Draw patterns in colorful mode using only colors from colorfulPalette
function drawColorfulPattern(x, y, patternIndex) {
let chosenColor = random(colorfulPalette); // Choose a color from colorfulPalette
stroke(chosenColor); // Set stroke color
fill(chosenColor); // Set fill color for the colorful patterns
drawPattern(x, y, patternIndex); // Call the default drawPattern to handle the drawing
}
function drawAlternatingPatterns(x, y, col) {
let patternIndex = col % patterns.length; // Alternate patterns based on column
drawPattern(x, y, patternIndex);
}
Colorful mode with Music:
Music Integration
I integrated the p5.js sound library to create an interactive experience where the patterns respond to music. An FFT (Fast Fourier Transform) analyzes the audio spectrum, and the symbols are offset according to the amplitudes it reports. Once the music starts playing, each symbol shifts up or down by a random amount driven by the bass, which alters the pattern as it is drawn. Each mode therefore has two states: one while the music is playing and one while it is not.
I mapped bass frequencies to create lively, jittering movements.
let bass = (spectrum[0] + spectrum[1] + spectrum[2]) / 3; // average the three lowest FFT bins for a stronger bass response
let xOffset = random(-sizeVariation * 10, sizeVariation * 10); // jitter range scales with the bass-driven sizeVariation
let yOffset = random(-sizeVariation * 10, sizeVariation * 10);
drawPattern(x + xOffset, y + yOffset, patternIndex); // redraw the pattern slightly displaced
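For reference, a minimal sketch of the audio setup this snippet relies on, assuming the p5.sound library is loaded; the file name and the click-to-play trigger are my assumptions, since the post doesn't show this part:
let song, fft;

function preload() {
  song = loadSound('assets/track.mp3'); // hypothetical path
}

function setup() {
  createCanvas(600, 600);
  fft = new p5.FFT(); // p5.sound's spectrum analyzer
  fft.setInput(song); // analyze the loaded track rather than the default input
}

function mousePressed() {
  if (!song.isPlaying()) song.play(); // browsers require a user gesture before audio can start
}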
Achievements and Challenges
Achievements:
One of the achievements I am most proud of in this project is the implementation of multiple visual modes. I designed four distinct modes (Regular, Colorful, Random Size, and the monochrome Alternating Patterns mode) that let users experience the artwork in different ways. Each mode enhances user engagement and provides a unique perspective on the Adinkra symbols, making the project versatile and appealing. The smooth transitions between modes, triggered by key presses, add to the project’s interactivity and keep the viewer engaged.
Challenges:
Despite these successes, the journey was not without its challenges. One significant challenge was achieving a balance between the dynamic interaction of patterns and the constraints of the grid layout. Initially, the grid felt too rigid, making it difficult for the symbols to exhibit the desired randomness in their movements. To overcome this, I experimented with various techniques, such as introducing random offsets and modifying the size of the patterns to create a sense of organic movement within the structured grid. This iterative process taught me the importance of flexibility in design, especially when blending creativity with structured coding.
Another challenge was ensuring that each visual mode felt distinct and engaging. I initially struggled with mode transitions that felt too similar or jarring. By meticulously adjusting the visual elements in each mode—such as color schemes, pattern sizes, and overall aesthetics—I was able to develop a clearer identity for each mode. This process not only enhanced the user experience but also reinforced my understanding of how design choices can significantly impact perception and engagement.
Pen Plotting Translation and Process
The pen plotting process was straightforward yet time-consuming. Due to the dense nature of my project, I had to hide many layers to emphasize the vibrant colors of the patterns. While I didn’t change any code for plotting, I organized each layer by color to ensure a smooth plotting process. Overall, it took around two hours to complete!
Areas for Improvement and Future Work
Looking ahead, I aim to explore how to enhance the music’s impact on pattern dynamics. The grid structure, while beneficial, may limit randomness in movement. I’m excited to experiment with breaking down these constraints for more fluid interactions. Additionally, I dream of translating these patterns into fabric designs—what a fun endeavor that would be!
This past summer, I had the incredible opportunity to spend time in Accra during a June term, and I quickly fell in love with the city. Everything about it—the culture, the people, the food, and even the clothing—felt so familiar, like a vibrant reflection of home. It was as if I had stepped into a living blueprint of my own country. The connections between Accra and home were unmistakable, especially in the patterns woven into the fabric of daily life.
I found myself especially drawn to the Adinkra symbols, which originated with the Ashanti people. These symbols, while simple to the untrained eye, carry deep significance. They appear on fabrics, pottery, and logos, and even on walls as decor, integrated into everyday life with meaning and tradition. Although I cannot fully capture the weight of their history and symbolism, I have been deeply inspired by them.
This project is my way of paying homage to these symbols by integrating their essence into a generative art form that reflects the patterns I admired so much.
Project Approach:
My goal is to create a dynamic, generative pattern that evolves with each click of the mouse. Not only will the patterns themselves change, but I also want the process to unfold visibly, allowing the viewer to see the art as it’s being created. By adjusting the frame rate and tracking oscillations, I hope to give each sketch a sense of fluidity and movement.
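A minimal sketch of that click-to-regenerate idea, reusing the currentCol / currentRow grid walk from the draw loop shown earlier; the details here are my assumption rather than the final code:
function mousePressed() {
  currentCol = 0; // restart the cell-by-cell walk so a fresh pattern generates
  currentRow = 0;
  loop(); // draw() stops itself with noLoop() once the grid is full
}
Pairing this with a lowered frame rate in setup(), for example frameRate(20), keeps the build-up slow enough to watch.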
The key elements of this project will be variations in color and symbol placement, ensuring that each generated pattern feels unique. While some patterns may repeat, my challenge is to make them complement one another seamlessly.
Embedded sketches:
code:
This is a piece of the code I wrote to generate some of the patterns you see in the sketches. Overall, I’m really proud of them so far, but there is a lot I want to add and possibly some things I may remove as well. Currently, I’m working on more patterns and continuing to test them within the code before I produce another sketch.
// Geometric Triangles
function drawGeometricTriangles(x, y, size) {
let numTriangles = 5;
let colorPick = random(colors);
stroke(colorPick);
noFill();
let triangleSize = size;
for (let i = 0; i < numTriangles; i++) {
triangle(-triangleSize / 2, triangleSize / 2, triangleSize / 2, triangleSize / 2, 0, -triangleSize / 2);
triangleSize *= 0.7;
}
}
// Harmonic Motion Style Sine Waves
function drawSineWaves(x, y, size) {
let waveHeight = size / 4;
let frequency = 0.2;
strokeWeight(2);
noFill();
beginShape();
for (let i = 0; i < size; i++) {
let xOffset = i;
let yOffset = sin(i * frequency) * waveHeight;
vertex(xOffset - size / 2, yOffset);
}
endShape();
}
// Nested Triangles
function drawNestedTriangles(x, y, size) {
let triangleSize = size;
noFill();
stroke(random(colors));
for (let i = 0; i < 4; i++) {
triangle(-triangleSize / 2, triangleSize / 2, triangleSize / 2, triangleSize / 2, 0, -triangleSize / 2);
triangleSize *= 0.7;
}
}
// West African Symbols/Geometric Shapes
function drawSymbols(x, y, size) {
noFill();
let symbolSize = size * 0.6;
stroke(random(colors));
// Circle with horizontal/vertical line cross
ellipse(0, 0, symbolSize, symbolSize);
line(-symbolSize / 2, 0, symbolSize / 2, 0);
line(0, -symbolSize / 2, 0, symbolSize / 2);
// Small triangles within
for (let i = 0; i < 3; i++) {
let triSize = symbolSize * (0.3 - i * 0.1);
triangle(0, -triSize / 2, triSize / 2, triSize / 2, -triSize / 2, triSize / 2);
}
}
// Zebra Print
function drawZebraPrint(x, y, size) {
let stripes = 10;
for (let i = 0; i < stripes; i++) {
let step = i * (size / stripes);
line(-size / 2 + step, -size / 2, size / 2 - step, size / 2);
line(size / 2 - step, -size / 2, -size / 2 + step, size / 2);
}
}
// Diamonds within Diamonds
function drawDiamondsInDiamond(x, y, size) {
let dSize = size;
noFill();
for (let i = 0; i < 5; i++) {
beginShape();
vertex(0, -dSize / 2);
vertex(dSize / 2, 0);
vertex(0, dSize / 2);
vertex(-dSize / 2, 0);
endShape(CLOSE);
dSize *= 0.7;
}
}
// Bezier Curves
function drawCurves(x, y, size) {
noFill();
strokeWeight(2);
for (let i = 0; i < 6; i++) {
bezier(-size / 2, -size / 2, random(-size, size), random(-size, size),
random(-size, size), random(-size, size), size / 2, size / 2);
}
}
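These functions all draw around the local origin, so they need a driver that translates to each grid cell before calling them. A minimal sketch of that driver, with canvas size, cell size, and palette as assumptions:
let colors = ['#e63946', '#2a9d8f', '#264653']; // hypothetical palette for the functions above
let cellSize = 100;

function setup() {
  createCanvas(600, 600);
  background(255);
  stroke(0);
  for (let row = 0; row < height / cellSize; row++) {
    for (let col = 0; col < width / cellSize; col++) {
      push();
      translate(col * cellSize + cellSize / 2, row * cellSize + cellSize / 2); // move to the cell center
      let pick = random([drawGeometricTriangles, drawSineWaves, drawZebraPrint]); // random assignment per cell
      pick(0, 0, cellSize * 0.8); // leave a small margin inside each cell
      pop();
    }
  }
}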
Challenges:
The biggest challenge lies in generating patterns that I can feel genuinely proud of, especially given the personal connection I have to the source of inspiration. It’s a bit daunting to try and replicate the visual and symbolic depth of Adinkra, but I believe in trial and error as my guide. I’m starting by setting up a grid-based design in my sketches, generating initial patterns based on what we’ve learned in class. From there, I plan to randomly assign patterns to grid positions. The tricky part is not the overall pattern, but the design of each individual grid symbol—finding the balance between randomness and intentionality.
Risk Management:
To reduce the uncertainty, I simply dove in. I knew I wouldn’t find the right direction without first experimenting with the code. While the process is still evolving, I’m happy with some of the early results. I’m now focusing on refining the color combinations and continuing to develop patterns that are both visually compelling and meaningful. At the moment, I have two sketches—one that’s a modified version of the other—and I’m letting the process guide me, tweaking the code until I find something that truly resonates.