Final Project Draft #2 – Peacefulness in Green

a. The concept

My idea for the final project is to create a relaxing, interactive experience. While this idea could take many forms, I decided to focus on recreating (albeit not one-to-one) the following figure:

A table on a park
Figure 1. The inspiration.

And also, the following figure:

Figure 2. My other inspiration.

The idea is to implement everything that we have learned so far in class, although, due to some concerns shared below, this may be scaled back depending on the quality of the outcome. The following sections of this blog post explain the interaction and goals of the sketch.

b. The current interaction

The core interaction is now implemented: a physics interaction with the glass cup and the seeds that grow into plants. I have not designed other interactions yet, since they would be time-consuming and the current focus should be on finishing the visual details of the canvas.

c. The design

Since I have some image-manipulation skills (though I am honestly no design master), I decided to use Figure 2 as the basis for the background, editing it with the software GIMP:

Figure 3. Working with GIMP for the background.
Figure 4. Working some more for the background using GIMP.

Background audio has also been added to complement the experience. In future iterations, audio feedback for the physics objects will be implemented as well.

d. The difficulties encountered

A quick sketch
Figure 5. The original sketch

Compared with the original sketch, some elements have been added, such as the physics interaction with the seed bottle (number 1). However, generating terrain patterns with cellular automata (number 6) proved difficult, so it was replaced with an “old filter” effect. It is a very subtle effect, but noticeable.
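The filter code itself is not shown in this post, so here is a minimal plain-JavaScript sketch (no p5.js) of one way such a grain-style “old filter” could work; `addGrain` and its parameters are hypothetical names, not the project’s actual code:

```javascript
// Hypothetical stand-in for the "old filter" grain effect described above:
// each pixel's brightness is nudged by a small random amount and clamped
// to the valid 0-255 range, producing subtle film-like noise.
function addGrain(pixels, strength, rng = Math.random) {
  return pixels.map((v) => {
    const jitter = (rng() * 2 - 1) * strength; // random offset in [-strength, strength]
    return Math.min(255, Math.max(0, Math.round(v + jitter)));
  });
}

// Example: with strength 0 the pixels pass through untouched.
const original = [0, 128, 255];
const unchanged = addGrain(original, 0);
```

In a p5.js sketch this would be applied per frame over the background’s pixel array, which is presumably why the effect stays subtle at low strength values.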

Elements 2, 3, 4, and 5 are still pending:

#2 Ants moving from left to right: The ants will move through the dirt randomly, while a flow field keeps them inside an established range.

#3 Winds: Although rendered as a particle system to make the wind visible in the sketch, its main purpose is to apply an additional force to the Matter.js bodies.

#4 Birds flying around: These will be done with the help of a flocking system.

#5 A tree with moving leaves: The moving leaves will be simulated with the help of some forces.

If difficulties are encountered again, the ideas for these elements will be rethought in order to have a complete final product.
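For element #3, Matter.js provides `Matter.Body.applyForce` for pushing bodies around. As a hedged sketch of the idea, and without pulling in the library, the snippet below uses a plain-object stand-in for a body; `windForceFor` and `applyWind` are hypothetical helper names, not Matter.js API:

```javascript
// Sketch of element #3: wind as a small horizontal force on physics bodies.
// A plain object with a `force` property stands in for a Matter.js body.
function windForceFor(strength, gustiness, rng = Math.random) {
  // Base strength plus a random gust component, blowing along +x.
  return { x: strength + rng() * gustiness, y: 0 };
}

function applyWind(body, force) {
  // Matter.js accumulates forces each step; mimic that by summing into body.force.
  body.force.x += force.x;
  body.force.y += force.y;
}

// Example with a deterministic gust of zero:
const body = { force: { x: 0, y: 0 } };
applyWind(body, windForceFor(0.002, 0, () => 0));
// body.force.x is now 0.002
```

With the real library, the accumulated force would instead be passed to `Matter.Body.applyForce(body, body.position, force)` once per frame.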

e. Code I am particularly proud of

Detecting the current position of each ball and then spawning a Perlin walker that moves linearly on the Y axis but randomly on the X axis, while leaving a trail, proved difficult. I had to readjust the positioning to match Matter.js:

Code found in classes/walker.js

  show() {
    // Show trail. Set the style before drawing so every dot gets it.
    push();
    noStroke();
    fill(0, 200, 0);
    for (let i = 0; i < this.history.length; i++) {
      let pos = this.history[i];
      ellipse(pos.x, pos.y, this.w);
      /* fill(
        map(noise(seed1), 0, 1, 0, width),
        map(noise(seed2), 0, 1, 0, height),
        map(noise(seed1 + seed2), 0, 1, 0, seed1 + seed2));
         Commented out since it is not required at the moment; it also looks wrong. */
    }
    pop();
  }

  move() {
    this.history.push(createVector(this.position.x, this.position.y));
    this.position.x = map(
      noise(this.tx),
      -30,
      30,
      this.position.x - 10,
      this.position.x + 10
    );
    /* this.position.y = map(noise(this.ty), 0, 1, this.lastposition, 0); */
    this.tx += 0.1;
  }
}

f. The current progress so far

Here is my second draft, with the current progress, for the final project:

Controls:

Hold left mouse button: grab a body/element.
C key: spawn a circle.

Full-screen version: Go to the Full-screen version

g. Reflection on current progress

This has proven to be a difficult task so far, since we have to implement everything we have seen in class. Moreover, some features I want to add to the sketch require time to get the desired results. For example, the interaction with the seeds took a long time to add, since working with Matter.js requires approaching p5.js rather differently. Likewise, some physics values are still misaligned and do not properly represent the bodies seen in the sketch.

Still, the idea is to try, and if any issues arise, as mentioned, the ideas will be rethought.

h. Used sources

1. Cunningham, Andrew. “Today I Stumbled upon Microsoft’s 4K Rendering of the Windows XP Wallpaper.” Ars Technica, 8 June 2023, arstechnica.com/gadgets/2023/06/i-just-found-out-that-microsoft-made-a-4k-version-of-the-windows-xp-wallpaper/.

2. freesound_community. “Highland Winds FX | Royalty-Free Music.” Pixabay.com, 14 June 2023, pixabay.com/sound-effects/highland-winds-fx-56245/. Accessed 26 Nov. 2024.

3. flanniganable. “10b How to Make a Compound Body Matter.js.” YouTube, 4 Dec. 2021, www.youtube.com/watch?v=DR-iMDhUa-0. Accessed 25 Nov. 2024.

4. The Coding Train. “5.21: Matter.js: Mouse Constraints – the Nature of Code.” YouTube, 9 Mar. 2017, www.youtube.com/watch?v=W-ou_sVlTWk. Accessed 25 Nov. 2024.

5. The Coding Train. “9.7: Drawing Object Trails – P5.Js Tutorial.” YouTube, 9 Feb. 2016, www.youtube.com/watch?v=vqE8DMfOajk. Accessed 26 Nov. 2024.

Week 12 – Final Draft 2 Progress – Shadowing Presence

Concept:

Zach Lieberman’s work

This project is a journey of exploring the concept of digital presence. The idea is for the participant to become physically part of the work. To do this, I explore particle-system interaction with the human body through a web camera, where both the element of interaction and the body are made of particles. This project is inspired by Zach Lieberman’s work, which focuses on making digital interactive environments that invite participants to become performers.

Progress: 

I initially began designing the interface of the project and the basic interaction elements, the buttons. I had some bugs around when a button should appear and when it should disappear, so I reorganized the code to make it work better. As a result, I made the size buttons and the color buttons arrays, which made it easier to apply them to the particle system as the project progressed. I also added functions to handle the buttons for each page: startbutton() and resetbutton(). For the main-page buttons, I added one function to create them and another to remove them, and some buttons, such as the save button, needed their own functions.

After that, I added the particle system, which is inspired by the ASCII Text Images coding challenge by The Coding Train. The particles are initially placed randomly and then move toward their target positions. Each particle’s color is based on brightness, and its size is mapped from brightness to a range of 2 to 7, so darker particles are smaller and brighter ones are bigger.

As for how the particles are drawn: I load the video’s pixels, compute the brightness of each pixel (which occupies four slots in the pixel array) from its RGB values, and then render the particle.

// particle system
function drawParticleSystem() {
  video.loadPixels();
  for (let i = 0; i < particles.length; i++) {
    const x = i % video.width;
    const y = floor(i / video.width);
    const pixelIndex = (x + y * video.width) * 4;

    const r = video.pixels[pixelIndex + 0];
    const g = video.pixels[pixelIndex + 1];
    const b = video.pixels[pixelIndex + 2];
    const brightness = (r + g + b) / 3;

    particles[i].update(brightness);
    particles[i].show();
  }
}
// particle class
class Particle {
  constructor(x, y) {
    this.pos = createVector(random(width), random(height));
    this.target = createVector(x, y);
    this.vel = p5.Vector.random2D().mult(3);

    //size and color of particles
    this.size = 2;
    this.color = color(255);
  }

  update(brightness) {
    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_operator
    //       conditional (ternary) operator
    // if the brightness value is less than 130 (a darker region), the particle's color is set to black; otherwise white
    this.color = brightness < 130 ? color(0) : color(255);

    // Smooth return to target position
    let dir = p5.Vector.sub(this.target, this.pos);
    this.vel.add(dir.mult(0.09));
    this.vel.limit(2);
    this.pos.add(this.vel);

    // Adjust particle size dynamically
    this.size = map(brightness, 0, 255, 2, 7);
  }

  show() {
    noStroke();
    fill(this.color);
    circle(this.pos.x, this.pos.y, this.size / 2);
  }
}

 

Sketch:

 

Future Work:

There is still a lot of work to be done for the final project. I want (1) to work more on the interface of the project in terms of design. Additionally, I want (2) to add another particle system, which will be the second body that interacts with the body in the digital world. These particles will take the color and the size the user picks before starting the experience. These particles will be able to detect motion in the video and follow it. 
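For the planned motion-following particles, frame differencing is one common approach (this is a hedged sketch, not the project’s final code): compare the current grayscale frame with the previous one, flag pixels whose brightness changed beyond a threshold, and follow the centroid of the flagged pixels. `motionCentroid` is a hypothetical helper operating on plain brightness arrays:

```javascript
// Hypothetical motion detection by frame differencing: returns the centroid
// of the pixels whose brightness changed more than `threshold` between two
// grayscale frames (flat arrays, row-major), or null if nothing moved.
function motionCentroid(prev, curr, width, threshold) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) {
      sumX += i % width;               // column of pixel i
      sumY += Math.floor(i / width);   // row of pixel i
      count++;
    }
  }
  return count === 0 ? null : { x: sumX / count, y: sumY / count, count };
}

// Example: a 3x1 frame where only the middle pixel brightened.
const moved = motionCentroid([0, 0, 0], [0, 200, 0], 3, 50);
// moved is { x: 1, y: 0, count: 1 }
```

In the sketch, the second particle system could then steer toward the returned centroid each frame.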

Resources:

The Coding Train. “Coding Challenge 166: ASCII Text Images.” YouTube, 12 Feb. 2022, www.youtube.com/watch?v=55iwMYv8tGI.

Zach Lieberman. zach.li.

Zach Lieberman on Instagram. www.instagram.com/p/BLOchLcBVNz.

 

Vibrations of Being – Final Project Draft 2

CONCEPT:

For my final project, I want to replicate the feeling evoked by Antony Gormley’s work, particularly the quantum physics concept that we are not made of particles, but of waves. This idea speaks to the ebb and flow of our emotions — how we experience ups and downs, and how our feelings constantly shift and flow like waves. When I came across Gormley’s work, I knew that I wanted to replicate this dynamic energy and motion in my own way, bringing my twist to it through code. I aim to visualize the human form and emotions as fluid, wave-like entities, mirroring the infinite possibilities of quantum existence.

Embedded Sketch:

The code creates an interactive visual experience that features fluid particle movement and dynamic lines. Particles jitter and move across the screen, leaving fading trails and regenerating in new positions as they age. Lines are drawn between points that flow smoothly, and real-time body tracking is used to draw a skeleton based on detected body landmarks. This combination of moving particles, flowing lines, and live body visualization creates an ever-changing and organic display, offering a dynamic and visually engaging experience.

INTERACTION METHODOLOGY:

To create an interactive experience where users influence the flow field particles with their movements, I started by building a skeleton using TensorFlow and ml5.js. This skeleton provides all the necessary body points that will be tracked both by the camera and by the particles drawn to them. I began by leveraging TensorFlow and ml5.js’s pre-trained models to establish the foundational body pose detection system. This skeleton not only tracks key points in real time but also serves as a bridge to manipulate the behavior of the flow field particles based on the user’s movements.

Steps to Implement Interaction:

  1. Pose Detection: I used the pose detection model (MoveNet) from ml5.js in combination with TensorFlow.js. This setup enables the webcam to track key body points such as shoulders, elbows, wrists, hips, and knees. These body points are crucial because they provide the coordinates for each joint, creating a skeleton representation of the user’s body. The skeleton’s structure is essential for detecting specific gestures and movements, which will then influence the flow field.
  2. Movement Capture: The webcam continuously captures the user’s movement in real time. TensorFlow’s MoveNet model processes the webcam feed frame by frame, detecting the position of the user’s body parts and providing their precise coordinates. These coordinates are translated into interactions that affect the flow field. For example, when the user raises an arm, the corresponding body points (such as the shoulder, elbow, and wrist) will influence nearby particles, causing them to move in specific ways.
  3. Flow Field & Particle Interaction: The interaction is centered around two distinct modes, which the user can toggle between:
    • Flow Field Mode:
      In this mode, you control the movement of particles in the environment. Your body’s movements, such as waving your arms or shifting your position, influence how the particles move across the screen. The particles will either be attracted to you or pushed away based on where you are and how you move. The result is a dynamic, fluid motion, as if the particles are reacting to your gestures. You’re shaping the flow of the field by simply moving around in space.
    • Particle Mode:
      In this mode, you become a particle yourself. Instead of just controlling the particles, your body is now represented as a single particle within the field. Your movements directly control the position of your particle. As you move, your particle interacts with the surrounding particles, affecting how they move and react. This mode makes you feel like you’re actually part of the field, interacting with it in a more direct and personal way.
  4. Mode Toggle: A button will be implemented to allow the user to toggle between the two modes. When the user clicks the button, the system will switch from Flow Field Mode to Particle Mode, giving the user control over how they wish to engage with the system. In both modes, the user’s body movements drive how the particles behave, whether influencing the flow field or being represented as a particle within it.
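The attraction/repulsion in step 3 can be sketched as follows. This is a hedged, plain-JavaScript illustration, not the project’s code: `steerToward` is a hypothetical helper, and the inverse-square distance falloff is my assumption about one reasonable way to fade the influence:

```javascript
// Hypothetical steering: a detected body keypoint attracts or repels a
// nearby particle. The vector points from the particle toward the keypoint
// (attraction) or away from it (repulsion), scaled by strength and fading
// with the square of the distance.
function steerToward(particle, keypoint, strength, attract = true) {
  const dx = keypoint.x - particle.x;
  const dy = keypoint.y - particle.y;
  const dist = Math.hypot(dx, dy) || 1; // avoid division by zero
  const sign = attract ? 1 : -1;
  return {
    x: (sign * strength * dx) / (dist * dist),
    y: (sign * strength * dy) / (dist * dist),
  };
}

// Example: a particle 10px left of a wrist keypoint is pulled to the right.
const pull = steerToward({ x: 0, y: 0 }, { x: 10, y: 0 }, 100, true);
// pull.x is 10, pull.y is 0
```

Each frame, this force would be summed over the keypoints reported by MoveNet and added to the particle’s velocity.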

Code I’m proud of:

function renderFluid() {
  background(0, 40); // Dim background for a trailing effect
  
   fill(255, 150); // White color with slight transparency
  textSize(16); // Adjust the size as needed
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  text("We are not mere particles, but whispers of the infinite, drifting through eternity.", width / 2, 10);
  
  
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1); // Set the stroke color
    fill(col1); // Fill the dot with the same color
    age++;

    // Add random jitter for vibration
    let jitterX = random(-1, 1); // Small horizontal movement
    let jitterY = random(-1, 1); // Small vertical movement
    newX = oldX + jitterX;
    newY = oldY + jitterY;

    // Draw a small dot
    ellipse(newX, newY, 2, 2); // Small dot with width 2, height 2

    // Check if the particle is too old
    if (age > maxAge) {
      newPoint(); // Generate a new starting point
    }

    // Save the updated position and properties
    linesNew.push(newX, newY, age, col1);
  }

  linesOld = linesNew; // Swap arrays
  linesNew = [];
}


function makeLines() {
  background(0, 40);
  
  fill(255, 150); // White color with slight transparency
  textSize(16); // Adjust the size as needed
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  text("We are made of vibrations and waves, resonating through space.", width / 2, 10);
  
  for (j = 0; j < linesOld.length - 1; j += 4) {
    oldX = linesOld[j];
    oldY = linesOld[j + 1];
    age = linesOld[j + 2];
    col1 = linesOld[j + 3];
    stroke(col1);
    age++;
    n3 = noise(oldX * rez3, oldY * rez3, z * rez3) + 0.033;
    ang = map(n3, 0.3, 0.7, 0, PI * 2);
    //ang = n3*PI*2; // no mapping - flows left
    newX = cos(ang) * len + oldX;
    newY = sin(ang) * len + oldY;
    line(oldX, oldY, newX, newY);
    if (
      ((newX > width || newX < 0) && (newY > height || newY < 0)) ||
      age > maxAge
    ) {
      newPoint();
    }
    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew;
  linesNew = [];
  z += 2;
}

function newPoint() {
  openSpace = false;
  age = 0;
  count2 = 0;
  while (openSpace == false && count2 < 100) {
    newX = random(width);
    newY = random(height);
    col = cnv.get(newX, newY);
    col1 = get(newX, newY + hgt2);
    if (col[0] == 255) {
      openSpace = true;
    }
    count2++;
  }
}

function drawSkeleton() {
  cnv.background(0);
  // Draw all the tracked landmark points
  for (let i = 0; i < poses.length; i++) {
    pose = poses[i];
    // shoulder to wrist
    for (j = 5; j < 9; j++) {
      if (pose.keypoints[j].score > 0.1 && pose.keypoints[j + 2].score > 0.1) {
        partA = pose.keypoints[j];
        partB = pose.keypoints[j + 2];
        cnv.line(partA.x, partA.y, partB.x, partB.y);
        if (show == true) {
          line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
        }
      }
    }
    // hip to foot
    for (j = 11; j < 15; j++) {
      if (pose.keypoints[j].score > 0.1 && pose.keypoints[j + 2].score > 0.1) {
        partA = pose.keypoints[j];
        partB = pose.keypoints[j + 2];
        cnv.line(partA.x, partA.y, partB.x, partB.y);
        if (show == true) {
          line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
        }
      }
    }
    // shoulder to shoulder
    partA = pose.keypoints[5];
    partB = pose.keypoints[6];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // hip to hip
    partA = pose.keypoints[11];
    partB = pose.keypoints[12];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    // shoulders to hips
    partA = pose.keypoints[5];
    partB = pose.keypoints[11];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    partA = pose.keypoints[6];
    partB = pose.keypoints[12];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }

    // eyes, ears
    partA = pose.keypoints[1];
    partB = pose.keypoints[2];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    partA = pose.keypoints[3];
    partB = pose.keypoints[4];
    if (partA.score > 0.1 && partB.score > 0.1) {
      cnv.line(partA.x, partA.y, partB.x, partB.y);
      if (show == true) {
        line(partA.x, partA.y + hgt2, partB.x, partB.y + hgt2);
      }
    }
    //nose to mid shoulders
    partA = pose.keypoints[0];
    partB = pose.keypoints[5];
    partC = pose.keypoints[6];
    if (partA.score > 0.1 && partB.score > 0.1 && partC.score > 0.1) {
      xAvg = (partB.x + partC.x) / 2;
      yAvg = (partB.y + partC.y) / 2;
      cnv.line(partA.x, partA.y, xAvg, yAvg);
      if (show == true) {
        line(partA.x, partA.y + hgt2, xAvg, yAvg + hgt2);
      }
    }
  }
}

renderFluid():

This function creates a visual effect where particles (dots) move and vibrate on the screen. It starts by dimming the background to create a trailing effect, then displays a poetic message at the top of the screen. The main action involves iterating over previously drawn particles, moving them slightly in random directions (adding jitter for a vibrating effect), and drawing small dots at their new positions. If a particle becomes too old, it generates a new starting point. The particles’ positions and attributes (like color and age) are updated in arrays, creating an evolving, fluid motion.

makeLines():

This function generates moving lines, giving the impression of swirling or vibrating patterns. It displays another poetic message and creates lines that move based on Perlin noise (a smooth, continuous randomness). The lines change direction slightly each time, based on a calculated angle, and are drawn between old and new positions. If a line moves off-screen or exceeds its “age,” a new starting point is generated. The result is a dynamic flow of lines that appear to resonate across the screen, influenced by the noise function.

newPoint():

This function creates a new starting point for particles or lines. It looks for a location on the screen that hasn’t been used yet, ensuring that new points are placed in open spaces (areas where the color is white, meaning they are empty).

drawSkeleton():

This function visualizes a human skeleton-like figure using landmarks detected by a pose detection algorithm (likely from a camera feed). It draws lines connecting key points on the body (shoulders, wrists, hips, etc.) to form a skeleton-like structure. The positions of the body parts are updated continuously, and the function adds new lines as the pose changes. If a body part is detected with a low confidence score, it is ignored. The code allows for a 3D-like visualization by slightly adjusting the position in the Y-axis, depending on the show variable.

Future Work:

For future work, I plan to enhance the project by adding music to complement the visuals, with background tunes or sound effects triggered by movement. I also aim to refine the design, improving the layout and color scheme for a more immersive experience, and potentially adding customization options for users.

Additionally, I want to introduce a feature that lets users snap a picture of the live scene, capturing the dynamic motion of particles and body tracking. Users could save or share the image, with options to apply filters or effects before capturing, offering a more personalized touch.

Final Project Draft

Concept

The idea for this project is to create a game similar to collecting balls in a jar; instead of balls, the player collects the letters of a given phrase P, which fall from the top of the screen. To complete the game, the player has to collect all the letters of the phrase; collecting a wrong letter results in losing. The player also has 1 minute to collect all the letters. Most of the basic functionality is done for this draft.
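The win/lose rule can be sketched in a few lines. This is a hypothetical helper illustrating the logic described above, not the game’s actual code:

```javascript
// Hypothetical core rule: the player wins once every letter of the target
// phrase has been collected; catching a letter not in the phrase loses
// immediately. `remaining` is the list of letters still needed.
function collectLetter(remaining, letter) {
  const needed = new Set(remaining);
  if (!needed.has(letter)) return { status: "lost", remaining };
  needed.delete(letter);
  const left = [...needed];
  return { status: left.length === 0 ? "won" : "playing", remaining: left };
}

// Example round against the phrase "go":
let state = collectLetter(["g", "o"], "g"); // still playing, "o" remains
state = collectLetter(state.remaining, "o"); // all letters collected
```

A round timer would simply end the game with a loss if `status` is still `"playing"` after 1 minute.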

Game

Remaining Features

The basic logic of the game is done, but once the player collects all the letters, a second round with a longer phrase does not yet appear; that will be implemented in the final version. The phrases are hard-coded right now, and I will use an LLM to generate the phrases for all rounds in real time. The length and complexity of the phrase will increase every round, but the player will still have 1 minute to collect all the letters. The visuals are also basic right now and will be improved for the final version.

Ripples With Cellular Automata

Concept

This project mimics wave-like patterns that appear and disappear over time, drawing inspiration from the natural phenomena of water ripples. The project employs basic mathematical principles to create ripples on the canvas by representing every grid cell as a point in a system. By enabling users to create waves through clicks, drags, or even randomly generated events, user involvement brings the picture to life.

Each cell’s value is determined by the values of its neighboring cells, and the basic algorithm changes the grid state frame by frame. The simulation is both aesthetically pleasing and scientifically sound because of this behavior, which mimics how ripples dissipate and interact in real life.

Code Review

let cols;
let rows;
let current;
let previous;

let dampening = 0.99; // Controls ripple dissipation
let cellSize = 4; // Size of each cell
let baseStrength = 5000; // Base intensity of interaction
let interactStrength = baseStrength; // Dynamic intensity
let autoRipples = false; // Automatic ripple generation
let mousePressDuration = 0; // Counter for how long the mouse is pressed

function setup() {
  createCanvas(windowWidth, windowHeight);
  initializeGrid();
  textSize(16);
  fill(255);
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  initializeGrid();
}

function initializeGrid() {
  cols = floor(width / cellSize);
  rows = floor(height / cellSize);

  current = new Array(cols).fill(0).map(() => new Array(rows).fill(0));
  previous = new Array(cols).fill(0).map(() => new Array(rows).fill(0));
}

function mouseDragged() {
  mousePressDuration++;
  interactStrength = baseStrength + mousePressDuration * 50; // Increase ripple strength
  addRipple(mouseX, mouseY);
}

function mousePressed() {
  mousePressDuration++;
  interactStrength = baseStrength + mousePressDuration * 50; // Increase ripple strength
  addRipple(mouseX, mouseY);
}

function mouseReleased() {
  mousePressDuration = 0; // Reset the counter when the mouse is released
  interactStrength = baseStrength; // Reset ripple strength
}

function keyPressed() {
  if (key === 'A' || key === 'a') {
    autoRipples = !autoRipples; // Toggle automatic ripples
  } else if (key === 'R' || key === 'r') {
    initializeGrid(); // Reset the grid
  } else if (key === 'W' || key === 'w') {
    dampening = constrain(dampening + 0.01, 0.9, 1); // Increase dampening
  } else if (key === 'S' || key === 's') {
    dampening = constrain(dampening - 0.01, 0.9, 1); // Decrease dampening
  } else if (key === '+' && cellSize < 20) {
    cellSize += 1; // Increase cell size
    initializeGrid();
  } else if (key === '-' && cellSize > 2) {
    cellSize -= 1; // Decrease cell size
    initializeGrid();
  }
}

function addRipple(x, y) {
  let gridX = floor(x / cellSize);
  let gridY = floor(y / cellSize);
  if (gridX > 0 && gridX < cols && gridY > 0 && gridY < rows) {
    previous[gridX][gridY] = interactStrength;
  }
}

function draw() {
  background(0);

  noStroke();

  // Display ripples
  for (let i = 1; i < cols - 1; i++) {
    for (let j = 1; j < rows - 1; j++) {
      // Cellular automata ripple algorithm
      current[i][j] =
        (previous[i - 1][j] +
          previous[i + 1][j] +
          previous[i][j - 1] +
          previous[i][j + 1]) /
          2 -
        current[i][j];

      // Apply dampening to simulate energy dissipation
      current[i][j] *= dampening;

      // Map the current state to a color intensity
      let intensity = map(current[i][j], -interactStrength, interactStrength, 0, 255);

      // Render each cell as a circle with its intensity
      fill(intensity, intensity * 0.8, 255); // Blue-tinted ripple effect
      ellipse(i * cellSize, j * cellSize, cellSize, cellSize);
    }
  }

  // Swap buffers
  let temp = previous;
  previous = current;
  current = temp;

  if (autoRipples && frameCount % 10 === 0) {
    // Add a random ripple every 10 frames
    addRipple(random(width), random(height));
  }

  // Display info text
  displayInfoText();
}

function displayInfoText() {
  fill(255);
  noStroke();
  textAlign(LEFT, TOP);
  text(
    `Controls:
  A - Toggle auto ripples
  R - Reset grid
  W - Increase dampening (slower fade)
  S - Decrease dampening (faster fade)
  + - Increase cell size
  - - Decrease cell size
Click and drag to create ripples.`,
    10,
    10
  );
}

The grid is represented as two 2D arrays that store the current and previous states of the simulation. Each cell’s new value is computed as the average of its neighbors’ values minus its own value from the previous frame; this controls the ripple propagation. I also use a dampening factor that reduces the intensity of the ripples over time, simulating the gradual dissipation of energy.
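To make the update rule concrete, here is a standalone one-step version of the same formula on a tiny grid (plain JavaScript, no p5.js), mirroring the `draw()` loop above:

```javascript
// One step of the ripple rule used in the sketch, on interior cells only:
// new = (left + right + up + down) / 2 - old, then scaled by dampening.
function rippleStep(previous, current, dampening) {
  const next = current.map((row) => row.slice()); // edges stay unchanged
  for (let i = 1; i < previous.length - 1; i++) {
    for (let j = 1; j < previous[i].length - 1; j++) {
      const sum =
        previous[i - 1][j] + previous[i + 1][j] +
        previous[i][j - 1] + previous[i][j + 1];
      next[i][j] = (sum / 2 - current[i][j]) * dampening;
    }
  }
  return next;
}

// A 3x3 grid whose center held 100 last frame: its previous-frame neighbors
// are all 0, so the center flips to (0/2 - 100) * 0.99 = -99. This sign flip
// each step is what makes the cell oscillate like a wave.
const prevGrid = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
const currGrid = [[0, 0, 0], [0, 100, 0], [0, 0, 0]];
const next = rippleStep(prevGrid, currGrid, 0.99);
```

After the step, the sketch swaps the two buffers, so the oscillation propagates outward on subsequent frames.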

Sketch


The sketch has the following user interactions.

  • A or a: Toggles automatic ripples on or off.
  • R or r: Resets the grid, clearing all current ripples.
  • W or w: Increases the dampening factor, making the ripples fade slowly.
  • S or s: Decreases the dampening factor, making the ripples fade faster.
  • +: Increases the cell size, which reduces the number of grid cells but increases their size.
  • -: Decreases the cell size, increasing the grid resolution for finer ripples.

Challenges and Future Improvements

The main challenge in this project is managing large grids, which make the simulation computationally expensive. Achieving a smooth, seamless ripple effect was also somewhat challenging. For future improvements, implementing this in 3D could be quite interesting.

Reference

thecodingtrain.com/challenges/102-2d-water-ripple

 

Week #11

Introduction
Cellular automata are fascinating systems where simple rules applied to cells in a grid lead to complex and often mesmerizing patterns. While 2D cellular automata like Conway’s Game of Life are well-known, extending the concept into 3D opens up a whole new dimension of possibilities—literally! In this project, I used p5.js to create an interactive 3D cellular automaton, combining computational elegance with visual appeal.

The Project

  1. The Grid
    The automaton uses a 3D array to represent the cells. Each cell is a small cube, and the entire grid is visualized in a 3D space using p5.js’s WEBGL mode.
  2. Random Initialization
    The grid starts with a random distribution of alive and dead cells, giving each simulation a unique starting point.
  3. Rule Application
    At each frame, the automaton calculates the next state of every cell based on its neighbors. The updated grid is then displayed.
  4. Interactivity
    Using p5.js’s orbitControl(), users can rotate and zoom into the 3D grid, exploring the automaton’s patterns from different perspectives.

    Code

    let grid, nextGrid;
    let cols = 10, rows = 10, layers = 10; // Grid dimensions
    let cellSize = 20;
    
    function setup() {
      createCanvas(600, 600, WEBGL);
      grid = create3DArray(cols, rows, layers);
      nextGrid = create3DArray(cols, rows, layers);
      randomizeGrid();
    }
    
    function draw() {
      background(30);
      orbitControl(); // Allows rotation and zoom with mouse
      
      // Center the grid
      translate(-cols * cellSize / 2, -rows * cellSize / 2, -layers * cellSize / 2);
      
      // Draw cells
      for (let x = 0; x < cols; x++) {
        for (let y = 0; y < rows; y++) {
          for (let z = 0; z < layers; z++) {
            if (grid[x][y][z] === 1) {
              push();
              translate(x * cellSize, y * cellSize, z * cellSize);
              fill(255);
              noStroke();
              box(cellSize * 0.9); // A slightly smaller cube for spacing
              pop();
            }
          }
        }
      }
      
      updateGrid(); // Update the grid for the next frame
    }
    
    // Create a 3D array
    function create3DArray(cols, rows, layers) {
      let arr = new Array(cols);
      for (let x = 0; x < cols; x++) {
        arr[x] = new Array(rows);
        for (let y = 0; y < rows; y++) {
          arr[x][y] = new Array(layers).fill(0);
        }
      }
      return arr;
    }
    
    // Randomize the initial state of the grid
    function randomizeGrid() {
      for (let x = 0; x < cols; x++) {
        for (let y = 0; y < rows; y++) {
          for (let z = 0; z < layers; z++) {
            grid[x][y][z] = random() > 0.7 ? 1 : 0; // 30% chance of being alive
          }
        }
      }
    }
    
    // Update the grid based on rules
    function updateGrid() {
      for (let x = 0; x < cols; x++) {
        for (let y = 0; y < rows; y++) {
          for (let z = 0; z < layers; z++) {
            let neighbors = countNeighbors(x, y, z);
            if (grid[x][y][z] === 1) {
              // Survival: A live cell stays alive with 4-6 neighbors
              nextGrid[x][y][z] = neighbors >= 4 && neighbors <= 6 ? 1 : 0;
            } else {
              // Birth: A dead cell becomes alive with exactly 5 neighbors
              nextGrid[x][y][z] = neighbors === 5 ? 1 : 0;
            }
          }
        }
      }
      // Swap grids
      let temp = grid;
      grid = nextGrid;
      nextGrid = temp;
    }
    
    // Count the alive neighbors of a cell
    function countNeighbors(x, y, z) {
      let count = 0;
      for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
          for (let dz = -1; dz <= 1; dz++) {
            if (dx === 0 && dy === 0 && dz === 0) continue; // Skip the cell itself
            let nx = (x + dx + cols) % cols;
            let ny = (y + dy + rows) % rows;
            let nz = (z + dz + layers) % layers;
            count += grid[nx][ny][nz];
          }
        }
      }
      return count;
    }
    

    Future Enhancements

    • Custom Rules: Experiment with different neighbor conditions to discover new behaviors.
    • Larger Grids: Scale up the grid size for more complex patterns (optimize for performance).
    • Color Variations: Assign colors based on neighbor count or generation age.
    • Save States: Let users save and reload interesting configurations.
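
The color-variation idea could start from a small pure helper that maps a cell’s live-neighbor count to an RGB triple. This is a sketch of my own: `neighborColor` and its particular mapping are assumptions, not part of the sketch above.

```javascript
// Map a cell's live-neighbor count (0-26 in a 3D Moore neighborhood)
// to an [r, g, b] triple that the draw loop could spread into fill().
function neighborColor(neighbors) {
  const t = neighbors / 26;            // normalize to 0..1
  return [
    Math.round(255 * t),               // more neighbors -> redder
    Math.round(255 * (1 - t)),         // fewer neighbors -> greener
    180,                               // constant blue for a soft tint
  ];
}
```

Calling `fill(...neighborColor(countNeighbors(x, y, z)))` in place of `fill(255)` would tint dense regions red and sparse ones green, with no change to the update logic.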

Idea for Final Project

Introduction
Fractals are captivating geometric patterns that reveal infinite complexity as they scale. These self-repeating designs occur naturally in clouds, plants, and even coastlines. But what if we could interact with these fractals in real-time? That’s the inspiration behind my project: creating fractals that evolve and respond dynamically using p5.js.

Project Overview
The goal of this project is to make fractals interactive, turning them from static patterns into a live, engaging experience. Here’s the core concept:
– Dynamic Complexity: Each mouse press increases the fractal’s depth, unveiling new levels of detail.
– Real-Time Updates: Changes occur instantly, making the fractals feel responsive and alive.
– Immersive Visuals: By pairing the fractals with color changes or subtle animations, users can dive into a visually rich environment.

Fractal Designs
There are several fractal types I’m considering for this project, each offering unique possibilities for interactivity:
1. Sierpinski Triangle: A classic fractal where smaller triangles nest within a larger one, emphasizing geometric symmetry.
2. Koch Snowflake: A fractal that begins as a simple line but transforms into an intricate, snowflake-like shape through recursion.
3. Tree Fractal: Mimicking branching patterns found in nature, this fractal evolves into a dense network of “branches” with each click.
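
To make the Koch idea concrete, here is one way a single iteration could be computed. This is a sketch under my own naming (`kochStep` is not from the project): each segment becomes four, with the apex found by rotating the middle third by 60°.

```javascript
// One Koch-curve iteration: each segment [a, b] becomes four segments
// a--p1, p1--p2, p2--p3, p3--b, where p2 is the apex of an equilateral
// bump on the middle third. In p5.js, the returned points would just be
// connected with vertex() or line().
function kochStep(points) {
  const out = [points[0]];
  for (let i = 0; i < points.length - 1; i++) {
    const [ax, ay] = points[i];
    const [bx, by] = points[i + 1];
    const dx = (bx - ax) / 3;
    const dy = (by - ay) / 3;
    const p1 = [ax + dx, ay + dy];
    const p3 = [ax + 2 * dx, ay + 2 * dy];
    // Apex: rotate the middle third by -60 degrees around p1
    // (negative so the bump points "up" in screen coordinates).
    const c = Math.cos(-Math.PI / 3);
    const s = Math.sin(-Math.PI / 3);
    const p2 = [p1[0] + dx * c - dy * s, p1[1] + dx * s + dy * c];
    out.push(p1, p2, p3, [bx, by]);
  }
  return out;
}
```

Applying `kochStep` repeatedly to the three sides of a triangle yields the snowflake; n segments become 4n per iteration, which is why capping the recursion depth matters.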

Interactivity Features
Here are some ways the fractals will interact with the user:
– Mouse Press: Adjusts the fractal’s recursion depth, allowing the user to control its complexity.
– Mouse Position: Alters aspects like color, size, or rotation based on where the user hovers, creating an immersive effect.
– Animation: Adding subtle movement, like rotating or growing fractals, to make them feel more dynamic.

The idea is to make fractals not only visually appealing but also engaging. With every mouse click, users get to explore a new layer of complexity, making the experience intuitive and playful. Future iterations might include:
– Dynamic Color Schemes: Colors change based on time or user input, adding vibrancy to the fractals.
– Sound Integration: Pairing fractal interactions with sound effects or ambient music for a multisensory experience.
– Multiple Fractals: Allowing users to toggle between different fractal types for variety.
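
As a sketch of the mouse-press idea (the function name and parameters here are my own assumptions, not the project’s code), the tree fractal’s geometry can be computed separately from the drawing, with `depth` being the value a click would increment:

```javascript
// Compute every branch segment of a binary fractal tree. Each branch
// spawns two children, rotated by +/- `spread` and scaled by `shrink`.
function treeSegments(x, y, len, angle, depth, spread = Math.PI / 6, shrink = 0.67) {
  const x2 = x + len * Math.cos(angle);
  const y2 = y + len * Math.sin(angle);
  const segs = [{ x1: x, y1: y, x2, y2 }];
  if (depth > 0) {
    segs.push(...treeSegments(x2, y2, len * shrink, angle - spread, depth - 1, spread, shrink));
    segs.push(...treeSegments(x2, y2, len * shrink, angle + spread, depth - 1, spread, shrink));
  }
  return segs;
}
```

In the p5.js draw loop each segment would be drawn with `line()`, and `mousePressed()` would simply increment `depth`, capped (say at 10) to avoid exponential blow-up, since a depth-d tree has 2^(d+1) − 1 segments.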

Final Draft – Experimenting with Hydra in P5.js

My final project uses a library I used in a previous class: Live Coding. The library is called Hydra, which is used to create visuals within a webpage. While it is intended to be used with live coding, it can also simply be used for its visuals. Normally, variations of visuals are created through the use of musical beats and sounds, but I plan on having the user interact to change these visuals instead.

I have not completely decided what my concept is, nor exactly what I want my program to do, but I knew I wanted to use Hydra in some way, because it can create these trippy and colorful visuals. One idea I had was to split the visuals between TV screens, similar to a Nam June Paik piece. At this point, though, I need to put more time into finalizing the concept of my program. I had to do so much troubleshooting to re-familiarize myself with Hydra and get it working properly in p5.js that I did not have time to work on my concept.

Currently, the program is still bare bones. There is a background canvas with a Hydra visual on it, and a second visual wrapped around a rotating torus. There are some lines on the screen, which I intend to become a grid that makes the visuals appear to be shown through a grid of TVs. There is one bit of interactivity I experimented with: a slider that lets you shift the colors of the background visual, which was there to test the interaction between p5 and Hydra. One problem I know I will run into is adding or removing aspects of the visuals, since I don’t think they can be coded in a modular way; each one is essentially a long chain of methods. Here is what the two visuals look like in code:

h1.osc(6, 0.1, 0.4)
  .modulatePixelate(h1.noise(25, 0.5), 100)
  .out(h1.o0);
h1.render(h1.o0);

h2.src(h2.o2)
  .modulate(h2.voronoi(10), 0.005)
  .blend(h2.osc(4, 0.1, 3).kaleid(30), 0.01)
  .out(h2.o2);
h2.render(h2.o2);


Many of the issues I ran into had to do with being unsure how to use Hydra properly in p5. I had to do some coding within index.html to make it work. To make the visuals more interesting, I tried adding a second Hydra canvas to allow for more visual variety. This took a very long time to troubleshoot, because the two instances of Hydra did not play well together. I had to work with a lot of new things, like p5.Graphics and the JS canvas, to make them work together. The next step for me will be to finalize my concept so I have a clear goal to work towards. I also want to utilize more of what we learned in class, which at this point is not really represented beyond the use of an external library; I think deciding on an aspect to focus on will help me finalize the concept. After this, I can finalize the program and add more user interaction, ideally with the visuals. I am also worried about performance, since I am working in 3D. While it is fine for now, I worry that added complexity will slow the program down.

Digital Bonsai

Design Concept

The Digital Bonsai project explores the intersection of traditional Japanese bonsai art and generative design. While physical bonsai takes years of careful cultivation, this digital interpretation allows instant exploration of organic growth patterns while maintaining the meditative qualities of bonsai shaping.

The artistic intention is to create a space where users can experience the joy of bonsai creation without the time investment, while still appreciating the core aesthetic principles of balance, asymmetry, and naturalness that define bonsai art.

Sketch

Mermaid State Diagram


stateDiagram-v2
[*] --> Initial: Load Canvas
Initial --> TrunkPlacement: User Click
TrunkPlacement --> BranchGrowth: Generate Trunk
BranchGrowth --> LeafPlacement: Create Branches
LeafPlacement --> Complete: Add Foliage

Complete --> TrunkPlacement: New Click

note right of TrunkPlacement
Click position determines:
- Trunk height
- Growth direction
- Initial thickness
end note

note right of BranchGrowth
Organic branching using:
- Bezier curves
- Width inheritance
- Natural tapering
end note

note right of LeafPlacement
Leaf generation at:
- Branch terminals
- Random variations
- Clustered groups
end note

Current Implementation

The base sketch uses a three-class system:

  1. Node class: Handles growth points and branching decisions
  2. Branch class: Manages the organic curves and width inheritance
  3. Leaf class: Controls foliage generation and placement

Key features:

  • Bezier curves for natural branch flow
  • Dynamic width tapering
  • Organic branching patterns
  • Terminal leaf generation
  • Wood texture simulation
  • Smooth joint transitions
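
A minimal sketch of the width-inheritance idea described above (the function and parameter names here are hypothetical, not the project’s actual Node/Branch classes): each child branch inherits a fraction of its parent’s width, and growth stops once a branch would become thinner than a threshold, which produces the natural tapering on its own.

```javascript
// Build a branching tree where each child's width is `shrink` times its
// parent's, stopping when the next generation would be thinner than
// `minWidth`. Drawing would render each node as a bezier of its width.
function growBranches(width, depth = 0, shrink = 0.7, minWidth = 1) {
  const branch = { width, depth, children: [] };
  if (width * shrink >= minWidth) {
    branch.children.push(
      growBranches(width * shrink, depth + 1, shrink, minWidth),
      growBranches(width * shrink, depth + 1, shrink, minWidth)
    );
  }
  return branch;
}
```

With `shrink = 0.7` and a trunk width of 10, the widths run 10, 7, 4.9, 3.43, … and the tree naturally terminates after six generations, which is where terminal leaves would be placed.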

Future Improvements

  1. Enhanced Naturalism
    • Bark texture variations
    • Age-based characteristics
    • Growth rings
    • Branch scarring
  2. Environmental Factors
    • Wind effects
    • Gravity influence
    • Light-seeking behavior
    • Season changes
  3. Interactive Features
    • Pruning tools
    • Branch wiring
    • Growth time-lapse
    • Style presets (formal upright, cascade, etc.)

Week 11 – Experimenting With 3D Cellular Automata

The project I have this week is relatively simple: it runs a cellular automaton in 3D. It was inspired by the 3D model we saw in class made in Babylon.js. Here is the sketch:

There is a bit of interaction: you can press to advance to the next stage of the automaton, yielding a new pattern. You can also enable manual camera control if you want to take a closer look at the shape. The color changes over time to make it a bit more visually appealing to look at.

In the code, there are values a, b, c, and d. You can modify these in the code to change how the cells develop. Feel free to experiment and see what looks interesting.
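
One common way such parameters are used (an assumption on my part, not necessarily how this sketch defines a, b, c, and d) is as survival and birth thresholds, generalizing the 4–6 survival / exactly-5 birth rule from the earlier automaton code:

```javascript
// Hypothetical parameterized rule: a live cell survives with a..b live
// neighbors, and a dead cell is born with c..d live neighbors.
function nextState(alive, neighbors, a, b, c, d) {
  if (alive === 1) {
    return neighbors >= a && neighbors <= b ? 1 : 0; // survival band
  }
  return neighbors >= c && neighbors <= d ? 1 : 0;   // birth band
}
```

With (a, b, c, d) = (4, 6, 5, 5) this reproduces the earlier rule; widening the birth band (say c = 4, d = 6) tends to produce much denser, faster-growing structures.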

The biggest problem I ran into had to do with computer limitations. The grid is relatively small, 20×20 cells, because adding any more would slow the program down too much. Even at the current resolution, the program can get slow when there are a large number of live cells. I really wish I had a stronger computer to try rendering a larger cell array; I think it could be really interesting.