Vibrations of Being + IM Showcase

CONCEPT

For my final project, I aimed to capture the profound interplay between energy, emotion, and human existence, drawing inspiration from Antony Gormley’s exploration of quantum physics. Gormley’s idea that we are not merely particles but waves resonates deeply with me. It reflects the fluid, ever-changing nature of our emotions and experiences, which form the basis of this project.

This project visualizes the human form and its emotional states as both wave-like entities and particles, deliberately pairing the two representations to highlight their contrast. The waves reflect the fluidity of human emotions, while the particles emphasize the discrete, tangible aspects of existence. Together, they offer an abstract interpretation of the quantum perspective.

By embracing infinite possibilities through motion, interaction, and technology, my goal is to immerse users in a digital environment that mirrors the ebb and flow of emotions, resonating with the beauty and complexity of our quantum selves. Blending art and technology, I create a unique experience that integrates body tracking, fluid motion, and real-time interactivity. Through the medium of code, I reinterpret Gormley’s concepts, building an interactive visual world where users can perceive their presence as both waves and particles in constant motion.

Embedded Sketch:

Interaction Methodology

The interaction in this project is designed to feel intuitive and immersive, connecting the user’s movements to the behavior of particles and lines in real time. By using TensorFlow.js and ml5.js for body tracking, I’ve created a system that lets users explore and influence the visualization naturally.

Pose Detection

The program tracks key points on the body, like shoulders, elbows, wrists, hips, and knees, using TensorFlow’s MoveNet model via ml5.js. These points form a digital skeleton that moves with the user and acts as a guide for the particles. It’s simple: your body’s movements shape the world on the screen.
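
To give a sense of what this looks like in code, here is a minimal, self-contained sketch assuming the current ml5.js bodyPose API (the actual project code differs, but the overall structure is the same):

let video;
let bodyPose;
let poses = [];

function preload() {
  bodyPose = ml5.bodyPose("MoveNet"); // load the MoveNet model via ml5.js
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodyPose.detectStart(video, gotPoses); // run detection continuously on the webcam
}

function gotPoses(results) {
  poses = results; // each pose contains named keypoints (shoulders, wrists, hips, knees, ...)
}

function draw() {
  image(video, 0, 0);
  if (poses.length > 0) {
    fill(255);
    noStroke();
    for (let kp of poses[0].keypoints) {
      if (kp.confidence > 0.2) {
        circle(kp.x, kp.y, 8); // draw the digital "skeleton" points
      }
    }
  }
}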

Movement Capture

The webcam does all the work in capturing your motion. The system reacts in real-time, mapping your movements to the particles and flow field. For example:

  • Raise your arm, and the particles scatter like ripples on water.
  • Move closer to the camera, and they gather around you like a magnetic field.
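
As a simplified, hypothetical illustration of that mapping (not the project’s exact forces), a particle can scatter away from a fast-moving point and drift toward a slow, nearby one. Here the mouse stands in for a tracked wrist:

// Simplified sketch of particles reacting to one tracked body point
let particles = [];

class Particle {
  constructor(x, y) {
    this.pos = createVector(x, y);
    this.vel = createVector(0, 0);
  }

  react(target, targetSpeed) {
    let toTarget = p5.Vector.sub(target, this.pos);
    let d = toTarget.mag();
    if (targetSpeed > 5 && d < 150) {
      this.vel.sub(toTarget.setMag(0.5)); // fast motion (e.g. a raised arm) scatters nearby particles
    } else if (d < 300) {
      this.vel.add(toTarget.setMag(0.1)); // a close, calm presence gathers them like a magnetic field
    }
    this.vel.limit(4);
    this.pos.add(this.vel);
  }

  show() {
    circle(this.pos.x, this.pos.y, 3);
  }
}

function setup() {
  createCanvas(640, 480);
  for (let i = 0; i < 200; i++) {
    particles.push(new Particle(random(width), random(height)));
  }
}

function draw() {
  background(0, 40);
  let target = createVector(mouseX, mouseY);              // stand-in for a tracked keypoint
  let speed = dist(mouseX, mouseY, pmouseX, pmouseY);     // its movement energy
  fill(255);
  noStroke();
  for (let p of particles) {
    p.react(target, speed);
    p.show();
  }
}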

Timed Particle Phases

Instead of toggling modes through physical gestures like stepping closer or stepping back (which felt a bit clunky during testing), I added timed transitions. Once you press “Start,” the system runs automatically, and every 10 seconds, the particles change behavior.

  • Wave Mode: The particles move in smooth, flowing patterns, like ripples responding to your gestures.
  • Particle Mode: Your body is represented as a single particle, interacting with the others around it to highlight the contrast between wave and particle states.
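
The timing itself is straightforward. Here is a stripped-down sketch of the idea using millis() (simplified from the actual project code):

// Simplified phase timer: switch behaviour every 10 seconds
let mode = "wave";
let lastSwitch = 0;
const PHASE_LENGTH = 10000; // milliseconds

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  if (millis() - lastSwitch > PHASE_LENGTH) {
    mode = (mode === "wave") ? "particle" : "wave";
    lastSwitch = millis();
  }
  // in the full project, each mode triggers its own rendering routine
  // (see the makeLines() and renderFluid() functions shown later in this post)
  fill(255);
  text("current mode: " + mode, 20, 20);
}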

Slider and Color Picker Interactions

  • Slider: Adjusts the size of the particles, letting the user influence how the particles behave (larger particles can dominate the field, smaller ones spread out).
  • Color Picker: Lets the user change the color of the particles, providing visual customization.

These controls allow users to fine-tune the appearance and behavior of the particle field while their movements continue to guide it.
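
In p5.js these controls only take a few lines. A stripped-down sketch (not the full project code) looks like this:

let sizeSlider, colorPicker;

function setup() {
  createCanvas(400, 300);
  sizeSlider = createSlider(1, 20, 5, 1);     // min, max, default, step
  colorPicker = createColorPicker("#ff69b4"); // default particle colour
}

function draw() {
  background(0);
  noStroke();
  fill(colorPicker.color());
  // a stand-in "particle" whose size and colour follow the controls
  circle(width / 2, height / 2, sizeSlider.value() * 4);
}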

The Experience

It’s pretty simple:

  1. Hit “Start,” and the body tracking kicks in. You’ll see a skeleton of your movements on the screen, surrounded by particles.
  2. Move around, wave your arms, step closer or farther—you’ll see how your actions affect the particles in real time.
  3. Every 10 seconds, the visualization shifts between the flowing waves and particle-based interactions, giving you a chance to experience both states without interruption.

The goal is to make the whole experience feel effortless and engaging, showing the contrast between wave-like fluidity and particle-based structure while keeping the interaction playful and accessible.

Code I’m Proud Of:

I’m proud of this code because it combines multiple elements like motion, fluid dynamics, and pose detection into a cohesive interactive experience. It integrates advanced libraries and logic, showcasing creativity in designing vibrant visual effects while ensuring smooth transitions between states. This project reflects my ability to merge technical complexity with artistic expression.

function renderFluid() {
  background(0, 40); // Dim background for a trailing effect

  fill(255, 150); // White color with slight transparency
  textSize(32); // Adjust the size to make it larger
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  textFont(customFont1);

  // Splitting the text into two lines
  let line1 = "We are not mere particles,";
  let line2 = "but whispers of the infinite, drifting through eternity.";

  let yOffset = 10; // Starting Y position

  // Draw the two lines of text
  text(line1, width / 2, yOffset);
  text(line2, width / 2, yOffset + 40);

  for (let j = 0; j < linesOld.length - 1; j += 4) {
    let oldX = linesOld[j];
    let oldY = linesOld[j + 1];
    let age = linesOld[j + 2];
    let col1 = linesOld[j + 3];
    stroke(col1); // Set the stroke color
    fill(col1); // Fill the dot with the same color
    age++;

    // Add random jitter for vibration
    let jitterX = random(-1, 1); // Small horizontal movement
    let jitterY = random(-1, 1); // Small vertical movement
    let newX = oldX + jitterX;
    let newY = oldY + jitterY;

    // Draw a small dot
    ellipse(newX, newY, 2, 2); // Small dot with width 2, height 2

    // Check if the particle is too old
    if (age > maxAge) {
      newPoint(); // Generate a new starting point
    }

    // Save the updated position and properties
    linesNew.push(newX, newY, age, col1);
  }

  linesOld = linesNew; // Swap arrays
  linesNew = [];
}

function makeLines() {
  background(0, 40);

  fill(255, 150); // White color with slight transparency
  textSize(32); // Adjust the size to make it larger
  textAlign(CENTER, TOP); // Center horizontally, align to the top
  textFont(customFont1);

  // Splitting the text into two lines by breaking the string
  let line1 = "We are made of vibrations";
  let line2 = "and waves, resonating through space.";

  let yOffset = 10; // Starting Y position

  // Draw the two lines of text
  text(line1, width / 2, yOffset);
  text(line2, width / 2, yOffset + 40);

  for (let j = 0; j < linesOld.length - 1; j += 4) {
    let oldX = linesOld[j];
    let oldY = linesOld[j + 1];
    let age = linesOld[j + 2];
    let col1 = linesOld[j + 3];
    stroke(col1);
    age++;
    let n3 = noise(oldX * rez3, oldY * rez3, z * rez3) + 0.033;
    let ang = map(n3, 0.3, 0.7, 0, PI * 2);
    let newX = cos(ang) * len + oldX;
    let newY = sin(ang) * len + oldY;
    line(oldX, oldY, newX, newY);

    if (
      ((newX > width || newX < 0) && (newY > height || newY < 0)) ||
      age > maxAge
    ) {
      newPoint();
    }

    linesNew.push(newX, newY, age, col1);
  }
  linesOld = linesNew;
  linesNew = [];
  z += 2;
}

User testing:

For user testing, I asked my friend to use the sketch, and overall, it was a smooth process. I was pleasantly surprised by how intuitive the design turned out to be, as my friend was able to navigate it without much guidance. They gave me some valuable feedback on minor tweaks, but generally, the layout and flow were easy to follow. It was reassuring to see that the key features I focused on resonated well with someone else, which confirmed that the design choices were on the right track. This feedback will help refine the project further and ensure it’s as user-friendly as possible.

IM Showcase:

I absolutely loved the IM Showcase this semester, as I do every semester. It was such a joy to see my friends’ projects come to life in all their glory after witnessing the behind-the-scenes efforts. It was equally exciting to explore the incredible projects from other classes and admire the amazing creativity and talent of NYUAD students.

Challenges:

One challenge I faced was that the sketch wouldn’t function properly if the camera was directly positioned underneath. While I’m not entirely sure why this issue occurred, it might be related to how the sketch interprets depth or perspective, causing distortion in the visual or interaction logic. To address this, I decided to have the sketch automatically switch between two modes. This approach not only resolved the technical issue but also allowed the sketch to maintain a full-screen appearance that was aesthetically pleasing. Without this adjustment, the camera’s placement felt visually unbalanced and less polished.

However, I recognize that this solution reduced user interaction by automating the mode switch. Ideally, I’d like to enhance interactivity in the future by implementing a slider or another control that gives users the ability to switch modes themselves. This would provide a better balance between functionality and user engagement, and it’s something I’ll prioritize as I refine the project further.

Reflection:

Reflecting on this project, I’m incredibly proud of what I’ve created, especially in how I was able to reinterpret Antony Gormley’s work. It was a challenge, but one that I really enjoyed taking on. I had to push myself to understand the intricate details of his style and techniques, and in doing so, I learned so much. From working through technical hurdles to fine-tuning the visual elements, every step felt like a personal achievement. I truly feel like I’ve captured the essence of his work in a way that’s both faithful to his vision and uniquely mine. This project has not only sharpened my technical skills but also given me a deeper appreciation for the craft and the creative process behind it.

Resources:

ML5.js

Snake: Survival of the Fittest Edition (Final)

Introduction

Snake: Survival of the Fittest Edition adds a modern twist to the classic snake game you know and love. This game pits you, the player, against a computer-controlled snake in a battle. As you maneuver to collect food and grow in size, you’ll need to avoid obstacles, strategize your movements, and ensure that you don’t become the prey.

 

Code Overview

let scl; // Grid scale (calculated dynamically)
let cols, rows;
let playerSnake, computerSnake;
let food;
let obstacles = [];
let obstacleTimer = 0; // Timer to track when to change obstacles
let obstacleInterval = 300; // Minimum frames before obstacles relocate
let help = "Press 'f' (possibly twice) to toggle fullscreen";
let gameState = 'idle'; // 'idle', 'playing', or 'gameOver' // NEW CODE
let hasEnded = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
  frameRate(10); // Control the game speed
  print(help);

  updateGridDimensions(); // Calculate grid dimensions and scale

  // Initialize snakes
  playerSnake = new Snake(color(0, 0, 255)); // Blue snake for the player
  computerSnake = new Snake(color(255, 0, 0)); // Red snake for the computer
  computerSnake.chaseState = "food"; // Initial chase state

  // Place food and obstacles
  placeFood();
  placeObstacles();
}

function draw() {
  if (gameState === 'idle') {
    drawStartScreen(); // NEW CODE
  } else if (gameState === 'playing') {
    runGame(); // NEW CODE
  } else if (gameState === 'gameOver') {
    drawGameOverScreen(); // NEW CODE
  }
}

function runGame() {
  background(220);

  // Update obstacles at random intervals
  if (frameCount > obstacleTimer + obstacleInterval) {
    placeObstacles();
    obstacleTimer = frameCount; // Reset the timer
  }

  // Draw the grid
  drawGrid();

  // Update the chase state
  if (computerSnake.body.length > 0 && playerSnake.body.length > 0) {
    if (computerSnake.len >= playerSnake.len * 3) {
      computerSnake.chaseState = "user"; // Switch to chasing the user
    } else {
      computerSnake.chaseState = "food"; // Chase food
    }
  }

  // Update and show the snakes
  if (playerSnake.body.length > 0) {
    playerSnake.update();
    playerSnake.show();
  }

  if (computerSnake.body.length > 0) {
    computerSnake.update();
    computerSnake.show();
  }

  // Handle computer snake logic
  if (computerSnake.body.length > 0 && computerSnake.chaseState === "food") {
    computerSnake.chase(food); // Chase food
  } else if (
    computerSnake.body.length > 0 &&
    playerSnake.body.length > 0 &&
    computerSnake.chaseState === "user"
  ) {
    computerSnake.chase(playerSnake.body[playerSnake.body.length - 1]); // Chase player's head

    if (snakeCollidesWithSnake(computerSnake, playerSnake)) {
      playerSnake.body.shift(); // Remove one cell from the player's snake
      playerSnake.len--; // Decrease player's snake length

      // NEW CODE: Check if player snake is engulfed completely
      if (playerSnake.len <= 0) {
        gameState = 'gameOver';
        drawGameOverScreen();
        fill(255, 255, 0);
        textSize(32);
        textAlign(CENTER, CENTER);
        text("YOU WERE ENGULFED BY THE COMPUTER SNAKE!", width / 2, (2 * height) / 3);
        return; // Exit the function immediately
      }
    }
  }

  // Check if the player snake eats the food
  if (playerSnake.body.length > 0 && playerSnake.eat(food)) {
    placeFood();
  }

  // Check if the computer snake eats the food
  if (computerSnake.body.length > 0 && computerSnake.eat(food)) {
    placeFood();
  }

  // Check for collisions with obstacles (only for the player snake)
  if (playerSnake.body.length > 0 && snakeCollidesWithObstacles(playerSnake)) {
    gameState = 'gameOver'; // NEW CODE: End the game
    drawGameOverScreen();
    return; // Exit the function immediately
  }

  // Draw the food
  fill(0, 255, 0);
  rect(food.x * scl, food.y * scl, scl, scl);

  // Draw the obstacles
  fill(100); // Gray obstacles
  for (let obs of obstacles) {
    rect(obs.x * scl, obs.y * scl, scl, scl);
  }

  // Check for game over conditions
  if (playerSnake.body.length > 0 && playerSnake.isDead()) {
    gameState = 'gameOver';
    drawGameOverScreen();
    return;
  }

  if (computerSnake.body.length > 0 && computerSnake.isDead()) {
    gameState = 'gameOver';
    drawGameOverScreen();
    return;
  }
}


function drawStartScreen() { // UPDATED FUNCTION
  background(0);
  fill(255, 255, 0); // Bright yellow color for arcade-like feel
  textFont('monospace'); // Arcade-style font
  textSize(48); // Large text size
  textAlign(CENTER, CENTER);
  text("SNAKE: SURVIVAL OF THE FITTEST", width / 2, height / 3);
  fill(0, 255, 0); // Green color for instructions
  textSize(24);
  text("PRESS SPACE TO START", width / 2, height / 2);
  fill(255, 0, 0); // Red color for help
  textSize(18);
  text(help, width / 2, (2 * height) / 3);
}

function drawGameOverScreen() { // UPDATED FUNCTION
  background(0);
  fill(255, 0, 0); // Red for game over message
  textFont('monospace'); // Arcade-style font
  textSize(48);
  textAlign(CENTER, CENTER);
  text("GAME OVER!", width / 2, height / 3);
  fill(0, 255, 0); // Green for restart instructions
  textSize(24);
  text("PRESS SPACE TO RESTART", width / 2, height / 2);
  noLoop(); // Stop the game loop
}



function updateGridDimensions() {
  // Set desired number of columns and rows
  cols = 40;
  rows = 30;

  // Calculate scale to fit the window size
  scl = min(floor(windowWidth / cols), floor(windowHeight / rows));

  // Adjust cols and rows to fill the screen exactly
  cols = floor(windowWidth / scl);
  rows = floor(windowHeight / scl);

  // Resize the canvas to match the new grid
  resizeCanvas(cols * scl, rows * scl);
}

function windowResized() {
  // Update grid and canvas dimensions when the window is resized
  updateGridDimensions();

  // Reposition snakes, food, and obstacles to ensure they are within the new bounds
  playerSnake.reposition();
  computerSnake.reposition();
  placeFood();
  placeObstacles();
}

function resetGame() { // NEW FUNCTION
  playerSnake = new Snake(color(0, 0, 255)); // Reset player snake
  computerSnake = new Snake(color(255, 0, 0)); // Reset computer snake
  computerSnake.chaseState = "food"; // Reset chase state
  placeFood();
  placeObstacles();
  loop(); // Start the game loop // FIX
}

function keyTyped() {
  if (key === 'f') {
    toggleFullscreen(); // Toggle fullscreen mode
  }
}

function toggleFullscreen() {
  let fs = fullscreen();
  fullscreen(!fs); // Flip fullscreen state
}
function keyPressed() {
  if (gameState === 'idle' && key === ' ') { // NEW CODE
    gameState = 'playing';
    resetGame(); // NEW CODE
  } else if (gameState === 'gameOver' && key === ' ') { // NEW CODE
    gameState = 'playing';
    resetGame(); // NEW CODE
  } else if (gameState === 'playing') { // NEW CONDITION
    switch (keyCode) {
      case UP_ARROW:
        if (playerSnake.ydir !== 1) playerSnake.setDir(0, -1);
        break;
      case DOWN_ARROW:
        if (playerSnake.ydir !== -1) playerSnake.setDir(0, 1);
        break;
      case LEFT_ARROW:
        if (playerSnake.xdir !== 1) playerSnake.setDir(-1, 0);
        break;
      case RIGHT_ARROW:
        if (playerSnake.xdir !== -1) playerSnake.setDir(1, 0);
        break;
    }
  }
}


function drawGrid() {
  stroke(200);
  for (let i = 0; i <= cols; i++) {
    line(i * scl, 0, i * scl, rows * scl);
  }
  for (let j = 0; j <= rows; j++) {
    line(0, j * scl, cols * scl, j * scl);
  }
}

function placeFood() {
  // Place food at a random position, avoiding obstacles and snakes
  food = createVector(
    floor(random(1, cols - 1)), // Avoid edges
    floor(random(1, rows - 1))
  );

  // Ensure food does not overlap with obstacles or snakes
  while (
    obstacles.some(obs => obs.x === food.x && obs.y === food.y) ||
    playerSnake.body.some(part => part.x === food.x && part.y === food.y) ||
    computerSnake.body.some(part => part.x === food.x && part.y === food.y)
  ) {
    food = createVector(
      floor(random(1, cols - 1)),
      floor(random(1, rows - 1))
    );
  }
}

function placeObstacles() {
  // Place random obstacles
  obstacles = []; // Clear existing obstacles
  let numObstacles = floor(random(4, 9)); // Random number of obstacles between 4 and 8

  for (let i = 0; i < numObstacles; i++) {
    let obs = createVector(floor(random(cols)), floor(random(rows)));

    // Ensure obstacles do not overlap with food or snakes
    while (
      (food && food.x === obs.x && food.y === obs.y) ||
      playerSnake.body.some(part => part.x === obs.x && part.y === obs.y) ||
      computerSnake.body.some(part => part.x === obs.x && part.y === obs.y)
    ) {
      obs = createVector(floor(random(cols)), floor(random(rows)));
    }
    obstacles.push(obs);
  }
}

function snakeCollidesWithObstacles(snake) {
  // Check if the snake's head collides with any obstacle
  if (snake === playerSnake) {
    let head = snake.body[snake.body.length - 1];
    return obstacles.some(obs => head.x === obs.x && head.y === obs.y);
  }
  return false; // Obstacles do not affect the computer snake
}

function snakeCollidesWithSnake(snake1, snake2) {
  // Check if snake1's head collides with any part of snake2
  let head1 = snake1.body[snake1.body.length - 1];
  return snake2.body.some(part => part.x === head1.x && part.y === head1.y);
}

function endGame(message) {
  // End the game and display a message
  noLoop();
  fill(0); // Black color for text
  textSize(32);
  textAlign(CENTER, CENTER);
  text(message, width / 2, height / 2);
}

class Snake {
  constructor(snakeColor) {
    // Initialize snake properties
    this.body = [createVector(floor(cols / 2), floor(rows / 2))];
    this.xdir = 0;
    this.ydir = 0;
    this.len = 1;
    this.dead = false;
    this.snakeColor = snakeColor;
  }

  setDir(x, y) {
    // Set snake's movement direction
    this.xdir = x;
    this.ydir = y;
  }

  update() {
    // Update snake's position
    let head = this.body[this.body.length - 1].copy();
    head.x += this.xdir;
    head.y += this.ydir;

    // Check for wall collision
    if (head.x < 0 || head.x >= cols || head.y < 0 || head.y >= rows) {
      this.dead = true;
    }

    this.body.push(head);

    // Remove the tail if the snake has not grown
    if (this.body.length > this.len) {
      this.body.shift();
    }
  }

  eat(pos) {
    // Check if the snake's head is at the same position as the food
    let head = this.body[this.body.length - 1];
    if (head.x === pos.x && head.y === pos.y) {
      this.len++;
      return true;
    }
    return false;
  }

  isDead() {
    // Check if the snake runs into itself
    let head = this.body[this.body.length - 1];
    for (let i = 0; i < this.body.length - 1; i++) {
      let part = this.body[i];
      if (part.x === head.x && part.y === head.y) {
        return true;
      }
    }
    return this.dead;
  }

  show() {
    // Display the snake on the canvas
    fill(this.snakeColor);
    for (let part of this.body) {
      rect(part.x * scl, part.y * scl, scl, scl);
    }
  }

  chase(target) {
    // Chase a target (food or player's head)
    let head = this.body[this.body.length - 1];
    if (target.x > head.x && this.xdir !== -1) {
      this.setDir(1, 0); // Move right
    } else if (target.x < head.x && this.xdir !== 1) {
      this.setDir(-1, 0); // Move left
    } else if (target.y > head.y && this.ydir !== -1) {
      this.setDir(0, 1); // Move down
    } else if (target.y < head.y && this.ydir !== 1) {
      this.setDir(0, -1); // Move up
    }
  }

  reposition() {
    // Reposition the snake if the window is resized
    for (let part of this.body) {
      part.x = constrain(part.x, 0, cols - 1);
      part.y = constrain(part.y, 0, rows - 1);
    }
  }
}

 

Here’s a high-level breakdown of the code:

1. Game Structure

The game is divided into three primary states:

  1. Idle State: Displays the start screen with instructions and the game title.
  2. Playing State: Runs the game loop where the snakes move, food is consumed, and collisions are handled.
  3. Game Over State: Displays a game-over message and prompts the user to restart.

This is achieved using the gameState variable, which transitions between ‘idle’, ‘playing’, and ‘gameOver’.

2. Grid System

The game grid dynamically adjusts to fit the browser window. The grid is divided into cells, and all game elements (snakes, food, obstacles) are positioned on this grid. The scale of the grid (scl) is calculated based on the window size to ensure the game looks consistent on any screen.

3. Snakes

Both the player and computer-controlled snakes are objects created from the Snake class. Each snake:

  • Has a body represented as an array of segments (each segment is a grid cell).
  • Moves in a specified direction and grows when it consumes food.
  • Can “die” if it collides with itself, obstacles, or the grid boundaries.

The computer snake is programmed to switch between two behaviors:

  • Chasing Food: Moves toward food on the grid.
  • Chasing the Player: Pursues the player’s snake once the computer snake grows to at least three times the player’s length, adding a competitive element.

4. Game Logic

The core gameplay logic is handled in the runGame function, which:

  • Updates the positions of the snakes.
  • Checks for collisions (e.g., snake colliding with obstacles or itself).
  • Manages interactions, such as consuming food or one snake engulfing the other.
  • Adjusts the state of the game based on player or computer actions.

5. Visuals

The game’s visuals, including the grid, snakes, food, and obstacles, are drawn dynamically using p5.js functions like rect and line. To create an arcade-like feel:

  • The title screen and game-over messages use bold, vibrant colors and pixelated fonts.
  • The game background, snakes, and food are kept simple yet visually distinct for clarity.

6. Interaction

The player controls their snake using the arrow keys. Pressing the space bar transitions the game between the start screen and gameplay or restarts the game after it ends. Additionally, pressing ‘f’ toggles fullscreen mode for an immersive experience.

7. Obstacles

Randomly placed obstacles add complexity to the game. They are repositioned periodically to ensure dynamic gameplay. The player’s snake must avoid these obstacles, while the computer snake is immune to them, giving it an advantage.

User Testing

During user testing I received feedback to add an info text explaining the game details, and the tester also suggested reducing the speed of the computer’s snake.


Future Considerations

While the game is already engaging, there are plenty of opportunities for enhancement:

  1. Improved AI:
    • Introduce smarter pathfinding for the computer snake, for example a dedicated search algorithm instead of the current greedy chase.
  2. Multiplayer Support:
    • Allow two players to compete head-to-head on the same grid.
  3. Power-Ups:
    • Include items that provide temporary benefits, such as invincibility, speed boosts, or traps for the computer snake.
  4. Dynamic Environments:
    • Add levels with different grid layouts, moving obstacles, or shrinking safe zones.
  5. Scoring System and Leaderboards:
    • Introduce a scoring mechanism based on time survived or food consumed.
    • Display high scores for competitive play.
  6. Sound Effects and Music:
    • Add classic arcade sound effects for eating food, colliding, and game-over events.
    • Include background music that changes tempo based on the intensity of gameplay.

 

Final - Shadowing Presence

 

Concept:

“We are like artists or inventors uncovering meaning in the patterns of movement and connection.”– Zach Lieberman

“Shadowing Presence” is an experiment that delves into the idea of becoming physically part of the digital experience. Using particle systems that interact with the human body through a webcam, the project transforms participants into both performers and elements within the system. The particles respond in real-time, attracted by the participants’ motion, creating an immersive dialogue between the digital and the physical.

Interaction

The aim is to create a playful digital space where users can experience how their physical presence influences a digital environment. The project also serves as a metaphor for how humans shape and are shaped by the digital systems they interact with. Through this project, I go on a journey to uncover what it means to be particles in the digital world interacting with other particles through a screen. This project is inspired by Zach Lieberman’s work, which focuses on making digital interactive environments that invite participants to become performers, seamlessly blending technology and human connection. 

Highlight and Progress:

Interaction

The program analyzes the pixels captured by the webcam and turns them into a black-and-white particle system. A second particle system then detects motion from the webcam feed and responds by moving toward the detected motion.

  • Pages Design and Navigation:
    1. Main Page: The main page features an image I designed to match the program’s functionality. I moved the customization buttons closer to the center so they are easier for users to see and use; these buttons change the color and size of the interactive particle ‘object’. I also added instructions to help users understand what the project does. Buttons handle navigation between the two main pages of the program: the ‘Start’ button moves users to the interaction page after they pick the color and size of the particles they want to interact with.
  • User Interface Development
  • Initial User Interface
    1. Experience Page (Page 1): This page is a real-time interactive environment where users can see themselves through the webcam. Heart-shaped particles respond to the user’s motion while the user’s shadow is reflected in a particle-like form. There are two control buttons: ‘Reset’ and ‘Save’. ‘Reset’ returns to the main page so users can change the color and size of the particles, while ‘Save’ saves an image of the experience.
  • WebCam Particle Interactive Particle System:
    • The webcam is the primary input for capturing user motion and for creating the particle interpretation of reality by analyzing pixel brightness. The brightness of each pixel is calculated, and differences between the current frame and the previous frame are used to paint a kind of digital image of the user. For the motion, I process the video feed to detect movement and drive the particle system’s behavior. Users personalize their experience by customizing the color and size of the particles, which then playfully respond to motion. Initially, the particles are positioned randomly on the canvas; when they detect motion, they move toward it. Users see themselves as a mirrored reflection made of particles and interact with objects that respond to their motion, creating an immersive and interactive experience.
  • Progress:
    • I began by designing the project’s interface and the basic interaction elements, the buttons. I had some bugs around when a button should appear and when it should disappear, so I reorganized the code to make it work better. As a result, I decided to store the size buttons and the color buttons in arrays, which made it easier to apply them to the particle system as the project progressed. I added functions to handle the buttons for each page, setUpMainButtons() and setUpPage1Buttons(); for the main page there is one function to create the buttons and another to remove them, and some buttons, such as the save button, needed their own handlers. After that, I added the particle system, which is inspired by the ASCII Text Images coding challenge by The Coding Train. The particles are initially placed randomly and then move toward their target positions. Each particle’s color is based on brightness, and its size is mapped from brightness to a range of 2 to 7, so darker pixels produce smaller particles and brighter pixels produce bigger ones. In terms of how the particles are drawn: I load the video’s pixels, compute the brightness of each pixel (each pixel occupies four slots, RGBA, in the pixels array) from its RGB values, and then render it.
    • Additionally, I added another, interactive particle system. The interactive particles visually represent the interaction between the user and the program: their movement is a direct response to user activity, like a feedback loop in which the user influences the particles’ behavior in real time. Motion is detected by loading the captured video’s pixel data (an RGBA array) and comparing the current frame’s pixels with the previous frame’s; a threshold decides whether the brightness difference is large enough to count as motion. I made this particle system look like hearts because I wanted it to be visually distinct from the particles that reflect the user in the digital world.
    let currentPage = "main";
    
    //make the color and size buttons arrays so they are easier to use and manage
    let sizeButtons = [];
    let colorButtons = [];
    
    // for interactive particles
    let selectedColor = "white";
    let selectedSize = 5;
    
    // interactive particle
    let interactiveParticles = [];
    // for motion
    let previousFrame;
    
    //navigation buttons switch btwn states
    let startButton;
    let resetButton;
    
    //Save Button
    let saveButton;
    
    //image load
    let Img1;
    
    //particle class array
    let particles = [];
    
    //video
    let video;
    
    function preload() {
      Img1 = loadImage("/image1.png");
    }
    
    function setup() {
      createCanvas(1200, 800);
    
      //functions that handle the buttons in each page
      setUpMainButtons();
    
      //for vid
      video = createCapture(VIDEO);
      video.size(80, 60);
    
      //for particles, generate a grid that corresponds to the webcam pixels for better visualization
      for (let y = 0; y < video.height; y++) {
        for (let x = 0; x < video.width; x++) {
          // scale for display
          particles.push(new Particle(x * 15, y * 15));
        }
      }
    
      // for interactive particles: an array of particles placed randomly on the canvas
      for (let i = 0; i < 400; i++) {
        interactiveParticles.push(
          new InteractiveParticle(random(width), random(height))
        );
      }
    }
    
    function draw() {
      background(0);
    
      video.loadPixels();
    
      // update canvas depending on current page
      if (currentPage === "main") {
        drawMainPage();
      } else if (currentPage === "page1") {
        drawPage1();
        drawParticleSystem();
      }
    }
    
    //function for main page buttons
    function setUpMainButtons() {
      //Start experience button
      startButton = createButton("Start");
      startButton.size(150, 50);
      startButton.style("font-size", "38px");
      startButton.style("background-color", "rgb(250,100,190)");
      startButton.position(width / 2 - 100, height / 2);
      startButton.mousePressed(() => {
        currentPage = "page1";
    
        // call the remove function for main page to remove them
        removeMainButtons();
    
        //add the page 1 buttons function
        setUpPage1Buttons();
      });
      // color buttons
      colorButtons = [
        createButton("Pinkish")
          .style("background-color", "rgb(255,105,180)")
          .mousePressed(() => (selectedColor = color(255, 105, 180))),
        createButton("Blueish")
          .style("background-color", "rgb(0,191,255)")
          .mousePressed(() => (selectedColor = color(0, 191, 255))),
        createButton("Greenish")
          .style("background-color", "rgb(0,255,127)")
          .mousePressed(() => (selectedColor = color(0, 255, 127))),
      ];
      colorButtons[0].position(340, 300);
      colorButtons[1].position(400, 300);
      colorButtons[2].position(460, 300);
    
      // size buttons
      sizeButtons = [
        createButton("Random")
          .style("background-color", "rgb(205,165,200)")
          .mousePressed(() => (selectedSize = random(2, 15))),
        createButton("Large")
          .style("background-color", "rgb(150,200,255)")
          .mousePressed(() => (selectedSize = 15)),
        createButton("Small")
          .style("background-color", "rgb(100,150,227)")
          .mousePressed(() => (selectedSize = 2)),
      ];
      sizeButtons[0].position(610, 300);
      sizeButtons[1].position(680, 300);
      sizeButtons[2].position(730, 300);
    }
    
    // remove main page buttons
    function removeMainButtons() {
      startButton.remove();
      for (let btn of colorButtons) btn.remove();
      for (let btn of sizeButtons) btn.remove();
    }
    
    //function page 1 buttons
    function setUpPage1Buttons() {
      //   save button
      saveButton = createButton("Save Canvas");
      saveButton.style("background-color", "rgb(100,150,227)").position(460, 10);
      saveButton.mousePressed(saveCanvasImage);
    
      // reset button
      resetButton = createButton("Reset");
      resetButton.style("background-color", "rgb(150,200,255)").position(590, 10);
      resetButton.mousePressed(() => {
        currentPage = "main";
        // remove page 1 buttons
        removePage1Buttons();
        // add main page buttons instead
        setUpMainButtons();
      });
    }
    
    // remove page 1 buttons
    function removePage1Buttons() {
      saveButton.remove();
      resetButton.remove();
    }
    // main page content
    function drawMainPage() {
      image(Img1, 0, 0, 1200, 800);
      textFont("Courier New");
      textSize(42);
      fill(200, random(100, 250), 200);
      text("Shadowing Presence", width / 2 - 260, height / 2 - 40);
    
      //   instruction text
      textFont("Courier New");
      textSize(16);
      fill(255);
      text("Welcome!", width / 2 - 50, height / 2 + 100);
      text(
        "Personalize your digital object by picking a color and ",
        width / 2 - 260,
        height / 2 + 120
      );
      text(
        "a size. Enjoy the way these object interacto with your motion.",
        width / 2 - 260,
        height / 2 + 140
      );
      text("Press 'F' to toggle fullscreen", width / 2 - 260, height / 2 + 165);
    }
    
    // page 1 content
    function drawPage1() {
      // flip horizontally so that it feels like a mirror
      translate(width, 0);
      scale(-1, 1);
    
      // process video for motion detection
      video.loadPixels();
      if (video.pixels.length > 0) {
        if (!previousFrame) {
          previousFrame = new Uint8Array(video.pixels);
        }
    
        let motionPixels = detectMotion(
          video.pixels,
          previousFrame,
          video.width,
          video.height
        );
        previousFrame = new Uint8Array(video.pixels);
    
        // draw particles
        for (let particle of interactiveParticles) {
          particle.setColor(selectedColor);
          particle.setSize(selectedSize);
          particle.update(motionPixels, video.width, video.height);
          particle.show();
        }
      }
    }
    
    // save image
    function saveCanvasImage() {
      saveCanvas("Image", "png");
    }
    
    // particle system
    // to control the behavior and appearance of a particle system.
    function drawParticleSystem() {
      video.loadPixels();
      //   updating and drawing each particle based on the video feed.
      for (let i = 0; i < particles.length; i++) {
        //    cal the x and y of the current particle by taking the modulus (x) and div (y) of the particle index and the video width and height. This way i can map the 1D particle array index to its 2D pixel position
        const x = i % video.width;
        const y = floor(i / video.width);
        const pixelIndex = (x + y * video.width) * 4;
    
        const r = video.pixels[pixelIndex + 0];
        const g = video.pixels[pixelIndex + 1];
        const b = video.pixels[pixelIndex + 2];
        const brightness = (r + g + b) / 3;
    
        particles[i].update(brightness);
        particles[i].show();
      }
    }
    function keyPressed() {
      //press F for full screen
      if (key === "F" || key === "f") {
        // Check if 'F' is pressed
        let fs = fullscreen();
        fullscreen(!fs); // Toggle fullscreen mode
      }
    }
    
    // update() method of the InteractiveParticle class (excerpt)
    update(motionPixels, videoWidth, videoHeight) {
       // detect motion and move towards it
       let closestMotion = null;
       let closestDist = Infinity;
    
       //processes pixel data from a video feed to track motion and calculates the distance from position to nearest point of motion
       for (let y = 0; y < videoHeight; y++) {
         for (let x = 0; x < videoWidth; x++) {
           let index = x + y * videoWidth;
           if (motionPixels[index] > 0) {
             let motionPos = createVector(
               x * (width / videoWidth),
               y * (height / videoHeight)
             );
             let distToMotion = p5.Vector.dist(this.pos, motionPos);
    
             if (distToMotion < closestDist) {
               closestDist = distToMotion;
               closestMotion = motionPos;
             }
           }
         }
       }
    
       // move towards closest motion
       if (closestMotion) {
         let dir = p5.Vector.sub(closestMotion, this.pos).normalize();
         this.vel.add(dir).limit(12);
       }
    
       this.pos.add(this.vel);
       // slow down the particle over time
       this.vel.mult(0.8);
     }
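
The detectMotion() helper called in drawPage1() is not shown above. As a rough sketch, consistent with how it is used (one value per pixel, non-zero where motion is detected), a frame-differencing version could look like this; the exact thresholding in my code may differ:

// Sketch of a possible detectMotion(): compare per-pixel brightness between frames
function detectMotion(currentPixels, prevPixels, w, h, threshold = 30) {
  let motion = new Array(w * h).fill(0);
  for (let i = 0; i < w * h; i++) {
    let idx = i * 4; // each pixel occupies 4 slots (R, G, B, A)
    let currB = (currentPixels[idx] + currentPixels[idx + 1] + currentPixels[idx + 2]) / 3;
    let prevB = (prevPixels[idx] + prevPixels[idx + 1] + prevPixels[idx + 2]) / 3;
    if (abs(currB - prevB) > threshold) {
      motion[i] = 1; // mark this pixel as "moving"
    }
  }
  return motion;
}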

     

User Testing:

https://drive.google.com/file/d/19wG0LIY-lJA8o-1Lv1v5EkUcFHZDqz2X/view?usp=sharing

I did the user testing with 3 people. Based on their feedback, I decided to change two main things: 1) the shape of the interactive particle was changed to a heart to make it more distinguishable, and 2) the customization buttons were repositioned so users can see and click them more easily.

Embedded Sketch:

https://editor.p5js.org/shn202/full/dSFLTnb4C

Future Work:

There is a lot of room for improvement in the code, mainly in the interactive particles. With more time, I would have added customization for the object’s shape: a heart, a star, a smiley face, a triangle, a number, etc. Additionally, I would give the interactive particle system more agency of its own by adding gravity, an attraction force between the particles, and maybe even making them attract to or steer away from the user depending on the nature of the detected motion.

Resources:

The Coding Train. “Coding Challenge 166: ASCII Text Images.” YouTube, 12 Feb. 2022, www.youtube.com/watch?v=55iwMYv8tGI.

Zach Lieberman. zach.li.

Instagram. www.instagram.com/p/BLOchLcBVNz.

The Coding Train. “11.4: Brightness Mirror – p5.js Tutorial.” YouTube, 1 Apr. 2016, www.youtube.com/watch?v=rNqaw8LT2ZU.

Particles Webcam Interaction by Kubi -p5.js Web Editor. editor.p5js.org/Kubi/sketches/Erd9Lt_Tz.

Webcam Particles Test by EthanHermsey -p5.js Web Editor. editor.p5js.org/EthanHermsey/sketches/OzjX8uw4P.

CP2: Webcam Input by jeffThompson -p5.js Web Editor. editor.p5js.org/jeffThompson/sketches/ael8Y4YMB.

Input and Button. p5js.org/examples/input-elements-input-button.

Heart Shape by Mithru -p5.js Web Editor. editor.p5js.org/Mithru/sketches/Hk1N1mMQg.

Galanakis, George. “Motion Detection in Javascript – HackerNoon.com – Medium.” Medium, 21 Oct. 2024, medium.com/hackernoon/motion-detection-in-javascript-2614adea9325.

“Conditional (Ternary) Operator – JavaScript | MDN.” MDN Web Docs, 14 Nov. 2024, developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_operator.

https://www.rapidtables.com/web/color/RGB_Color.html

https://archive.p5js.org/learn/color.html

Neon Doodle World – Final Project

Concept

The inspiration for Neon Doodle World came from my fascination with interactive media and real-time creativity. Initially inspired by a project at the IM show that used a webcam for puzzle-based interaction, I wanted to expand on the idea by enabling freehand drawing using hand gestures. The goal was to create an intuitive and engaging platform where users could unleash their creativity through dynamic drawing features.

Neon Doodle World lets users draw directly on a digital canvas using their hands, with options for paintbrush strokes, playful sprinkle trails, color customization, and more. The neon-themed interface enhances the visual experience, creating an immersive environment for artistic expression.

Interaction Methodology

The project uses MediaPipe Hands for real-time hand tracking. The webcam detects the user’s hand movements, focusing on key points like the index finger, to map gestures onto the canvas. Here’s how users interact with the system:

  • Multi-Hand Support: Users can now draw freely using both hands simultaneously, doubling the creative possibilities.
  • Paintbrush Mode: Draw clean, solid lines in any color by moving your index finger.
  • Sprinkle Trail Mode: Generate dynamic sprinkle/particle effects for playful, animated drawings.
  • Eraser Mode: Remove specific parts of your drawing with precision.

Toolbar Options:

      • Clear: Wipe the entire canvas.
      • Undo: Remove the last action for greater control.
      • Pause/Resume: Temporarily disable or resume drawing.
      • Save: Capture and save your creation along with the webcam feed.
  • Hover Tooltips: Each toolbar button displays clear instructions when hovered over, making the interface user-friendly for all experience levels.
  • Brush Size Adjustment: A slider allows users to adjust brush size, enabling detailed or bold strokes depending on their creative needs.

The webcam feed is mirrored horizontally to make interactions feel natural, ensuring seamless mapping between gestures and canvas movements. A dynamic toolbar and hover instructions simplify the interface, ensuring accessibility and ease of use.
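
To make the gesture-to-canvas mapping concrete, here is a simplified sketch of the step that turns a hand-landmark result into mirrored canvas coordinates and a dab on a separate drawing layer. It assumes the MediaPipe Hands JavaScript result format (normalized landmarks, index fingertip at index 8) and uses stand-in names (drawingLayer, brushSize); the MediaPipe model setup itself is omitted, so this is an illustration rather than my exact code:

let drawingLayer;   // persistent p5.Graphics layer for strokes (assumed name)
let brushSize = 8;  // assumed slider-controlled value

function setup() {
  createCanvas(640, 480);
  drawingLayer = createGraphics(width, height);
  // MediaPipe setup omitted: the Hands model would be configured elsewhere,
  // with onHandResults registered as its results callback.
}

// Called with each MediaPipe Hands result
function onHandResults(results) {
  if (!results.multiHandLandmarks) return;
  for (let landmarks of results.multiHandLandmarks) { // up to two hands
    let tip = landmarks[8];                           // index fingertip, normalized 0..1
    let x = width - tip.x * width;                    // mirror horizontally
    let y = tip.y * height;
    drawingLayer.noStroke();
    drawingLayer.fill(255, 0, 255);
    drawingLayer.circle(x, y, brushSize);             // paintbrush dab
  }
}

function draw() {
  background(0);
  image(drawingLayer, 0, 0); // composite the strokes over the canvas
}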

Images of my Sketch

How the Implementation Works

The application uses p5.js and MediaPipe for its core functionalities:

  • Hand Tracking: MediaPipe detects up to two hands simultaneously, updating their positions to control canvas actions.
  • Drawing Logic: Depending on the selected mode, the system renders strokes, particles, or erases sections in the drawing layer.
  • Undo and Redraw: All actions are stored in a stack, allowing users to undo changes by redrawing the canvas without the last action.
  • Color Palette: A scrollable palette lets users easily switch between vibrant neon colors.
  • Brush Size Adjustment: Users can control the size of their brush using a slider, displayed dynamically for feedback.
  • Save Feature: Combines the live webcam feed and the drawing layer into a single image file.

Code Highlight

One of the standout features is the Undo Functionality. Here’s how it’s implemented:

function undoLastAction() {
  if (actionsStack.length > 0) {
    actionsStack.pop(); // Remove the last action
    redrawCanvas(); // Redraw the canvas to reflect changes
  }
}

function redrawCanvas() {
  drawingLayer.clear(); // Clear the drawing layer
  actionsStack.forEach((action) => {
    if (action.type === "ellipse") {
      // Redraw ellipses
      drawingLayer.noStroke();
      drawingLayer.fill(action.color);
      drawingLayer.ellipse(action.x, action.y, action.size, action.size);
    } else if (action.type === "sprinkle") {
      // Redraw sprinkle trails
      action.particles.forEach((particle) => {
        drawingLayer.fill(random(100, 255), random(100, 255), random(100, 255), 150);
        drawingLayer.noStroke();
        drawingLayer.ellipse(particle.x, particle.y, random(5, 10));
      });
    }
  });
}
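
The counterpart of the undo logic is that every stroke gets recorded as it is drawn. A small sketch of how an “ellipse” action might be pushed onto the stack (variable names assumed to mirror the snippet above):

// Hypothetical helper: record one paintbrush dab so undo/redraw can rebuild it later
function recordEllipse(x, y, size, strokeColor) {
  let action = { type: "ellipse", x: x, y: y, size: size, color: strokeColor };
  actionsStack.push(action); // remembered so undoLastAction()/redrawCanvas() can reproduce it
  drawingLayer.noStroke();
  drawingLayer.fill(action.color);
  drawingLayer.ellipse(action.x, action.y, action.size, action.size);
}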

Embedded Sketch


User Testing 

I asked my friend to test out the sketch, and she enjoyed the freedom of drawing with hand gestures. She decided to draw the UAE flag as a tribute to National Day, appreciating the vibrant color palette and smooth interaction that made it easy to bring her idea to life. Here’s the testing clip:

Reflection

Creating Neon Doodle World has been an incredible learning journey. I’m particularly proud of the following aspects:

  • Hover Instructions: These made the interface more accessible and beginner-friendly.
  • Adjustable Brush Size: This feature added a new dimension of control for users.
  • The Neon-Themed Interface: This enhances user immersion with vibrant visuals.
  • Added Features: Undo, save, and dynamic color selection significantly improved the final experience.

However, there are always areas to improve:

  • Adding gesture-based controls for even more intuitive interactions (e.g., using a hand gesture to clear the canvas).
  • Expanding the drawing modes with advanced brush effects or textured strokes.
  • Introducing background effects, such as gradients or animated visuals, to add depth to the canvas and enhance user engagement.

Challenges and Overcoming Them

Some key challenges I faced:

  1. Multi-Hand Implementation: Tracking and rendering independent actions for two hands required a flexible system to handle simultaneous inputs.
  2. Hover Tooltips: Designing tooltips that provide clear information without cluttering the interface was a balancing act.
  3. Brush Size Slider: Ensuring the slider dynamically updated both brush size and its display required careful synchronization.

Resources

https://emiguerra.github.io/p5MediaPipe/

https://github.com/mekiii/P5-Particle-Handtracking

https://www.youtube.com/watch?v=BX8ibqq0MJU

https://editor.p5js.org/shinjia168/sketches/NXUrGxzye

IM Show Documentation
At the IM Show, I showcased Neon Doodle World to students and faculty. It was exciting to see people interact with the sketch, experimenting with features like the particle trails and color palette. I also shared the project with my former professors, Evi Mansor and Aya Riad, and the head of Interactive Media, Michael Shiloh, who all gave positive feedback. Here’s a video of people testing and enjoying the sketch:

Final-Bringing a Flower to Life + IM Show

Concept:

My concept is about creating a flower that is both digital and physical, making it come to life with user interaction. On the screen, the flower sways in the wind and reacts to how you move your mouse and blow into a microphone. At the same time, a real flower-like structure made with Arduino mirrors this behavior, with its petals moving using a motor and LED lights changing color.

The screen version uses code to draw a flower that looks alive. The petals change size and color when you move your mouse, and the flower sways gently like it is in the wind. If you blow into the microphone, the sway becomes more dramatic, like a strong gust of wind.

The physical version is controlled by an Arduino board. The motor moves the petals, and the LED lights change brightness and color. It reacts to the same inputs as the digital version. Data flows between the digital and physical versions in real time, making them feel connected.

This idea mixes art and technology. It is interactive and fun while also showing how digital and physical systems can work together.

Images:

 mouse is moved downwards

 mouse is moved upwards

mouse is moved horizontally

User Testing:

 

Description of Interaction Design:

  • Mouse Movement:
    • Moving the mouse up or down changes the size of the flower petals.
    • Moving the mouse left or right changes the LED color of the physical flower and the digital flower’s appearance.
  • Sound Input:
    • Blowing into a microphone connected to the Arduino makes the flower sway more dramatically. Both the digital and physical flowers respond to the intensity of the blowing.
  • Real-Time Synchronization:
    • The digital flower sends data (servo angle, LED brightness, and color) to the Arduino over a serial connection.
    • The Arduino processes this data to control the physical flower’s movement and lighting.

How Does the Implementation Work?

The implementation brings the flower to life through interaction and synchronization between a digital animation (p5.js) and a physical flower model (Arduino). The interaction happens in real-time as data flows between the two systems via serial communication.

Since p5.js controls the Arduino, p5.js sends data to the Arduino over serial, and the Arduino continuously listens on the serial port for incoming data to control the servo motor and the LEDs.

Arduino Code:

The Arduino code controls the physical flower’s servo motor (petals) and LED lights. It also listens to the microphone input to adjust the flower’s sway.

Key Logic:

Serial Communication: The Arduino reads input data from p5.js to control the servo angle and LED properties.

if (Serial.available() > 0) {
    String data = Serial.readStringUntil('\n'); // Read data from p5.js
    parseData(data); // Parse angle, brightness, and color
}
  • Servo Movement: Smoothly moves the servo motor to the target angle to simulate natural petal motion.
void smoothServoMove() {
    if (currentServoAngle != targetServoAngle) {
        currentServoAngle += (currentServoAngle < targetServoAngle) ? 1 : -1;
        petalServo.write(currentServoAngle); // Adjust servo position
    }
}
  • LED Control: Updates LED brightness and color based on values received from p5.js.
void updateLEDs() {
    uint32_t color = calculateColor(ledBrightness, ledColorValue); 
    for (int i = 0; i < NUM_LEDS; i++) {
        strip.setPixelColor(i, color); // Set LED color
    }
    strip.show();
}
  • Microphone Input: Reads sound intensity from the microphone to determine the sway effect.
int mn = 1024, mx = 0;
for (int i = 0; i < 100; ++i) {
    int val = analogRead(MIC_PIN);    
    mn = min(mn, val);
    mx = max(mx, val);
}
Serial.println(mx - mn); // Send sound intensity to p5.js

P5.js Code:

The p5.js code creates a dynamic flower animation and sends control data to the Arduino. It receives microphone input values to update the digital flower’s behavior.

Key Logic:

  • Sending Data to Arduino: Sends data in the format servoAngle,brightness,colorValue to control the physical flower.
let servoAngle = constrain(int(map(mouseY, 0, height, 75, 0)), 0, 75);
let ledBrightness = constrain(int(map(mouseY, 0, height, 0, 200)), 0, 200);
let ledColor = constrain(int(map(mouseX, 0, width, 0, 255)), 0, 255);
let dataString = `${servoAngle},${ledBrightness},${ledColor}\n`;
serial.write(dataString); // Send data to Arduino
  • Flower Animation: Draws a flower that sways using Perlin noise for natural motion. The petals’ size and color respond to mouse movement.
function flower() {
    let windFactor = noise(frameCount * 0.01) * 30; 
    let targetSway = inData > 20 ? map(inData, 20, 1024, 0, 50) : 0;
    windSway += (targetSway - windSway) * 0.1; // Smooth sway transition
    
    push();
    translate(width / 2 + windSway, height / 2);
    for (let i = 0; i < 10; i++) {
        let petalSize = map(mouseY, 0, height, 25, 50);
        let petalColor = lerpColor(color(230, 190, mouseY), color(255, mouseY, mouseY), i / 10);
        fill(petalColor);
        ellipse(0, 40, petalSize, petalSize * 2);
        rotate(PI / 5);
    }
    pop();
}
  • Leaf Falling Effect: Creates falling leaves when the microphone detects strong wind.
if (inData > 40 && !isBlowing) {
    isBlowing = true;
    fallingLeaves.push(new fallingLeaf(width / 2.1, height * 0.6));
}
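
On the receiving side, the sound intensity that the Arduino prints with Serial.println() has to end up in the inData variable used by flower() above. A minimal sketch assuming the p5.serialport library listed in the resources (the port name is a placeholder, not my actual setup):

let serial;
let inData = 0; // sound intensity received from the Arduino

function setup() {
  createCanvas(600, 600);
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem14101"); // placeholder port name
  serial.on("data", gotSerialData);      // called whenever new serial data arrives
}

function gotSerialData() {
  let line = serial.readLine(); // one Serial.println() value per line
  if (line && line.length > 0) {
    let value = int(trim(line));
    if (!isNaN(value)) {
      inData = value; // drives the sway strength in flower()
    }
  }
}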

Together, these systems create a synchronized and interactive flower experience. The digital and physical elements complement each other, making it both engaging and technically impressive.

Embedded Sketch:

Aspects of the Project I’m Particularly Proud Of:

  • Seamless Synchronization: The real-time interaction between the digital and physical flower is smooth and responsive. This makes the project feel alive and dynamic, as the changes on the screen and in the physical flower happen instantly.
  • Interactive Design: The project responds intuitively to user inputs like mouse movement and sound. For instance, the way the flower sways naturally with Perlin noise and the dramatic movement triggered by blowing into the mic make the interaction feel organic.
  • Creative Use of Technology: Combining p5.js for digital visuals with Arduino for physical movements and lights was challenging but rewarding. The servo motor, LED lights, and microphone sensor work together to mirror the digital animation.
  • Visual Appeal: The gradient background, dynamic petals, and swaying effect are visually engaging. Adding elements like falling leaves and soil patterns enriches the experience and makes the animation more immersive.
  • Problem-Solving and Optimization: Addressing challenges like abrupt movements, serial communication errors, and performance issues (e.g., limiting leaves per blow) helped make the project smoother and more efficient.

Links to Resources Used:

  1. p5.js Official Documentation:
  2. Arduino Official Documentation:
  3. Adafruit TiCoServo and NeoPixel Libraries:
  4. p5.serialport.js:
  5. Node.js Serial Server:
  6. Articles and Tutorials:
    • “How to Use Perlin Noise in Creative Coding”
      https://thecodingtrain.com/
      Helped with implementing smooth sway effects for the flower and leaves.
  7. Arduino and p5.js Community Forums:

Challenges Faced and How I Overcame Them:

  • Synchronization Issues:
    • Problem: The digital flower and the physical flower did not always behave the same due to delays in serial communication.
    • Solution: Optimized the data sent over the serial connection by minimizing unnecessary updates and using a consistent format. Added smoothing functions to reduce abrupt changes.
  • Abrupt Movement:
    • Problem: The flower’s sway and servo movements were jerky, making the animation feel unnatural.
    • Solution: Used Perlin noise for smooth sway effects and gradually interpolated servo angles with windSway += (targetSway - windSway) * 0.1.
  • Serial Communication Errors:
    • Problem: Occasionally, the Arduino would fail to read the data correctly, leading to erratic behavior or crashes.
    • Solution: Implemented error handling by trimming data and ensuring the format was consistent. Added checks for valid data ranges before processing.
  • Overloading the System:
    • Problem: Generating too many falling leaves or frequent updates caused performance issues.
    • Solution: Limited the number of leaves generated during each blow cycle and optimized the logic for removing off-screen leaves.
  • Mic Input Sensitivity:
    • Problem: The microphone often picked up background noise, causing unintentional flower movements.
    • Solution: Calibrated the microphone by adding a threshold to filter out low-intensity sounds and used the map function to scale input values proportionally.
  • User Understanding of Controls:
    • Problem: Users found it challenging to understand how to interact with the system.
    • Solution: Added an instructional overlay with text and icons to guide users on how to use mouse movement and microphone input effectively.
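
As a rough illustration of the last two fixes, here is a minimal p5.js sketch that combines a microphone threshold with the eased sway value. The variable names, threshold, and ranges are assumptions for illustration, not the project’s exact code, and it assumes the p5.sound library is loaded.

let mic;
let windSway = 0;
let targetSway = 0;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // microphone input from p5.sound
  mic.start();
}

function draw() {
  background(220);
  let level = mic.getLevel(); // overall loudness, roughly 0.0 to 1.0
  if (level > 0.05) {
    // Ignore background noise below the threshold, then scale the blow intensity
    targetSway = map(level, 0.05, 0.3, 5, 60, true);
  } else {
    targetSway = 0;
  }
  // Ease toward the target so the on-screen flower (and the servo) never jump abruptly
  windSway += (targetSway - windSway) * 0.1;
  // ...windSway would then drive the petal rotation and the value sent to the Arduino
}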

Future Improvement:

  1. Improved Interaction Design:
    • Add more user interaction options, such as touch inputs for mobile devices or keyboard controls.
    • Allow users to customize the flower’s appearance (e.g., petal shape, color palette) through an interface.
  2. Enhanced Physical Integration:
    • Use more advanced sensors, like an IMU (Inertial Measurement Unit), to detect hand gestures for controlling the physical flower.
    • Add more LEDs or motors to create a multi-layered flower for a more complex and realistic effect.
  3. Better Performance Optimization:
    • Explore using faster communication protocols like Bluetooth or Wi-Fi instead of serial communication to reduce latency.
    • Optimize the falling leaf particle system for smoother animation with larger leaf counts.
  4. Visual Enhancements:
    • Make the background gradient dynamic, transitioning based on user interactions or environmental factors like sound intensity.
    • Add weather effects, like raindrops or sunlight, to complement the flower’s animation.
  5. Educational Features:
    • Include real-time data visualization (e.g., a graph showing microphone input or servo angle changes).
    • Add explanations of how the flower’s behavior is controlled, making it a teaching tool for IoT and creative coding.
  6. Scalability:
    • Expand the project to include a garden of flowers, each responding differently to inputs.
    • Enable networked interaction, where multiple users can control different flowers from different locations.

These improvements can make the project more engaging, robust, and versatile for various applications, from education to art installations.

IM SHOW

For the IM Show, I wanted my interactive flower project to be as intuitive and engaging as possible. One key element of the setup was the microphone, which is essential for controlling the flower’s dramatic sway. To make it clear where visitors should blow, I printed out a small microphone image and taped it to the actual microphone sensor. This simple addition helped guide people, ensuring they could interact with the project effortlessly without needing additional instructions.

Challenges:

While setting up the project at the IM Show, I encountered an unexpected issue: when I switched the p5.js animation to full-screen mode, the application started lagging. This was frustrating because the smooth motion of the flower and leaves is a big part of the experience.

After some quick troubleshooting, I realized the issue was related to rendering performance on larger canvases. To fix it, I adjusted the canvas dimensions to be slightly smaller than the full screen. Here’s how I did it:

function setup() {
    // Use 90% of the window and avoid shadowing p5's built-in width/height globals
    let canvasW = window.innerWidth * 0.9; // 90% of screen width
    let canvasH = window.innerHeight * 0.9; // 90% of screen height
    createCanvas(canvasW, canvasH);
    noStroke();
}

This solution maintained the immersive feel of a full-screen display while reducing the computational load. The animation ran smoothly for the rest of the show, and visitors enjoyed interacting with the flower.

With the microphone image and the optimized display, people were able to blow into the mic, move the mouse, and watch both the digital and physical flowers come alive in real-time. Seeing the excitement and curiosity on people’s faces as they controlled the flower made all the effort worth it. It was a fantastic opportunity to share my work and learn from the feedback of others!

Final Project – Gravity Dance – Khalifa Alshamsi

IM Showcase

Design Concept

The project “Gravity Dance” aims to create an immersive simulation that explores the graceful and sometimes chaotic interactions within a celestial system. The inspiration comes from the game Universe Sandbox, where you can play around with the solar system as a whole.

Project Overview

This project is a dynamic and interactive solar system simulation, where users can explore and engage with a digital recreation of the solar system. The simulation features 3D rendering, interactive planet placement, gravity manipulation, and a variety of other visual effects to create an immersive experience. Through this project, the user can visualize celestial bodies in motion, experience gravitational forces, and even interact with the simulation by selecting and placing new planets into the system.

Key Features

  1. 3D Solar System:
    • The simulation includes a central sun and planets arranged based on their real-world distances and sizes.
    • Users can observe the orbital motion of the planets around the sun, visualized in a 3D space environment.
  2. Planetary Interaction:
    • The simulation allows the user to add new planets to the existing solar system, and each newly added planet exerts gravitational forces on other planets in the system.
    • The gravitational forces dynamically affect the orbits of all planets, including both the pre-existing planets and the newly placed ones, making the system behave as a physically accurate model of the solar system.
  3. Gravity Control:
    • A gravity slider allows users to adjust the gravitational strength of any newly added planets. This affects the force of gravity that the planet exerts on the other planets in the system, allowing the user to experiment with different gravitational strengths and see the effects on the orbits in real time.
  4. Planet Selection and Placement:
    • A dropdown menu enables users to select from a range of planets (Earth, Mars, Jupiter, etc.). When a planet is selected, the user can click to place the planet in the simulation, where it will orbit the sun and interact with other celestial bodies.
    • The planets are rendered with realistic textures (such as Earth’s surface, Mars’ red color, and Saturn’s rings), adding to the visual realism of the simulation.
  5. Visual Effects:
    • The background features a dynamic starry sky, created using a cellular automata system that simulates star movement.
    • Shooting stars are randomly generated across the simulation, adding further interactivity and realism.
    • The planets themselves are visually rendered with rotating textures and effects, such as the Earth rotating to show its day/night cycle and Saturn’s rotating rings.
  6. Interactivity and Controls:
    • The simulation allows the user to zoom in and out, rotate the camera, and drag the view to explore different angles of the solar system.
    • A “Start” button launches the simulation, a “Back to Menu” button lets the user return to the main menu, and the “Restart” button resets the entire simulation, clearing all planets and stars.
  7. Responsive Design:
    • The canvas and controls are designed to adjust dynamically to the screen size, ensuring that the simulation is functional and visually consistent across different device types and screen sizes.
  8. User Experience:
    • The user interface is clean and easy to navigate, with buttons for various actions (fullscreen, back to menu, restart) and a gravity control slider.
    • Planet selection and placement are simple and intuitive, and users are provided with clear instructions on how to interact with the simulation.

Technical Implementations

The Solar System Simulation integrates several technical concepts such as 3D rendering, gravitational physics, interactive controls, and dynamic object handling. Below, I’ll discuss some of the key technical aspects of the implementation, illustrated with code snippets from different sections.

1. 3D Rendering with p5.js

The entire simulation is rendered using p5.js’ WEBGL mode, which enables 3D rendering. The canvas size is dynamic, adjusting to the screen size to ensure that the simulation works well on various devices.

function setup() {
    createCanvas(windowWidth, windowHeight, WEBGL); // Use WEBGL for 3D rendering
    frameRate(30); // Set the frame rate to 30 frames per second
}

In the setup() function, I created a 3D canvas using createCanvas(windowWidth, windowHeight, WEBGL). The WEBGL parameter tells p5.js to render 3D content, which is essential for representing planets, orbits, and other 3D objects in the solar system.

2. Planet Placement and Gravitational Physics

Each planet’s position is calculated in 3D space, and they orbit the sun using Newtonian gravity principles. The gravitational force is computed between the sun and each planet, and between planets, influencing their movement.

planetData.forEach((planet, index) => {
    let angle = index * (PI / 4); // Evenly space planets
    let planetPos = p5.Vector.fromAngle(angle).mult(planet.distance);
    let distance = planetPos.mag();
    let speed = sqrt((G * sunMass) / distance); // Orbital speed based on gravity
    let velocity = planetPos.copy().rotate(HALF_PI).normalize().mult(speed);

    planets.push(new Planet(planetPos.x, planetPos.y, 0, planet.mass * 2, velocity, planet.name, planet.color, planet.radius));
});

Here, I iterate over the planetData array, calculating the initial position and velocity of each planet based on its distance from the sun. The planets are arranged in a circular orbit using trigonometric functions (p5.Vector.fromAngle) and are given an initial velocity based on gravitational dynamics (speed = sqrt((G * sunMass) / distance)).

3. User Interaction: Adding Planets

The user can click to place a planet into the simulation. The mouse position is mapped to 3D coordinates in the simulation, and the selected planet’s gravitational force is applied to other planets in real-time.

function mousePressed() {
    if (mouseButton === LEFT && !isMouseOverDropdown() && !isMouseOverSlider()) {
        let planetDataObject = planetData.find((planet) => planet.name === selectedPlanet);
        if (planetDataObject) {
            let planetMass = planetDataObject.mass * 2; // Adjust mass using gravity slider
            let planetColor = planetDataObject.color;
            let planetRadius = planetDataObject.radius;

            // Convert mouse position to 3D space using raycasting
            let planetPos = screenToWorld(mouseX, mouseY);

            // Set initial velocity to simulate orbit
            let initialVelocity = createVector(-planetPos.y, planetPos.x, 0).normalize().mult(10);

            planets.push(new Planet(planetPos.x, planetPos.y, planetPos.z, planetMass, initialVelocity, selectedPlanet, planetColor, planetRadius));
        }
    }
}

The mousePressed() function checks if the user clicks the canvas and places a planet at the clicked position. The screenToWorld() function maps the 2D mouse coordinates to 3D space. The planet is then added with initial velocity to simulate its orbit around the sun.
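
Since the screenToWorld() helper itself is not shown here, the following is a minimal sketch of what such a mapping could look like. This simplified stand-in just recenters the mouse coordinates on the WEBGL origin and keeps the new planet on the z = 0 plane; the actual project may instead raycast through the camera, as the comment in mousePressed() suggests.

// Hypothetical simplified stand-in for screenToWorld() (not the project's exact implementation)
function screenToWorld(mx, my) {
    // In WEBGL mode the origin sits at the canvas centre, so shift the 2D mouse
    // coordinates accordingly and place the new planet on the z = 0 plane
    return createVector(mx - width / 2, my - height / 2, 0);
}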

4. Gravitational Force Calculation

Each planet is affected by gravitational forces, both from the sun and other planets. The gravitational force between two bodies is calculated using Newton’s law of gravitation. The force is then applied to the planet to adjust its acceleration and velocity.

applyForce(force) {
    let f = p5.Vector.div(force, this.mass); // Calculate acceleration based on force and mass
    this.acc.add(f); // Add the acceleration to the planet's total acceleration
}

update() {
    this.acc.set(0, 0, 0); // Reset acceleration

    // Calculate gravitational force from the sun
    let sunForce = p5.Vector.sub(createVector(0, 0, 0), this.pos);
    let sunDistance = constrain(sunForce.magSq(), (this.radius + sunSize) ** 2, Infinity);
    sunForce.setMag((G * sunMass * this.mass) / sunDistance);
    this.applyForce(sunForce); // Apply the force from the sun

    // Check for collisions between this planet and every other planet
    planets.forEach(other => {
        if (other !== this) {
            let distance = this.pos.dist(other.pos);
            if (distance < this.radius + other.radius) {
                this.resolveCollision(other); // Resolve collision if planets collide
            }
        }
    });

    // Update velocity and position based on applied forces
    this.vel.add(this.acc); // Update velocity
    this.pos.add(this.vel); // Update position
}

In the update() method, the planet is subjected to gravitational forces from the sun, and the distance and mass are used to compute the force. The applyForce() method applies this force to the planet, which updates its acceleration, velocity, and position.

5. Planet Collisions

Planets in the simulation can collide with one another. When a collision occurs, the velocities of the planets are adjusted based on their masses and the physics of the collision.

resolveCollision(other) {
    let normal = p5.Vector.sub(this.pos, other.pos); // Calculate direction vector between two planets
    normal.normalize();

    let relativeVelocity = p5.Vector.sub(this.vel, other.vel); // Calculate relative velocity
    let speedAlongNormal = relativeVelocity.dot(normal); // Find velocity along the collision normal

    if (speedAlongNormal > 0) return; // If planets are moving away from each other, no collision

    // Calculate the impulse to apply based on the restitution factor and masses
    let restitution = 0.1; // Coefficient of restitution (bounciness)
    let impulseMagnitude = (-speedAlongNormal * (1 + restitution)) / (1 / this.mass + 1 / other.mass);
    impulseMagnitude *= 0.1; // Scale down the impulse to make the collision lighter

    let impulse = p5.Vector.mult(normal, impulseMagnitude); // Calculate the impulse vector

    // Apply the impulse to the planets' velocities
    this.vel.add(p5.Vector.div(impulse, this.mass));
    other.vel.sub(p5.Vector.div(impulse, other.mass));
}

When a collision occurs, the resolveCollision() method is called. It calculates the normal vector (direction of the collision) and the relative velocity between the two planets. Using this, it computes an impulse (force) to apply to the planets to change their velocities, simulating a physical collision.

6. Visual Representation of Planets

Each planet’s visual representation depends on its name, with textures applied to different planets (e.g., Earth, Mars, Jupiter). The display() method renders the planet in 3D space and applies the corresponding texture to each planet.

display() {
    push(); // Save the current transformation state

    translate(this.pos.x, this.pos.y, this.pos.z); // Move the planet to its position in space

    noStroke(); // Disable outline for planets

    if (this.name === "Earth") {
        rotateY(frameCount * 0.01); // Rotate Earth for a spinning effect
        texture(Earth); // Apply Earth texture
        sphere(this.radius); // Draw Earth
        texture(clouds); // Apply clouds texture
        sphere(this.radius + 0.2); // Draw slightly larger cloud layer on top
    }
    // Handle other planets like Mars, Jupiter, Saturn similarly with textures
    pop(); // Restore the previous transformation state
}

The display() method checks the planet’s name and applies the appropriate texture. It uses the rotateY() function to rotate the planet (for a day/night cycle effect) and the sphere() function to draw the planet in 3D.

Project Reflection

When I began working on this solar system simulation project, it started with a very basic and simple concept: to simulate gravitational forces and planetary motion in a 2D space. The goal was to create a static representation of how gravity influences the orbits of planets around a central star, with the ability for users to add planets and observe their behavior. The initial version of the project was functional but fairly rudimentary.

Initial State: Basic Gravitational Simulation

First Sketch

The first iteration of the project was minimalistic, with just a few lines of code that allowed for the creation of planets with a mouse click. These planets moved under the influence of a simple gravitational force, pulling them toward a central sun. The planets were drawn as basic circles, with their movement governed by the gravitational pull of the sun based on Newton’s law of gravitation. The background featured a cellular automata effect to simulate the stars in space.

At this stage, the simulation was static and lacked much interactivity beyond adding new planets. The gravitational interactions were limited to just the planet and the sun, with no other planetary interactions, making the system unrealistic and overly simplified.

Midway Evolution: Transition to 3D and Planetary Interactions

Second Sketch

As I progressed with the project, I realized that the 2D view was restrictive and did not provide the immersive experience that I was hoping for. I wanted users to experience the simulation from different angles and explore the solar system in a more interactive manner. This led to the transition from 2D to 3D, which was a major milestone in the development of the simulation.

The introduction of 3D rendering using p5.js’ WEBGL was a game-changer. It allowed me to visualize the planets, the sun, and their orbits in three-dimensional space, adding depth and realism to the simulation. The users were now able to zoom in, rotate, and interact with the solar system in ways that made the experience more engaging. Planets were not just static objects—they were part of a dynamic, interactive system.

Additionally, I implemented gravitational interactions between all planets, which was a significant step forward. In the earlier version, the planets were only influenced by the sun. However, as I added more planets to the simulation, I wanted them to interact with each other as well. This was crucial for making the simulation feel more like a real solar system, where planets’ movements are constantly affected by one another’s gravitational pull.

Further Enhancements: User Interaction and Control

Third Sketch (Draft)

As the project became more sophisticated, I added features that enabled users to interact with the simulation in a more hands-on way. One of the key features was the planet selection dropdown and the gravity slider. The user could now choose which planet to add to the simulation and adjust its gravitational strength, adding a layer of customization and experimentation.

The gravity slider allowed the user to adjust the gravitational force applied by newly spawned planets. This made the simulation more dynamic and responsive to the user’s input, allowing for different behaviors depending on the gravity settings.

I also implemented a menu system with options for starting, restarting, and going fullscreen, providing a user-friendly interface. This made the project more polished and intuitive, inviting users to experiment with the solar system’s dynamics in a more controlled environment.

Final State: A Fully Interactive and Realistic Solar System

Final Version

– Best to use a mouse

By the end of the development process, the project had transformed from a basic 2D simulation to a fully interactive and immersive 3D solar system.

Users could now:

  • Select and add planets to the system.
  • Adjust the gravity of newly spawned planets.
  • Rotate, zoom, and explore the solar system from different angles.
  • Watch the gravitational forces between planets affect their orbits in real time.

The solar system simulation was no longer just a visual tool—it had become an interactive learning experience. It demonstrated the complexity of gravitational interactions and the dynamic nature of planetary orbits in a way that was both educational and engaging.

The project’s evolution also highlighted the importance of user feedback and iteration. As the project progressed, I continuously added features based on my initial goals, iterating on the design to make it more interactive and realistic. The journey from a static gravitational simulation to a dynamic, interactive 3D solar system was an incredibly valuable learning experience.

Feedback and Reflection

After completing the solar system simulation project, I decided to share it with a few friends to gather feedback on the final product. The responses were overwhelmingly positive, which was incredibly encouraging.

Conclusion: The Journey from Concept to Completion

Reflecting on the project, it’s clear that it has come a long way from its humble beginnings. What started as a simple 2D simulation with basic gravitational physics has evolved into a fully interactive 3D simulation, with realistic planetary interactions and user control. This transformation highlights the importance of iterative development, where each small change builds on the previous one, resulting in a more sophisticated and polished final product.

The project not only deepened my understanding of gravitational mechanics but also taught me important lessons in user interface design, interactivity, and the power of simulation to convey complex concepts in an engaging way.

Future Improvements

While the current solar system simulation offers a solid foundation for understanding gravitational interactions, there are several ways it can be further enhanced to provide a richer and more interactive experience. Some potential future improvements include:

  • Improved Orbital Mechanics
  • Interactive Space Exploration: The simulation could allow users to interact with planets more directly by traveling through space.
  • Expansion to Other Star Systems
  • Performance Optimization

Sources Used:

  • https://www.solarsystemscope.com/textures/
  • https://webglfundamentals.org/

 

 

Final Project [FINAL] | Anthropocene Tornadoes!

Anthropocene Tornadoes

Anthropocene: a thought that humans have become a geological force. -Vladimir Vernadsky

Tornado - Wikipedia

Climate change is happening, and it will reshape the way we think about our daily lives. At the epicenter of this global phenomenon is us: humans. Throughout the ages, we have been kneeling under the force of nature. But beginning in the Industrial Revolution and onwards, we began to revolutionize how we do things. Without looking at its long-term consequences, we conquered nature and became a destructive force in return.

Anthropocene Tornadoes is my attempt and initiative to raise awareness regarding this matter. It is a simple program, but also a philosophical piece that combines interactivity and mindfulness.

The goal is simple: You are the tornado. Destroy.

Sketch Demo

Inspiration and Challenges

As some of you might have seen from the previous drafts, my initial concept for this project was to use AR systems to generate the tornado and have the user play around with it. But after really long, arduous hours of testing and developing, I concluded that AR is too limited and not the best environment for this project.

I spent a good 5-6 hours trying to solve the big issue of interactivity. I wanted the user to experience something different from the usual things displayed at the IM Show. But it seems that, in this case, AR was not the key.

The SimpleAR library has a hard limitation: the ENTIRE p5.js sketch turns into a camera feed. The camera then scans a marker, which then displays the sketch in the draw() function. Because of this, it also significantly limits the amount of control I have over the interactions the user can make. Plus, using the camera and rendering particles was far too heavy for the intended viewing devices, smartphones and iPads.

But the original concept remains the same. The program is, in essence, a tornado simulation. It begins with vortex-shaped particles that speed up and inflate once the user moves the tornado and ‘eats’ buildings, represented by rectangles.

i made this in an old 3d modeling software : r/LiminalSpace

The project’s visual choice is inspired by old 3d modeling software capabilities. Simple, abstracted, low-poly shapes are joined together to hopefully take the shape of something. In Anthropocene Tornadoes’ case, it was a city filled with skyscrapers, a grassland, and a tornado.

How The Program Works

▶️The sketch runs on WebGL-based rendering, meaning that it has 3D capabilities. To adjust the camera, I used a p5.js library called EasyCam, which allowed me to manipulate the camera and restrict the user to zooming in and out without rotating the canvas.

▶️The core of the program, the tornado, works by having a Particle class generate particles in a vortex-like shape at the start. The class is vector-based: particles begin on a circle that converges downwards, which gives the effect of the vortex.

When the tornado collides with a skyscraper, it increments a speed modifier, speedMultiplier, which affects the radius of the circle. Thus, with each particle spinning on this radius, the tornado gets bigger and bigger. Combine that with noise, lerp, and constrain, and you get yourself a somewhat convincing tornado-like particle system.

update(speedMultiplier) {
    // Increment the angular position to simulate swirling
    this.angle += (0.02 + noise(this.noiseOffset) * 0.02) * speedMultiplier;

    // Update the radius slightly with Perlin noise for organic motion
    this.radius += noise(this.noiseOffset) * 5 - 2.5;

    // Constrain the radius to a certain bound
    this.radius = constrain(this.radius, 20, 500);

    // Update height with sinusoidal oscillation (independent of speedMultiplier)
    this.pos.y += sin(frameCount * 0.01) * 0.5;

    // Wrap height to loop the tornado
    if (this.pos.y > 300) {
      this.pos.y = 0;
      this.radius = map(this.pos.y, 0, 300, 100, 10); // Reset radius
    }

    // Update the x and z coordinates based on the angle and radius
    this.pos.x = this.radius * cos(this.angle);
    this.pos.z = this.radius * sin(this.angle);

    this.noiseOffset += 0.01 * speedMultiplier;
  }

▶️Collisions are handled by a collision check, which also plays collision sounds. It works by splicing the collided skyscraper out of the skyscrapers array.

for (let i = skyscrapers.length - 1; i >= 0; i--) {
    let skyscraper = skyscrapers[i];
    let distance = dist(tornadoX, 0, tornadoZ, skyscraper.x, 0, skyscraper.z);

    // If tornado is within a threshold distance, remove the skyscraper
    if (distance < 50) {
      skyscrapers.splice(i, 1);
      fill(0, 100, 100);
      speedMultiplier += speedIncrement;
      let randomCollisionSound = random(collisionSounds);
      randomCollisionSound.play();
      continue;
    } else {
      fill(0, 0, 100);
    }
}

▶️The rest of the program is handled by a programStateLogic function which controls the state of the program and its appropriate ending.

Interaction

The program, as mentioned earlier, is intentionally a simple one. The user moves the tornado using the mouse, which controls the tornado’s position vector.
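
A minimal sketch of that mapping might look like the following; the variable names and coordinate ranges are assumptions for illustration, not the sketch’s exact values.

// Map the 2D mouse position onto the ground plane the tornado moves on
let tornadoX = map(mouseX, 0, width, -400, 400);
let tornadoZ = map(mouseY, 0, height, -400, 400);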

ENDINGs: There are three possible endings within the program.

1️⃣The user consumed all buildings.

2️⃣The user ‘failed’ to consume all buildings within the allotted time

3️⃣The user does not decide to do anything and becomes a ‘savior’

These endings were created for users to reflect on our own actions as humans, and on how we have become a destructive force of nature, nature here being the tornado that we control.
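
To make the branching concrete, here is a hedged sketch of how programStateLogic() could decide between the three endings. Apart from the function name, the state labels, timer, and counters are assumptions, not the project’s actual variables.

function programStateLogic() {
  if (skyscrapers.length === 0) {
    programState = "ENDING_DESTROYER"; // 1: every building was consumed
  } else if (timer <= 0 && skyscrapers.length < initialSkyscraperCount) {
    programState = "ENDING_FAILED"; // 2: time ran out mid-destruction
  } else if (timer <= 0) {
    programState = "ENDING_SAVIOR"; // 3: the user chose not to destroy anything
  }
}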

User Testing & Reactions

Because my body decided it was time to get sick at the end of the semester, I almost could not finish this project in time. But I was able to receive feedback and conduct user testing, albeit from afar.

I sent the sketch link to some of my friends and asked them a simple question: How long did it take for you to figure out how to move the tornado?

I found two issues: a) While it is true that users got the hang of it, I wanted the interaction to be clearer, so I added a simple mouse-move prompt in the center of the screen. (Let’s not give super clear instructions, shall we?)

b) The white screen. On some devices, the ending screen did not render the text properly. I fixed this and gave a clear message on what the ending is and what to do if the user wants to redo the program.

IM SHOWCASE

For the IM Showcase, I think it was very interesting to see how people reacted to both our midterm and final projects. The leap from a two-dimensional program to a three-dimensional one is a huge one. From the ‘gameplay’, I believe the endings could have been made clearer in terms of how to obtain them. But regardless, it was interesting to give people a bit of information regarding tornadoes!

Reflections and Future Improvements

I wanted to work more on the visual side of the project. Had I not been hospitalized for a few days during the holiday, I might have had more time to work on this aspect. But after spending way too much time on the AR aspect and giving it up, I had no energy left to work on the visuals.

The project also surprisingly worked on mobile devices, although severely unoptimized. The gesture controls do work, but they don’t work very well. I was also thinking of doing something regarding the EF Scale. Perhaps, depending on how big the tornado gets, a color or something else changes. Also, for some reason the tornado flies upwards even though I did not change anything in that coordinate? what?

Regardless, this project was really fun for me to work on (although stressful at times). It really taught me how to emulate physics, but also how demanding simulating it is for the machine.

Resources Used

touches – a p5.js feature I did not use

Tornado Sound #1 | Sound FX – YouTube – tornado sound

collision sounds – from plater addon, curseforge

p5.js WebGL camera – Daniel Shiffman

Tornado simulation inspiration – emmetdj

ChatGPT for the amazing debugging.

 

Final Project – Understanding Deforestation

Fractals have always fascinated me. These captivating geometric patterns reveal infinite complexity as they scale, mimicking the beauty of nature in everything from clouds to plants. Initially, my project began with the goal of creating interactive fractals using p5.js, allowing users to engage with these mesmerizing patterns in real-time. However, as I delved deeper, I realized that I could harness the power of these visuals to raise awareness about a pressing environmental issue: deforestation. This journey led me to intertwine the elegance of fractals with the urgency of ecological conservation.

Project Overview

The core goal of this project has evolved from merely creating interactive fractals into crafting a visually engaging experience that emphasizes the importance of preserving our natural world. Here’s how I approached it:

Dynamic Complexity: Each mouse press increases the fractal’s depth, revealing intricate new layers, reminiscent of how our forests grow and expand.

Real-Time Updates: The instant changes in the fractals mimic the fragility of nature, responding to user interaction much like ecosystems react to human activity.

Immersive Visuals: By pairing fractals with vibrant green hues and subtle animations, I aimed to create a visually rich environment that invites exploration while reflecting the lushness of forests.

Fractal Designs

Throughout this journey, I considered various fractal types, each offering unique opportunities for interactivity while connecting back to the theme of nature:

  1. Sierpinski Triangle: This classic fractal, with its nested triangles, symbolizes the intricate balance of ecosystems, where each part contributes to the whole.
  2. Koch Snowflake: Beginning as a simple line, this fractal transforms into an elaborate shape, illustrating how deforestation can disrupt natural patterns.
  3. Tree Fractal: Inspired by the branching structures of trees, this fractal evolves into a dense network of “branches,” echoing the beauty of forests that are often threatened by deforestation (a short recursive sketch of this branching idea follows this list).
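
As an illustration of the branching idea behind the tree fractal, here is a small recursive p5.js sketch; the angles, lengths, and colors are illustrative choices rather than the project’s parameters.

function setup() {
  createCanvas(400, 400);
  stroke(34, 139, 34); // simple green branches
}

function draw() {
  background(255);
  translate(width / 2, height); // start at the bottom centre of the canvas
  branch(100);
}

// Draw one branch, then recursively draw two smaller branches at its tip
function branch(len) {
  line(0, 0, 0, -len);
  translate(0, -len);
  if (len > 4) {
    push(); rotate(PI / 6); branch(len * 0.67); pop(); // right sub-branch
    push(); rotate(-PI / 6); branch(len * 0.67); pop(); // left sub-branch
  }
}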

Interactivity Features

I aimed to create an engaging user experience that encourages reflection on deforestation:

  • Mouse Press: Each click adjusts the fractal’s recursion depth, giving users control and encouraging them to explore the complexity of nature, much like understanding the complexity of ecosystems.
  • Mouse Position: As users move their mouse, aspects like color, size, and rotation change, creating an immersive experience that reflects the vibrant life found in forests.
  • Animation: Subtle movements in the fractals symbolize the dynamic nature of ecosystems, reinforcing the message that our natural world is alive and needs protection.

The goal was to create a space where the beauty of fractals serves as a reminder of the richness of nature and the urgent need to address deforestation.

The Journey to Deforestation Awareness

As my project evolved, the focus on deforestation emerged organically. The beauty of fractals, now pulsating with vibrant green particles, symbolizes lush forests and the interconnectedness of life, emphasizing what we stand to lose. This transformation has deepened my understanding of the importance of preserving our natural world.

(Draft Number 1)
The project now serves as a visual narrative, illustrating the delicate balance of our ecosystems and the threats they face. By engaging users in this interactive experience, I hope to inspire them to reflect on their relationship with nature and the impact of deforestation.

(Draft Number 2)

Project Features

The project includes several interactive elements that invite users to engage deeply with the theme of deforestation:

  • Start Screen: Users are greeted with the message, “Let’s understand Deforestation together!” along with a prominently displayed start button, creating a welcoming entry point into this vital conversation.
  • Dynamic Fractal Visualization: As users move their mouse, the fractal responds in real-time, dynamically adjusting to create a mesmerizing experience that underscores the importance of protecting our forests.
  • Earthy Background: The warm brown background symbolizes the earth, grounding the visual experience and reinforcing our connection to the land that sustains us.

Code Snippets

Setup Function

function setup() {
  createCanvas(800, 800);
  colorMode(HSB, 360, 255, 255);
  noStroke();
}

Drawing the Fractal

function draw() {
  background(139, 69, 19); // Warm brown to represent the earth
  particles = []; // Resetting particles for fresh display

  let cRe = map(mouseX, 0, width, -1.5, 1.5);
  let cIm = map(mouseY, 0, height, -1.5, 1.5);

  let juliaSet = new JuliaSet(cRe, cIm);
  juliaSet.createParticles();

  for (let particle of particles) {
    particle.update();
    particle.display();
  }
}

Particle Class

class Particle {
  constructor(x, y, cRe, cIm) {
    this.x = x;
    this.y = y;
    this.cRe = cRe;
    this.cIm = cIm;
    this.hue = random(100, 140); // Greenish tones for a natural vibe
    this.saturation = 255;       // Default saturation; refined from the iteration count in update()
    this.brightness = 200;       // Default brightness; refined from the iteration count in update()
  }

  update() {
    // Logic for particle movement and color based on iteration count
  }

  display() {
    fill(this.hue, this.saturation, this.brightness, 200);
    ellipse(this.x, this.y, 2, 2);
  }
}

Embedded Sketch and Project Outcome:

let particles = [];
const maxIterations = 100;
const numParticles = 30000; // Increased number of particles for visibility
let isStarted = false; // To track if the main visualization should start
let startButton; // Variable to hold the start button

function setup() {
  createCanvas(600, 600);
  colorMode(HSB, 360, 255, 255); // HSB for smooth natural gradients
  noStroke();

  // Create a button to start the visualization
  startButton = createButton("Start");
  startButton.position(width / 2 - 40, height / 2); // Center the button
  startButton.size(80, 40);
  startButton.style('font-size', '20px');
  startButton.style('background-color', 'white');
  startButton.mousePressed(startVisualization); // Start the visualization on press

  // Create initial particles
  for (let i = 0; i < numParticles; i++) {
    particles.push(new Particle(random(width), random(height)));
  }
}

function startVisualization() {
  isStarted = true; // Set the flag to start the main visualization
  startButton.hide(); // Hide the button after it is pressed
}

function draw() {
  if (!isStarted) {
    background(34, 139, 34); // Green background for the start page
    
    // Display the title text
    textSize(24);
    fill(255); // White text color
    textAlign(CENTER);
    text("Let's understand deforestation together!", width / 2, height / 2 - 70); // Title text
    
    // Instruction text
    textSize(20);
    text("Move around the mouse", width / 2, height / 2 + 120); // Instruction text
    return; // Stop further execution until the button is pressed
  }

  background(153, 101, 21, 200); // Brown background resembling the ground

  // Dynamically adjust Julia constants based on mouse position
  let cRe = map(mouseX, 0, width, -1.5, 1.5); // Map mouseX to real part
  let cIm = map(mouseY, 0, height, -1.5, 1.5); // Map mouseY to imaginary part

  // Update and display all particles
  for (let particle of particles) {
    particle.update(cRe, cIm);
    particle.display();
  }
}

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.iteration = 0; // To track the iteration count
  }

  update(cRe, cIm) {
    // Center the mapping so the fractal stays in the middle
    let zx = map(this.x, 0, width, -2, 2); // Map to complex plane
    let zy = map(this.y, 0, height, -2, 2);
    this.iteration = 0; // Reset iteration count for update

    // Iterate the Julia formula
    while (this.iteration < maxIterations) {
      let xtemp = zx * zx - zy * zy + cRe;
      zy = 2.0 * zx * zy + cIm;
      zx = xtemp;

      if (zx * zx + zy * zy > 4) break; // Escape condition
      this.iteration++;
    }
  }

  display() {
    // Use a green color palette for a microbiological feel
    let hueValue = map(this.iteration, 0, maxIterations, 100, 140); // Green to light green
    let brightness = this.iteration === maxIterations ? 0 : map(this.iteration, 0, maxIterations, 100, 255);
    let size = map(this.iteration, 0, maxIterations, 2, 6); // Increased size variation

    fill(hueValue, 255, brightness, 200); // Vibrant colors with increased opacity
    ellipse(this.x, this.y, size, size); // Dynamic particle size
  }
}

Project Reflections

Reflecting on this project brings me joy. The transformation from a simple fractal exploration to a tool for raising awareness about deforestation highlights the power of creativity in fostering understanding. It’s truly rewarding to see users engage with the project, contemplating its message and the vital need to protect our forests.

Challenges and Overcoming Them

Like any creative endeavor, this project came with its challenges. One significant hurdle was maintaining performance while increasing the number of particles in the fractal visualization. I had to optimize the code carefully to ensure a smooth user experience. Additionally, crafting a clear and impactful message about deforestation required thoughtful incorporation of user feedback, allowing me to communicate the project’s importance effectively.

Future Improvements

Looking ahead, I see several exciting possibilities for this project:

  • Educational Content: I envision adding layers of information about deforestation, including its causes and potential solutions, to deepen user understanding.
  • Enhanced Visuals: Exploring advanced graphics techniques could enrich the user experience even further.
  • Expanded Interactivity: Integrating animations or informative pop-ups could create an even more engaging learning environment.

Documentation, Images, Videos, and User Interactions

 

Final Project – Peacefulness in Green

a. The concept

My idea for the final project is to create a relaxing and interactive experience. While this idea can apply to many concepts, I decided to focus on recreating (albeit not 100% faithfully) the following figures:

Figure 1. The inspiration: a table in a park.
Figure 2. My other inspiration.

There is not too much of a specific reason as to why these two figures were selected for inspiration, aside from me being a fan of rural areas. Not to imply that I hate cities, but there is a charm in living in a place where you can take a seat, appreciate nature, and take care of it.

Now, the idea was to implement everything that we have learned so far in class, with these figures as concepts. As mentioned, the most important detail to be added (at least in my opinion) was interaction. Since I am a fan of interactive experiences with physics, something along those lines was sought after. Likewise, the details that fill the background were planned to be concepts taught in class, modified to look in place with the setting.

b. The sketch

Here is the final, completed draft with the corresponding instructions:

Controls:

Holding mouse left click: Grab body/element

Full-screen version: Go to the Full-screen version

c. The design

Since I have some skills when it comes to image manipulation (not that much of a design master if I am honest), I decided to use Figure 2 as a basis for the background. This was done by manipulating the image using the software GIMP:

Figure 3. Working with GIMP for the background.
Figure 4. Working some more for the background using GIMP.

The bench also had to be handmade to complement the background. If I had just added rectangles or some images from the internet, it would have ended up looking out of place.

Figure 5. Designing the table’s leg.

As for the audio, some clips were carefully selected to complement the experience, such as the audio background and feedback when the bodies collide with each other:

Figure 6. Editing audio with Audacity

This part did not take as much time as I expected. Nevertheless, preparing a graphic from scratch that takes a specific design into account can be troublesome.

d. The features

The main feature of this sketch is the object manipulation with the help of the physics library matter.js. For more information, here is the complete list of features that this sketch possesses:

  • Interactable bottle and seeds: These are mostly done with the help of matter.js for the physics. The idea is that the seeds spawn inside the cups and that, once the user drops them onto the grass, they stop where they landed. After this, a plant grows using Perlin noise. As for the cups, new ones spawn when a very low probability (getting the number 1 out of 5000 possible numbers with random()) is met; see the sketch after this list.
  • Ants moving from left to right: The ants move in from either the left or right side, following an established target.
  • Winds: They are displayed as sine waves, and are mostly there to provide an additional force for the matter.js bodies once they are within a specific range of the Y axis.
  • CRT effect: It is basically a combination of multiple cellular automata layered on top of each other with a specific opacity value.
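
As referenced in the first bullet above, the cup-spawning roll could be sketched roughly like this; spawnCup() is a hypothetical helper, and the assumption is that the check runs once per frame inside draw().

// Roughly a 1-in-5000 chance each frame to spawn a new cup
if (int(random(1, 5001)) === 1) {
  spawnCup(); // hypothetical helper that adds a new matter.js cup body to the world
}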

Here is a quick demonstration of all the systems mentioned in place:

e. Cut features

Figure 7. The original quick sketch.

Most of the ideas from the original sketch were implemented, although some needed reconsideration. For example, the cellular automata were originally intended to produce dirt patterns, but it proved difficult to get that result. While trying to figure out how to make it work, I came across a “CRT” effect that could add a nostalgia factor.

Elements number 4 and 5 had to be cut due to not having enough time. The features were the following:

#4 Birds flying around: These would have been done with the help of a flocking system.

#5 A tree with moving leaves: The moving leaves would have been simulated with the help of some forces.

Figure 8. Developing the flow field.

f. Code I am still particularly proud of

Detecting the current position of each ball and then spawning a Perlin walker that moves linearly in the Y axis but randomly on the X axis, while leaving a trail, proved to be difficult. I had to readjust the positioning to match matter.js:

Code found in classes/walker.js. The whole class is provided, since many of its elements are necessary for this to work.

class Walker {
  constructor(x, y, w, angle) {
    this.position = createVector(x, y);
    this.angle = angle;

    this.w = w;

    //Stop growing it in the Y axis.
    this.y_limit = int(random(400, 500));

    //Shows the trail. Made with the following help: https://www.youtube.com/watch?v=vqE8DMfOajk
    this.history = [];

    //Variables for the perlin movement.
    this.tx = this.position.x;
    this.ty = this.position.y;

    this.last_position = 0; //Tracks the last position for reference. This avoids that the Perlin movement moves everywhere.
    this.rise = 0; //Controls the map to allow the plant to grow.

    this.free_x = 0; //Frees X space
    this.free_y = 0; //Frees Y space. Both of them help to create the illusion of growth.
  }

  show() {
    //Show trail. Set the style once before drawing every point.
    push();
    noStroke();
    fill(0, 200, 0);
    for (let i = 0; i < this.history.length; i++) {
      let pos = this.history[i];
      ellipse(pos.x, pos.y, this.w);
    }
    pop();
  }

  move() {
    this.history.push(createVector(this.position.x, this.position.y));
    this.position.x = map(
      noise(this.tx),
      -30,
      30,
      this.position.x - 10,
      this.position.x + 10
    );
    this.tx += 0.1;
  }
}

The rest of the code, while expansive, did not feel really challenging aside from adapting some features to my sketch. I highlight the provided code, since most of it came from my own logic and a lot of trial and error. It is impressive how some lines of code can seem simple when, in reality, it takes a lot of time to write the correct code that gives the desired result.

g. User testing

I asked a friend to try out the sketch. Here is the video of it:

There are some issues highlighted, such as the physics glitching or initial confusion as to what to do, despite putting up some (rather mysterious) instructions to let the user know what to do. The instructions were left vague so that the user could, possibly, feel more engaged after figuring out the interaction.

h. Interactive Media Showcase

Here are some videos from the Interactive Media showcase. What is interesting from these videos is that some students actually understood what to do. Still, there was general confusion due to the physics glitching out and the (random) time that it took for new cups to arrive; they were sometimes waiting for something to happen. Aside from that, I feel that the instructions were mostly helpful given the amount of time the participants took.

Video 1:

Video 2:

i. Previous sketches

Here are the two previous sketches if you want to see the progress, or you can also click here to access the GitHub repo.

j. Reflection

This proved to be a difficult and very time-consuming final project. While that could be read as a complaint, it is the contrary: I enjoyed the process. I felt as if I understood the concepts that were taught in class, and I could follow through without too much of an issue with what I wanted to add. Of course, some features had to be cut in order to complete the final project in time, but the difference this time is that I feel capable of adding them if given enough time.

As for the technicality of the project, I could not figure out a solution for bodies passing through other bodies at high speed. This is a limitation of matter.js itself, although I wish there were a workaround. Nevertheless, I feel satisfied with how the final project turned out and with the new knowledge I now possess. I realized that there are a lot of uses outside p5.js that I can apply this to.

Thank you for the new knowledge, I deeply appreciate it.

k. Used sources

1. Cunningham, Andrew. “Today I Stumbled upon Microsoft’s 4K Rendering of the Windows XP Wallpaper.” Ars Technica, 8 June 2023, arstechnica.com/gadgets/2023/06/i-just-found-out-that-microsoft-made-a-4k-version-of-the-windows-xp-wallpaper/.

2. “Cursor.” P5js.org, 2024, p5js.org/reference/p5/cursor/. Accessed 2 Dec. 2024.

3. freesound_community. “Highland Winds FX | Royalty-Free Music.” Pixabay.com, 14 June 2023, pixabay.com/sound-effects/highland-winds-fx-56245/. Accessed 26 Nov. 2024.

4. flanniganable. “10b How to Make a Compound Body Matter.js.” YouTube, 4 Dec. 2021, www.youtube.com/watch?v=DR-iMDhUa-0. Accessed 25 Nov. 2024.

5. The Coding Train. “3.6 Graphing Sine Wave.” Thecodingtrain.com, 2016, thecodingtrain.com/tracks/the-nature-of-code-2/noc/3-angles/6-graphing-sine-wave. Accessed 2 Dec. 2024.

6. The Coding Train. “5.21: Matter.js: Mouse Constraints – the Nature of Code.” YouTube, 9 Mar. 2017, www.youtube.com/watch?v=W-ou_sVlTWk. Accessed 25 Nov. 2024.

7. The Coding Train. “9.7: Drawing Object Trails – P5.Js Tutorial.” YouTube, 9 Feb. 2016, www.youtube.com/watch?v=vqE8DMfOajk. Accessed 26 Nov. 2024.

Uzumaki (Interactive Spiral Art) by Dachi – Final

User Interaction:

During the IM show, due to the slow performance of the computers, the main features of the project, especially the spiral transition, were not fully captured, which left users confused. Nevertheless, people still had fun, and I tried to explain the issue and guide them through the process.

Sketch: p5.js Web Editor | Uzumaki v3

Inspiration

This project draws inspiration from Junji Ito’s “Uzumaki,” a manga known for its distinctive use of spiral imagery and psychological horror elements. This interactive artwork translates the manga’s distinct visual elements into a digital medium, allowing users to create their own spiral patterns through hand gestures. The project maintains a monochromatic color scheme with digital noise to reflect the manga’s original black and white aesthetic, creating an atmosphere that captures the hypnotic quality of Ito’s work. The decision to focus on hand gestures as the primary interaction method was influenced by the manga’s themes of body horror and transformation, where the human form itself becomes part of the spiral pattern. The integration of accelerating warping effects and atmospheric audio further reinforces the manga’s themes of inevitable spiral corruption.

Methodology

Using ml5.js’s handPose model, the system tracks hand movements through a webcam, focusing on the pinch gesture between thumb and index finger to control spiral creation. The pinching motion itself mimics a spiral form, creating a thematic connection between the gesture and its effect. A custom SpiralBrush class handles the generation and animation of spirals, while also implementing a sophisticated two-phase warping effect that distorts the surrounding space. The initial warping effect adds depth to the interaction, making each spiral feel dynamic and impactful on the canvas. After 200 frames, a second acceleration phase kicks in, causing the spiral’s warping to intensify dramatically – a direct reference to the manga’s portrayal of spirals as entities that grow increasingly powerful and uncontrollable over time.

The technical implementation uses p5.js for graphics rendering and includes a pixel manipulation system for the warping effects. The graphics are processed using a double-buffer system to ensure smooth animation, with real-time grayscale filtering applied to maintain the monochromatic theme. The ambient background music and UI elements, including a fullscreen button and horror-themed instructions modal, work together to create an immersive experience that captures the unsettling atmosphere of the source material. The instructions themselves are presented in a way that suggests the spiral creation process is a form of dark ritual, enhancing the project’s horror elements.
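
As a rough illustration of the pinch detection described above, here is a minimal ml5.js handPose sketch. The keypoint names, the 30-pixel pinch threshold, and the callback structure are assumptions based on the current ml5 API; the actual project feeds the pinch position into the SpiralBrush class instead of drawing a circle.

let handPose;
let video;
let hands = [];

function preload() {
  handPose = ml5.handPose(); // load the hand-tracking model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, (results) => (hands = results)); // continuous detection
}

function draw() {
  image(video, 0, 0);
  if (hands.length > 0) {
    let kp = hands[0].keypoints;
    let thumb = kp.find((p) => p.name === "thumb_tip");
    let index = kp.find((p) => p.name === "index_finger_tip");
    if (thumb && index && dist(thumb.x, thumb.y, index.x, index.y) < 30) {
      // Pinch detected: this is where a new spiral would be created or grown
      circle((thumb.x + index.x) / 2, (thumb.y + index.y) / 2, 20);
    }
  }
}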

Code I am Proud Of

One of the most interesting pieces of code in this project is the warping effect implementation in the SpiralBrush class:

warpPixels(pg) {
  if (this.swirlAngle > 0) {
    pg.loadPixels();
    let d = pixelDensity();
    let originalPixels = pg.pixels.slice();

    // Calculate warping area
    let minX = max(0, int((this.origin.x - warpRadius) * d));
    let maxX = min(w, int((this.origin.x + warpRadius) * d));
    let minY = max(0, int((this.origin.y - warpRadius) * d));
    let maxY = min(h, int((this.origin.y + warpRadius) * d));

    // Process pixels within radius
    for (let y = minY; y < maxY; y++) {
      for (let x = minX; x < maxX; x++) {
        let distance = dist(x/d, y/d, this.origin.x, this.origin.y);
        if (distance < warpRadius) {
          // Calculate warping effect
          let warpFactor = pow(map(distance, 0, warpRadius, 1, 0), 2);
          let angle = atan2(y/d - this.origin.y, x/d - this.origin.x);
          let newAngle = angle + warpFactor * this.swirlAngle;
          
          // Apply displacement
          let sx = this.origin.x + distance * cos(newAngle);
          let sy = this.origin.y + distance * sin(newAngle);
          // Transfer pixel data
          [...]
        }
      }
    }
    pg.updatePixels();
  }
}

This code creates a mesmerizing warping effect by calculating pixel displacement based on distance from the spiral center and the current swirl angle. The use of polar coordinates allows for smooth circular distortion that enhances the spiral theme. The acceleration component, triggered after 200 frames, gradually increases the warping intensity, creating a sense of growing unease that mirrors the manga’s narrative progression.

Challenges

Several technical challenges were encountered during development:

  1. Performance Optimization: Implementing pixel-level manipulation while maintaining smooth frame rates required careful optimization of the processing area and efficient buffer management. The addition of the acceleration phase complicated this further, as the increased warping intensity required more computational resources.
  2. Gesture Recognition: Achieving reliable pinch detection required fine-tuning the distance threshold and handling edge cases when hand tracking momentarily fails. The system needed to maintain consistent spiral generation while dealing with the inherent variability of webcam-based hand tracking.
  3. Visual Coherence: Balancing the intensity of the warping effect with the spiral growth to maintain visual appeal while avoiding overwhelming distortion proved particularly challenging when implementing the acceleration phase. The transition between normal and accelerated warping needed to feel natural while still creating the desired unsettling effect.

Conclusion

I had a lot of fun doing this project, and I think it successfully achieves the goal of creating an interactive experience that captures the essence of Uzumaki’s spiral motif. The combination of hand tracking, real-time visual effects, and audio creates an engaging installation that allows users to explore the hypnotic nature of spirals through natural gestures. The monochromatic aesthetic, warping effects, and horror-themed UI elements effectively translate the manga’s visual style and atmosphere into an interactive digital medium. The addition of the acceleration phase adds a deeper layer of narrative connection to the source material, while the fullscreen capability and atmospheric audio create a more immersive experience.

Future Improvements

Looking ahead, there are several avenues for enhancing the project’s impact and functionality. The implementation of WebGL support would enable smoother rendering on larger canvases, allowing for more complex spiral patterns without compromising frame rates. The warping system could be optimized through GPU acceleration, enabling more sophisticated interactions between multiple spirals. A particularly intriguing possibility is the introduction of spiral memory, where new spirals could be influenced by the historical positions and intensities of previous ones, creating a cumulative effect that mirrors the spreading corruption theme in the manga. The addition of procedural audio generation could create dynamic soundscapes that respond to spiral intensity and acceleration, deepening the horror atmosphere. The instruction interface could be expanded into a more narrative experience, perhaps incorporating procedurally generated horror-themed text that changes based on user interactions. More digital distortion effects and an ink-based spiral texture would further enhance the experience.