Final Project: Kinetic Personalities

Concept

My project serves as a metaphor for the ever-changing nature of human identity and the many facets that constitute an individual. Inspired by the dynamic principles of cellular automata, it visualizes a grid of cells that continuously transition between phases of life and dormancy, mirroring the fluidity of human existence. Each cell represents a different element of one’s personality, much like the various roles, hobbies, and experiences that define a person at a certain point in time. The periodic interplay of dying and newly born cells encapsulates the essence of personal development and adaptability over time.

Video Demonstration

Images

Interaction Design

I crafted the interaction design to be both intuitive and playful, encouraging whole-body engagement. A key goal was to instill an element of discoverability and surprise in the user experience. For instance, the skeleton dynamically lights up when the wrists are drawn near each other, while the color palette transforms as the wrists move apart. This intentional design seeks not only to captivate users but also to symbolize a broader narrative: the idea that individuals possess the inherent power to shape and sculpt their own personalities, paralleling the dynamic changes observed in the visual representation. More insights into the interaction design emerged during user testing, described below.

User Testing

User testing was a crucial stage in the development of my project. Observing and hearing people’s expectations and frustrations while using the project helped me see its goals more clearly.

For instance, at first I was considering not including a human skeleton figure mimicking the participant, and instead weighing the option of a black-and-white video display. Participants were fonder of the video because it gave them visual feedback on their pose and on how the camera perceived their actions. Since the video display was a little too distracting for the eye, but visual feedback on the participant’s pose was desired, my solution was to include an abstract skeleton figure by taking advantage of the ml5.js library.

An additional valuable observation emerged in relation to event design. Initially, I had set the trigger of the first event to activate cells within the skeleton when the wrists came close together. While contemplating potential actions for triggering another event, a participant proposed that an intuitive approach would be activating the second event when the hands were stretched apart. Taking this insightful suggestion into account, I integrated the color-change mechanism to occur when the distance between the wrists was wide.

Here is a video of the final user testing:

Code Design

The code utilizes the p5.js and ml5.js libraries to create a cellular automata simulation that reacts to a user’s body movements captured via a webcam. The ml5.js PoseNet model gathers skeletal data from the video feed and identifies major body parts. The activation of cells in a grid is influenced by the positions of the wrists. The grid represents a cellular automaton in which cells evolve according to predefined rules. The user’s wrist movements activate and deactivate cells, resulting in intricate patterns. The project entails real-time translation, scaling, and updating of the cellular-automaton state, resulting in an interactive and visually pleasing experience that combines cellular automata, body movement, and visual aesthetics.

One of the key parts of the code was correctly calculating the indices of the cells to activate based on the video ratio. I decided that a 9×9 grid gave the best visual result. Here is my code for the activation of cells around the left wrist:

let leftWristGridX = floor(
  ((pose.leftWrist.x / video.width) * videoWidth) / w
);
let leftWristGridY = floor(
  ((pose.leftWrist.y / video.height) * videoHeight) / w
);

// Activate cells in a 9x9 grid around the left wrist
for (let i = -4; i <= 4; i++) {
  for (let j = -4; j <= 4; j++) {
    let xIndex = leftWristGridX + i;
    let yIndex = leftWristGridY + j;

    // Check if the indices are within bounds
    if (xIndex >= 0 && xIndex < columns && yIndex >= 0 && yIndex < rows) {
      // Set the state of the cell to 1 (activated)
      board[xIndex][yIndex].state = 1;
    }
  }
}
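The same mapping can be factored into a plain helper for readability (a hypothetical function, not part of the original sketch): PoseNet keypoint coordinates arrive in video space, so they are rescaled to display space before dividing by the cell size w.

```javascript
// Hypothetical helper mirroring the index calculation above:
// rescale a keypoint coordinate from video space to display space,
// then convert it to a grid index.
function toGridIndex(coord, videoSize, displaySize, cellSize) {
  return Math.floor(((coord / videoSize) * displaySize) / cellSize);
}
```

For example, a wrist at x = 320 in a 640-px-wide video shown at 1280 px with 20-px cells lands in column 32.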

Another key part was the events. Here is the code for the color switch event:

// Creating an event to change colors
let wristDistance = dist(
  leftWristGridX,
  leftWristGridY,
  rightWristGridX,
  rightWristGridY
);
let wristsOpen = wristDistance > 60 && wristDistance < 80;

// Activate or deactivate the event for all existing cells
for (let i = 0; i < columns; i++) {
  for (let j = 0; j < rows; j++) {
    board[i][j].event = wristsOpen; // responsible for color change in Cell class
  }
}
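The trigger condition can also be isolated into a small framework-free helper (hypothetical names; the 60–80 band matches the thresholds above, with distances measured in grid units):

```javascript
// Hypothetical standalone version of the color-switch trigger: the event
// fires only while the wrist distance falls inside an open band.
function wristsOpenEvent(lx, ly, rx, ry, lower = 60, upper = 80) {
  const d = Math.hypot(rx - lx, ry - ly); // same result as p5's dist()
  return d > lower && d < upper;
}
```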

Nevertheless, probably the biggest challenge was the accurate full-screen display. I used additional functions to handle it, which required re-initializing the board whenever the screen dimensions changed.
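As a rough sketch of what that re-initialization might look like (the name buildBoard and the cell shape are illustrative, not my original helpers; in p5.js this would typically be called from windowResized() after resizeCanvas(windowWidth, windowHeight)):

```javascript
// Hypothetical sketch: rebuild the CA board from the current canvas
// dimensions and a cell size w; every cell starts dormant.
function buildBoard(canvasWidth, canvasHeight, w) {
  const columns = Math.floor(canvasWidth / w);
  const rows = Math.floor(canvasHeight / w);
  const board = [];
  for (let i = 0; i < columns; i++) {
    board[i] = [];
    for (let j = 0; j < rows; j++) {
      board[i][j] = { state: 0, event: false };
    }
  }
  return board;
}
```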

Another important function was deactivateEdgeCells(). For some reason (probably because edge cells have a different number of neighbors), the edge cells would not deactivate like the rest of the cells once a wrist crossed them. Therefore, I added a function that loops through the edge cells and sets their state to 0 if they were activated:

function deactivateEdgeCells() {
  for (let i = 0; i < columns; i++) {
    for (let j = 0; j < rows; j++) {
      // Check if the cell is at the edge and active
      if (
        (i === 0 || i === columns - 1 || j === 0 || j === rows - 1) &&
        board[i][j].state === 1
      ) {
        board[i][j].state = 0; // Deactivate the edge cell
      }
    }
  }
}
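The likely cause can be seen in a bounds-checked neighbor count (a sketch with a hypothetical helper name): corner cells see only 3 neighbors and edge cells 5, instead of the 8 an interior cell sees, so rules tuned for interior cells can behave differently at the border.

```javascript
// Count live neighbors, skipping positions outside the board.
// Edge and corner cells therefore get systematically lower counts.
function countNeighbors(board, x, y) {
  const columns = board.length;
  const rows = board[0].length;
  let n = 0;
  for (let i = -1; i <= 1; i++) {
    for (let j = -1; j <= 1; j++) {
      if (i === 0 && j === 0) continue;
      const xi = x + i;
      const yj = y + j;
      if (xi >= 0 && xi < columns && yj >= 0 && yj < rows) {
        n += board[xi][yj].state;
      }
    }
  }
  return n;
}
```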

Sketch

Future Improvements

Here is a list of possible further developments:

  • Music Integration: The addition of music could enhance the overall experience, encouraging more movement and adding a playful dimension to the interaction.
  • Dance: Exploring the combination of the sketch with a live dance performance could result in a unique and captivating synergy of visual and kinesthetic arts.
  • Multi-User Collaboration: The sketch currently supports interaction for one person. Expanding it to accommodate multiple users simultaneously would amplify the playfulness and enrich the collaborative aspect of the experience.
  • Additional Events: One event that I would have loved to explore further was a change in the CA rules that generated a beautiful pattern expanding through the whole canvas. I believe it would make the sketch more dynamic.
  • Events on more advanced poses: Involving leg or head movements could make the project more intricate and add to the discoverability and surprise aspects.

Resources

A key element was the use of the ml5.js library; for its implementation I relied on Daniel Shiffman’s tutorials.

The CA rules were a happy accident that I discovered while experimenting in my CA weekly assignment.

IM Show Documentation

Final Project Draft #2

Concept development

I believe I was able to achieve my initial concept and allow the user to control the CA simulation through body movements.

I have developed two versions of my concept, one with the video displayed and the other without. I plan to pick the final version after the user testing stage. Here are the two versions (open in full screen; camera access required):

Here are some of my thoughts on the different versions.

Advantages of Displaying Video:

  1. Enhanced Engagement: Seeing live video might enhance user engagement, especially since the interaction involves real-time reactions to movements.
  2. Visual Feedback: Video provides visual feedback that helps users understand how their actions affect the system, creating a clearer experience.
  3. Creative Expression: Video display adds to the creative expression, allowing users to be part of the visual outcome.

Advantages of Not Displaying Video:

  1. Focus on Visualization: Without the distraction of the live video, users may focus more on the visual elements generated by the application, such as patterns, colors, and the CA element.
  2. Aesthetic Choice: The absence of video results in a cleaner, more stylized look that is more aesthetically appealing.
  3. Reduced Cognitive Load: The absence of a live video stream may reduce cognitive load, allowing users to concentrate on specific interactions without additional visual input, thus enhancing focus on the visual outcome.

To make a better decision about which version to proceed with, I am planning to track the engagement time as well as record the different opinions and reasoning of the participants in the user testing.

Next steps:

I want to advance my project by adding an event, either a change in colors or in the rules of CA when a certain pose is detected. I believe that such an addition would enhance the interactivity aspect, as it would provide additional feedback to the user’s movements.

Final Project Proposal


For my final project I want to merge generative art and machine learning by employing the ml5.js library to trigger simulations based on wrist movements detected through PoseNet. By adapting cellular-automaton rules and visual aesthetics, the project will transform hand gestures into an interactive experience built on cellular automata.

This inspiration comes from my wish to explore my last assignment further, especially the sketch below. I am thinking of connecting it to the ml5.js library and triggering the simulations from the points where the wrists are detected. Additionally, to make it a bit more complex, I am thinking of changing the colors or CA rules when certain events occur, for example, switching the color palette when the wrists are in the same place. In this way, the project should allow users to explore the boundaries between their creative input and the algorithmic generation of art.

Coding Assignment – Week #11

For this week’s assignment, I wanted to play with the different visual illustrations of the game of life. Also, I wanted to add the mouseDragged() function as an element of interaction. Here are the different versions:

  1. For this one I experimented with a different shape and background color. The circle in a square shape creates an effect as if the simulation were eating up the canvas, almost like it got infected.

  2. In this one, the rules are altered a little, which creates an interesting pattern as the simulation spreads.

  3. For this one, I wanted to keep only the birth and the death of the cells. This allows the viewer to focus more on the different shapes created by the simulation, especially in the beginning.

  4. This one was my favorite. Although very similar to the 3rd, the rules differ slightly, which makes the simulation die out sooner. I imagine this would be interesting to explore with real-time human movements instead of mouse dragging. Combining the game of life simulation with body movements would make for an organic interactive experience.

The tricky part was figuring out how to connect mousePressed() or mouseDragged() to the simulation. After a lot of thought about why it did not work, it turned out the missing piece was in the rules of the game of life. In general, the most interesting part was playing with the different rules. For instance:

// this is the 4th sketch:
if (board[x][y].state == 1 && neighbors > 0) {
  board[x][y].state = 0;
}

// this is the 3rd sketch:
if (board[x][y].state == 1 && neighbors < 0) {
  board[x][y].state = 0;
}

The only difference is the greater-than versus less-than comparison with 0, but it creates a very different visual effect. To sum up, this assignment was an interesting exploration of the different visual possibilities of the game of life.
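For reference, the classic rules these variants depart from (birth on exactly 3 neighbors, survival on 2 or 3) can be written as a compact, framework-free step function; this is a sketch using plain 0/1 values rather than the cell objects of my sketches.

```javascript
// One generation of Conway's Game of Life; out-of-bounds neighbors are
// simply skipped, which is exactly what makes edge cells behave differently.
function lifeStep(grid) {
  const cols = grid.length;
  const rows = grid[0].length;
  const next = [];
  for (let x = 0; x < cols; x++) {
    next[x] = [];
    for (let y = 0; y < rows; y++) {
      let n = 0;
      for (let i = -1; i <= 1; i++) {
        for (let j = -1; j <= 1; j++) {
          if (i === 0 && j === 0) continue;
          const xi = x + i;
          const yj = y + j;
          if (xi >= 0 && xi < cols && yj >= 0 && yj < rows) n += grid[xi][yj];
        }
      }
      // Birth on exactly 3 neighbors; survival on 2 or 3.
      next[x][y] =
        grid[x][y] === 1 ? (n === 2 || n === 3 ? 1 : 0) : n === 3 ? 1 : 0;
    }
  }
  return next;
}
```

A horizontal "blinker" of three cells flips to a vertical one after a single step, which is a quick sanity check for any rule variant.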

Coding Assignment – Week #10

Concept:

I wanted to utilize this assignment for an exploration of physics principles, particularly focusing on forces and collision events inside a dynamic particle system. I started from the example 6.10: Collision Events by Daniel Shiffman, and added new elements and events.

The sketch:

Technical Implementation:

The sketch is implemented using the p5.js and Matter.js libraries. The p5.js library is responsible for creating the canvas, rendering particles, and running the overall animation loop, while Matter.js handles the physics engine, collision detection, and force application. Particles are randomly added to the system, and each undergoes a color change upon its first collision. Whenever the number of particles is divisible by 60, an explosive force is applied to randomly selected particles, giving the sketch a more dynamic feel by initiating a small explosion. Additionally, a boundary is added at the bottom, and the particle count is constantly updated by removing particles pushed outside the canvas.

Code:

The most interesting part of the code was the handleCollisions(event) function, which is triggered by collision events in the Matter.js physics engine. It iterates through the pairs of bodies involved in the collisions, extracting the particle instance associated with each body. The function handles the color-changing behavior for both particles upon their initial collision and applies forces to each particle based on its velocity, amplifying the collision event.

// handling collisions between particles (Vector here is Matter.Vector)
function handleCollisions(event) {
  for (let pair of event.pairs) {
    let bodyA = pair.bodyA;
    let bodyB = pair.bodyB;

    let particleA = bodyA.plugin.particle;
    let particleB = bodyB.plugin.particle;

    if (particleA instanceof Particle && particleB instanceof Particle) {
      // changing colors upon first collision 
      particleA.change();
      particleB.change();

      // Applying a force when particles collide
      let forceMagnitude = 0.005;
      let forceA = Vector.mult(
        Vector.normalise(particleA.body.velocity),
        forceMagnitude
      );
      let forceB = Vector.mult(
        Vector.normalise(particleB.body.velocity),
        forceMagnitude
      );

      particleA.applyForce(forceA);
      particleB.applyForce(forceB);
    }
  }
}

Future Improvements:

For future improvements, it would be interesting to explore different shapes and how the collision events and forces would act on them. I like that the sketch is dynamic and develops over time, although perhaps there could be a more clear story behind it.

AI for Design: AI and the Future of Architecture

In Professor Neil Leach’s talk titled “AI and the Future of Architecture,” what caught my attention the most was how he presented AI as a paradigm shift in our understanding of intelligence, challenging the notion that human intelligence is the focal point. Leach introduced the concept of a “second Copernican revolution,” meaning that humans are no longer the central intelligence, with AI representing a superior form of intelligence. Although the idea is quite scary, he proposed that AI can serve as a mirror for studying our own cognitive processes, allowing us to gain insights into the intricacies of our thought patterns, decision-making processes, and creativity (I liked his thought that creativity as a concept should be questioned in general, as it might be just a broad term used to explain things we don’t understand).

This mirror-like function of AI becomes particularly relevant in fields such as design and architecture, where the design process is deeply rooted in human creativity and decision-making. For example, Leach suggested that AI has hacked into our sense of design and composition, by showing examples of how AI manipulates lighting conditions, rendering and our mental models of certain spaces with very little guidance. I believe this is a great way to notice and reflect on certain mental models and perhaps design by re-thinking them.

Another advantage of AI is, of course, AI as a design tool. AI is able to see patterns in data that humans can’t, thus generating better solutions in accordance with possible constraints that designers might not be aware of. Leach also mentioned how clients will start requiring the use of AI in design processes, which made me think of what the role of a designer will look like. Will it only encompass picking the best AI-generated solution? If so, how are creative professions, and many others as well, going to define themselves, and how will that affect human identities?

Coding Assignment – Week #9

Concept

For this week’s assignment, I did not specifically want to simulate a real-life system but rather focus on creating an interesting visual piece. As instructed, I started from the dynamic flocking system as the base. I experimented with different values of distances for separation, cohesion, and alignment. I wanted to create something more chaotic than the base: I wanted the cohesion to be stronger, and alignment to be less prominent. In general, I wanted the flocking to be more aggressive if that makes sense. Although I experimented without a goal to mimic a particular system, the end result sort of reminds me of fireworks at certain stages. Here is the sketch:

Implementation:

The most altered part from the base was the show() method. In it, the visual effect is achieved by manipulating the ‘pulsation’ variable, creating a dynamic look for each element; the dynamic size of the circles is based on a sine function. Additionally, I added a fading trail effect by initializing a trail list for every boid, storing the last 15 positions as vectors, and constantly updating them (I created an extended class so that I would not mess up the base too much, and at some point the boids reminded me of fireflies, hence the naming).

show() {
    // manipulating pulsation and alpha value
    let glow_size = 1 + 10 * sin(this.pulsation / 1.2);
    let alpha_value = map(glow_size, 1, 10, 100, 200);

    // drawing the trail
    for (let i = 0; i < this.trail.length; i++) {
      let trail_alpha = map(i, 0, this.trail.length, 0, 180); // Fade out the trail
      fill(255, 255, 200, trail_alpha);
      ellipse(this.trail[i].x, this.trail[i].y, glow_size, glow_size);
    }

    // Draw the current firefly
    noStroke();
    fill(255, 255, 25, alpha_value);
    ellipse(this.position.x, this.position.y, glow_size, glow_size);

    // Update pulsation for the next frame
    this.pulsation += this.pulsation_speed;

    // updating the trail array
    this.trail.push(createVector(this.position.x, this.position.y));
    if (this.trail.length > 15) {
      this.trail.shift(); // keeping the trail length limited
    }
  }
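The trail bookkeeping at the end of show() can be isolated as a small helper (hypothetical name, not in the original class): append the latest position and cap the history at 15 entries.

```javascript
// Hypothetical standalone version of the trail buffer: push the newest
// position and drop the oldest once the buffer exceeds maxLength.
function pushTrail(trail, pos, maxLength = 15) {
  trail.push(pos);
  if (trail.length > maxLength) trail.shift();
  return trail;
}
```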

Improvements

If I were to follow the fireworks theme as my inspiration, I think switches in colors would be an interesting field of exploration. Perhaps it would even be possible to orient the flocking more toward a vertical path and create a firework-like effect. Additionally, another direction could be taking inspiration from specific microorganisms and creating a more defined system that stabilizes a little more over time.

What I meant when I said that I was reminded of fireworks:

Coding Assignment – Week #8

Concept:

For this week’s assignment, I wanted to focus on developing a concept and creating a fun adaptation of the new movement simulations we learned in class. As I was revising the different agent behaviors we simulated in code, the pursue-and-evade example caught my eye the most, mostly because of the endless opportunities it opens up to tell simple stories. The three agents are like characters in a tale: the main character pursues something while trying to evade the bad guy. There are many possibilities to give this plot a more detailed and interesting scenario.

The scenario I decided to go with is that of a fisherman. The movement of the initial agent reminded me of a canoe at sea. Here was my first experiment (I see a boat lost at sea at night, but that might as well just be me haha!):

Nevertheless, I wanted to come back to the pursue and evade story plot. I created three characters: a fisherman in a canoe, a fish, and a shark. The story: the fisherman pursues the fish while avoiding the shark. Here is my final sketch:

Implementation:

For the sketch to come together, I decided to roughly sketch my own PNGs. Here they are:

Regarding the code, the difficult part was correctly displaying the images, mostly the rotation, so that the canoe moves with its tip facing the fish, and likewise for the fish and the shark. Here is how I modified the show() function in the Vehicle class:

show() {
  let angle = this.velocity.heading();
  fill(127);
  stroke(0);
  push();
  translate(this.position.x, this.position.y);
  rotate(angle + HALF_PI); 
  image(boat, -50, -50, 100, 100); 
  pop();
}
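For context, velocity.heading() in p5.js is equivalent to Math.atan2(y, x) in the default radians mode, and the extra HALF_PI compensates for the PNGs being drawn facing upward. A framework-free sketch of the angle calculation (hypothetical helper name):

```javascript
// Angle for a sprite drawn facing "up": the heading of the velocity
// vector plus a quarter turn (HALF_PI in p5.js terms).
function spriteAngle(vx, vy) {
  return Math.atan2(vy, vx) + Math.PI / 2;
}
```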

For further improvements, it would be interesting to make the shark also pursue the fish. Additionally, for more detail, I would add an event that occurs when either the fisherman or the shark gets to the fish first and a collision occurs. Right now, when the images intersect, one just floats on top of the other, which is not the smoothest visual result.

Midterm Project: “Love at First Force: Either Pure Attraction or Complete Repulsion”

Sketch

Link to fullscreen: https://editor.p5js.org/llluka/full/NvFLbfOZK

Images of different compositions:

Concept

This project aims to build a dynamic and visually appealing simulation of interactions between magnetic particles and magnets. It allows users to examine the behavior of these magnetic forces in a virtual environment built with the p5.js framework. It is intended to be both an interactive and an instructional tool: users can change the number of magnets, the number of particles generated, and even the particle colors. The simulation’s basic goal is to demonstrate how particles respond to magnetic fields, exhibiting the attractive or repulsive forces they experience based on their individual charges. By incorporating the concept of powered distance, where forces weaken rapidly with increasing distance, the project captures the essence of magnetic interactions in a visually intuitive manner. Overall, this project provides an engaging way to visualize magnetic principles while also giving users the ability to influence particle behavior in response to changing magnetic conditions.

Coding Logic

This project’s technical execution is driven by two central classes, “Magnet” and “Particle,” which serve as the building blocks for the magnetic simulation. Magnets are initialized at randomized positions across the canvas, adding an element of unpredictability to their arrangement and the resulting pattern. The ability of these magnets to switch between attraction and repulsion, controlled by user input (pressing ‘t’ on the keyboard), gives the simulation further variety. Particles are also given random starting points, contributing to the generative aspect of the sketch. Their movement is governed by the magnetic forces the magnets exert.

Another key component of the simulation is the “powered distance” factor. The choice of power exponent (in this case, set to 4 for visual and aesthetic reasons) greatly influences how the magnetic forces weaken with distance. By amplifying the distance factor to the fourth power, the simulation illustrates how magnetic principles work, with strong forces close to the magnets and a rapid reduction in force as particles move away.

Regarding the content learned in class, I relied heavily on vectors and forces. The idea was similar in spirit to movers and attractors, and combining it with particles is what produced the final result.

Challenging Parts

The main and most difficult part was simulating the magnetic force between the particle and the magnet. I had to apply the inverse-square law for forces, since many physical forces, including gravity and electromagnetic forces (of which magnetic forces are a part), follow this law. The law states that the strength of a force is inversely proportional to the square of the distance between the objects involved: F ∝ 1 / r². Here is my update() function inside the Particle class that deals with the magnetic force:

update(magnets) {
    // Update the position of the particle based on magnetic interactions
    let sum = createVector(); // accumulates the magnetic forces

    // iterate through the magnets and compute the force each exerts
    for (let magnet of magnets) {
      // vector from the magnet to the particle
      let diff = p5.Vector.sub(this.pos, magnet.pos);
      // powered distance: the larger it is, the weaker the force
      let poweredDst = pow(diff.x, dstPow) + pow(diff.y, dstPow);
      // (this.power * magnet.power) sets the sign: opposite powers attract,
      // matching powers repel; dividing by poweredDst sets the magnitude
      let force_mag = (this.power * magnet.power) / poweredDst;
      diff.mult(force_mag);
      sum.add(diff); // accumulate the combined effect of all magnets
    }

    // Update the position based on the accumulated forces
    this.pos.add(sum.setMag(1).rotate(HALF_PI));
  }
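To see how quickly the powered distance weakens the force, the core of the loop above can be replayed as a plain function (hypothetical helper; dstPow = 4 as in the sketch). Doubling the distance along one axis divides the force magnitude by 2⁴ = 16.

```javascript
// Mirrors the force computation in update(): the sign of the product
// of powers decides attraction vs. repulsion, the powered distance
// decides how fast the force falls off.
function forceMagnitude(particlePower, magnetPower, dx, dy, dstPow = 4) {
  const poweredDst = Math.pow(dx, dstPow) + Math.pow(dy, dstPow);
  return (particlePower * magnetPower) / poweredDst;
}
```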

Pen Plotting translation / process

Pen plotting was the most exciting part of the project, as it was highly satisfying to observe a digital sketch come alive in physical form. The challenging part was dealing with the SVG files: for some reason, particles would extend outside the canvas in the exported SVG file, which did not show up in the p5.js sketch. This became complicated because there were no borders, so the pen plotter would catch the edge of the paper, and it took a couple of tries to plot the sketch right. I used the version of my sketch that only had the white lines to export the SVG files. That version of the code also had an altered Particle class that saved the previous position and drew a line between the previous and current positions rather than an ellipse at the current position. (I removed the SVG code from the sketch so that the code runs faster, but apart from that and the canvas size, nothing else was altered.)

Areas for improvement / future work

A feature I would implement next is giving the magnets some movement. I believe it would make the sketch more dynamic, and it would be interesting to observe how the particle movement changes when the magnets move versus now, when they are static. From an aesthetic perspective, it would be interesting to experiment with light, perhaps adding some glow to the magnets, or to the particles when they collide with magnets. Combining that with a bit of shading or fading would give the sketch some depth and make it appear more 3D.

Midterm Progress #2

Concept:

I decided to alter my initial idea due to some issues. To achieve the desired effect, I needed hundreds of particles, and because they had to be aware of and interact with each other, that required a lot of iterations, which turned out to be too heavy for p5.js and resulted in a lot of lagging. Thus, I decided that I still wanted to explore particles and keep the theme of emergence, not in an artificial-life way but more in a pattern-creating way.

To achieve this concept, I wanted to allow the particles to move independently and only be attracted or repelled in random areas. For this reason, I decided to create magnet objects that would have a power coefficient of either 1 or -1 and would thus either attract or repel the particles, respectively. With this design, I have a lot of opportunities for generative art and customization, since many components can be either randomized or placed under user control.
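Under this design, the attraction/repulsion switch can be as simple as flipping the sign of each magnet’s power coefficient; a minimal sketch with a hypothetical helper name:

```javascript
// Hypothetical helper: flipping each magnet's power coefficient turns
// attractors (+1) into repellers (-1) and vice versa.
function togglePolarity(magnets) {
  for (const m of magnets) m.power *= -1;
  return magnets;
}
```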

Here is my current sketch:

Next steps:

This version looks optimal for the pen plotter; however, I still need to fix my SVG files. Since my lines are a bunch of dots, I need to convert them into curves, or otherwise plotting will take forever. Regarding my sketch, I need to implement the different modes. I am thinking of introducing a simple UI to let the user adjust the number of magnets, the number of particles, and the colors of the sketch. This way, a user could achieve a high number of combinations with only 5 sliders (one for magnets, one for particles, and 3 for color: red, green, and blue). Below are a few versions of my sketch I was able to achieve by altering these components.