Shreya’s Wilderness Forest – Final Project

THE FINAL SKETCH

Click on the button to enter the simulation!

Here is the full p5.js code: https://editor.p5js.org/shreyagoel81601/sketches/Y6gK7f1Iz

MY DESIGN GALLERY

Below are the various outputs I got from the sketch above:

INSPIRATION, CONCEPT & ARTISTIC VISION

We studied fractals in class and I was especially fascinated by them. Just a recursive function with some property to it, a set of rules, and one can make such interesting patterns! I wanted to delve deeper and explore the field of fractals further for my final project, using them in a creative way to design intricate and mesmerising patterns that one wants to keep looking at.

Fractals are an important branch of mathematics, and their recursive property (self-similarity) is of interest in many fields. They are used in applied mathematics for modelling a variety of phenomena, from physical objects to the behavior of the stock market. The concept of the fractal has also given rise to a new system of geometry crucial in physical chemistry, physiology, and fluid mechanics. But what if fractals were not limited to these sets of equations and properties, jargon to many, but presented in a visually appealing way to a wider audience? That would really show how broad and fascinating the field of math is and how it can be used to create art. That is what I set out my final project to be: to curate a design or simulation which evolves over a period of time based on the fractal properties we study in math. More about fractals: source1, source2, source3.

Mathematical fractals:

Fractals in nature:

Out of all the fractals that exist out there, I was most intrigued by the ones that appear in leaves and trees, and hence decided to re-create those using the idea of recursion with some of the known fractal equations and rule sets. However, the focus was not only on generating naturally occurring fractal patterns – the trees – but also on putting them together in a creative way, artistically curating them into some sort of generative algorithm or simulation, evolving over a period of time, which users could interact and play with. I proceeded with simulating a forest scenario – where users can plant a tree, make trees fall, or have them sway upon hover interaction, as if one is shaking their trunk!

THE PROCESS, TECHNICAL DESIGN & IMPLEMENTATION

I started by first reading thoroughly about fractals and informing myself about them so I could choose which kind of fractals to go with, how to code them, what properties to use, etc. I had known what fractals are, but now I was interested in knowing their origin, their nature, how they give rise to certain forms and patterns, and so on. I went down a rabbit hole studying fractals, to the extent that I started proving their properties xD. It was very enriching!

My project uses two kinds of fractals – the L-system (for the instructions/home page), and stochastic branching fractals for the main wilderness forest part. I started with what we had in class first, the recursive way of building the stochastic trees, and played with its design to create a forest-like system. See my initial sketch below:

// in setup() – 'length' is a global base branch length defined in the full sketch
  fractalTree(width/4, height, length*1.2, 8);
  fractalTree(width/8, height, length*0.5, 3);
  fractalTree(width/2, height, length*1.5, 7);
  fractalTree(width/8*7, height, length*0.5, 3);
  fractalTree(width/4*3, height, length*0.7, 5);

// the recursive function
function fractalTree(x, y, len, weight) {
  push();
  if (len >= 2) {
    // draw this subtree's trunk segment
    strokeWeight(weight);
    translate(x, y);
    line(0, 0, 0, -len);  
    translate(0, -len);
    // 'sweight' is a global scale factor (< 1); applying it twice here makes
    // each child branch noticeably thinner than its parent
    weight *= sweight;
    strokeWeight(weight);
    weight *= sweight;
    
    let num = int(random(4));
    for(let i = 0; i <= num; i++) {
      push();
      let theta = random(-PI/2, PI/2);
      rotate(theta);
      fractalTree(0, 0, len*0.67, weight);
      pop();
    }
  }
  pop();
}

The above is a static version; it has no movement or user interaction yet. Next, I tried exploring L-systems. Here is the sketch for that:

// in setup()   
  let ruleset = {
    F: "F[F]-F+F[--F]+F-F",
  };
  lsystem = new LSystem("F-F-F-F", ruleset);
  turtle = new Turtle(4, 0.3);

  for (let i = 0; i < 4; i++) {
    lsystem.generate();
  }

// in draw()
  translate(width / 2, height);
  
  let angle = map(mouseX, 0, width, -0.3, 0.3);
  let h = map(mouseY, height, 0, 0, 8);
  turtle = new Turtle(h, angle);
  
  turtle.render(lsystem.sentence);
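
The LSystem and Turtle classes used above are not shown in this post; here is a minimal sketch of what they could look like, modeled on the Coding Train L-system example. Only generate(), render(), and the constructor arguments are taken from the usage above – everything else is an assumption.

// Minimal L-system: each generation rewrites the sentence by applying the ruleset to every character.
class LSystem {
  constructor(axiom, ruleset) {
    this.sentence = axiom;
    this.ruleset = ruleset;
  }
  generate() {
    let next = "";
    for (const c of this.sentence) {
      next += this.ruleset[c] || c; // characters with no rule are copied as-is
    }
    this.sentence = next;
  }
}

// Minimal turtle: F draws a segment, +/- turn, [ and ] save/restore the transform.
class Turtle {
  constructor(len, angle) {
    this.len = len;
    this.angle = angle;
  }
  render(sentence) {
    for (const c of sentence) {
      if (c === "F") {
        line(0, 0, 0, -this.len);
        translate(0, -this.len);
      } else if (c === "+") {
        rotate(this.angle);
      } else if (c === "-") {
        rotate(-this.angle);
      } else if (c === "[") {
        push();
      } else if (c === "]") {
        pop();
      }
    }
  }
}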

After playing around and educating myself about the different ways one can use fractals to simulate trees and forest scenes, it was time to curate the performance. I wanted the trees not to be static, but to grow slowly, and also to sway as if wind were blowing. For this, it was necessary to move away from the purely recursive way of coding, because a recursive function just draws the tree once and is done; it stores no properties that one can alter later on to play with and achieve the results I intended to create. Hence, I transitioned to an OOP approach inspired by Coding Train.

class Branch {

  constructor(begin, end, strokew) {
    this.begin = begin;
    this.end = end;
    this.finished = false;
    this.strokew = strokew;
    this.speed = random(4,12);
  }

  // both branchA() and branchB() spawn a child branch at a random angle;
  // the show() and jitter() methods used later in the sketch are omitted here
  branchA() {
    let dir = p5.Vector.sub(this.end, this.begin);
    dir.rotate(random(-PI/2, PI/2));
    // dir.rotate(PI / 6 + dtheta);
    dir.mult(0.67);
    let newEnd = p5.Vector.add(this.end, dir);
    let b = new Branch(this.end, newEnd, this.strokew*0.8*0.8);
    return b;
  }

  branchB() {
    let dir = p5.Vector.sub(this.end, this.begin);
    dir.rotate(random(-PI/2, PI/2));
    dir.mult(0.67);
    let newEnd = p5.Vector.add(this.end, dir);
    let b = new Branch(this.end, newEnd, this.strokew*0.8*0.8);
    return b;
  }
}
let forest = [];
let numTrees = -1;
let count = 0;           // how many generations the current tree has grown
let generateBool = false;

// in setup()
genButton = createButton("Grow");
genButton.mousePressed(createNewTree);

function createNewTree() {
  let len = randomGaussian(height*0.25, 20);
  let x = random(10, width-10);
  let w = map(len, 0, height/2, 2, 12);
  
  let a = createVector(x, height);
  let b = createVector(x, height - len);
  let root = new Branch(a, b, w);

  let tree = []
  tree[0] = root;
  forest.push(tree);
  
  numTrees++;
  count = 0;

  generateBool = true;
}

function generate() {
  let tree = forest[numTrees];

  if (count < 12) {
    for (let i = tree.length - 1; i >= 0; i--) {
      if (!tree[i].finished) {
        tree.push(tree[i].branchA());
        tree.push(tree[i].branchB());
      }
      tree[i].finished = true;
    }
    count++;
  }
  else {
    generateBool = false;
  }
}

function draw() {
  if (generateBool) {
    generate();
  }
  // (rendering each branch of each tree is omitted in this snippet)
}

However, this was not easy or straightforward. I ran into many challenges.

BLOOPERS

To implement the sway feature, I decided to have the trees jitter upon mouse hover, to create the effect of a user shaking the tree. The basic idea was to change the angle of each branch to create a sway effect, but that led to the tree disintegrating (see below).

This happens because, when creating the tree, each branch starts from the end of the previous branch; when I rotate a branch, the end point of the previous branch moves but not the starting point of the new branch. The way I fixed this was by altering the end points for the sway instead of the angle, and then redrawing the next branches based on this new end point.

// in draw() – 'range' is how close the mouse must be to a trunk to shake that tree
if (mouseX < width && mouseX > 0 && mouseY < height && mouseY > 25) {
  for (let j = 0; j < forest.length; j++) {
    let tree = forest[j];
    if (abs(mouseX - tree[0].begin.x) < range) {
      for (let i = 1; i < tree.length; i++) {
        tree[i].jitter();
      }
    }
  }
}
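
The jitter() method itself is not shown above, so here is a minimal sketch of what it could look like, following the fix just described – nudging the branch's end point by a small random offset instead of rotating it (the offset range is an assumption):

// inside the Branch class
jitter() {
  // move only the end point; the begin point stays put, so the branch never detaches
  this.end.x += random(-1, 1);
  this.end.y += random(-1, 1);
}

Because each child branch is constructed with the parent's end vector, the child's begin point follows the parent's end point when that vector is shared by reference, which is what keeps the tree connected while it shakes.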

Another issue I faced was nailing how many branches each tree should have. With too many branches, the program got too heavy to run and my laptop kept crashing; with too few, it would not look like a tree, or look real, and would defeat the purpose of the whole model.

INSTRUCTIONS PAGE

THE PROCESS PHOTOS

IM SHOWCASE PRESENTATION

My project was exhibited at the Interactive Media End of Semester Showcase at New York University Abu Dhabi, where it was interacted with by many people – not just students, but also faculty, staff, and deans, from all majors.

Final Project: Vector Body Motion Visualizer

Inspiration:

I have always been interested in programs that can locate certain areas of the body or face through advanced technology. Although it is a very involved process and takes several trials and sample pictures to achieve, it results in a very exciting and rather interactive experience.

(Image: facial feature markers, from "Facial Feature Movements Caused by Various Emotions: Differences According to Sex", Symmetry)

Although this concept is really interesting, what fascinated me even more is just how much we can do with these facial markers. What I found interesting was the ability to track vectors to mimic the facial movements, and to draw those vectors over the camera feed. As I thought about these changes in displacement and in the x and y values of the vectors, I wondered how we could visualize this change, and that is when I thought of a simple graph used in physics and maths all the time: a Position vs. Time Graph!

(Images: "2.3 Position vs. Time Graphs", Texas Gateway; "Vector notation", Wikipedia)

Concept:

My program uses the p5.js library to capture video input, analyze optical flow, and visualize motion through graphical representations with the help of vectors, mainly visualizing vertical and horizontal changes in the vectors. The flow file deals with detecting motion and optical flow, which are concepts beyond the scope of the class, but with the help of implementations I found online I integrated them into the flow.js file.

The flow and the arguments it returns are then used to draw motion vectors on the canvas, representing the vertical and horizontal motion detected. In addition, the program instantiates two instances of the graph class to visualize the left-right (top graph) and up-down (bottom graph) motion over time, creating trailing graphs that represent the history of the motion. Throughout the draw loop, the motion vectors are also modified with Perlin noise and trigonometry to introduce some variation.
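
The graph class itself is not shown in this post; below is a minimal sketch of what such a trailing graph could look like. The class name, constructor arguments, and sample range are assumptions, not the actual implementation.

// Hypothetical trailing graph: stores the last N samples of one motion component
// and draws them as a scrolling polyline.
class MotionGraph {
  constructor(x, y, w, h, maxSamples = 200) {
    this.x = x; // top-left corner of the graph area
    this.y = y;
    this.w = w;
    this.h = h;
    this.maxSamples = maxSamples;
    this.samples = [];
  }

  addSample(value) {
    this.samples.push(value);
    if (this.samples.length > this.maxSamples) {
      this.samples.shift(); // drop the oldest sample so the curve trails
    }
  }

  display() {
    noFill();
    stroke(255);
    beginShape();
    for (let i = 0; i < this.samples.length; i++) {
      const px = map(i, 0, this.maxSamples - 1, this.x, this.x + this.w);
      const py = map(this.samples[i], -1, 1, this.y + this.h, this.y);
      vertex(px, py);
    }
    endShape();
  }
}

// usage: one instance for left-right motion, one for up-down motion
// graphX.addSample(flow.x); graphX.display();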

How it Started:

Initially I had a completely different idea in mind. I planned to take several quick sketches I had implemented and have a program represent how words and actions can make a difference; my initial sketches are shown below:


When looking at the pros and cons of integrating all these sketches into one, I found that the cons outweighed the pros. The positive of this idea is that it uses many of the concepts we covered in class: vectors, fractals, steering forces, Perlin noise, particle systems, and more.

However, the biggest downside of this idea was that, although it ticked the boxes of integrating the material taught in class, it lacked visual aesthetics. Not only that, but p5.js usually lags and slows down when integrating too many things at once, especially if it includes user interaction.

Initially I took inspiration from the flow field code demonstration we had gone over in class that integrated Perlin noise. I then integrated the webcam as my user interaction method. While working on displaying the vectors, I went back to the Perlin noise integration and decided to add it to the vectors to ensure a smoother transition when the vectors' x and y components change.

Final Project: Stages

Stage 1: Camera Input 

https://editor.p5js.org/ea2749/full/W9QWMntNK

Stage 2: Integrating flow field and Perlin Noise

https://editor.p5js.org/ea2749/full/aTHHs17Mj

Stage 3: RGB dependencies on the vector's x and y

https://editor.p5js.org/ea2749/full/hrdedXJrD

Stage 4: Graphical Representation & Final Touches

https://editor.p5js.org/ea2749/full/2-pGvXr8o

Tackling Challenges:

The main challenge I faced was trying to understand the user interaction methods I wanted to use. Since I wanted to explore outside my comfort zone, I had to try new types of user interaction besides the mouse and keyboard, so I decided to use the webcam.

(Images: face marker locations (blue dots) in the Vicon recordings; marker MPEG-FAP association with the TRACK reference model – both via ResearchGate)

While using the camera, I found ways online to get these facial points; however, translating the values returned by these methods into arguments I could use, similar to our vector implementation in class, was difficult. When dealing with user interaction methods like these, we don't always get values or return types that are flexible enough to use with any method or class, so modifying and simplifying the code, using ChatGPT, and trial and error helped.

Snippets of Code:

Displaying Vectors and Using them as Arguments:

Vectors were represented by lines, taking the x and y values of the vector as arguments for the line. As for the color, we use the map() function to take a value from the horizontal/vertical component of the vector and translate it to a value from 0 to 255, so it can be used as an RGB value. Notice how the colors will only change if the vector's horizontal/vertical component changes, meaning the colors change only when movement is detected.
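
A minimal sketch of that idea, where the names pos and flow (the grid position and the motion vector at that cell) are assumptions:

// draw one motion vector as a line and colour it by its components
function drawMotionVector(pos, flow) {
  // map the horizontal/vertical components to 0-255 so they can act as RGB values;
  // the colour only changes when the components change, i.e. when motion is detected
  const r = map(flow.x, -10, 10, 0, 255);
  const b = map(flow.y, -10, 10, 0, 255);
  stroke(r, 100, b);
  line(pos.x, pos.y, pos.x + flow.x, pos.y + flow.y);
}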

Using Perlin Noise:

We use Perlin noise to achieve a smoother transition between movements of the vectors, and also use the map() function to translate the values into smaller values that make the noise function more seamless and smooth.
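
A small sketch of that smoothing step, with the scaling values being assumptions:

// nudge a raw flow component with a slowly varying Perlin noise sample
function smoothComponent(value, noiseOffset) {
  const n = noise(frameCount * 0.01 + noiseOffset); // slowly varying sample in [0, 1]
  return value + map(n, 0, 1, -0.5, 0.5);           // small, smooth adjustment
}

// e.g. flow.x = smoothComponent(flow.x, 0); flow.y = smoothComponent(flow.y, 1000);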

Using Trigonometry:

We use trigonometry, specifically sine and cosine, to limit the increments of the vertical and horizontal components of the vector to [-1, 1]. This constrains the increments even more, making the transitions even smoother and more seamless.
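
A sketch of that trigonometric limiting, where the angle source and the flow variable are assumptions:

// sin() and cos() are bounded to [-1, 1], so using them as per-frame increments
// keeps each change to the components small and the motion smooth
const t = frameCount * 0.05;
flow.x += cos(t);
flow.y += sin(t);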

FINAL Product:

https://editor.p5js.org/ea2749/full/2-pGvXr8o

Reflection and Areas of Improvement:

To conclude this project, and this semester, I really enjoyed the process of developing this project, mostly because I got to learn so many concepts of user interaction, and I was also able to integrate concepts taught in class such as vectors, Perlin noise, and trigonometry.


To further advance this project, I plan to find other ways to make it more interactive, perhaps by adding more sensitivity to motion, or by having the vectors react differently based on the extent of motion. I also think the project could have been better if I had used objects from physics libraries instead of representing the vectors as lines.

The IM ShowCase:

Final Project – Pacman’s EcoSim

Inspiration:

The inspiration for this project was the concept of an ecosystem simulation. The idea was to create a dynamic environment where different entities interact with each other and evolve. The entities in this ecosystem are Boids, Pacmen, and Ghosts, each with their own behaviors and attributes. The user can be the decider of how these creatures evolve by adjusting these attributes using sliders.

Why Pacman?

The Pacman theme was chosen for this project due to its familiarity, aesthetics, and game dynamics. It’s a well-known game, making the ecosystem simulation easy to understand. The bright colors and simple shapes enhance the user experience, and the game’s chase and evade mechanics effectively illustrate predator-prey relationships in nature.

First Iteration:


The first iteration of the project involved setting up the basic structure of the ecosystem. The entities are implemented as vehicles that exhibit behaviors similar to those of living organisms, such as seeking food, avoiding poison, and reproducing. The main idea was to create a basic system that I could build on top of for my next iteration. I set up the foundational functions, such as seeking and reproduction.

Next Steps:

Fleeing Behavior: Fleeing behavior for the boids was added, which is triggered when a predator or apex predator is nearby. This made the simulation more realistic and dynamic.

Introduction of Apex Predators: Apex predators were introduced into the ecosystem. These entities hunt both boids and predators, adding another layer of complexity to the simulation.

User Controls Added: More user controls were added to the simulation. This included sliders to adjust the speed, reproduction rate, and lifespan of each entity type. This allows users to experiment with different settings and observe how they affect the ecosystem.

Pacman Theme: The visuals of the simulation were improved by using images instead of simple shapes for the entities. This made the simulation more visually appealing and engaging.

Performance Optimized: As the complexity of the simulation increased, the code was optimized to ensure that it ran smoothly. This involved techniques such as quadtree optimization for collision detection (see the sketch below).
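
A rough sketch of how such a quadtree lookup could work, assuming a QuadTree class along the lines of the Coding Train implementation (Rectangle takes a centre point and half-dimensions, and query() returns the points inside a range) – none of this is the project's exact code:

// rebuild the quadtree each frame and query it instead of checking every pair of entities
let qtree = new QuadTree(new Rectangle(width / 2, height / 2, width / 2, height / 2), 4);
for (const boid of boids) {
  qtree.insert(new Point(boid.position.x, boid.position.y, boid));
}

// a predator only examines boids inside a small search box around itself
const searchArea = new Rectangle(predator.position.x, predator.position.y, 50, 50);
for (const point of qtree.query(searchArea)) {
  predator.seek(point.userData.position); // userData holds the boid stored with this point
}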

Code that I am proud of:

seek(target) {
  let desired = p5.Vector.sub(target, this.position);
  desired.setMag(this.maxSpeed);
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxForce);
  this.applyForce(steer);
}

The seek() function in the ApexPredator class is used to move the apex predator towards a target. It calculates a desired velocity vector pointing from the apex predator to the target, sets its magnitude to the maximum speed of the apex predator, and then calculates a steering force to apply to the apex predator to move it towards the target.

flee(target) {
  let desired = p5.Vector.sub(this.position, target);
  desired.setMag(this.maxSpeed);
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxForce);
  this.applyForce(steer);
}

The flee() function in the Boid class is used to move the boid away from a target. It calculates a desired velocity vector pointing from the target to the boid, sets its magnitude to the maximum speed of the boid, and then calculates a steering force to apply to the boid to move it away from the target.

function changePlaySpeed(){
  if (playSpeed === 1) {
    playSpeed = 2;
  } else if (playSpeed === 2) {
    playSpeed = 4;
  } else if (playSpeed === 4) {
    playSpeed = 8;
  } else if (playSpeed === 8) {
    playSpeed = 1;
  }
  playSpeedButton.html('Play Speed:' + playSpeed + 'x');
}

The changePlaySpeed() function is used to cycle through different play speeds each time it is called. The play speed is displayed on a button in the user interface.

Challenges:

Autonomous Agent Logic

Implementing autonomous agent logic was a significant challenge. This involved creating behaviors for the boids, pacmen, and ghosts in the ecosystem. Each entity needed to have its own set of behaviors and interactions with other entities, which required a deep understanding of vectors, forces, and steering behaviors.

Slider Range Balancing

Balancing the range of the sliders was another challenge. The sliders control various attributes of the entities, such as speed, force, and reproduction rate. Finding a range that provided meaningful changes without causing extreme behaviors was a delicate balancing act.

Performance Management

Managing performance was a critical challenge. With potentially hundreds of entities on the screen at once, each with its own set of behaviors and interactions, the simulation could easily become slow and unresponsive. Optimizing the code to handle this complexity while still running smoothly was a significant part of the project.

Final Touches:

The final steps of the project involved refining the behaviors of the entities and improving the user interface. This included adding a button for generating the next generation of entities, and creating a screen for displaying the sliders that control the attributes of the entities.

Final Sketch:

Possible Future Improvements:

– Adding more entity types: The ecosystem could be made even more complex and interesting by adding more types of entities, such as plants or other types of animals.
– Implementing genetic algorithms: The entities could be made to evolve over time using genetic algorithms, with the most successful entities passing on their traits to the next generation.

– Special Creature Abilities: A possible improvement could be adding new abilities that the creatures can occasionally use to grow their numbers and recover from the brink of extinction.

IM Showcase:

The IM showcase was a success, and many people stopped by to check out the class's projects. The learning curve for the simulation was quite steep for visitors, and it took a while for them to get the hang of the game.

I also decided to showcase my midterm project, which actually caught the eyes of more people because it was more visually interesting.

Final Thoughts:

The journey of this project, from its initial conception to its final iteration, has been a testament to the iterative nature of design. It presented several challenges, but also provided the opportunity to learn and apply new concepts. The end result is a dynamic ecosystem simulation that is visually interesting to interact with. However, I still think there is a lot more room for improvement, which I would like to explore in the future.

You, I, everyone is a Black Hole

Black holes are fun. The fact that they are so dense that even light can't escape their gravitational pull is intriguing in itself. In this project, I present you with a chance to become a black hole, at least an imaginary one.

Ambition

  1. I started with an unrefined generative art idea and experimented with a few things that went terribly wrong and couldn't scale on a giant screen.
  2. For me, the most important part of this project was the collaborative interaction I wanted to achieve. And trust me on this, it's not as simple as I imagined it would be.

I started with an idea of combining screens to generate an art piece. Based on the movement of the screen, the art would change and scale.

But this posed a lot of issues.

  1. First, not all screens have the same width and height.
  2. The resolution difference of each device also affects the generated browser viewport.
  3. Finding the exact position of each device is not impossible, but it is hard to achieve given the timeframe of this project.

After numerous failed attempts to perfect device positioning and motion, I realized I could still use device movement to make the project interactive. Why? While the mouse and keyboard are fun, there is only one of each per machine. How would multiple users participate?

This led to the idea of using devices to detect movement and affecting the generated art.

Pivot

I wanted to use Cellular Automata, Flocking, or Fractals, but the idea of movement was quite difficult to visualize with those (at least I wasn't able to).

But hey! We have Autonomous Agents!

Based on what we had studied in class about autonomous agents and their behavior, I came up with the black hole idea. Why not give everyone an opportunity to control a black hole and attract things towards it? And with the trail paths, we generate a vibrant collaborative art piece.

Falling into a Black Hole

The concept of Arrival in autonomous agents is quite similar to how things fall into a black hole. When objects are near the event horizon, time beyond the black hole moves fast for the objects falling in, but to an observer watching the falling object, it appears to fall quite slowly.

Einstein’s theory of General Relativity!

This led to the following initial paper sketch:

And the final art piece to be the following:

In this piece, there are multiple targets (black holes) and many particles falling in (vehicles).
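
A minimal sketch of how the arrive behavior could map onto this piece, where the Vehicle properties and the slowdown radius are assumptions rather than the project's actual code:

// Arrival: the vehicle steers toward the black hole and eases off as it gets close,
// loosely echoing how a falling object appears to slow near the event horizon.
function arrive(vehicle, blackHole, slowRadius = 100) {
  const desired = p5.Vector.sub(blackHole.position, vehicle.position);
  const d = desired.mag();
  let speed = vehicle.maxSpeed;
  if (d < slowRadius) {
    speed = map(d, 0, slowRadius, 0, vehicle.maxSpeed); // slow down inside the radius
  }
  desired.setMag(speed);
  const steer = p5.Vector.sub(desired, vehicle.velocity);
  steer.limit(vehicle.maxForce);
  vehicle.applyForce(steer);
}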

Interactivity

Here’s how interactivity occurs:

A screenshot of what the client device looks like:

Codebase

I can’t put the code up as an online p5 sketch because it would interfere with the communication: every time the sketch loads, it sends messages to the server.

Here’s a GitHub Repository of the codebase: https://github.com/ayushpandeynp/decoding-nature-final-project

IM Showcase

Here’s a picture from the IM Showcase:

Blackhole Dynamic Exploration

Project


Introduction:

Embark on a fascinating journey into the enigmatic realm of black holes through our interactive simulation project. This walkthrough delves into the intricacies of simulating black holes’ gravitational influence on particles in a 2D space.

Simulation Dynamics:

Our simulation is designed to emulate the gravitational pull of black holes on particles. The setup() function initializes the canvas and GUI interface, allowing users to manipulate parameters such as black hole types, gravitational constant, particle count, and reset functionality.

Particle Behavior:

The Particle class defines particle behavior, including position, velocity, history of movement, and their interaction with black holes. Each particle’s trajectory is influenced by gravitational forces exerted by the black holes, leading to dynamic and visually engaging movements.

Black Hole Representation:

Utilizing the Blackhole class, we represent black holes on the canvas based on their mass and Schwarzschild radius. The visualization showcases their gravitational influence by affecting the trajectories of nearby particles.

Interactive Controls and Rendering:

Our project features an intuitive GUI interface allowing users to dynamically modify parameters, alter particle behavior, and manipulate black hole properties in real-time. This interactivity enhances user engagement and facilitates a deeper understanding of black hole dynamics.

Code Mechanics and Principles:

The core mechanics of our simulation are based on Newtonian gravitational principles, where each particle’s velocity is adjusted according to the gravitational force exerted by nearby black holes. We implement rules to halt particle movement when they enter the event horizon of a black hole, replicating the physics around these cosmic phenomena.

Conclusion:

This code walkthrough provides insight into the simulation of black hole dynamics, illustrating gravitational interactions between particles and black holes. Through this project, users can explore and visualize the captivating behavior surrounding these astronomical entities.
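
As a rough illustration of the mechanics described under Code Mechanics and Principles, here is a minimal per-particle update under those rules. The property names, the handling of the gravitational constant, and the stopping flag are assumptions, not the project's exact code.

// Newtonian attraction toward a black hole; the particle freezes once it
// crosses the event horizon (approximated by the Schwarzschild radius).
function attract(particle, blackHole, G) {
  const force = p5.Vector.sub(blackHole.position, particle.position);
  const d = force.mag();
  if (d < blackHole.schwarzschildRadius) {
    particle.stopped = true; // swallowed: stop updating this particle
    return;
  }
  const strength = (G * blackHole.mass) / (d * d); // F = G*M/d^2 per unit particle mass
  force.setMag(strength);
  particle.velocity.add(force);
}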

Inspiration:

Papers to be used:
https://digitalcommons.usu.edu/phys_capstoneproject/75/

Emotion Filters

Inspiration

I have decided to change my final idea and go with something we have all seen in cartoons or even selfie filters: the animation of dizziness, love, anger, etc. over a character's head. In cartoons, we tend to see these kinds of humorous animations when a character gets hurt, and I have listed a few examples of the kind of animation I am thinking of.

User Interactivity

From my old idea, I still want to incorporate the use of the camera and user, and have the program be able to identify the head shape. From there, the animation around the head would be in a neutral state. But as soon as another user enters the screen, both their head spaces would 'interact' and end up with an emotion, whether it is love, anger, or confusion.

  • Neutral state – I am thinking the neutral state would be like stars or fireflies, which I have loved the animation of from this semester's work.
  • Love – the neutral shape or state would speed up around the users' head spaces and turn into hearts.
  • Anger – the neutral state would turn the users' head spaces into steam images, coloured red to show anger, and both users' head spaces would try to speed up.
  • Sad – the neutral state would have clouds, plasters, and blue heartbreak imagery to resemble sadness.

I think a very cute, minute interaction in my idea would be that people can take selfies with my laptop or take a picture of themselves with their friend.

Decoding Nature

From the class content we have gone through, I want to use particle systems to create the head spaces as the foundation.

I also want to include cellular automata movement around the ellipse of the headspace to give it a 'pop' effect.

Foundation

For this project, I am going to be using ml5.js and face-api.js for the detection of emotions. face-api.js is an accurate and appropriate library to use, as it uses certain points on the face to detect emotions, and these movements and facial positions have proven to be largely the same for every human, so this library should work well with all my users.

I first needed the camera set up, so I used the following basic code to get my camera initialised.

From the library of emotions, the 7 choices are: neutral, happy, angry, sad, disgusted, surprised, and fearful. I only want to make use of neutral, happy, angry, and sad so I can adjust the filters accordingly.

The following is my first prototype, which simply reads one user's face and displays the emotions nicely on the screen. This was also my first time using the camera in my code; originally the video was inverted, so I adjusted that in the following code. I wanted to highlight the camera and video setup.

function setup() {
  canvas = createCanvas(480, 360);
  canvas.id("canvas");

  video = createCapture(VIDEO); // create the video capture
  video.id("video");
  video.size(width, height);
  video.hide(); // assumed: hide the raw <video> element; the (mirrored) frame is drawn on the canvas instead
}

(please open on website to see actual prototype)

Two User implementation

I now want to see what it would be like if more than one person is on the screen, because ideally I want the code to be used by two people. There were errors in my code when there was more than one person, so I changed the code accordingly and added some extra if-conditions.

function drawExpressions(detections, x, y, textYSpace){
  if (detections.length > 1) { // if at least 2 faces are detected
    let {neutral, happy, sad, angry} = detections[0].expressions;
    // rename the second face's values so they don't clash with the first face's
    let {neutral: neutral_one, happy: happy_one, sad: sad_one, angry: angry_one} = detections[1].expressions;

    textFont('Helvetica Neue');
    textSize(14);
    noStroke();
    fill(255);

    text("neutral:       " + nf(neutral*100, 2, 2)+"%", x, y);
    text("happiness: " + nf(happy*100, 2, 2)+"%", x, y+textYSpace);
    text("sad:        " + nf(sad*100, 2, 2)+"%", x, y+textYSpace*2);
    text("angry:            "+ nf(angry*100, 2, 2)+"%", x, y+textYSpace*3);

    console.log(neutral_one*100);
    text("neutral:       " + nf(neutral_one*100, 2, 2)+"%", 300, y);
    text("happiness: " + nf(happy_one*100, 2, 2)+"%", 300, y+textYSpace);
    text("sad:        " + nf(sad_one*100, 2, 2)+"%", 300, y+textYSpace*2);
    text("angry:            "+ nf(angry_one*100, 2, 2)+"%", 300, y+textYSpace*3);

  }
  else if(detections.length ===1){//If at least 1 face is detected
    let {neutral, happy, angry, sad, } = detections[0].expressions;

    // console.log(detections[0].expressions);
    textFont('Helvetica Neue');
    textSize(14);
    noStroke();
    fill(255);

    text("neutral:       " + nf(neutral*100, 2, 2)+"%", x, y);
    text("happiness: " + nf(happy*100, 2, 2)+"%", x, y+textYSpace);
    text("anger:        " + nf(angry*100, 2, 2)+"%", x, y+textYSpace*2);
    text("sad:            "+ nf(sad*100, 2, 2)+"%", x, y+textYSpace*3);


  }
  else { // if no faces are detected:
    text("neutral: ", x, y);
    text("happiness: ", x, y + textYSpace);
    text("anger: ", x, y + textYSpace*2);
    text("sad: ", x, y + textYSpace*3);
   
  }
}

I added some quick console.log statements to test some if-conditions; in the following example, I checked for a neutral face to be detected and output the word 'neutral'.

For the filters and effects above the users' heads, I want a 3D effect, recreating something similar to a solar system with the sun as the 'head'.

Particle / solar system

The following code makes use of 3D vectors and WEBGL, which I had not used yet. After playing with some parameters, I tried to make the solar system look 3D from the user's direct point of view. In 2D, we see the solar system as rings around a centre piece, but for my version, I want to see it from the side so it looks like a headspace hovering above the user's head.
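
That WEBGL experiment is not embedded here, so below is a rough sketch of the kind of test described – a flattened orbit viewed edge-on so it reads as a halo rather than a flat ring. All sizes and speeds are assumptions.

let orbitAngle = 0;

function setup() {
  createCanvas(480, 360, WEBGL);
}

function draw() {
  background(0);
  noStroke();
  // central "sun" standing in for the head
  fill(255, 200, 0);
  sphere(20);
  // orbiting particle on a flattened ellipse: wide in x, squashed in y
  const x = 120 * cos(orbitAngle);
  const y = 25 * sin(orbitAngle);
  push();
  translate(x, y, 0);
  fill(255, 105, 180);
  sphere(5);
  pop();
  orbitAngle += 0.05;
}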

Head detection

In this part of my code, I want to track the middle of the head and the top part. In the following code, I have a box drawn around the head, and I want a point at the middle of the top edge of the box, as that will be the 'sun' of my particle system.

Using this point, I am able to base my particle system around it even when the user is moving.
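
The helper functions used in the later code (returnFaceX, returnFaceY, rect_width) are not shown in this post; here is a sketch of what they could look like, assuming face-api.js detections that expose a bounding box through alignedRect. The exact property names depend on the face-api.js version, so treat this as an assumption.

// hypothetical helpers: take the box drawn around the face and return the
// mid-top point of that box, which the 'sun' particle tracks
function returnFaceX(detections) {
  if (detections.length < 1) return width / 2; // fall back to the canvas centre
  const box = detections[0].alignedRect._box;
  return box._x + box._width / 2;              // horizontal middle of the box
}

function returnFaceY(detections) {
  if (detections.length < 1) return height / 2;
  const box = detections[0].alignedRect._box;
  return box._y;                               // top edge of the box
}

function rect_width(detections) {
  if (detections.length < 1) return 100;
  return detections[0].alignedRect._box._width; // used to scale the orbit width
}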

Combining the camera and solar system

So now it was time to combine the two main components of my final project. After MANY attempts, I decided not to include the WEBGL 3D motion: my on-screen text and face detection were all changing to work in 3D and therefore would not stay still, and it caused many errors when I could have just had the headspace in 2D.

I altered my code as follows so that the headspace rotation is elliptic and based off the centre-top part of the user's face. I tried many times and also realised I wanted my elliptic offset on the X-axis to be proportional to the width of the rectangle that recognises the user's face.

class Particle {
  constructor(x, y, radius, isSun = false, orbitSpeed = 0, orbitRadius = 0) {
    this.x = x;
    this.y = y;
    this.position = createVector(x, y);
    this.radius = radius;
    this.isSun = isSun;
    this.angle = random(TWO_PI);
    this.orbitSpeed = orbitSpeed;
    this.orbitRadius = orbitRadius;
    this.velocity = createVector(0, 0);
  }

  update() {
    if (this.isSun) {
      // the sun tracks the mid-top point of the detected face
      this.position.x = returnFaceX(detections);
      this.position.y = returnFaceY(detections);
    }
    if (!this.isSun) {
      this.angle += this.orbitSpeed;
      // mirror the x coordinate (width - ...) to match the flipped video, and
      // scale the horizontal orbit radius with the width of the face box
      this.x =
        width -
        (returnFaceX(detections) +
          0.75 * rect_width(detections) * cos(this.angle));
      this.y = returnFaceY(detections) + 25 * sin(this.angle);
    }
  }

  display() {
    noStroke();
    if (this.isSun) {
      fill(255, 200, 0);
    } else {
      fill(255, 105, 180);
      circle(this.x, this.y, 10);
    }
  }
}

The update() function also keeps the sun moving with the midpoint of the face in this code.

Altering the aesthetic of the filter

I added the following function to the code to test the happiness emotion and have the colour of the filter change correspondingly.

function emotion(detections, feeling) {
  if (detections.length > 0) { // if at least one face is detected
    let { neutral, happy, sad, angry } = detections[0].expressions;
    if (feeling === 'happy' && happy > 0.9) {
      return true;
    }
    else return false;
  }
}
/////////////////////// calling the function
display() {
    noStroke();
    if (this.isSun) {
      fill(255, 200, 0);
    } else {
      if (emotion(detections, "happy")){
      fill(255, 105, 180);}
      
      else{
        fill(255, 0, 0);
      }
      circle(this.x, this.y, 10);
    }
  }

After testing my code and seeing how the colours alter depending on the emotion, I know it works correctly, so I can now focus on the aesthetics of the filters themselves.

Happy or Love <3

For this filter, I want to have hearts instead of the circles, so I quickly coded the following as a rough sketch; I would of course have multiple of these hearts.

function drawHeart(x, y, size) {
  // noStroke();
  
  fill(255, 0, 0);
  stroke(0); // Black stroke
  strokeWeight(0.5); // Stroke thickness
  beginShape();
  vertex(x, y);
  bezierVertex(x - size / 2, y - size / 2, x - size, y + size / 3, x, y + size);
  bezierVertex(x + size, y + size / 3, x + size / 2, y - size / 2, x, y);
  endShape(CLOSE);
}

/// coding it into the program

else if (emotion(detections, "happy")) {
  drawHeart(this.x, this.y, 10);
}

These are some inspirations for what I want my filters to look like. I adjusted the happy filter, and this is what it finally looks like.

Cellular Automata around the filter

I wanted to have some cellular automata movement around my headspace to cover more aspects of the class. The code below shows it with a blue colour and a transparency value so the background images of my headspaces are not covered.

if (random() < 0.025) { // chance to restart the life of a cell
  next[i][j] = floor(random(2)); // randomly set to 0 or 1
  continue;
}

By adjusting this value, the effect keeps restarting, and it also works with the filter when the head is moved around, thanks to the dynamic position functions.

//insert image of green rect

The image above was just for me to roughly see where the filter would be so I could place the cellular automata there. I applied the same logic in my code and set the parameters to the rectangular area of my filter; it took a lot of calculations and flooring, but it worked nicely afterwards. My only problem was that after the head moved, the coloured cells would not be removed and just stayed there, so I had to add some extra code to clear them.

let x = rect_width(detections) * (1.25 / 2);
let y = rect_height(detections) / 3;
//=====================================================
rect(this.x - x, this.y, 2 * x, y);

// clear any cells that fall outside the filter's rectangular region
for (let i = 0; i < col; i++) {
  for (let j = 0; j < row; j++) {
    if (i < floor((this.x - x) / w) || i >= floor((this.x - x) / w) + floor((2 * x) / w) ||
        j < floor(this.y / w) || j >= floor(this.y / w) + floor(y / w)) {
      board[i][j] = 0;
    }
  }
}

I decreased the probability by a large amount just to make sure it didn’t clump up the cells too much. I also added a 3rd colour just for aesthetic purposes.

//insert image

I do the grid calculations when the object is the sun, as it is much easier to compute the region from the sun's position, but I display the cells in the !isSun code so their colour can change depending on the emotion.

This was the following code:

else {
      
      for (let i = 0; i < col; i++) {
        for (let j = 0; j < row; j++) {
          

          if (emotion(detections, "neutral")) {
            // noFill();
            // stroke(0, 255, 0);
            // strokeWeight(1);
            // circle(this.x, this.y, 10);
            if (board[i][j] === 0) {
              noFill(); // dead cells are left unfilled (transparent)
            } else if (board[i][j] === 1) {
              fill(255,255,153, 5); // yellow
            } else {
              fill(205, 5); //white
            }
            noStroke();
          square(i * w, j * w, w);
          } else if (emotion(detections, "happy")) {
            if (board[i][j] === 0) {
              noFill(); // dead cells are left unfilled (transparent)
            } else if (board[i][j] === 1) {
              fill(255, 192, 203, 5); // pink
            } else {
              fill(255, 255, 255, 5); //white
            }
            noStroke();
          square(i * w, j * w, w);

            // 'temp' is presumably a per-particle random value in [0, 1] used to pick a sprite
            if (temp > 0.66) {
              drawHeart(this.x, this.y, 10);
            } else if (temp > 0.33) {
              // image(bubble_img, this.x, this.y, 15, 15);
              noFill();
              stroke(255);
              strokeWeight(1);
              circle(this.x, this.y, 10);
            } else {
              image(butterfly_img, this.x, this.y, 15, 15);
            }
          } else if (emotion(detections, "angry")) {
            if (board[i][j] === 0) {
              noFill(); // dead cells are left unfilled (transparent)
            } else if (board[i][j] === 1) {
              fill(122, 22, 25, 5); // red
            } else {
              fill(127, 5); //grey
            }
            noStroke();
          square(i * w, j * w, w);

            if (temp > 0.66) {
              // fill(255, 0, 0);
              // circle(this.x, this.y, 10);
              image(bolt_img, this.x, this.y, 15, 15);
            } else if (temp > 0.33) {
              image(puff_img, this.x, this.y, 15, 15);
            } else {
              image(explode_img, this.x, this.y, 15, 15);
            }
          }
          else if (emotion(detections, "sad")) {
            if (board[i][j] === 0) {
              noFill(); // dead cells are left unfilled (transparent)
            } else if (board[i][j] === 1) {
              fill(116, 144, 153, 5); // blue grey
            } else {
              fill(127, 5); //grey
            }
            noStroke();
          square(i * w, j * w, w);
            if (temp > 0.66) {
              image(plaster_img, this.x, this.y, 25, 25);
            } else if (temp > 0.33) {
              image(cloud_img, this.x, this.y, 25, 25);
            }

            image(blue_img, this.x, this.y, 15, 15); //blue heart image
          }

        }
      }
          board = next;
    }

Revisions

With the progress made, I decided to change some parts of my code. With the cellular automata, the code is a lot heavier, as I need to go through two large 2D arrays, which takes a lot of time. Therefore, this code will be for one user only.

I also want to add a screenshot feature for people to have a picture with the filters. I also want to have some personal text to highlight this final project and a signature at the bottom corner.

I also cleaned up the code, for example combining my four functions that return dimensions into one with an extra parameter. I also plan to move my Particle class into a separate file.

My main concern was that the CA is incredibly heavy and was making my code run very slowly, so I needed to find a way to fix that. I recreated the grid in another program, figured out the dimensions of where the headspace is likely to be, and limited the CA to that region, which made the code run a LOT more smoothly.

function draw() {
  // lastUpdateTime and updateInterval are globals; updateInterval controls how often the filter updates
  let currentTime = millis();

  if (currentTime - lastUpdateTime > updateInterval) {
    lastUpdateTime = currentTime;

    // Update and display sun and particles
    sun.update();
    particles.forEach(particle => {
      particle.update();
    });
  }

  // Always display the particles, but only update them based on the interval
  sun.display();
  particles.forEach(particle => {
    particle.display();
  });
}

I added this time delay in the draw function so that the user can see the filter for a few seconds before it changes.

IM showcase

The following are some images that I got from user testing. I was happy to see people enjoy it and actually take pictures or screenshots so they could share them.

Reflections

Next time I would love to incorporate some of my original ideas, such as the two-person interaction. My only problem with this is that there are multiple doubly nested for loops, so with two users that complexity would get worse.
I want to try to simplify some parts and make the timing work better so it performs smoothly. I would also love for more emotions from the ML library to be included.

Final product

https://editor.p5js.org/kk4827/full/ypZHWvVOb

Please click on the link to access it.

 

Final Project Draft 2 – Rube Goldberg’s Machine

Inspiration

I want to use a wide range of mechanisms, like those below, in my final project.

Interaction

I want to use PoseNet to control physical objects in real life that affect the game, or even to be able to knock over or pick up elements in the game.

Final Project Progress: blackhole band

Inspiration:
https://editor.p5js.org/mberger75/sketches/f_8oKzndG
https://www.saatchiart.com/art/Painting-Black-Hole/1014988/3765274/view

Interaction:
Interaction:
I went with my second idea, making a variation on the black hole concept. I learned how to code a black hole from The Nature of Code, and I want to make a picture from the traces created by particles approaching the black holes.

It begins with the foundations of gravitational theory, symbolized through the visualization of black holes. In this code, we simulate the interaction of particles with these black holes using principles derived from Newton’s law of universal gravitation and Einstein’s theory of general relativity.

The code comprises several key components. The setup() function initializes our simulation environment, including the creation of a graphical user interface (GUI) powered by the dat.GUI library, allowing for real-time adjustments of simulation parameters. Through the draw() function, we depict the motion of particles under the influence of gravity from black holes. The simulation incorporates visualizations, such as the trails left by moving particles and the representations of black holes as massive gravitational entities.

To enhance our exploration, modifications were made to the code. Additional black holes were introduced, enriching the simulation by showcasing the dynamics between multiple gravitational centers. Moreover, the number and size of the black holes dynamically change upon mouse clicks, offering an interactive and engaging experience.
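
As a rough sketch of those interactive pieces – assuming the dat.GUI library is loaded, a Blackhole class as described above, and a global blackHoles array (all names and ranges here are assumptions):

// real-time parameter control with dat.GUI, plus adding a black hole on click
const params = { G: 6.7, particleCount: 500 };

function setupGUI() {
  const gui = new dat.GUI();
  gui.add(params, "G", 0, 20);                 // gravitational constant slider
  gui.add(params, "particleCount", 100, 2000); // number of particles slider
}

function mousePressed() {
  // each click adds a black hole with a random mass at the mouse position
  blackHoles.push(new Blackhole(mouseX, mouseY, random(500, 2000)));
}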

This project serves as an educational and immersive tool, providing a visual understanding of complex gravitational interactions. It merges science, mathematics, and programming to present a captivating visualization of celestial phenomena.

In conclusion, this project invites us to explore the wonders of space-time curvature and gravitational forces through a captivating visual representation. It stands as an embodiment of curiosity, knowledge, and the fusion of science with technology, allowing us to immerse ourselves in the fascinating world of astrophysics and gravitational dynamics.