Xiaozao Wang – Final Project

Project Title: Morphing the Nature

Source code: https://github.com/XiaozaoWang/DNFinal

Video trailer:

A. Concept:

Similar patterns appear in animal bodies, plants, and even landscapes. This suggests that things in nature, humans included, may share the same basic algorithm for forming their bodies. However, as technology develops, we come to believe we have control over nature and other beings, and we forget that nature’s wisdom has been inherent in our own bodies all along.

I want to visually explore patterns found in nature and overlay them onto the viewer’s figure on screen through their webcam. Through this project, I aim to promote awareness of our interconnectedness with nature and other living beings. While humans have developed great abilities, we remain part of the natural world we evolved from. We should respect and learn from nature rather than try to exploit it.

B. Design and Implementation

My project consists of two main parts: 

  1. Generating the patterns based on the mathematical principle behind them.
  2. Capturing the user’s figure using the camera and morphing the patterns onto the figure.

I used Turing’s reaction-diffusion model as the blueprint for generating the patterns, because it shows how different patterns in nature, from stripes to spots, can arise naturally from a homogeneous state. It is based on the interplay between two chemicals, an activator and an inhibitor: the activator tries to reproduce itself, while the inhibitor suppresses it. Different feed and kill rates for these chemicals create a variety of interesting behaviors that help explain the mystery of animal and plant patterns.

I mainly referred to Karl Sims’s version of the reaction-diffusion equations. He has wonderful research and a web art project about Turing patterns: https://www.karlsims.com/rd.html

I also learned how to translate the equations into code from the Coding Train: https://youtu.be/BV9ny785UNc?si=aoU4__mLw6Pze6ir

// Gray-Scott update for one cell: a and b are the current concentrations,
// dA and dB the diffusion rates, feed the feed rate, k the kill rate.
// The trailing * 1 is the time step (deltaT = 1).
grid[y][x].a = a +
  ((dA * laplaceA(x, y)) -
   (a * b * b) +
   (feed * (1 - a))) * 1;
grid[y][x].b = b +
  ((dB * laplaceB(x, y)) +
   (a * b * b) -
   ((k + feed) * b)) * 1;

I created a class that stores the concentration of Chemicals A and B of every pixel in the 2D array.

Interestingly, the diffusion step works much like the Game of Life. In every update, the new concentration of the center pixel of a 3×3 convolution is calculated from the concentrations of its 8 neighbors, each with a different weight. This makes the chemicals diffuse into the surrounding area. In our case, the weights are as follows.
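The weights commonly used for this 3×3 Laplacian (the ones in Karl Sims’s write-up: −1 for the center, 0.2 for orthogonal neighbors, 0.05 for diagonals) can be sketched in plain JavaScript. Here `grid` and the field name `key` are hypothetical stand-ins for my sketch’s own data structures:

```javascript
// 3x3 Laplacian convolution. The weights sum to zero, so a uniform
// field has a zero Laplacian and nothing diffuses.
const WEIGHTS = [
  [0.05, 0.2, 0.05],
  [0.2, -1.0, 0.2],
  [0.05, 0.2, 0.05],
];

function laplace(grid, x, y, key) {
  let sum = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      sum += WEIGHTS[dy + 1][dx + 1] * grid[y + dy][x + dx][key];
    }
  }
  return sum;
}
```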

There are results like this:

However, the complicated calculations slow the sketch down, and enlarging the canvas makes it lag. After testing, I found that the p5.js library itself was causing the problem (it is a rather large library).

As you can see, even the difference between loading the p5 file and the p5.min file makes a huge difference in running speed. (Both start from the same seed; the one on the right uses p5.min and runs twice as fast as the one on the left.)

Therefore, I decided to develop my project in Processing. It runs locally, so it doesn’t have to fetch a library over the web.

Moreover, I reduced the resolution of the canvas by using 2D arrays. (In the webcam part, I also reduced the resolution by storing only the top-left pixel of every 8×8 pixel block.) This let me enlarge the canvas.
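A minimal sketch of that 8×8 sampling idea, assuming the pixel data is a flat, row-major array (the function name `downsample` is mine, not from the sketch):

```javascript
// Downsample by keeping only the top-left pixel of each block x block cell.
// `pixels` is a flat row-major array with one entry per pixel.
function downsample(pixels, w, h, block) {
  const outW = Math.floor(w / block);
  const outH = Math.floor(h / block);
  const out = [];
  for (let y = 0; y < outH; y++) {
    for (let x = 0; x < outW; x++) {
      // index of the top-left pixel of this block in the flat array
      out.push(pixels[(y * block) * w + (x * block)]);
    }
  }
  return out;
}
```

For a 640×480 webcam frame and an 8×8 block this yields an 80×60 grid, a 64× reduction in the number of cells to update.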

Then it comes to the next step: Capturing the user’s figure with the webcam and projecting the patterns on the figure.

This is the logic of implementation:

Firstly, we capture an empty background without any humans and store its color data in a 2D array. Then we compare the real-time webcam video against that empty background and mark the areas with large color differences (I used the Euclidean distance); these areas represent where the figure is. We then use this processed layer as a mask that controls which part of the pattern layer is shown to the user. The result is that the patterns grow only on the user’s figure, not on the background!
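The background-subtraction step above can be sketched like this; `frame` and `background` are assumed to be arrays of [r, g, b] triples, and the helper names are mine:

```javascript
// Euclidean distance between two colors in RGB space.
function colorDist(c1, c2) {
  const dr = c1[0] - c2[0], dg = c1[1] - c2[1], db = c1[2] - c2[2];
  return Math.sqrt(dr * dr + dg * dg + db * db);
}

// true where the live frame differs enough from the stored background,
// i.e. where the figure probably is.
function figureMask(frame, background, threshold) {
  return frame.map((px, i) => colorDist(px, background[i]) > threshold);
}
```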

I added some customizable values to make the project more flexible to different lighting, skin colors, and preferences. As a user, you can move your mouse across the X-axis to change the exposure, and across the Y-axis to change the transparency of the mask.
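Mapping the mouse position to those parameters is just a linear remap (what p5’s map() does); here is a plain-JavaScript equivalent, with illustrative ranges rather than the sketch’s actual ones:

```javascript
// Linear remap, equivalent to p5's map(value, start1, stop1, start2, stop2).
function remap(v, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
}

// e.g. mouse X across a 640px canvas -> detection threshold ("exposure"),
// mouse Y across a 480px canvas -> mask transparency. Ranges are made up.
const exposure = remap(320, 0, 640, 10, 100); // -> 55
const alpha = remap(120, 0, 480, 0, 255);     // -> 63.75
```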

Finally, I added a GUI using the controlP5 library. Users can choose preset patterns and color palettes as well as adjust the parameters on their own.

User testing on IM show:

C. Future Development

  1. I would like to add a color picker to the control panel so that users can select colors on their own. This is doable with controlP5.
  2. Resolution was sacrificed to increase performance. I wonder whether a faster, more powerful simulation engine is possible.
  3. I think it would be very interesting to map the patterns onto a 3D character in a game engine like Unity. Once we understand how the equations work, they can be applied to many forms of project!

Xiaozao – Final Proposal

Embodying Nature

Concept:

Similar patterns appear in animal bodies, plants, and even landscapes. This suggests that things in nature, humans included, may share the same basic algorithm for forming their bodies. However, as technology develops, we come to believe we have control over nature and other beings, and we forget that nature’s wisdom has been inherent in our own bodies all along.

I plan to visually explore patterns found in nature and overlay them onto the viewer’s figure on screen through their webcam. Through this project, I aim to promote awareness of our interconnectedness with nature and other living beings. While humans have developed great abilities, we remain part of the natural world we evolved from. We should respect and learn from nature rather than try to exploit it.

Inspirations:

Implementation plan:

  1. Uncover the logic behind different animal/plant patterns.
  2. Create unique patterns based on that logic.
  3. Be able to capture the viewer’s silhouette through the webcam.
  4. Overlay the patterns onto the viewer’s silhouette.

Key phrases:

  • Turing’s patterns
  • Voronoi cells
  • Colonization
  • Activator and Inhibitor

Alan Turing's Patterns in Nature, and Beyond | WIRED

 

Xiaozao Week #11 Assignment

Decay

Link to sketch: https://editor.p5js.org/Xiaozao/sketches/SZ2gVL7sW

How to interact: Press the mouse to refill. Press again to decay.

After learning about 1D and 2D cellular automata, I decided to create a “3D” one. It isn’t true 3D using WebGL; instead, it shows a 3D space through a 2D perspective. I created a 3D array with x, y, and z axes, but the z axis is shown as translucent “layers”: the states of the cells on every layer are stacked together to determine the overall opacity of each cell.

// Build one 2D board per z-layer and collect them in `boards`.
for (let n = 0; n < layers; n++) {
  let board = create2DArray(columns, rows);
  for (let i = 1; i < columns - 1; i++) {
    for (let j = 1; j < rows - 1; j++) {
      board[i][j] = new Cell(floor(random(2)), i * scale, j * scale, scale);
    }
  }
  boards.push(board);
}
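The translucent-layer stacking described above can be sketched as standard alpha compositing; this is my interpretation of the effect, with `layerAlpha` as a hypothetical per-layer opacity:

```javascript
// Stack the states of all z-layers at one (x, y) cell into a single opacity.
// Each live cell contributes `layerAlpha`; transparencies multiply (standard
// "over" compositing), so more live layers -> a more opaque cell on screen.
function stackedOpacity(states, layerAlpha) {
  let transparency = 1;
  for (const s of states) {
    if (s === 1) transparency *= (1 - layerAlpha);
  }
  return 1 - transparency;
}
```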

 

The principle of the 3D cellular automata is similar to the 1D and 2D one. Regarding the rules or algorithms, I drew inspiration from this article, which implemented lots of interesting rules in the 3D world. https://softologyblog.wordpress.com/2019/12/28/3d-cellular-automata-3/

I modified the rules a bit. Also, I added a mouse interaction that refills the cells with another rule.

for (let z = 1; z < layers - 1; z++) {
  for (let x = 1; x < columns - 1; x++) {
    for (let y = 1; y < rows - 1; y++) {
      // Count the 26 neighbors, using the previous state
      let neighbors = 0;
      for (let k = -1; k <= 1; k++) {
        for (let i = -1; i <= 1; i++) {
          for (let j = -1; j <= 1; j++) {
            neighbors += boards[z + k][x + i][y + j].previous;
          }
        }
      }
      neighbors -= boards[z][x][y].previous; // don't count the cell itself

      // decision
      if (growing_state == -1) {
        // decay rule
        if (neighbors >= 13 && neighbors <= 22) {
          boards[z][x][y].state = 1;
        } else {
          boards[z][x][y].state = 0;
        }
      } else {
        // refill rule: born with 6-7 neighbors, survive if already alive
        if (neighbors >= 6 && neighbors <= 7) {
          boards[z][x][y].state = 1;
        } else if (boards[z][x][y].state == 1) {
          boards[z][x][y].state = 1;
        } else {
          boards[z][x][y].state = 0;
        }
      }
    }
  }
}

 

Alien Intelligence Reflection Xiaozao

The idea of comparing AI to an “alien” is mind-blowing. As AI develops, it is no longer seen as a tool but as a creature that is better than humans in some respects, even taking over humanity’s self-appointed role at the center of evolution. A quote I really liked from the lecture is “AI has hacked the operating system of human intelligence.” Confronted with AI, we are more doubtful than ever about the most widely acknowledged “truths” we have believed for centuries. For example, what does it mean to “think”? What does it mean to “be creative”? I used to believe that even if AI could take over many jobs, the one thing it could not replace was creativity. However, the lecture and the performance of today’s AIs make us wonder: what is creativity anyway? Even humans are not fully conscious of being creative. So it is possible that AI is actually redefining the world at a very fundamental level.

Xiaozao Week10 Assignment

Jellyfish

Link: https://editor.p5js.org/Xiaozao/sketches/XI0c1CLwj

The physics and constraint properties of the matter.js library give us a lot of opportunities to construct an environment with unique properties such as gravity, air friction, force field, and complicated linked objects. Therefore, I wanted to create a jellyfish with flexible tentacles consisting of many nodes linked with constraints, and this jellyfish is floating in a nearly zero-gravity environment.

However, due to limited time, this is still a work in progress and I think it really has a lot of space to improve.

Coding:

The first challenge was creating a tentacle that is a chain of nodes. I referred to the Coding Train’s tutorial to create a series of nodes and added a constraint between each pair of them.

// Build one tentacle: a chain of 40 small circles linked by constraints.
for (let i = 0; i < 40; i += 1) {
  let r = 1;
  let new_node = new Circle(200 + n + n * i, i, r);
  circles.push(new_node);

  if (i != 0) {
    let constraint_options = {
      bodyA: circles[circles.length - 1].body,
      bodyB: circles[circles.length - 2].body,
      length: 2 + r,
      stiffness: 0.1 - i / 400, // softer toward the tip
    };
    let constraint = Constraint.create(constraint_options);
    Composite.add(world, constraint);
  }
}

However, the nodes were moving too violently, bouncing back and forth across the whole canvas. I found several ways to calm them down:

/* Ways to reduce the wild movement:
   1. Add r to the constraint length; otherwise the constraint tries to
      shrink and causes overly violent movement.
   2. Decrease the stiffness of the constraint.
   3. Decrease the gravity scale.
   4. Increase the air friction. */

The second challenge was connecting the leading nodes of all the tentacles to a single point: the jellyfish’s head. I used a revolute constraint whose position should be updated every frame to the mouse position. However, I didn’t know how to “update” it, so I could only create a new constraint every frame. I will figure this out in the future!

let base_options = {
  bodyA: circles[0].body,
  pointB: { x: mouseX + n, y: mouseY },
  length: 0,
  stiffness: 0.1,
};
base_constraint = Constraint.create(base_options);
Composite.add(world, base_constraint);

tentacles.push(circles);
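One possible fix for the create-a-constraint-every-frame workaround: matter.js Constraint objects expose their world-space anchor as a plain { x, y } object named pointB, which (as far as I know) can be mutated in place each frame. A stand-in object makes the idea testable without matter.js:

```javascript
// Stand-in for a matter.js Constraint: the real one also has a pointB
// property holding a plain { x, y } anchor when no bodyB is attached.
const baseConstraint = { pointB: { x: 0, y: 0 } };

// Call once per frame (e.g. from draw()) instead of creating a new
// Constraint every frame.
function followMouse(constraint, mouseX, mouseY) {
  constraint.pointB.x = mouseX;
  constraint.pointB.y = mouseY;
}
```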

At last, I set the gravity scale to zero.

Future improvements:

The first thing is to organize the code: I should create a Tentacle class to better structure the system of objects. I can also incorporate the steering forces we learned earlier into the movement of the jellyfish’s head to create a more organic feeling.

Xiaozao Assignment #9

Project title: Dark Side of the Moon

https://editor.p5js.org/Xiaozao/sketches/eqtX3a2Cv

This week’s assignment is inspired by the surface of the moon: it is covered with craters large and small, and sand is blown across the surface.

I wanted to create more dynamic movement for the boids, so I applied a sine value to the separate, align, and cohere functions, making their distance thresholds change over time.

flock(boids) {
  time_off += 0.01;
  let sin_value = sin(time_off);
  // Oscillate the perception radii of the three behaviors over time
  let separate_control = map(sin_value, -1, 1, 1, 50);
  let align_control = map(sin_value, -1, 1, 30, 0);
  let cohere_control = map(sin_value, -1, 1, 50, 500);

  let sep = this.separate(boids, separate_control); // Separation
  let ali = this.align(boids, align_control);       // Alignment
  let coh = this.cohere(boids, cohere_control);     // Cohesion
  // ... apply the three forces ...
}

To create the craters on the moon’s surface, I placed several “obstacles” around the canvas and made nearby boids flee from them. I adjusted the parameters so that some boids pass across the obstacles while others bounce back, which creates a more natural feeling.

// main sketch: steer the flock away from every obstacle
for (let i = 0; i < obstacles.length; i++) {
  let obstacle = obstacles[i];
  flock.avoid(obstacle);
}


// Flock class: only boids within 100px of the obstacle react
avoid(obstacle) {
  for (let boid of this.boids) {
    let d = p5.Vector.dist(boid.position, obstacle);
    if (d > 0 && d < 100) {
      boid.avoid(obstacle);
    }
  }
}


// Boid class: flee directly away from the obstacle
avoid(obstacle) {
  let desired = p5.Vector.sub(this.position, obstacle);
  desired.setMag(this.maxspeed);

  // Steering = Desired minus velocity
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.mult(0.02); // scale the steering force down
  this.applyForce(steer);
}

Finally, I set the blend mode to ADD, which makes the patterns more aesthetic.

Possible improvements:

Maybe I can draw lines between each pair of boids that are close enough to each other, or try other kinds of blend modes.

Xiaozao Assignment #8 – Ant’s Death Spiral

Ant’s Death Spiral

Code: https://editor.p5js.org/Xiaozao/sketches/e8ilAu4Ew

This week’s assignment is inspired by a well-known phenomenon in animal behavior called the “ant death spiral”. When ants navigate a dense forest, each ant maintains a close distance to the ant ahead of it by following the pheromone trail it leaves. But when the leading ant loses the trail and accidentally runs into one of the other ants, the ants form a closed circle and fall into an endless loop of circular motion that leads to their death.

Therefore, I wanted to create an autonomous agent system where, at first, the leading ant wanders in search of food and every ant in the line follows the ant in front of it. But when the leading ant runs into another ant, it stops wandering and instead follows that ant’s track. From then on, every ant is following the ant ahead of it, meaning that no one is leading, or you could say everyone is. They will continue this pattern forever.

Code-wise, I mainly used two classes: the normal ant (vehicle) and the leading ant (target). All ants can seek, but the leading ant has the extra ability to wander.

I used this loop to make any ant follow the ant ahead of it:

let second_ant = ants[num - 1];
let second_seek = second_ant.seek(leading_ant);
second_ant.applyForce(second_seek);
second_ant.update();
second_ant.show();

for (let i = 0; i < num - 1; i++) {
  let front_ant = ants[i + 1];
  let back_ant = ants[i];
  let steering = back_ant.seek(front_ant);
  back_ant.applyForce(steering);
  back_ant.update();
  back_ant.show();
}

leading_ant.update();
leading_ant.show();

And I check if the leading ant is running into any of the ants, and change its behavior if so:

if (trapped == false) {
  let steering_wander = leading_ant.wander();
  leading_ant.applyForce(steering_wander);

  // check whether the leading ant collides with any follower
  for (let i = 0; i < num; i++) {
    let ant = ants[i];
    if (leading_ant.position.dist(ant.position) < 1) {
      let new_steering = leading_ant.seek(ant);
      leading_ant.applyForce(new_steering);
      followed_ant = ant;
      trapped = true;
      break;
    }
  }
} else {
  let steering = leading_ant.seek(followed_ant);
  leading_ant.applyForce(steering);
}

What to improve:

In this sketch, I think the ants are overly ordered. In reality, ants don’t form a perfect line; they are slightly affected by other ants and their surroundings. I may add other factors affecting the ants’ movement in the future.

 

Xiaozao Midterm Project – Flowing Painting

Project Name: Starry Night: A Flowing Painting

Final Sketches:
Version 1 (Starry Night original): https://editor.p5js.org/Xiaozao/sketches/ywn4IB8US
Version 2 (Allow user modify):
Version 3 (Blinking effect):
Trail video:

A. Concept

I was attracted by this image while reading the article “Particle animation and rendering using data parallel computation” by Karl Sims:
This vortex field (or swirling pattern) reminded me of Starry Night by Vincent van Gogh, who created a strong sense of motion, energy, and flux through his brushwork and colors.
Many other static 2D paintings also try to convey a sense of movement. I think it would be great to enhance this message for the audience by animating these paintings, so I decided to create a “moving” version of Starry Night for my midterm project using a flow field and a particle system.

B. Implementation

Here are the questions to solve to achieve my goal of creating a flowing painting.

    1. Swirling movement of the particles
    2. Location of the center
    3. Aesthetics of the strokes

1. Swirling movement of the particles

Firstly, I needed to know how to make the particles move in a circular pattern. The particles are placed in a flow field that assigns them a velocity according to their position in the field. However, there are different kinds of flow fields: Perlin noise flow fields, vortex flow fields, magnetic flow fields, and so on. I searched the Internet and found a simple way of creating this swirling effect.

Basically, you create a background image showing the distribution of vortex centers based on a 2D Perlin noise space, and then you calculate the “pressure differential” around every cell in the grid. That differential vector points toward the center of the vortex, pulling the particles inward. However, if you rotate every vector by 90 degrees, it turns into something like a tangential force, which magically turns the particles’ motion into a swirling pattern.
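A sketch of that rotate-the-gradient trick, using a central-difference gradient over a 2D scalar field (the helper name and array layout are mine):

```javascript
// Swirl force from a scalar "pressure" field: the gradient points toward
// higher values; rotating it by 90 degrees turns attraction into rotation.
function swirlVector(field, x, y) {
  // central-difference gradient
  const gx = (field[y][x + 1] - field[y][x - 1]) / 2;
  const gy = (field[y + 1][x] - field[y - 1][x]) / 2;
  // rotate (gx, gy) by 90 degrees: (x, y) -> (-y, x)
  return { x: -gy, y: gx };
}
```

Because the rotated vector is always perpendicular to the gradient, particles neither fall into nor escape the vortex center; they circle it.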

Generating a flow field with Perlin noise is a good starting point.

p.s. Here’s a good way to make the noise field cleaner and less “noisy”: change the parameters of the function noiseDetail(octave count, falloff).

I also made a 3D noise field that changes over time.

Here are some generative patterns I created from the swirling field:

Here’s my pen plotting:
Plotting video:
I also made this blinking effect. The explanation is in the image.

2. Location of the center

Then, we need to know the rotation center of the stars.

In the previous sketches, I used the Perlin noise field to determine the rotation centers, so each run produces a different image because of the random Perlin noise.

However, Perlin noise is too random: you can’t really decide where your centers are. Therefore, I came up with the idea of letting users paint on a buffer canvas so they control the center locations themselves.

The method is to use pixel manipulation.

I wrote a function called renderImg() that modifies an initially pure-black buffer canvas. Whenever the user presses the mouse, the pixels around the mouse position gradually brighten according to their distance from it.
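A sketch of the renderImg() idea on a plain 2D grayscale array; the radius and per-press brightening step here are illustrative, not the sketch’s actual values:

```javascript
// Brighten a grayscale buffer around (cx, cy), with the boost falling off
// linearly with distance from the press point. Call repeatedly while the
// mouse is held to build up a bright spot.
function brighten(buffer, w, h, cx, cy, radius) {
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const d = Math.hypot(x - cx, y - cy);
      if (d < radius) {
        const boost = (1 - d / radius) * 20; // closer pixels brighten faster
        buffer[y][x] = Math.min(255, buffer[y][x] + boost);
      }
    }
  }
}
```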

  (A clearer demonstration)

Then, the users will be able to draw their own “base image” in the field and affect the movement of the particles.

Do you want to have a try? (Press i to show your base image.)

Some results:

(Stroke-like effect)

Lastly, apart from user modification, I also want to “animate” the original painting by Van Gogh. This requires us to take the original painting as the source image.

But how can the computer know the locations of rotation centers of the stars from the painting?

At first, I tried to detect the pixels with higher greyscale values (brighter pixels). But as you can see, only one star is statistically much brighter than the others, so this method didn’t work well.

Then I noticed that all the stars are yellowish. Therefore, I computed the Euclidean distance between each pixel’s RGB value and the RGB value of a “standard yellow”. After some tests, I found the following rule worked best (though still not perfectly) of everything I tried:

abs(r - g) <= 20
r: 180 ± 20
g: 165 ± 20
b: 60 ± 30
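The rule above, written as a predicate (thresholds copied from the post; the function name is mine):

```javascript
// A pixel counts as "star yellow" when r and g are close to each other
// and all three channels sit near the standard yellow (180, 165, 60).
function isStarYellow(r, g, b) {
  return Math.abs(r - g) <= 20 &&
         Math.abs(r - 180) <= 20 &&
         Math.abs(g - 165) <= 20 &&
         Math.abs(b - 60) <= 30;
}
```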

Then I used the earlier renderImg() function to render the base image from the original painting. You can see that nearly half of the stars were detected.

Finally, I combined the Perlin noise flow field with this swirling field, adding some dynamics to the sketch. I also grabbed the original colors from the painting and used them to tint the pixels.

Here is the final sketch:

C. Reflection

Program thinking that I gained from this project:
  • Combination of forces
    • I combined three kinds of force to affect the velocity (change of position) of my particles: centripetal force, tangential force, and a force from Perlin noise. The combination adds a lot to the dynamics and aesthetics of the project.
  • View buffer canvas as a data source
    • Generative art must have a generative rule, and I think my generative rule is the base image placed on a buffer canvas: it contains information in every single pixel and tells the particles which direction to go. At first, I placed 2D or 3D Perlin noise images on the buffer canvas; then I allowed users to add their own stars to this canvas through pixel manipulation; and at last, I extracted information from the original Starry Night painting and used it as a data source.
  • Computer vision
    • Of course, I only scratched the surface of “computer vision”. But I believe that my attempt to identify the positions of the stars in the painting through different methods is kind of asking the computer to see and interpret the visual information that humans can easily capture. Through setting rules such as “find pixels with higher greyscale” or “find pixels with smaller RGB distance to the standard yellow”, the computer tries to better detect the positions of the stars with optimized logic.
Unsolved problem:

The biggest challenge I encountered in this project was pixelDensity() and scaling the image down and up. The source image was too large, so it needed to be scaled down; I also had several buffer canvases and often needed to check and modify the index and location of particular pixels. I called pixelDensity() a lot, but it led to some weird results that I couldn’t understand, so I had to set all pixel densities to 1 to avoid those problems, which made the resolution of my project less than ideal.

Future Improvements:

I’m excited to make the project run in 3D, with the particles rotating around a sphere. Also, due to the limited capacity of p5.js running in a browser, I couldn’t add more particles for a more appealing effect, so if possible I will port it to Processing or other faster tools.

Xiaozao Midterm Progress #2

Project title: Flowing Painting

The goal of my project is to create an animated version of the famous painting Starry Night by Vincent van Gogh. This is the core logic of how I plan to achieve that:

As I mentioned last week, I found a way to create the vortex movement pattern based on a 2D Perlin noise field. The logic is to generate a base image of the noise field, put it at the bottom of the canvas, and calculate the driving force from the “difference of pressure” around each pixel.

I tried with the vortex field and generated some images:

And then I tried with the 3d Perlin noise:

Eventually, I want to achieve two things:

The first is to convert the painting Starry Night into this kind of alpha terrain. The second is to let users build their own terrain (base image). I plan to work on this next week! Here’s the plan:

Another important part of animating the painting is creating “strokes” that look real, instead of bare particles.

Here’s how I generate the strokes to make them more like the strokes in the painting:

Xiaozao #Midterm Project Proposal

Project Title: Moving Paintings

1. Inspiration

I was attracted by this image when reading the article “Particle animation and rendering using data parallel computation” by Karl Sims:

This vortex field (or swirling pattern) reminds me of Starry Night by Vincent van Gogh. He created a strong sense of motion, energy, and flux through the use of brushwork and colors.

Conveying Movement in Art: A Comprehensive Guide

There are many other examples of static 2D paintings trying to convey a sense of movement. I think it would be great to enhance this message for the audience by animating these paintings. Therefore, I decided to create a “moving” version of Starry Night for my midterm project using a flow field and a particle system.

2. Plan of Implementation

The vortex field is a sub-type of the flow field. All flow fields share the same base logic: a grid of vectors that determines the velocity of the particles moving through it. However, there are Perlin noise flow fields, vortex flow fields, magnetic flow fields, and so on. I searched the Internet and found a simple way of creating this swirling effect.

Basically, you create a background image showing the distribution of vortex centers based on a 2D Perlin noise space, and then you calculate the “pressure differential” around every cell in the grid. That differential vector points toward the center of the vortex, pulling the particles inward. However, if you rotate every vector by 90 degrees, it turns into something like a tangential force, which magically turns the particles’ motion into a swirling pattern.

* The centripetal force and the tangential force

It is a brilliant idea and not hard to implement; the challenge may be adding aesthetic and creative aspects to my project.

Some ideas are:

    • Create the dying and respawn effect of the particles to make the moving painting more dynamic.
    • Explore other possible patterns of flow field. For example, how to achieve the effect of a river flow?
    • Allow user interaction. Users can create more “stars” and flowing directions in the painting through mouse interaction.

 

Sources of Inspiration and References: