Final Project Draft 1: Vibrations of Being

Concept:

For my final project, I want to recreate the feeling evoked by Antony Gormley’s work, particularly the quantum physics idea that we are made not of particles but of waves. This idea speaks to the ebb and flow of our emotions: how we experience ups and downs, and how our feelings constantly shift and flow like waves. When I came across Gormley’s work, I knew I wanted to capture this dynamic energy and motion in my own way, bringing my own twist to it through code. I aim to visualize the human form and emotions as fluid, wave-like entities, mirroring the infinite possibilities of quantum existence.

Interaction Methodology:

To create an interaction where users influence flow field particles with their movements, I will use ml5.js and TensorFlow.js for real-time machine learning in the browser. These libraries will analyze the webcam feed to track the user’s body, and the detected keypoint positions (such as shoulders, wrists, and hips) will influence how the flow field particles behave.
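As a first building block, here is a minimal p5.js sketch of that setup, following the bodyPose example pattern from the ml5.js documentation linked under Resources. The canvas size and the 0.2 confidence threshold are placeholder values, not final design decisions:

```javascript
let video;
let bodyPose;
let poses = [];

function preload() {
  // Load ml5's bodyPose model (MoveNet is the default backbone)
  bodyPose = ml5.bodyPose("MoveNet");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Run detection continuously on the webcam feed
  bodyPose.detectStart(video, gotPoses);
}

function gotPoses(results) {
  // Each detected pose holds keypoints shaped like {x, y, confidence, name}
  poses = results;
}

function draw() {
  image(video, 0, 0, width, height);
  // Mark every confidently detected keypoint
  for (let pose of poses) {
    for (let kp of pose.keypoints) {
      if (kp.confidence > 0.2) {
        noStroke();
        fill(0, 255, 0);
        circle(kp.x, kp.y, 8);
      }
    }
  }
}
```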

Steps to Implement Interaction:

  1. Pose Detection:
    • With ml5.js, I will run a pose detection model (MoveNet, through the bodyPose API) to track key body points (e.g., shoulders, elbows, wrists, hips) and convert them into on-screen coordinates.
  2. Movement Capture:
    • The webcam will capture the user’s movement in real time, and MoveNet will process the feed frame by frame to track changes in the user’s position.
  3. Particle Interaction:
    • The user’s proximity and movement will influence the particles. For example:
      • If the user moves closer, the particles will move toward them.
      • The direction of body movements (like moving an arm left or right) will control the direction of the flow field, allowing the user to “steer” the particles.
  4. Flow Field Behavior:
    • The particles will change their behavior based on the user’s gestures and position. For example, raising or lowering the hands could speed up or slow down the flow, while lateral movements could push the particles in specific directions.

The goal is for the flow field to update continuously, with particles moving based on real-time data from the user’s body; the sketch below shows one possible mapping.
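To make that concrete, below is one possible mapping, sketched under assumptions rather than as a final implementation: each particle follows a Perlin-noise flow field, and the field direction is bent toward any confidently tracked wrist, with the pull fading over distance. The helper names (initParticles, updateParticles, drawParticles) and every constant (particle count, noise scale, 200-pixel influence radius) are my own placeholders to tune later; the `poses` array comes from the detection sketch above.

```javascript
// Call initParticles() once in setup(), then updateParticles()
// and drawParticles() every frame in draw().
let particles = [];
const NUM_PARTICLES = 500; // hypothetical count, tune for performance

function initParticles() {
  for (let i = 0; i < NUM_PARTICLES; i++) {
    particles.push(createVector(random(width), random(height)));
  }
}

function updateParticles() {
  // Collect confidently tracked wrist positions from the current poses
  let wrists = [];
  for (let pose of poses) {
    for (let kp of pose.keypoints) {
      if ((kp.name === "left_wrist" || kp.name === "right_wrist") && kp.confidence > 0.2) {
        wrists.push(createVector(kp.x, kp.y));
      }
    }
  }

  for (let p of particles) {
    // Base direction comes from a Perlin-noise flow field
    let angle = noise(p.x * 0.005, p.y * 0.005, frameCount * 0.003) * TWO_PI * 2;
    let dir = p5.Vector.fromAngle(angle);

    // Bend the field toward each nearby wrist; the pull fades with distance
    for (let w of wrists) {
      let pull = p5.Vector.sub(w, p);
      let d = pull.mag();
      if (d > 1 && d < 200) {
        dir.add(pull.setMag(map(d, 0, 200, 1.5, 0)));
      }
    }

    dir.setMag(2); // constant speed along the bent field
    p.add(dir);

    // Wrap particles around the canvas edges
    if (p.x < 0) p.x = width;
    if (p.x > width) p.x = 0;
    if (p.y < 0) p.y = height;
    if (p.y > height) p.y = 0;
  }
}

function drawParticles() {
  stroke(255, 80);
  for (let p of particles) {
    point(p.x, p.y);
  }
}
```

Raising or lowering a hand could later scale the `dir.setMag(2)` speed, and lateral wrist motion could add a directional bias, which would cover the gesture behaviors in step 4.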

Libraries Used:

  • ml5.js for pose detection and movement tracking.
  • TensorFlow.js (which ml5.js is built on) for more advanced machine learning tasks, if needed.

Design of Canvas

Interaction Idea 1: Side-by-Side Camera and Sketch View

Concept: In this design, the user will see both their live webcam feed and the flow field sketch on screen at the same time. The webcam will show their movements, and the particles in the flow field will react to those movements in real time. This approach highlights the connection between the user’s actions and their influence on the flow field, making the interaction more intuitive and visually engaging.

User Experience Flow:

  • Webcam Feed: The camera will be shown on one side of the screen (either the left or top half).
  • Flow Field Display: The flow field, containing the particles, will occupy the other side (right or bottom half).
  • As the user moves, they can immediately see how their body affects the movement of the particles in the flow field. For example, particles may gather around them, follow their gestures, or change direction based on their movements.

Interaction Design:

  • The user will control the flow field by using their body, which will be visible in the webcam feed.
  • The particles will react to the movement of specific body parts, such as arms or legs.
  • The user can influence the flow by moving closer to or farther from the camera, or by making different gestures that change the pattern or direction of the wave-like particles (see the layout sketch below).
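Below is a rough, self-contained sketch of this layout, combining the detection and particle ideas above into one file. Both halves are assumed to be 640x480, so keypoints detected in video space line up one-to-one with the flow-field half and need no remapping; the particle count and thresholds are placeholders.

```javascript
let video, bodyPose;
let poses = [];
let particles = [];
const HALF_W = 640;
const VIEW_H = 480;

function preload() {
  bodyPose = ml5.bodyPose("MoveNet");
}

function setup() {
  createCanvas(HALF_W * 2, VIEW_H); // left half: camera, right half: flow field
  video = createCapture(VIDEO);
  video.size(HALF_W, VIEW_H);
  video.hide();
  bodyPose.detectStart(video, (results) => (poses = results));
  for (let i = 0; i < 400; i++) {
    particles.push(createVector(random(HALF_W), random(VIEW_H)));
  }
}

function draw() {
  background(0);
  image(video, 0, 0, HALF_W, VIEW_H); // live feed on the left half

  // Flow field on the right half; keypoints are already in 640x480
  // video space, so the same coordinates work after the translate.
  push();
  translate(HALF_W, 0);
  stroke(255, 90);
  for (let p of particles) {
    // Noise-driven base direction, nudged toward any tracked wrist
    let angle = noise(p.x * 0.005, p.y * 0.005, frameCount * 0.003) * TWO_PI * 2;
    let dir = p5.Vector.fromAngle(angle);
    for (let pose of poses) {
      for (let kp of pose.keypoints) {
        if ((kp.name === "left_wrist" || kp.name === "right_wrist") && kp.confidence > 0.2) {
          let pull = createVector(kp.x - p.x, kp.y - p.y);
          let d = pull.mag();
          if (d > 1 && d < 200) dir.add(pull.setMag(map(d, 0, 200, 1.5, 0)));
        }
      }
    }
    p.add(dir.setMag(2));
    p.x = (p.x + HALF_W) % HALF_W; // wrap within the right half
    p.y = (p.y + VIEW_H) % VIEW_H;
    point(p.x, p.y);
  }
  pop();
}
```

One caveat with this layout: the webcam image is not mirrored, so moving to the left in real life moves the detected keypoints to the right; flipping the video horizontally may feel more natural.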

Interaction Idea 2: Dark Screen with Movement-Based Particle Control

Concept: In this design, the user’s movements will be the primary focus, with no webcam feed visible at first. The screen will be dark, and as the user begins to move, they will start influencing the flow field. This approach keeps the user’s attention solely on how their actions shape the environment, with no visual distractions from their own body.

User Experience Flow:

  • Initial Dark Screen: The screen starts out black, with no indication of the user’s presence.
  • Movement Trigger: Once the user starts to move, the flow field will emerge, and the particles will begin to react to the user’s gestures and position.
  • As the user moves, they’ll feel more engaged, knowing that their actions are directly shaping the particles even though they never see themselves on screen.

Interaction Design:

  • The user will only see the flow field, which will respond dynamically to their movement.
  • The particles will react to the user’s proximity and gestures, such as raising a hand, and the flow field will change accordingly (a sketch of this movement-triggered reveal follows).
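One way to realize the movement-triggered reveal is to measure how far the tracked keypoints travel between frames and fade the particles in only while that motion stays above a threshold. This is a sketch under assumptions: it reuses `poses` and the particle helpers from the earlier sketches, the frame-to-frame pairing of keypoints is deliberately crude (index by index, workable for a single user), and both thresholds are untested guesses.

```javascript
let prevKeypoints = [];
let revealAlpha = 0; // 0 = fully dark, 255 = flow field fully visible

function motionAmount() {
  // Sum how far each confident keypoint moved since the last frame
  let current = [];
  for (let pose of poses) {
    for (let kp of pose.keypoints) {
      if (kp.confidence > 0.2) current.push({ x: kp.x, y: kp.y });
    }
  }
  let total = 0;
  let n = min(current.length, prevKeypoints.length);
  for (let i = 0; i < n; i++) {
    total += dist(current[i].x, current[i].y, prevKeypoints[i].x, prevKeypoints[i].y);
  }
  prevKeypoints = current;
  return total;
}

function draw() {
  background(0); // the screen stays black until the user moves

  // Ramp the reveal up while motion exceeds the threshold, decay otherwise
  if (motionAmount() > 15) {
    revealAlpha = min(revealAlpha + 5, 255);
  } else {
    revealAlpha = max(revealAlpha - 2, 0);
  }

  if (revealAlpha > 0) {
    updateParticles(); // particle helper from the flow field sketch above
    stroke(255, revealAlpha * 0.35);
    for (let p of particles) point(p.x, p.y);
  }
}
```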

Base Sketch:

Currently, I have implemented the basic framework for MoveNet, and it’s working really well. To ensure stability and avoid potential issues with updates, I included the ml5.js library and the compressed TensorFlow.js files directly in the project. This way, the setup is self-contained, and I don’t have to rely on external links in the index.html file. The sketch can already detect body movements, which gives me the foundation for letting users influence the flow field with their motions.

Resources:

https://docs.ml5js.org/#/reference/bodypose
