Week 11 – Zombie Automata by Dachi

Sketch:

p5.js Web Editor | Zombie Automata

Inspiration

To begin, I followed existing coding tutorials by The Coding Train on cellular automata to understand the basics and gather ideas for implementation. While working on the project, I drew inspiration from my high school IB Math Internal Assessment, where I explored the Susceptible-Infected-Recovered (SIR) model of disease spread (well, technically I did the SZR model). The concepts I learned there seemed to work well for the current task.
Additionally, being a fan of zombie-themed shows and series, I thought that modeling a zombie outbreak would add an engaging narrative to the project. Combining these elements, I designed a simulation that not only explored cellular automata but also offered a creative and interactive way to visualize infection dynamics.

Process

The development process started with studying cellular automata and experimenting with simple rulesets to understand how basic principles could lead to complex behavior. After following coding tutorials to build a foundational understanding, I modified and expanded on these ideas to create a zombie outbreak simulation. The automata were structured around four states – empty, human, zombie, and dead – each with defined transition rules.
I implemented the grid and the rules governing state transitions. I experimented with parameters such as infection and recovery rates, as well as grid sizes and cell dimensions, to observe how these changes affected the visual patterns. To ensure interactivity, I developed a user interface with sliders and buttons, allowing users to adjust parameters and directly interact with the simulation in real time.

How It Works

The simulation is based on a grid where each cell represents a specific state:
  • Humans: Are susceptible to infection if neighboring zombies are present. The probability of infection is determined by the user-adjustable infection rate.
  • Zombies: Persist unless a recovery rate is enabled, which allows them to turn back into humans.
  • Dead Cells: Represent the aftermath of human-zombie interactions and remain static.
  • Empty Cells: Simply occupy space with no active behavior.
At the start of the simulation, a few cells are randomly assigned as zombies to initiate the outbreak, and users can also click on any cell to manually spawn zombies or toggle states between humans and zombies.
Users can interact with the simulation by toggling the state of cells (e.g., turning humans into zombies) or by adjusting sliders to modify parameters such as infection rate, recovery rate, and cell size. The real-time interactivity encourages exploration of how these factors influence the patterns and dynamics.
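To make the transition rules above concrete, here is a minimal sketch of what the countNeighbors helper used by those rules might look like. The grid, cols, and rows names and the wrap-around (toroidal) edges are assumptions for illustration rather than my exact implementation:

// Hypothetical sketch: count cells of a given state in the 3x3 Moore neighborhood.
// grid, cols, and rows are assumed names; edges wrap around (toroidal grid).
function countNeighbors(i, j, targetState) {
  let count = 0;
  for (let dx = -1; dx <= 1; dx++) {
    for (let dy = -1; dy <= 1; dy++) {
      if (dx === 0 && dy === 0) continue; // skip the cell itself
      let col = (i + dx + cols) % cols;   // wrap horizontally
      let row = (j + dy + rows) % rows;   // wrap vertically
      if (grid[col][row] === targetState) count++;
    }
  }
  return count;
}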

Code I’m Proud Of

A part of the project that I am particularly proud of is the implementation of probabilistic infection dynamics:

if (state === HUMAN) {
  // Count how many neighboring cells are zombies
  let neighbors = countNeighbors(i, j, ZOMBIE);
  if (neighbors > 0) {
    // Each zombie neighbor is an independent chance of infection, so the combined
    // probability of infection is 1 - (1 - infectionRate)^neighbors
    if (random() < 1 - pow(1 - infectionRate, neighbors)) {
      nextGrid[i][j] = ZOMBIE;
    } else {
      nextGrid[i][j] = HUMAN;
    }
  } else {
    // No zombie neighbors: the human stays human
    nextGrid[i][j] = HUMAN;
  }
}

This code not only introduces a realistic element of risk-based infection but also produces visually interesting outcomes as the patterns evolve. Watching the outbreak spread dynamically based on these probabilities was quite fun.

Challenges

One of the main challenges was balancing the simulation’s performance and functionality. With many cells updating each frame, the program occasionally slowed down, especially with smaller cell sizes. I also tried adding some features (such as a cure), which I later removed due to a lack of visual engagement (other structures might suit it better). Of course, such a simulation is in itself an oversimplification, so you have to be mindful when adding parameters.

Reflection and Future Considerations

This project was a good opportunity to deepen my understanding of cellular automata and their potential for creating dynamic patterns. The combination of technical programming and creative design made the process both educational and enjoyable. I’m particularly pleased with how the interactivity turned the simulation into a fun, engaging experience.
Looking ahead, I would like to enhance the simulation by introducing additional rulesets or elements, such as safe zones or zombie types with varying behaviors. Adding a graph to track population changes over time would also provide users with a clearer understanding of the dynamics at play. These improvements would further expand the educational and aesthetic appeal of the project. Furthermore, I could switch from a grid of cells to other structures that better resemble real-life scenarios.

Week 10 – Fabrik by Dachi

Sketch: p5.js Web Editor | Fabrik

Inspiration

The development of this fabric simulation was mainly influenced by the two provided topics outlined in Daniel Shiffman’s “The Nature of Code.” Cloth simulations represent a perfect convergence of multiple physics concepts, making them an ideal platform for exploring forces, constraints, and collision dynamics. What makes cloth particularly fascinating is its ability to demonstrate complex emergent behavior through the interaction of simple forces. While many physics simulations deal with discrete objects, cloth presents the unique challenge of simulating a continuous, flexible surface that must respond naturally to both external forces and its own internal constraints. This complexity makes cloth simulation a particularly challenging and rewarding subject in game development, as it requires careful consideration of both physical accuracy and computational efficiency.

Development Process

The development of this simulation followed an iterative approach, building complexity gradually to ensure stability at each stage. The foundation began with a simple grid of particles connected by spring constraints, establishing the basic structure of the cloth. This was followed by the implementation of mouse interactions, allowing users to grab and manipulate the cloth directly. The addition of a rock object introduced collision dynamics, creating opportunities for more complex interactions. Throughout development, considerable time was spent fine-tuning the physical properties – adjusting stiffness, damping, and grab radius parameters until the cloth behaved naturally. Performance optimization was a constant consideration, leading to the implementation of particle limiting systems during grab interactions. The final stage involved adding velocity-based interactions to create more dynamic and realistic behavior when throwing or quickly manipulating the cloth.

How It Works

At its core, the simulation operates on a particle system where each point in the cloth is connected to its neighbors through spring constraints. The cloth grabbing mechanism works by detecting particles within a specified radius of the mouse position and creating dynamic constraints between these points and the mouse location. These constraints maintain the relative positions of grabbed particles, allowing the cloth to deform naturally when pulled. A separate interaction mode for the rock object is activated by holding the ‘R’ key, creating a single stiff constraint for precise control, with velocity applied upon release to enable throwing mechanics. The physics simulation uses a constraint-based approach for stable cloth behavior, with distance-based stiffness calculations providing natural-feeling grab mechanics and appropriate velocity transfer for realistic momentum.

Code I am proud of

The particle grabbing system stands out as the most sophisticated portion of the codebase. It sorts particles by their distance from the mouse and applies distance-based stiffness calculations. Here’s the core implementation:
// Array to store particles within grab radius
let grabbableParticles = [];

// Scan all cloth particles
for (let i = 0; i < cloth.cols; i++) {
    for (let j = 0; j < cloth.rows; j++) {
        let particle = cloth.particles[i][j];
        if (!particle.isStatic) {  // Skip fixed particles
            // Calculate distance from mouse to particle
            let d = dist(mouseX, mouseY, particle.position.x, particle.position.y);
            if (d < DRAG_CONFIG.GRAB_RADIUS) {
                // Store particle info if within grab radius
                grabbableParticles.push({
                    particle: particle,
                    distance: d,
                    offset: {
                        // Store initial offset from mouse to maintain relative positions
                        x: particle.position.x - mouseX,
                        y: particle.position.y - mouseY
                    }
                });
            }
        }
    }
}

// Sort particles by distance to mouse (closest first)
grabbableParticles.sort((a, b) => a.distance - b.distance);
// Limit number of grabbed particles
grabbableParticles = grabbableParticles.slice(0, DRAG_CONFIG.MAX_GRAB_POINTS);

// Only proceed if we have enough particles for natural grab
if (grabbableParticles.length >= DRAG_CONFIG.MIN_POINTS) {
    grabbableParticles.forEach(({particle, distance, offset}) => {
        // Calculate stiffness based on distance (closer = stiffer)
        let constraintStiffness = DRAG_CONFIG.STIFFNESS * (1 - distance / DRAG_CONFIG.GRAB_RADIUS);
        
        // Create constraint between mouse and particle
        let constraint = Constraint.create({
            pointA: { x: mouseX, y: mouseY },  // Anchor at mouse position
            bodyB: particle,                   // Connect to particle
            stiffness: constraintStiffness,    // Distance-based stiffness
            damping: DRAG_CONFIG.DAMPING,      // Reduce oscillation
            length: distance * 0.5             // Allow some slack based on distance
        });
        
        // Store constraint and particle info
        mouseConstraints.push(constraint);
        draggedParticles.add(particle);
        initialGrabOffsets.set(particle.id, offset);
        Composite.add(engine.world, constraint);
        
        // Stop particle's current motion
        Body.setVelocity(particle, { x: 0, y: 0 });
    });
}

This system maintains a minimum number of grab points to ensure stable behavior while limiting the maximum to prevent performance issues. The stiffness of each constraint is calculated based on the particle’s distance from the grab point, creating a more realistic deformation pattern where closer particles are more strongly influenced by the mouse movement.
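The release side of this interaction, where velocity is transferred to the cloth on mouse release, can be sketched roughly as follows. This is only an illustrative sketch, not the exact code: it reuses the mouseConstraints, draggedParticles, initialGrabOffsets, and engine names from above, while THROW_FORCE and the use of p5’s movedX/movedY for estimating mouse velocity are assumptions.

// Hypothetical sketch of releasing the grab and throwing the cloth.
// THROW_FORCE and the movedX/movedY-based velocity estimate are assumptions.
function releaseCloth() {
    // Approximate the mouse velocity from p5's per-frame mouse movement
    let mouseVel = { x: movedX * THROW_FORCE, y: movedY * THROW_FORCE };

    // Remove the temporary mouse constraints from the physics world
    for (let constraint of mouseConstraints) {
        Composite.remove(engine.world, constraint);
    }
    mouseConstraints = [];

    // Give each previously grabbed particle the mouse's velocity for a throwing effect
    for (let particle of draggedParticles) {
        Body.setVelocity(particle, mouseVel);
    }
    draggedParticles.clear();
    initialGrabOffsets.clear();
}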

Challenges

While performance optimization was addressed through careful limiting of active constraints, the primary challenge was in achieving authentic cloth behavior. Real fabric exhibits complex properties that proved difficult to replicate – it stretches but maintains its shape, folds naturally along stress lines, and responds to forces with varying degrees of resistance depending on the direction of the force. The initial implementation used uniform spring constants throughout the cloth, resulting in a rubber-like behavior that felt artificial and bouncy. Achieving natural draping behavior required extensive experimentation with different constraint configurations, ultimately leading to a system where horizontal and vertical constraints had different properties than diagonal ones. The way cloth bunches and folds was another significant challenge – early versions would either stretch indefinitely or resist folding altogether. This was solved by tuning constraint lengths and stiffness values, allowing the cloth to maintain its overall structure while still being able to fold naturally. The grab mechanics also required considerable refinement to feel natural – initial versions would either grab too rigidly, causing the cloth to behave like a solid sheet, or too loosely, resulting in unrealistic, pointy stretching. The solution involved implementing distance-based stiffness calculations and maintaining relative positions between grabbed particles, creating more natural deformation patterns during interaction.

Reflection and Future Considerations

The current implementation successfully demonstrates complex physics interactions in an accessible and intuitive way, but there remain numerous opportunities for enhancement. Future development could incorporate air resistance for more realistic cloth movement, along with self-collision detection to enable proper folding behavior. The addition of tear mechanics would introduce another layer of physical simulation, allowing the cloth to react more realistically to extreme forces. From a performance perspective, implementing spatial partitioning for collision detection and utilizing Web Workers for physics calculations could significantly improve efficiency, especially when dealing with larger cloth sizes. The interactive aspects could be expanded by implementing multiple cloth layers, cutting mechanics, and advanced texture mapping and shading systems. There’s also significant potential for educational applications, such as adding visualizations of forces and constraints, creating interactive tutorials about physics concepts, and implementing different material properties for comparison. Additionally, there is no depth to the current implementation, because the underlying physics library is inherently 2D. For depth-based collisions (which is what happens in the real world), we would need a 3D physics library.
These enhancements would further strengthen the project’s value as both a technical demonstration and an educational tool, illustrating how complex physical behaviors can be effectively simulated through carefully crafted rules and constraints.

Week 9 – Elison by Dachi

Sketch: p5.js Web Editor | Brindle butterkase

Inspiration

The project emerges from a fascination with Avatar: The Last Airbender’s representation of the four elements and their unique bending styles. Craig Reynolds’ Boids algorithm provided the perfect foundation to bring these elements to life through code. Each element in Avatar demonstrates distinct movement patterns that could be translated into flocking behaviors: water’s flowing movements, fire’s aggressive bursts, earth’s solid formations, and air’s spiral patterns.
The four elements offered different ways to explore collective motion: water’s fluid cohesion, fire’s upward turbulence, earth’s gravitational clustering, and air’s connected patterns. While the original Boids algorithm focused on simulating flocks of birds, adapting it to represent these elemental movements created an interesting technical challenge that pushed the boundaries of what the algorithm could achieve.

Process

The development started by building the core Boids algorithm and gradually shaping it to capture each element’s unique characteristics. Water proved to be the ideal starting point, as its flowing nature aligned well with traditional flocking behavior. I experimented with different parameter combinations for cohesion, alignment, and separation until the movement felt naturally fluid.
Fire came next, requiring significant modifications to the base algorithm. Adding upward forces and increasing separation helped create the energetic, spreading behavior characteristic of flames. The particle system was developed during this phase, as additional visual elements were needed to capture fire’s dynamic nature.
Earth presented an interesting challenge in making the movement feel solid and deliberate. This led to implementing stronger cohesion forces and slower movement speeds, making the boids cluster together like moving stones. Air was perhaps the most technically challenging, requiring the implementation of Perlin noise to create unpredictable yet connected movement patterns.
The transition system, which allows smooth morphing between elements, was the final major challenge. This involved careful consideration of how parameters should interpolate and how visual elements should blend. Through iterative testing and refinement, I managed to find somewhat balanced visuals with unique patterns.

How It Works

The system operates on two main components: the boid behavior system and the particle effects system. Each boid follows three basic rules – alignment, cohesion, and separation – but the strength of these rules varies depending on the current element. For example, water boids maintain moderate values across all three rules, creating smooth, coordinated movement. Fire boids have high separation and low cohesion, causing them to spread out while moving upward.
The particle system adds visual richness to each element. Water particles drift downward with slight horizontal movement, while fire particles rise with random flickering. Earth particles maintain longer lifespans and move more predictably, and air particles follow noise-based patterns that create swirling effects.
The transition system smoothly blends between elements by interpolating parameters and visual properties. This includes not just the boid behavior parameters, but also particle characteristics, colors, and shapes. The system uses linear interpolation to gradually shift from one element’s properties to another, ensuring smooth visual and behavioral transitions.
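As an illustration of that interpolation step, a per-frame parameter blend between two elements could look something like the sketch below. The parameter object layout and the transitionProgress variable are assumptions for illustration, not the exact code:

// Hypothetical sketch of blending element parameters during a transition.
// The field names and transitionProgress are illustrative assumptions.
function blendElementParams(fromParams, toParams, t) {
  return {
    alignment:  lerp(fromParams.alignment, toParams.alignment, t),
    cohesion:   lerp(fromParams.cohesion, toParams.cohesion, t),
    separation: lerp(fromParams.separation, toParams.separation, t),
    maxSpeed:   lerp(fromParams.maxSpeed, toParams.maxSpeed, t),
    // Visual properties blend the same way, using p5's color interpolation
    tint: lerpColor(fromParams.tint, toParams.tint, t)
  };
}

// Usage inside draw(), easing t from 0 to 1 over the course of a transition:
// let active = blendElementParams(waterParams, fireParams, transitionProgress);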

Code I’m Proud Of

switch (this.element) {
  case elementParams.fire:
    // Fire particles rise steadily and flicker with small random horizontal jitter
    this.pos.y -= 1;
    this.vel.x += random(-0.1, 0.1);
    break;
  case elementParams.air: {
    // Air particles follow a smooth Perlin-noise field for swirling, wind-like motion
    let time = (frameCount + this.offset) * 0.01;
    let noiseX = smoothNoise(this.pos.x * 0.006, this.pos.y * 0.006, time);
    let noiseY = smoothNoise(this.pos.x * 0.006, this.pos.y * 0.006, time + 100);
    this.vel.add(createVector(noiseX * 0.15, noiseY * 0.15));
    this.vel.limit(1.5);
    break;
  }
}

This code efficiently handles the unique behavior of each element’s particles while remaining clean and maintainable. The fire particles rise and flicker naturally, while air particles follow smooth, noise-based patterns that create convincing wind-like movements.

Challenges

Performance optimization proved to be one of the biggest challenges. With hundreds of boids and particles active at once, maintaining smooth animation required careful optimization of the force calculations and particle management. I implemented efficient distance calculations and particle lifecycle management to keep the system running smoothly.
Creating convincing transitions between elements was another significant challenge. Moving from the rapid, dispersed movement of air to the slow, clustered movement of earth initially created jarring transitions. The solution involved creating a multi-layered transition system that handled both behavioral and visual properties gradually.
Balancing the elements’ distinct characteristics while maintaining a cohesive feel required extensive experimentation with parameters. Each element needed to feel unique while still being part of the same system. This involved finding the right parameter ranges that could create distinct behaviors without breaking the overall unity of the visualization.

Reflections and Future Considerations

The project successfully captures the essence of each element while maintaining smooth transitions between them. The combination of flocking behavior and particle effects creates an engaging visualization that responds well to user interaction. However, there’s still room for improvement and expansion.
Future technical improvements could include implementing spatial partitioning for better performance with larger boid counts, adding WebGL rendering for improved graphics, and creating more complex particle effects. The behavior system could be enhanced with influence mechanics where fire and water cancel each other out and other elements interact in various ways.
Adding procedural audio based on boid behavior could create a more immersive experience. The modular design of the current system makes these expansions feasible while maintaining the core aesthetic that makes the visualization engaging.
The project has taught me valuable lessons about optimizing particle systems, managing complex transitions, and creating natural-looking movement through code.
Throughout the development process, I gained a deeper appreciation for both the complexity of natural phenomena and the elegance of the algorithms we use to simulate them.

Week 8 – Black Hole Vehicles by Dachi

Sketch: p5.js Web Editor | black hole

Concept

This space simulation project evolved from the foundation of the vehicle sketch code provided on WordPress for the current weekly objective, transforming the basic principles of object movement and forces into a more planetary-scale simulation. The original vehicle concept served as inspiration for implementing celestial bodies that respond to gravitational forces. By adapting the core mechanics of velocity and acceleration from the vehicle example, I developed a more complex system that models the behavior of various celestial objects interacting with a central black hole. The simulation aims to create an immersive experience that, while not strictly scientifically accurate, captures the wonder and dynamic nature of cosmic interactions.

Process

The development began with establishing the CelestialBody class as the center of the simulation. This class handles the physics calculations and rendering for all space objects, including planets, stars, comets, and the central black hole. I implemented Newton’s law of universal gravitation to create realistic orbital mechanics, though with modified constants to ensure visually appealing movement within the canvas constraints.
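As a rough illustration of that force calculation (not the exact code), the pull one body exerts on another follows F = G·m1·m2/d², with a scaled-down constant and clamped distances to keep the motion visible on the canvas. The property names, the value of G, and the clamp range below are assumptions:

// Hypothetical sketch of the gravitational attraction between two celestial bodies.
// G and the distance clamp are tuning values chosen for visual appeal, not physical accuracy.
const G = 6.7; // scaled-down gravitational constant (assumed value)

function attract(attractor, mover) {
    // Vector pointing from the mover toward the attractor
    let force = p5.Vector.sub(attractor.pos, mover.pos);
    let d = constrain(force.mag(), 20, 300); // avoid extreme forces at tiny distances

    // F = G * m1 * m2 / d^2, applied along the direction vector
    let strength = (G * attractor.mass * mover.mass) / (d * d);
    force.setMag(strength);

    // a = F / m
    mover.acc.add(p5.Vector.div(force, mover.mass));
}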
The black hole visualization required special attention to create a convincing representation of its extreme gravitational effects. I developed an accretion disk system using separate particle objects that orbit the black hole, complete with temperature-based coloring to simulate the intense energy of matter approaching the event horizon. The background starfield and nebula effects were added to create depth and atmosphere in the simulation.
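The temperature-based coloring of those disk particles can be sketched roughly as below, mapping distance from the event horizon onto a hot-to-cool gradient. The radii and the two colors are illustrative assumptions:

// Hypothetical sketch of temperature-based coloring for an accretion disk particle.
// innerRadius, outerRadius, and the two colors are illustrative values.
function diskParticleColor(distFromCenter, innerRadius, outerRadius) {
    // 0 near the event horizon (hottest), 1 at the outer edge of the disk (coolest)
    let t = constrain(map(distFromCenter, innerRadius, outerRadius, 0, 1), 0, 1);
    let hot = color(200, 220, 255);  // blue-white glow near the event horizon
    let cool = color(255, 100, 20);  // deep orange at the disk's edge
    return lerpColor(hot, cool, t);
}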
The implementation process involved several iterations to fine-tune the visual effects and physics calculations. I spent a lot of time on the creation of the particle system for the accretion disk, which needed to balance performance with visual fidelity. The addition of comet trails and star glows helped to create a more dynamic and engaging visual experience.

Challenges

One of the primary challenges was balancing realistic physics with visual appeal. True gravitational forces would result in either extremely slow movement or very quick collisions, so finding the right constants and limits for the simulation required careful tuning. Another significant challenge was creating convincing visual effects for the black hole’s event horizon and gravitational lensing without overwhelming the system’s performance.
The implementation of the accretion disk presented its own challenges, particularly in managing particle behavior and ensuring smooth orbital motion while maintaining good performance with hundreds of particles. Creating a visually striking distortion effect around the black hole without impacting the frame rate was also difficult. I spent a lot of time on the gravitational lensing component but, despite this, could not get it to work as I imagined. However, that is beyond the scope of the weekly assignment, and it could be something I work on over a longer timeframe.

Code I’m Proud Of

The following section creates multiple layers of distortion to simulate gravitational lensing:
for (let i = 20; i > 0; i--) {
    let radius = this.radius * (i * 0.7);
    let alpha = map(i, 0, 20, 100, 0);  // outer layers of the distortion fade out

    for (let angle = 0; angle < TWO_PI; angle += 0.05) {
        let time = frameCount * 0.02;
        let xOff = cos(angle + time) * radius;
        let yOff = sin(angle + time) * radius;

        // Two octaves of Perlin noise give an organic, layered distortion
        let distortion1 = noise(xOff * 0.01, yOff * 0.01, time) * 20;
        let distortion2 = noise(xOff * 0.02, yOff * 0.02, time + 1000) * 15;
        let finalDistortion = distortion1 + distortion2;

        // A slow swirl layered on top of the noise field
        let spiralFactor = (sin(angle * 3 + time) * cos(angle * 2 + time * 0.5)) * radius * 0.1;

        // ... the combined offsets and alpha are then used to draw each point of this
        // distortion layer around the black hole (drawing code omitted here)
    }
}

This code combines Perlin noise with circular motion to create a dynamic, organic-looking distortion field that suggests the warping of space-time around the black hole. The layered approach with varying alpha values creates a sense of depth and intensity that enhances the overall visual effect. The addition of the spiral factor creates a more complex and realistic representation of the gravitational distortion.

Reflection and Future Considerations

The project successfully achieves its goal of creating an engaging and visually impressive space simulation. The interaction between celestial bodies and the central black hole creates emergent behaviors that can be both predictable and surprising, making the simulation entertaining to watch. The visual effects, particularly around the black hole, effectively convey the sense of powerful gravitational forces at work.
For future iterations, several enhancements could be considered. Implementing relativistic effects could make the simulation more scientifically accurate, though this would need to be balanced against performance and visual clarity. Adding user interaction capabilities, such as allowing viewers to create new celestial bodies or adjust gravitational constants in real-time, could make the simulation more engaging and educational.
Another potential improvement would be the addition of collision detection and handling between celestial bodies, which could lead to interesting events like the formation of new bodies or the creation of debris fields. The visual effects could also be enhanced with WebGL shaders to create more sophisticated gravitational lensing and accretion disk effects while potentially improving performance.
The addition of sound effects and music could enhance the immersive experience, perhaps with dynamic audio that responds to the movement and interactions of celestial bodies. A more sophisticated particle system could be implemented to simulate solar winds, cosmic radiation, and other space phenomena, further enriching the visual experience.
Additionally, implementing a system to generate and track interesting events in the simulation could provide educational value, helping viewers understand concepts like orbital mechanics and the behavior of matter around black holes.

Week 8 – Mujo Reflection by Dachi

Listening to the lecture about MUJO, I was quite moved by how this multimedia performance piece explores the concept of impermanence through multiple artistic dimensions. The work masterfully integrates dance, projection mapping, and sound in the desert landscape to create a profound meditation on the transient nature of existence.
The decision to use desert dunes as both stage and canvas is particularly fascinating. The natural formation and erosion of sand dunes serves as a perfect metaphor for the piece’s central theme of impermanence, mirroring the way human experiences and emotions constantly shift and transform. The digital projections that create abstract dunes over real ones cleverly amplify this concept, creating a dialogue between the natural and the digital.
What makes MUJO especially compelling is its dual existence as both a live desert performance and a multi-channel installation. The installation version demonstrates how site-specific art can be thoughtfully adapted for different contexts while maintaining its core message. The multi-channel approach in the installation allows for a more fragmented and intimate exploration of the body’s relationship with elemental forces.
The collaboration between choreographer Kiori Kawai and multimedia artist Aaron Sherwood shows significant effort. The dancers’ movements, as they climb and descend the dunes, physically embody the struggle with constant change, while the immersive soundscape and visuals reinforce this theme. The technical aspects – from projection mapping to sound design – don’t merely serve as technicalities but actively participate in the narrative.
The work draws fascinating parallels between the impermanence of natural phenomena and human existence. Just as sand particles come together to form dunes only to be reshaped by wind, the piece suggests our bodies and thoughts are similarly temporary mediums. This Buddhist-influenced perspective on impermanence is expressed not just conceptually but through every artistic choice in the performance.
Additionally, having the opportunity to ask questions about their direct experience was very helpful, as we were able to see not only the steps they took but also the kinds of hindrances they faced throughout. Learning about how they overcame those obstacles, whether technological or artistic, was very interesting.

(https://www.aaron-sherwood.com/works/mujo/)

Midterm – Painterize by Dachi

 

Sketch: (won’t work without my server, explained later in code)

Timelapse:

SVG Print:

Digital Prints:

(This one is the same as the SVG version, without the edge detection algorithm and simplification)

Concept Inspiration

As a technology enthusiast with a keen interest in machine learning, I’ve been fascinated by the recent advancements in generative AI, particularly in the realm of image generation. While I don’t have the expertise or the timeframe to create a generative AI model from scratch, I saw an exciting opportunity to explore the possibilities of generative art by incorporating existing AI image generation tools.

My goal was to create a smooth, integrated experience that combines the power of AI-generated images with classic artistic styles. The idea of applying different painter themes to AI-generated images came to mind as a way to blend cutting-edge technology with traditional art forms. For my initial experiment, I chose to focus on the distinctive style of Vincent van Gogh, known for his bold colors and expressive brushstrokes.

Development Process

The development process consisted of two main components:

  1. Backend Development: A Node.js server using Express was created to handle communication with the AI API. This server receives requests from the frontend, interacts with the API to generate images, and serves these images back to the client.
  2. Frontend Development: The user interface and image processing were implemented using p5.js. This includes the input form for text prompts, display of generated images, application of the Van Gogh effect, and SVG extraction based on an edge detection algorithm.

Initially, I attempted to implement everything in p5.js, but API security constraints necessitated the creation of a separate backend.

Implementation Details

The application works as follows:

  1. The user enters a text prompt in the web interface.
  2. The frontend sends a request to the Node.js server.
  3. The server communicates with the StarryAI API to generate an image.
  4. The generated image is saved on the server and its path is sent back to the frontend.
  5. The frontend displays the generated image.
  6. The user can apply the Van Gogh effect, which uses a custom algorithm to create a painterly style.
  7. The user can export the image in PNG format, with or without the Van Gogh effect.
  8. The user can also export two different kinds of SVG (simplified and even more simplified).
  9. The SVG extraction for pen plotting is done through an edge detection algorithm whose sensitivity the user can calibrate.
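To illustrate the frontend side of steps 2–5 above, the round trip to the Node.js server can be sketched roughly as follows. The /generate endpoint name and the response field are assumptions for illustration rather than the exact API of my server:

// Hypothetical sketch of the frontend request to the backend (endpoint and field
// names are assumptions). The server handles the StarryAI call and returns the
// path of the saved image, which p5 then loads for display and processing.
async function requestImage(prompt) {
  let response = await fetch('/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: prompt })
  });
  let data = await response.json(); // e.g. { imagePath: 'images/1234.png' } (assumed shape)

  // Load the generated image into p5 for display and the Van Gogh effect
  generatedImage = loadImage(data.imagePath);
}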

A key component of the project is the Van Gogh effect algorithm:

This function applies a custom effect that mimics Van Gogh’s style using Poisson disc sampling and a swirling line algorithm. Here is the significant code:

// Class for Poisson disc sampling
class PoissonDiscSampler {
  constructor() {
    this.r = model.pointr;
    this.k = 50;  // Number of attempts to find a valid sample before rejecting
    this.grid = [];
    this.w = this.r / Math.sqrt(2);  // Cell size for spatial subdivision
    this.active = [];  // List of active samples
    this.ordered = [];  // List of all samples in order of creation
    
    // Use image dimensions instead of canvas dimensions
    this.cols = floor(generatedImage.width / this.w);
    this.rows = floor(generatedImage.height / this.w);
    
    // Initialize grid
    for (let i = 0; i < this.cols * this.rows; i++) {
      this.grid[i] = undefined;
    }
    
    // Add the first sample point (center of the image)
    let x = generatedImage.width / 2;
    let y = generatedImage.height / 2;
    let i = floor(x / this.w);
    let j = floor(y / this.w);
    let pos = createVector(x, y);
    this.grid[i + j * this.cols] = pos;
    this.active.push(pos);
    this.ordered.push(pos);
    
    // Generate samples
    while (this.ordered.length < model.pointcount && this.active.length > 0) {
      let randIndex = floor(random(this.active.length));
      pos = this.active[randIndex];
      let found = false;
      for (let n = 0; n < this.k; n++) {
        // Generate a random sample point
        let sample = p5.Vector.random2D();
        let m = random(this.r, 2 * this.r);
        sample.setMag(m);
        sample.add(pos);
        
        let col = floor(sample.x / this.w);
        let row = floor(sample.y / this.w);
        
        // Check if the sample is within the image boundaries
        if (col > -1 && row > -1 && col < this.cols && row < this.rows && 
            sample.x >= 0 && sample.x < generatedImage.width && 
            sample.y >= 0 && sample.y < generatedImage.height && 
            !this.grid[col + row * this.cols]) {
          let ok = true;
          // Check neighboring cells for proximity
          for (let i = -1; i <= 1; i++) {
            for (let j = -1; j <= 1; j++) {
              let index = (col + i) + (row + j) * this.cols;
              let neighbor = this.grid[index];
              if (neighbor) {
                let d = p5.Vector.dist(sample, neighbor);
                if (d < this.r) {
                  ok = false;
                  break;
                }
              }
            }
            if (!ok) break;
          }
          if (ok) {
            found = true;
            this.grid[col + row * this.cols] = sample;
            this.active.push(sample);
            this.ordered.push(sample);
            break;
          }
        }
      }
      if (!found) {
        this.active.splice(randIndex, 1);
      }
      
      // Stop if we've reached the desired point count
      if (this.ordered.length >= model.pointcount) {
        break;
      }
    }
  }
}

// LineMom class for managing line objects
class LineMom {
  constructor(pointcloud) {
    this.lineObjects = [];
    this.lineCount = pointcloud.length;
    this.randomZ = random(10000);
    for (let i = 0; i < pointcloud.length; i++) {
      if (pointcloud[i].x < -model.linelength || pointcloud[i].y < -model.linelength ||
          pointcloud[i].x > width + model.linelength || pointcloud[i].y > height + model.linelength) {
        continue;
      }
      this.lineObjects[i] = new LineObject(pointcloud[i], this.randomZ);
    }
  }
  
  render(canvas) {
    for (let i = 0; i < this.lineCount; i++) {
      if (this.lineObjects[i]) {
        this.lineObjects[i].render(canvas);
      }
    }
  }
}

Another key component of the project was SVG extraction based on edge detection.

  1. The image is downscaled for faster processing.
  2. Edge detection is performed on the image using a simple algorithm that compares the brightness of each pixel to the average brightness of its 3×3 neighborhood. If the difference is above a threshold, the pixel is considered an edge.
  3. The algorithm traces paths along the edges by starting at an unvisited edge pixel and following the edges until no more unvisited edge pixels are found or the path becomes too long.
  4. The traced paths are simplified using the Ramer-Douglas-Peucker algorithm, which removes points that don’t contribute significantly to the overall shape while preserving the most important points.
  5. The simplified paths are converted into SVG path elements and combined into a complete SVG document.
  6. The SVG is saved as a file that can be used for plotting or further editing.

This approach extracts the main outlines and features of the image as a simplified SVG representation.

// Function to export a simplified SVG based on edge detection
function exportSimpleSVG() {
  if (!generatedImage) {
    console.error('No image generated yet');
    return;
  }

  // Downscale the image for faster processing
  let scaleFactor = 0.5;
  let img = createImage(generatedImage.width * scaleFactor, generatedImage.height * scaleFactor);
  img.copy(generatedImage, 0, 0, generatedImage.width, generatedImage.height, 0, 0, img.width, img.height);

  // Detect edges in the image
  let edges = detectEdges(img);
  edges.loadPixels();

  let paths = [];
  let visited = new Array(img.width * img.height).fill(false);

  // Trace paths along the edges
  for (let x = 0; x < img.width; x++) {
    for (let y = 0; y < img.height; y++) {
      if (!visited[y * img.width + x] && brightness(edges.get(x, y)) > 0) {
        let path = tracePath(edges, x, y, visited);
        if (path.length > 5) { // Ignore very short paths
          paths.push(simplifyPath(path, 1)); // Simplify the path
        }
      }
    }
  }

  // ... the simplified paths are then converted into SVG path elements, combined
  // into a complete SVG document, and saved (see steps 5–6 above)
}
// Function to detect edges in an image
function detectEdges(img) {
  img.loadPixels(); //load pixels of input image
  let edges = createImage(img.width, img.height); //new image for storing
  edges.loadPixels();

  // Simple edge detection algorithm
  for (let x = 1; x < img.width - 1; x++) { // for each pixel, excluding the border
    for (let y = 1; y < img.height - 1; y++) {
      let sum = 0;
      for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
          let idx = 4 * ((y + dy) * img.width + (x + dx));
          sum += img.pixels[idx];
        }
      }
      let avg = sum / 9; //calculate avg brightness of 3x3 neighborhood
      let idx = 4 * (y * img.width + x);
      edges.pixels[idx] = edges.pixels[idx + 1] = edges.pixels[idx + 2] = 
        abs(img.pixels[idx] - avg) > 1 ? 255 : 0; // threshold for edge sensitivity (adjustable)
      edges.pixels[idx + 3] = 255; // if the difference between a pixel's brightness and the neighborhood average exceeds the threshold, it is marked as an edge; the result is a binary image where edge pixels are white and non-edge pixels are black
    }
  }
  edges.updatePixels();
  return edges;
}

// Function to trace a path along edges
function tracePath(edges, startX, startY, visited) {
  let path = [];
  let x = startX;
  let y = startY;
  let direction = 0; // 0: right, 1: down, 2: left, 3: up

  while (true) {
    path.push({x, y});
    visited[y * edges.width + x] = true;

    let found = false;
    for (let i = 0; i < 4; i++) { //It continues tracing until it can't find an unvisited edge pixel 
      let newDirection = (direction + i) % 4;
      let [dx, dy] = [[1, 0], [0, 1], [-1, 0], [0, -1]][newDirection];
      let newX = x + dx;
      let newY = y + dy;

      if (newX >= 0 && newX < edges.width && newY >= 0 && newY < edges.height &&
          !visited[newY * edges.width + newX] && brightness(edges.get(newX, newY)) > 0) {
        x = newX;
        y = newY;
        direction = newDirection;
        found = true;
        break;
      }
    }

    if (!found || path.length > 500) break; // Stop if no unvisited neighbors or path is too long
  }

  return path;
}

// Function to simplify a path using the Ramer-Douglas-Peucker algorithm.
// The key idea behind this algorithm is that it preserves the most important points of the path
// (those that deviate the most from a straight line) while removing points that don't
// contribute significantly to the overall shape.
function simplifyPath(path, tolerance) {
  if (path.length < 3) return path; //If the path has fewer than 3 points, it can't be simplified further, so we return it as is.

  function pointLineDistance(point, lineStart, lineEnd) { //This function calculates the perpendicular distance from a point to a line segment. It's used to determine how far a point is from the line formed by the start and end points of the current path segment.
    let dx = lineEnd.x - lineStart.x;
    let dy = lineEnd.y - lineStart.y;
    let u = ((point.x - lineStart.x) * dx + (point.y - lineStart.y) * dy) / (dx * dx + dy * dy);
    u = constrain(u, 0, 1);
    let x = lineStart.x + u * dx;
    let y = lineStart.y + u * dy;
    return dist(point.x, point.y, x, y);
  }

  //This loop iterates through all points (except the first and last) to find the point that's farthest from the line formed by the first and last points of the path.
  let maxDistance = 0;
  let index = 0; 
  for (let i = 1; i < path.length - 1; i++) {
    let distance = pointLineDistance(path[i], path[0], path[path.length - 1]);
    if (distance > maxDistance) {
      index = i;
      maxDistance = distance;
    }
  }

  if (maxDistance > tolerance) { //split and recursively simplify each
    let leftPath = simplifyPath(path.slice(0, index + 1), tolerance);
    let rightPath = simplifyPath(path.slice(index), tolerance);
    return leftPath.slice(0, -1).concat(rightPath);
  } else {
    return [path[0], path[path.length - 1]];
  }
}

Challenges

The main challenges encountered during this project were:

  1. Implementing secure API communication: API security constraints led to the development of a separate backend, which added complexity to the project architecture.
  2. Managing asynchronous operations in the image generation process: The AI image generation process is not instantaneous, which required implementing a waiting mechanism in the backend (Promise-based). Here’s how it works:
    • When the server receives a request to generate an image, it initiates the process with the StarryAI API.
    • The API responds with a creation ID, but the image isn’t ready immediately.
    • The server then enters a polling loop, repeatedly checking the status of the image generation process (a minimal sketch of this loop appears after this list).

    • This loop continues until the image is ready or an error occurs.
    • Once the image is ready, it’s downloaded and saved on the server.
    • Finally, the image path is sent back to the frontend.
    • This process ensures that the frontend doesn’t hang while waiting for the image, but it also means managing potential timeout issues and providing appropriate feedback to the user.
  3. Integrating the AI image generation with the Van Gogh effect seamlessly: Ensuring that the generated image could be smoothly processed by the Van Gogh effect algorithm required careful handling of image data.
  4. Ensuring a smooth user experience: Managing the state of the application across image generation and styling, and providing appropriate feedback to the user during potentially long wait times, was crucial for a good user experience.
  5. Developing an edge detection algorithm for pen plotting:
    • Adjusting the threshold value for edge detection was important, as it affects the level of detail captured in the resulting SVG file. Setting the threshold too low would result in an overly complex SVG, while setting it too high would oversimplify the image.
    • Ensuring that the custom edge detection algorithm produced satisfactory results across different input images was also a consideration, as images vary in contrast and detail. Initially, I had problems with edge pixels but later excluded them.
    • Integrating the edge detection algorithm seamlessly into the existing image processing pipeline and ensuring compatibility with the path simplification step (Ramer-Douglas-Peucker algorithm) was another challenge that required careful design and testing.
  6. Choosing an image generation model: I experimented with different models provided by StarryAI, from default to fantasy to anime. Eventually, I settled on the detailed Illustration model, which is perfect for SVG extraction as it provides more distinct lines thanks to its cartoonish appearance, and it also works well for the Van Gogh effect due to its bold colors and more simplified nature compared to more realistic images.
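The polling loop described in item 2 above might look roughly like the sketch below on the server side. The checkCreationStatus helper, the response fields, and the timing values are assumptions standing in for the actual StarryAI client calls:

// Hypothetical sketch of the server-side polling loop. checkCreationStatus() is an
// assumed helper wrapping the StarryAI status request; its response shape is illustrative.
async function waitForImage(creationId, maxAttempts = 30, delayMs = 5000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const creation = await checkCreationStatus(creationId);

    if (creation.status === 'completed') return creation; // ready to download and save
    if (creation.status === 'failed') throw new Error('Image generation failed');

    // Not ready yet: wait before polling again so the frontend is never blocked
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error('Timed out waiting for image generation');
}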

Reflection

This project provided valuable experience in several areas:

  1. Working with external APIs and handling asynchronous operations
  2. Working with full-stack approach with Node.js and p5.js
  3. Integrating different technologies (AI image generation and artistic styling) into a cohesive application
  4. Implementing algorithms for edge detection.

I am quite happy with the result, and the plotted image also works well stylistically. Although it is different from the initial painter effect, it provides another physical dimension to the project, which is just as important.

Future Improvements:

  1. Implementing additional artistic styles
  2. Refining the user interface for a better user experience
  3. Combining art styles with edge detection for more customizable SVG extraction.
  4. Hosting the site online to keep the project running without my intervention. This would also require some kind of subscription for the image generation API, because the current one is capped at around 100 requests for the current model.

Week 5 – Painterize by Dachi

Sketch: (won’t work without my server for now, so I am just pasting it here for code and demonstration purposes)

Image based on prompt “Designing interactive media project about art generation” with Van Gogh effect.

Concept Inspiration

As a technology enthusiast with a keen interest in machine learning, I’ve been fascinated by the recent advancements in generative AI, particularly in the realm of image generation. While I don’t have the expertise to create a generative AI model from scratch, I saw an exciting opportunity to explore the possibilities of generative art by incorporating existing AI image generation tools.

My goal was to create a smooth, integrated experience that combines the power of AI-generated images with classic artistic styles. The idea of applying different painter themes to AI-generated images came to mind as a way to blend cutting-edge technology with traditional art forms. For my initial experiment, I chose to focus on the distinctive style of Vincent van Gogh, known for his bold colors and expressive brushstrokes.

Development Process

The development process consisted of two main components:

  1. Backend Development: A Node.js server using Express was created to handle communication with the StarryAI API. This server receives requests from the frontend, interacts with the API to generate images, and serves these images back to the client.
  2. Frontend Development: The user interface and image processing were implemented using p5.js. This includes the input form for text prompts, display of generated images, and application of the Van Gogh effect.

Initially, I attempted to implement everything in p5.js, but API security constraints necessitated the creation of a separate backend.

Implementation Details

The application works as follows:

  1. The user enters a text prompt in the web interface.
  2. The frontend sends a request to the Node.js server.
  3. The server communicates with the StarryAI API to generate an image.
  4. The generated image is saved on the server and its path is sent back to the frontend.
  5. The frontend displays the generated image.
  6. The user can apply the Van Gogh effect, which uses a custom algorithm to create a painterly style.

A key component of the project is the Van Gogh effect algorithm:

 

// Function to apply the Van Gogh effect to the generated image
function applyVanGoghEffect() {
  if (!generatedImage) {
    statusText.html('Please generate an image first');
    return;
  }

  vanGoghEffect = true;
  statusText.html('Applying Van Gogh effect...');

  // Prepare the image for processing
  generatedImage.loadPixels();

  // Create Poisson disc sampler and line objects
  poisson = new PoissonDiscSampler();
  lines = new LineMom(poisson.ordered);

  // Set up canvas for drawing the effect
  background(model.backgroundbrightness);
  strokeWeight(model.linewidth);
  noFill();

  redraw();  // Force a redraw to apply the effect

  statusText.html('Van Gogh effect applied');
}

This function applies a custom effect that mimics Van Gogh’s style using Poisson disc sampling and a swirling line algorithm.

Challenges

The main challenges encountered during this project were:

  1. Implementing secure API communication: API security constraints led to the development of a separate backend, which added complexity to the project architecture.
  2. Managing asynchronous operations in the image generation process: The AI image generation process is not instantaneous, which required implementing a waiting mechanism in the backend. Here’s how it works:
    • When the server receives a request to generate an image, it initiates the process with the StarryAI API.
    • The API responds with a creation ID, but the image isn’t ready immediately.
    • The server then enters a polling loop, repeatedly checking the status of the image generation process:

    • This loop continues until the image is ready or an error occurs.
    • Once the image is ready, it’s downloaded and saved on the server.
    • Finally, the image path is sent back to the frontend.
    • This process ensures that the frontend doesn’t hang while waiting for the image, but it also means managing potential timeout issues and providing appropriate feedback to the user.

3. Integrating the AI image generation with the Van Gogh effect seamlessly: Ensuring that the generated image could be smoothly processed by the Van Gogh effect algorithm required careful handling of image data.

4. Ensuring a smooth user experience: Managing the state of the application across image generation and styling, and providing appropriate feedback to the user during potentially long wait times, was crucial for a good user experience.

Reflection

This project provided valuable experience in several areas:

  1. Working with external APIs and handling asynchronous operations
  2. Developing full-stack applications with Node.js and p5.js
  3. Integrating different technologies (AI image generation and artistic styling) into a cohesive application

While the result is a functional prototype rather than a polished product, it successfully demonstrates the concept of combining AI-generated images with artistic post-processing.

Future Improvements

Potential areas for future development include:

  1. Implementing additional artistic styles
  2. Refining the user interface for a better user experience
  3. Adding functionality to save generated artworks
  4. Optimizing the integration between image generation and styling for better performance
  5. Allowing user customization of effect parameters

These improvements could enhance the application’s functionality and user engagement.

References:

Coding Train

p5.js Web Editor | curl flowfield lineobjects image (p5js.org)

Week 4 – Fourier Coloring by Dachi

Sketch: 

Example drawing

Concept Inspiration

My project was created with a focus on the intersection of art and mathematics. I was particularly intrigued by the concept of Fourier transforms and their ability to break down complex patterns into simpler components. After seeing various implementations of Fourier drawings online, I was inspired to create my own version with a unique twist. I wanted to not only recreate drawings using Fourier series but also add an interactive coloring feature that would make the final result more visually appealing and engaging for users.

Process of Development

I began by following the Coding Train tutorial on Fourier transforms to implement the basic drawing and reconstruction functionality. This gave me a solid foundation to build upon. Once I had the core Fourier drawing working, I shifted my focus to developing the coloring aspect, which became my main contribution to the project.

The development process was iterative. I started with a simple algorithm to detect different sections of the drawing and then refined it over time. I experimented with various thresholds for determining when one section ends and another begins and worked on methods to close gaps between sections that should be connected. Even now, it is far from perfect, but it does what I initially intended.

How It Works

The application works in several stages:

  1. User Input: Users draw on a canvas using their mouse or touchscreen.
  2. Fourier Transform: The drawing is converted into a series of complex numbers and then transformed into the frequency domain using the Discrete Fourier Transform (DFT) algorithm. This part is largely based on the Coding Train tutorial.
  3. Drawing Reconstruction: The Fourier coefficients are used to recreate the drawing using a series of rotating circles (epicycles). The sum of all these rotations traces out a path that approximates the original drawing.
  4. Section Detection: My algorithm analyzes the original drawing to identify distinct sections based on the user’s drawing motion.
  5. Coloring: Each detected section is assigned a random color.
  6. Visualization: The reconstructed drawing is displayed, with each section filled in with its assigned color.
  7. Restart: The user can start the process again to create a unique coloring look.
  8. Save: The user can save the image to their local machine.
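For context on step 2, a minimal sketch of the DFT, closely following the Coding Train formulation I based it on, is shown below. Each drawing point is treated as a complex number, and the resulting coefficients drive the epicycles in step 3:

// Minimal DFT sketch (variable names are illustrative). Input points are
// complex numbers {re, im}; output coefficients carry frequency, amplitude, and phase.
function dft(points) {
  const N = points.length;
  const coefficients = [];
  for (let k = 0; k < N; k++) {
    let re = 0;
    let im = 0;
    for (let n = 0; n < N; n++) {
      const phi = (TWO_PI * k * n) / N;
      // Multiply the sample by e^(-i*phi) and accumulate
      re += points[n].re * cos(phi) + points[n].im * sin(phi);
      im += -points[n].re * sin(phi) + points[n].im * cos(phi);
    }
    re /= N;
    im /= N;
    coefficients.push({
      re, im,
      freq: k,
      amp: sqrt(re * re + im * im),
      phase: atan2(im, re)
    });
  }
  return coefficients;
}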

Code I’m Proud Of

While the Fourier transform implementation was based on the tutorial, I’m particularly proud of the section detection and coloring algorithm I developed:

 

function detectSections(points) {
  let sections = [];
  let currentSection = [];
  let lastPoint = null;
  const distanceThreshold = 20;

  // Iterate over each point in the drawing
  for (let point of points) {
    if (lastPoint && dist(point.x, point.y, lastPoint.x, lastPoint.y) > distanceThreshold) {
      // If the distance between the current point and the last point exceeds the threshold,
      // consider it a new section and push the current section to the sections array
      if (currentSection.length > 0) {
        sections.push(currentSection);
        currentSection = [];
      }
    }
    // Add the current point to the current section
    currentSection.push(point);
    lastPoint = point;
  }

  // Push the last section to the sections array
  if (currentSection.length > 0) {
    sections.push(currentSection);
  }

  // Close gaps between sections by merging nearby sections
  return closeGapsBetweenSections(sections, distanceThreshold * 2);
}

function closeGapsBetweenSections(sections, maxGapSize) {
  let mergedSections = [];
  if (sections.length === 0) return mergedSections; // guard against an empty drawing
  let currentMergedSection = sections[0];

  // Iterate over each section starting from the second section
  for (let i = 1; i < sections.length; i++) {
    let lastPoint = currentMergedSection[currentMergedSection.length - 1];
    let firstPointNextSection = sections[i][0];

    if (dist(lastPoint.x, lastPoint.y, firstPointNextSection.x, firstPointNextSection.y) <= maxGapSize) {
      // If the distance between the last point of the current merged section and the first point of the next section
      // is within the maxGapSize, merge the next section into the current merged section
      currentMergedSection = currentMergedSection.concat(sections[i]);
    } else {
      // If the distance exceeds the maxGapSize, push the current merged section to the mergedSections array
      // and start a new merged section with the next section
      mergedSections.push(currentMergedSection);
      currentMergedSection = sections[i];
    }
  }

  // Push the last merged section to the mergedSections array
  mergedSections.push(currentMergedSection);
  return mergedSections;
}

This algorithm detects separate sections in the drawing based on the distance between points, allowing for intuitive color separation. It also includes a method to close gaps between sections that are likely part of the same continuous line, which helps create more coherent colored areas.

Challenges

The main challenge I faced was implementing the coloring feature effectively. Determining where one section of the drawing ends and another begins was not straightforward, especially for complex drawings with overlapping lines or varying drawing speeds. I had to experiment with different distance thresholds to strike a balance between oversegmentation (too many small colored sections) and undersegmentation (not enough color variation).

Another challenge was ensuring that the coloring didn’t interfere with the Fourier reconstruction process. I needed to make sure that the section detection and coloring were applied to the original drawing data in a way that could be mapped onto the reconstructed Fourier drawing.

Reflection

This project was a valuable learning experience. It helped me understand how to apply mathematical concepts like Fourier transforms to create something visually interesting and interactive. While the core Fourier transform implementation was based on the tutorial, developing the coloring feature pushed me to think creatively about how to analyze and segment a drawing. Nevertheless, following the tutorial also helped me comprehend the mathematical side of the concept.

I gained insights into image processing techniques, particularly in terms of detecting continuity and breaks in line drawings. The project also improved my skills in working with canvas graphics and animation in JavaScript.

Moreover, this project taught me the importance of user experience in mathematical visualizations. Adding the coloring feature made the Fourier drawing process more engaging and accessible to users who might not be as interested in the underlying mathematics.

 

Future Improvements

Looking ahead, there are several ways I could enhance this project:

  1. User-defined Colors: Allow users to choose their own colors for sections instead of using random colors.
  2. Improved Section Detection: Implement more sophisticated algorithms for detecting drawing sections, possibly using machine learning techniques to better understand the user’s intent.
  3. Smooth Color Transitions: Add an option for smooth color gradients between sections instead of solid colors.
  4. Interactivity: Allow users to manipulate the colored sections after the drawing is complete, perhaps by dragging section boundaries or merging/splitting sections.
  5. Improved Interface: Make the interface look more modern and polished.

References

  1. The Coding Train’s Fourier Transform tutorial by Daniel Shiffman
  2. p5.js documentation and examples
  3. Various online sources

Week 3 – “Be Not Afraid” by Dachi

Sketch

Concept Inspiration

My project, titled “Be Not Afraid,” was inspired by the concept of biblically accurate angels, specifically the Thrones (also known as Ophanim). In biblical and extrabiblical texts, Thrones are described as extraordinary celestial beings. The prophet Ezekiel describes them in Ezekiel 1:15-21 as wheel-like creatures: “Their appearance and structure was as it were a wheel in the middle of a wheel.” They are often depicted as fiery wheels covered with many eyes.

I wanted to recreate this awe-inspiring and somewhat unsettling image using digital art. The multiple rotating rings adorned with eyes in my project directly represent the wheel-within-wheel nature of Thrones, while the overall structure aims to capture their celestial and otherworldly essence. By creating this digital interpretation, I hoped to evoke the same sense of wonder and unease that the biblical descriptions might have inspired in ancient times.

Process of Development

I started by conceptualizing the basic structure – a series of rotating rings with eyes to represent the Thrones’ form. Initially, I implemented sliders for parameter adjustment, thinking it would be interesting to allow for interactive manipulation. However, as I developed the project, I realized I preferred a specific aesthetic that more closely aligned with the biblical descriptions and decided to remove the sliders and keep fixed values.

A key requirement of the project was to use invisible attractors and visible movers to create a pattern or design. This led me to implement a system of attractors that influence the movement of the entire Throne structure. This is mainly expressed as rotation around the center combined with more turbulent up-and-down movement. I tuned the values to make the motion smooth and graceful, befitting a divine being.

As I progressed, I kept adding new elements to enhance the overall impact and atmosphere. The central eye came later in the process, as did the cloud background and sound elements. The project was all about refinement after refinement. Even at this stage, I am sure there are plenty of things to improve, since much of it comes down to visual representation, which can be quite subjective.

How It Works

My project uses p5.js to create a 3D canvas with several interacting elements:

  1. Rings: I created four torus shapes with different orientations and sizes to form the base structure, representing the “wheel within a wheel” form of Thrones. I experimented with different numbers of rings but settled on four, as it is not too overcrowded while still delivering the intended effect.
  2. Eyes: I positioned multiple eyes of varying sizes on these rings, reflecting the “full of eyes” description associated with Thrones.
  3. Central Eye: I added a larger eye in the center that responds to mouse movement when the cursor is over the canvas, symbolizing the all-seeing nature of these beings (a minimal sketch of this mapping follows the list).
  4. Attractors and Movement: I implemented a system of invisible attractors that influence the movement of the entire structure. This includes a central attractor that creates a circular motion and vertical attractors that add turbulence and complexity to the movement. Together, these attractors create the organic, flowing motion of the Throne structure, evoking a sense of constant, ethereal rotation as described in biblical texts.
  5. Background: I used a cloud texture to provide a heavenly backdrop.
  6. Audio: I incorporated background music and a rotation sound whose volume correlates with the ring speeds to enhance the atmosphere.
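
Regarding the central eye’s mouse tracking (item 3 above), the idea boils down to mapping the cursor position to two small rotations so the eye appears to look toward the mouse. The sketch below is a minimal illustration with assumed ranges and variable names (centralEyeSize is a placeholder; eyeTexture is the same texture used for the ring eyes), not my exact code:

// Minimal sketch of the central eye's mouse tracking (assumed ranges and names)
let lookX = map(mouseY, 0, height, -QUARTER_PI, QUARTER_PI);
let lookY = map(mouseX, 0, width, -QUARTER_PI, QUARTER_PI);

push();
rotateX(lookX);
rotateY(lookY);
texture(eyeTexture);
sphere(centralEyeSize); // centralEyeSize is an assumed variable name
pop();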

Code I’m Proud Of

There are several pieces of code in this project that I’m particularly proud of, as they work together to create the complex, ethereal movement of the Thrones:

  1. The attractor system:
// Calculate attractor position
let attractorX = cos(attractorAngle) * attractorRadius;
let attractorY = sin(attractorAngle) * attractorRadius;

// Calculate vertical attractor position with increased turbulence
let verticalAttractorY = 
  sin(verticalAttractorAngle1) * verticalAttractorAmplitude1 +
  sin(verticalAttractorAngle2) * verticalAttractorAmplitude2 +
  sin(verticalAttractorAngle3) * verticalAttractorAmplitude3;

// Move the entire scene based on the attractor position
translate(attractorX, attractorY + verticalAttractorY, 0);

This code creates complex, organic motion by combining a circular attractor with vertical attractors. It achieves a nuanced, lifelike movement that adds significant depth to the visual experience, simulating the constant, ethereal rotation associated with the biblical descriptions of Thrones.
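
For context, the attractor angles used above are advanced a little on every frame inside draw(). A minimal sketch of that update, with illustrative speed values rather than my exact ones:

// Advance the attractor angles each frame (speed values are illustrative)
attractorAngle += 0.01;           // slow circular drift around the center
verticalAttractorAngle1 += 0.02;  // three out-of-phase waves at different rates
verticalAttractorAngle2 += 0.035; // combine into the turbulent vertical motion
verticalAttractorAngle3 += 0.05;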

2. The ring and eye movement, including fading effects:

// Update outer ring spin speed
outerRingTimer++;
if (outerRingTimer >= pauseDuration && !isOuterRingAccelerating) {
  isOuterRingAccelerating = true;
  outerRingTimer = 0;
  fadeOutStartTime = 0;
} else if (outerRingTimer >= accelerationDuration && isOuterRingAccelerating) {
  isOuterRingAccelerating = false;
  outerRingTimer = 0;
  fadeOutStartTime = frameCount;
}

if (isOuterRingAccelerating) {
  outerRingSpeed += ringAcceleration;
  rotationSoundVolume = min(rotationSoundVolume + 0.01, 1);
} else {
  outerRingSpeed = max(outerRingSpeed - ringAcceleration / 3, 0.01);
  
  if (frameCount - fadeOutStartTime < decelerationDuration - fadeOutDuration) {
    rotationSoundVolume = 1;
  } else {
    let fadeOutProgress = (frameCount - (fadeOutStartTime + decelerationDuration - fadeOutDuration)) / fadeOutDuration;
    rotationSoundVolume = max(1 - fadeOutProgress, 0);
  }
}

rotationSound.setVolume(rotationSoundVolume);

// Update ring spins
rings[1].spin += outerRingSpeed;
rings[3].spin += innerRingSpeed;

// Draw and update eyes
for (let eye of eyes) {
  let ring = rings[eye.ring];

  // Position of this eye on its ring: ring radius plus an offset across the tube
  let r = ring.radius + ring.tubeRadius * eye.offset;
  let x = r * cos(eye.angle);
  let y = r * sin(eye.angle);
  
  push();
  // Apply the ring's base orientation plus a gentle wobble driven by the global angle
  rotateX(ring.rotation.x + sin(angle + ring.phase) * 0.1);
  rotateY(ring.rotation.y + cos(angle * 1.3 + ring.phase) * 0.1);
  rotateZ(ring.rotation.z + sin(angle * 0.7 + ring.phase) * 0.1);
  // Only the spinning rings carry their eyes around with them
  if (eye.ring === 1 || eye.ring === 3) {
    rotateZ(ring.spin);
  }
  translate(x, y, 0);
  
  // Rotate the eye sphere along the direction from its position toward the scene center
  let eyePos = createVector(x, y, 0);
  let screenCenter = createVector(0, 0, -1);
  let directionVector = p5.Vector.sub(screenCenter, eyePos).normalize();
  
  let rotationAxis = createVector(-directionVector.y, directionVector.x, 0).normalize();
  let rotationAngle = acos(directionVector.z);
  
  rotate(rotationAngle, rotationAxis);
  
  // Inner-facing eyes are flipped so their texture faces the opposite way
  if (eye.isInner) {
    rotateY(PI);
  }
  
  texture(eyeTexture);
  sphere(eye.size);
  pop();
}


This code manages the complex movement of the rings and eyes, including acceleration, deceleration, and fading effects. It creates a mesmerizing visual that captures the otherworldly nature of the Thrones. The fading of the rotation sound adds an extra layer of immersion.

I’m particularly proud of how these pieces of code work together to create a cohesive, organic motion that feels both alien and somehow alive, which is exactly what I was aiming for in this representation of biblically accurate angels.

 

Challenges

The biggest challenge I faced was definitely the movement and implementing the attractor system effectively. Creating smooth, organic motion in a 3D space while managing multiple rotating elements was incredibly complex. I struggled with:

  1. Coordinating the rotation of rings with the positioning and rotation of eyes.
  2. Implementing the acceleration and deceleration of ring rotations smoothly.
  3. Balancing the various movement elements (ring rotation, attractor motion, eye tracking) to create a cohesive, not chaotic, visual effect.

Another significant challenge was accurately representing the complex, wheel-within-wheel structure of Thrones. Balancing the need for a faithful representation with artistic interpretation and technical limitations required careful consideration and multiple iterations.

Reflection

Looking back, I’m satisfied with how my “Be Not Afraid” project turned out. I feel I’ve successfully created an interesting and slightly unsettling visual experience that captures the essence of Thrones as described in biblical texts. The layered motion effects created by the attractor system effectively evoke the constant rotation associated with these beings. I’m particularly pleased with how the central eye and the eyes on the rings work together to create a sense of an all-seeing, celestial entity.

Future Improvements

While I’m happy with the current state of my project, there are several improvements I’d like to make in the future:

  1. Blinking: I want to implement a sophisticated blinking mechanism for the eyes, possibly with randomized patterns or reactive blinking based on scene events. This could add to the lifelike quality of the Throne.
  2. Face Tracking: It would be exciting to replace mouse tracking with face tracking using a webcam and computer vision libraries. This would allow the central eye to follow the viewer’s face, making the experience even more immersive and unsettling.
  3. Increased Realism: I’d like to further refine the eye textures and shading to create more photorealistic eyes, potentially using advanced shaders. This could enhance the “full of eyes” aspect described in biblical texts.
  4. Interactive Audio: Developing a more complex audio system that reacts to the movement and states of various elements in the scene is definitely on my to-do list.
  5. Performance Optimization: I want to explore ways to optimize rendering and calculation to allow for even more complex scenes or smoother performance on lower-end devices.
  6. Enhanced Wheel Structure: While the current ring structure represents the wheel-like form of Thrones, I’d like to further refine this to more clearly show the “wheel within a wheel” aspect. This could involve creating interlocking ring structures or implementing a more complex geometry system.
  7. Fiery Effects: Many descriptions of Thrones mention their fiery nature. Adding particle effects or shader-based fire simulations could enhance this aspect of their appearance.

References

  1. Biblical descriptions of Thrones/Ophanim, particularly from the Book of Ezekiel
  2. Provided Coding Train video about attractors
  3. Various art depicting Thrones
  4. General internet
  5. Royalty-free music
  6. Eye texture PNG (Eye (Texture) (filterforge.com))
  7. https://www.geeksforgeeks.org/materials-in-webgl-and-p5-js/

Update: Added eye movement, removed torus shape, increased eye frequency

Update2: removed outer frame, increased distance to Ophanim, fog effect, 2x zoom effect, modified picture (Photoshop Generative AI). Added more extensive comments. Eye twitch movement (random).

Week 2 – Algae Simulation by Dachi

Sketch:

 

Concept: My project is an interactive digital artwork that simulates the movement and appearance of algae in a swamp environment. Inspired by what I have seen in my home country many times, it aims to capture the flow of algae in a river. I used different methodologies to create a dynamic, visually interesting scene that mimics the organic, flowing nature of algae. By incorporating various elements such as multiple algae clusters, water particles, and background rocks, I tried to recreate a cohesive, river-like ecosystem.

Inspiration: The inspiration for this project came from my trip to Dashbashi Mountain in Georgia. I saw algae flowing in a river near the waterfall, and it was very pretty, almost like something from a fantasy world. This happened very recently, so it was the first thing that came to mind when I was thinking about this project. This brief encounter with nature became the foundation for my work, as I tried to translate the organic movement of algae and water into an interactive digital format.

IMG_7042 – Short clip of moving Algae

Process of Development: I developed this project iteratively, adding various features and complexities over time:

At first I visualized the algae movement. I realized it had something to do with waves, and sinusoidal shapes were the first thing that came to mind. Unfortunately, a few hours into the implementation I realized the assignment specifically asked for acceleration. Soon after implementing acceleration, I realized this movement alone limited me in creating the algae flow, so I had to go back and forth multiple times to get a result that somewhat resembled the movement while still using acceleration. I did not find any definitive tutorials for this kind of movement, so what I ended up with is more of a strand-like simulation that I slowly built up, line by line, while also looking at other works, such as hair movement, for inspiration (they are listed in the references).

These screenshots are from earlier development of the simulations:

As you can see, by adding various parameters to each strand as well as to the overall cluster, we are able to achieve the wavy, pulsating pattern that algae have. I also added a particle system with noise-based algorithms for water movement (the inspiration for this is listed in the references). To enhance the environment, I included rock formations and a sky, and I integrated sliders and toggles for user interaction. Finally, I kept refining values until I achieved the desired performance and visuals. The simulation is fairly heavy to run, and you can expect significant FPS drops depending on the number of strands; the water simulation is the main culprit, despite the multiple rounds of scaling needed to get even that far.

How It Works:

Algae Simulation: I created multiple clusters of algae strands, each with unique properties. I animate each strand using sine waves and apply tapering effects and clustering forces for a more natural-looking movement. I also calculate acceleration and velocity for each strand to simulate fluid dynamics.
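
To illustrate the acceleration part, a rough sketch of the per-strand update looks like the following; the names and constants here are illustrative rather than my exact implementation:

// Rough sketch of the per-strand update: noise drives a small acceleration,
// which feeds velocity, which advances the phase used by the sine-based waving.
strand.acceleration = (noise(strand.seed, frameCount * 0.01) - 0.5) * accelerationStrength;
strand.velocity += strand.acceleration;
strand.velocity *= 0.95;         // damping keeps the motion from running away
strand.phase += strand.velocity;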

Water Effects: I used a particle system to create the illusion of flowing water, with Perlin noise for natural-looking movement. I applied color gradients to enhance the swamp-like appearance. There is also background audio of waterfall that helps the immersion.
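
As a rough illustration of the approach (not my exact code), each water particle can drift along a direction read from Perlin noise, which is what gives the flow its smooth, organic look:

// Illustrative noise-driven water particle (assumed class, not the exact code)
class WaterParticle {
  constructor() {
    this.pos = createVector(random(width), random(height));
  }
  update() {
    // Sample a flow direction from Perlin noise that changes slowly over space and time
    let angle = noise(this.pos.x * 0.005, this.pos.y * 0.005, frameCount * 0.01) * TWO_PI * 2;
    this.pos.x = (this.pos.x + cos(angle) + width) % width;   // wrap around the canvas
    this.pos.y = (this.pos.y + sin(angle) + height) % height;
  }
  show() {
    stroke(120, 160, 150, 80); // muted, semi-transparent swampy tone
    point(this.pos.x, this.pos.y);
  }
}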

Environmental Elements: I drew rocks using noise-based shapes with gradients and added a toggleable sky for depth.

Interactivity: I included multiple sliders that allow users to adjust various parameters in real-time.
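
The sliders themselves are created once in setup() with p5’s createSlider(). The ranges and positions below are assumptions for illustration, but the two tapering sliders are the ones referenced in the strand function shown further down:

// Illustrative slider setup (ranges and positions are assumed values)
taperingPointSlider = createSlider(0, 1, 0.6, 0.01);     // where along a strand tapering starts
taperingStrengthSlider = createSlider(0.5, 3, 1.5, 0.1); // how aggressively strands narrow
taperingPointSlider.position(10, height + 10);
taperingStrengthSlider.position(10, height + 35);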

If you want to know more about how everything works in depth and how the pieces relate, it is best to check out my code, as it is commented thoroughly.

Code: 

One piece of code I’m particularly proud of is the function that generates and animates individual algae strands:

function algae(strandPhase, strandLength, strandAmplitude, clusterEndX, clusterPulsePhase) {
  beginShape();
  
  // Where along the strand the tapering begins and how strongly it narrows (slider-driven)
  let taperingPoint = taperingPointSlider.value() * strandLength;
  let taperingStrength = taperingStrengthSlider.value();
  
  for (let i = 0; i <= strandLength; i += 10) {
    let x = i;
    let y = 0;
    
    let progress = i / strandLength;
    
    // Taper the strand toward its tip
    let taperingFactor = 1;
    if (i > taperingPoint) {
      taperingFactor = pow(map(i, taperingPoint, strandLength, 1, 0), taperingStrength);
    }
    
    // The wave amplitude shrinks along the strand and with tapering
    let currentAmplitude = strandAmplitude * (1 - progress * 0.8) * taperingFactor;
    
    // Sine-based side-to-side movement, strongest around the middle of the strand
    let movementFactor = sin(map(i, 0, strandLength, 0, PI));
    let movement = sin(strandPhase + i * 0.02) * currentAmplitude * movementFactor;
    
    let angle = map(i, 0, strandLength, 0, PI * 2);
    x += cos(angle) * movement;
    y += sin(angle) * movement;
    
    // Slow pulsating curvature shared across the cluster
    let curvature = sin(i * 0.05 + phase + clusterPulsePhase) * 5 * (1 - progress * 0.8) * taperingFactor;
    y += curvature;
    
    // Pull the strand toward the cluster's end point, more strongly near the tip
    let clusteringForce = map(i, 0, strandLength, 0, 1);
    let increasedClusteringFactor = clusteringFactor + (progress * 0.5);
    x += (clusterEndX - x) * clusteringForce * increasedClusteringFactor;
    
    vertex(x, y);
  }
  endShape();
}

This code incorporates acceleration and velocity calculations to simulate realistic fluid dynamics, creating more natural and unpredictable movement. The function also creates a tapering effect along the strand’s length, generates wave-like movement using sine functions, and applies a clustering force to mimic how algae clumps in water. I’m especially pleased with how it combines mathematical concepts like acceleration, sine waves, and mapping with artistic principles to create a visually appealing and believable representation of algae in motion. The integration of user controls allows for real-time adjustment of parameters like acceleration strength, making the simulation highly interactive and customizable.
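
For completeness, a hypothetical call site for this function might look like the sketch below; the cluster object and its fields are assumed names meant only to show how the parameters fit together:

// Hypothetical per-cluster drawing loop (cluster fields are assumed names)
push();
translate(cluster.x, cluster.y);
stroke(40, 120, 60);
noFill();
for (let strand of cluster.strands) {
  algae(
    strand.phase + frameCount * 0.02, // per-strand phase advancing over time
    strand.length,
    strand.amplitude,
    cluster.endX,
    cluster.pulsePhase
  );
}
pop();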

Challenges

Balancing visual quality with smooth performance was tricky, especially when animating multiple elements at once. Getting the algae to move naturally in response to water currents took a lot of tweaking. The water particle system was also tough to optimize – I had to find ways to make it look good without slowing everything down. Another challenge was making the user controls useful but not overwhelming.

Reflection:

This project was a good learning experience for me. I enjoyed turning what I saw at Dashbashi Mountain into a digital artwork. It was challenging at times, especially when I had to figure out how to make the algae move realistically. I’m happy with how I combined math and art to create something that looks pretty close to real algae movement. The project helped me improve my coding skills, and while it’s not perfect, I’m pleased with how the finished product looks.

Future Improvements:

  1. Speed it up: The simulation can be slow, especially with lots of algae strands. I’d try to make it run smoother.
  2. Better water: The water effect is okay, but it could look more realistic.
  3. Add more stuff: Maybe include some fish or bugs to make it feel more like a real ecosystem.

References:

  1. p5.js Web Editor | Blade seaweed copy copy (p5js.org)
  2. p5.js Web Editor | sine wave movement (hair practice) (p5js.org)
  3. p5.js Web Editor | Water Effect (p5js.org)
  4. YouTube
  5. Internet