Uzumaki (Interactive Spiral Art) by Dachi – Final

User Interaction:

During the IM show, the slow performance of the computers meant that the main features of the project, especially the spiral transition, did not come across fully, which left some users confused. Nevertheless, people still had fun, and I tried to explain the issue and guide them through the process.

Sketch: p5.js Web Editor | Uzumaki v3

Inspiration

This project draws inspiration from Junji Ito’s “Uzumaki,” a manga known for its distinctive use of spiral imagery and psychological horror elements. This interactive artwork translates the manga’s distinct visual elements into a digital medium, allowing users to create their own spiral patterns through hand gestures. The project maintains a monochromatic color scheme with digital noise to reflect the manga’s original black and white aesthetic, creating an atmosphere that captures the hypnotic quality of Ito’s work. The decision to focus on hand gestures as the primary interaction method was influenced by the manga’s themes of body horror and transformation, where the human form itself becomes part of the spiral pattern. The integration of accelerating warping effects and atmospheric audio further reinforces the manga’s themes of inevitable spiral corruption.

Methodology

Using ml5.js’s handPose model, the system tracks hand movements through a webcam, focusing on the pinch gesture between thumb and index finger to control spiral creation. The pinching motion itself mimics a spiral form, creating a thematic connection between the gesture and its effect. A custom SpiralBrush class handles the generation and animation of spirals, while also implementing a sophisticated two-phase warping effect that distorts the surrounding space. The initial warping effect adds depth to the interaction, making each spiral feel dynamic and impactful on the canvas. After 200 frames, a second acceleration phase kicks in, causing the spiral’s warping to intensify dramatically – a direct reference to the manga’s portrayal of spirals as entities that grow increasingly powerful and uncontrollable over time.
The technical implementation uses p5.js for graphics rendering and includes a pixel manipulation system for the warping effects. The graphics are processed using a double-buffer system to ensure smooth animation, with real-time grayscale filtering applied to maintain the monochromatic theme. The ambient background music and UI elements, including a fullscreen button and horror-themed instructions modal, work together to create an immersive experience that captures the unsettling atmosphere of the source material. The instructions themselves are presented in a way that suggests the spiral creation process is a form of dark ritual, enhancing the project’s horror elements.
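
To illustrate the gesture tracking described above, the pinch check essentially measures the distance between the thumb-tip and index-tip keypoints that handPose reports. The following is a minimal, self-contained sketch assuming the ml5.js v1 handPose API and its keypoint names ("thumb_tip", "index_finger_tip"); the threshold and setup here are illustrative, not the project's actual values:

let handPose;
let video;
let hands = [];
const PINCH_THRESHOLD = 30; // illustrative pinch distance in pixels

function preload() {
  // assumes ml5.js v1, where ml5.handPose() loads the model
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // start continuous detection; results arrive in the callback
  handPose.detectStart(video, (results) => { hands = results; });
}

function draw() {
  image(video, 0, 0);
  for (const hand of hands) {
    const thumb = hand.keypoints.find((k) => k.name === "thumb_tip");
    const index = hand.keypoints.find((k) => k.name === "index_finger_tip");
    if (thumb && index && dist(thumb.x, thumb.y, index.x, index.y) < PINCH_THRESHOLD) {
      // the actual project would spawn or grow a SpiralBrush at the pinch point here
      circle((thumb.x + index.x) / 2, (thumb.y + index.y) / 2, 20);
    }
  }
}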

Code I am Proud Of

One of the most interesting pieces of code in this project is the warping effect implementation in the SpiralBrush class:
warpPixels(pg) {
  if (this.swirlAngle > 0) {
    pg.loadPixels();
    let d = pixelDensity();
    let originalPixels = pg.pixels.slice();

    // Calculate warping area
    let minX = max(0, int((this.origin.x - warpRadius) * d));
    let maxX = min(w, int((this.origin.x + warpRadius) * d));
    let minY = max(0, int((this.origin.y - warpRadius) * d));
    let maxY = min(h, int((this.origin.y + warpRadius) * d));

    // Process pixels within radius
    for (let y = minY; y < maxY; y++) {
      for (let x = minX; x < maxX; x++) {
        let distance = dist(x/d, y/d, this.origin.x, this.origin.y);
        if (distance < warpRadius) {
          // Calculate warping effect
          let warpFactor = pow(map(distance, 0, warpRadius, 1, 0), 2);
          let angle = atan2(y/d - this.origin.y, x/d - this.origin.x);
          let newAngle = angle + warpFactor * this.swirlAngle;
          
          // Apply displacement
          let sx = this.origin.x + distance * cos(newAngle);
          let sy = this.origin.y + distance * sin(newAngle);
          // Transfer pixel data
          [...]
        }
      }
    }
    pg.updatePixels();
  }
}

This code creates a mesmerizing warping effect by calculating pixel displacement based on distance from the spiral center and the current swirl angle. The use of polar coordinates allows for smooth circular distortion that enhances the spiral theme. The acceleration component, triggered after 200 frames, gradually increases the warping intensity, creating a sense of growing unease that mirrors the manga’s narrative progression.
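
For context, the elided pixel-transfer step ("[...]" above) typically samples the untouched copy at the displaced coordinates and writes the result back at the current pixel. Here is a minimal sketch under that assumption, reusing the variable names from the snippet; this is not the original code and assumes a flat RGBA pixel array of width w in device pixels:

// hypothetical nearest-neighbour transfer for the "[...]" step,
// meant to sit inside the inner loop of warpPixels() above
let srcX = int(sx * d);
let srcY = int(sy * d);
if (srcX >= 0 && srcX < w && srcY >= 0 && srcY < h) {
  let srcIndex = 4 * (srcY * w + srcX); // read from the untouched copy
  let dstIndex = 4 * (y * w + x);       // write into the buffer being updated
  pg.pixels[dstIndex]     = originalPixels[srcIndex];     // R
  pg.pixels[dstIndex + 1] = originalPixels[srcIndex + 1]; // G
  pg.pixels[dstIndex + 2] = originalPixels[srcIndex + 2]; // B
  pg.pixels[dstIndex + 3] = originalPixels[srcIndex + 3]; // A
}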

Challenges

Several technical challenges were encountered during development:
  1. Performance Optimization: Implementing pixel-level manipulation while maintaining smooth frame rates required careful optimization of the processing area and efficient buffer management. The addition of the acceleration phase complicated this further, as the increased warping intensity required more computational resources.
  2. Gesture Recognition: Achieving reliable pinch detection required fine-tuning the distance threshold and handling edge cases when hand tracking momentarily fails. The system needed to maintain consistent spiral generation while dealing with the inherent variability of webcam-based hand tracking.
  3. Visual Coherence: Balancing the intensity of the warping effect with the spiral growth to maintain visual appeal while avoiding overwhelming distortion proved particularly challenging when implementing the acceleration phase. The transition between normal and accelerated warping needed to feel natural while still creating the desired unsettling effect.

Conclusion

I had a lot of fun doing this project and I think it successfully achieves the goal of creating an interactive experience that captures the essence of Uzumaki’s spiral motif. The combination of hand tracking, real-time visual effects, and audio creates an engaging installation that allows users to explore the hypnotic nature of spirals through natural gestures. The monochromatic aesthetic, warping effects, and horror-themed UI elements effectively translate the manga’s visual style and atmosphere into an interactive digital medium. The addition of the acceleration phase adds a deeper layer of narrative connection to the source material, while the fullscreen capability and atmospheric audio create a more immersive experience.

Future Improvements

Looking ahead, there are several avenues for enhancing the project’s impact and functionality. The implementation of WebGL support would enable smoother rendering on larger canvases, allowing for more complex spiral patterns without compromising frame rates. The warping system could be optimized through GPU acceleration, enabling more sophisticated interactions between multiple spirals. A particularly intriguing possibility is the introduction of spiral memory, where new spirals could be influenced by the historical positions and intensities of previous ones, creating a cumulative effect that mirrors the spreading corruption theme in the manga. The addition of procedural audio generation could create dynamic soundscapes that respond to spiral intensity and acceleration, deepening the horror atmosphere. The instruction interface could be expanded into a more narrative experience, perhaps incorporating procedurally generated horror-themed text that changes based on user interactions. More digital distortion effects and ink-based spiral textures would further enhance the experience.

Final Draft 2 by Dachi (Update)

I removed the unspiraling effect as I thought it took away from the interaction.

I added some noise and contrast adjustment to mimic the horror aspect of the manga a bit more.

I added a second warp phase with an acceleration component that kicks in after 200 frames to mimic the spiral behavior from the anime.

Plus some other changes to the canvas and the general codebase.

 

Sketch Updated: p5.js Web Editor | Uzumaki v2

Final Draft 1 by Dachi

Sketch: p5.js Web Editor | Uzumaki

Inspiration

This project draws inspiration from Junji Ito’s “Uzumaki,” a manga known for its distinctive use of spiral imagery. This interactive artwork translates the manga’s distinct visual elements into a digital medium, allowing users to create their own spiral patterns through hand gestures. The project maintains a monochromatic color scheme to reflect the manga’s original black and white aesthetic, creating an atmosphere that captures the hypnotic quality of Ito’s work.

Methodology

Using ml5.js’s handPose model, the system tracks hand movements through a webcam, focusing on the pinch gesture between thumb and index finger to control spiral creation. A custom SpiralBrush class handles the generation and animation of spirals, while also implementing a warping effect that distorts the surrounding space. The warping effect adds depth to the interaction, making each spiral feel more dynamic and impactful on the canvas.
The technical implementation uses p5.js for graphics rendering and includes a pixel manipulation system for the warping effects. The graphics are processed using a double-buffer system to ensure smooth animation, with real-time grayscale filtering applied to maintain the monochromatic theme. When users perform a pinch gesture, the system generates a spiral that grows and warps according to their hand position.

Future Improvements

Several practical improvements could enhance the project’s functionality. Performance optimization through WebGL support would allow for smoother rendering on larger canvases, enabling more complex spiral patterns without compromising the frame rate. The current pixel-based warping system could be optimized to handle multiple spirals more efficiently, reducing computational overhead during intensive use.
Adding two-handed interaction would enable users to create multiple spirals simultaneously, opening up new possibilities for complex pattern creation. This could be extended to include interaction between spirals, where proximity could affect their behavior and warping patterns. The visual experience could be enhanced with manga-style ink effects and varying line weights based on gesture speed, adding more expressiveness to the spiral creation process.

Week 11 – Zombie Automata by Dachi

Sketch:

p5.js Web Editor | Zombie Automata

Inspiration

To begin, I followed existing coding tutorials by The Coding Train on cellular automata to understand the basics and gather ideas for implementation. While working on the project, I drew inspiration from my high school IB Math Internal Assessment, where I explored the Susceptible-Infected-Recovered (SIR) model of disease spread (well, technically I did an SZR model). The concepts I learned there seemed to work well for the current task.
Additionally, being a fan of zombie-themed shows and series, I thought that modeling a zombie outbreak would add an engaging narrative to the project. Combining these elements, I designed a simulation that not only explored cellular automata but also offered a creative and interactive way to visualize infection dynamics.

Process

The development process started with studying cellular automata and experimenting with simple rulesets to understand how basic principles could lead to complex behavior. After following coding tutorials to build a foundational understanding, I modified and expanded on these ideas to create a zombie outbreak simulation. The automata were structured to include four states: empty, human, zombie, and dead, each with defined transition rules.
I implemented the grid and the rules governing state transitions. I experimented with parameters such as infection and recovery rates, as well as grid sizes and cell dimensions, to observe how these changes affected the visual patterns. To ensure interactivity, I developed a user interface with sliders and buttons, allowing users to adjust parameters and directly interact with the simulation in real time.

How It Works

The simulation is based on a grid where each cell represents a specific state:
  • Humans: Are susceptible to infection if neighboring zombies are present. The probability of infection is determined by the user-adjustable infection rate.
  • Zombies: Persist unless a recovery rate is enabled, which allows them to turn back into humans.
  • Dead Cells: Represent the aftermath of human-zombie interactions and remain static.
  • Empty Cells: Simply occupy space with no active behavior.
At the start of the simulation, a few cells are randomly assigned as zombies to initiate the outbreak, and users can also click on any cell to manually spawn zombies or toggle states between humans and zombies.
Users can interact with the simulation by toggling the state of cells (e.g., turning humans into zombies) or by adjusting sliders to modify parameters such as infection rate, recovery rate, and cell size. The real-time interactivity encourages exploration of how these factors influence the patterns and dynamics.

Code I’m Proud Of

A part of the project that I am particularly proud of is the implementation of probabilistic infection dynamics

if (state === HUMAN) {
  let neighbors = countNeighbors(i, j, ZOMBIE);
  if (neighbors > 0) {
    if (random() < 1 - pow(1 - infectionRate, neighbors)) {
      nextGrid[i][j] = ZOMBIE;
    } else {
      nextGrid[i][j] = HUMAN;
    }
  } else {
    nextGrid[i][j] = HUMAN;
  }
}

This code not only introduces a realistic element of risk-based infection but also produces visually interesting outcomes as the patterns evolve. The expression 1 - pow(1 - infectionRate, neighbors) is the probability that at least one of the neighboring zombies infects the human, treating each neighbor as an independent chance. Watching the outbreak spread dynamically based on these probabilities was quite fun.
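
The countNeighbors helper this rule relies on can be as simple as a bounded scan of the eight surrounding cells. Here is a minimal sketch, assuming a 2D grid array with cols × rows cells and no wrap-around; the original may handle edges differently:

// count how many of the eight surrounding cells of (i, j) hold the given state;
// edge cells simply have fewer neighbours to check
function countNeighbors(i, j, state) {
  let count = 0;
  for (let di = -1; di <= 1; di++) {
    for (let dj = -1; dj <= 1; dj++) {
      if (di === 0 && dj === 0) continue; // skip the cell itself
      let ni = i + di;
      let nj = j + dj;
      if (ni >= 0 && ni < cols && nj >= 0 && nj < rows && grid[ni][nj] === state) {
        count++;
      }
    }
  }
  return count;
}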

Challenges

One of the main challenges was balancing the simulation’s performance and functionality. With many cells updating at each frame, the program occasionally slowed down, especially with smaller cell sizes. I also tried adding some features (a cure) that I later removed due to a lack of visual engagement (other structures might suit it better). Of course, such a simulation is itself an oversimplification, so you have to be mindful when adding parameters.

Reflection and Future Considerations

This project was a good opportunity to deepen my understanding of cellular automata and their potential for creating dynamic patterns. The combination of technical programming and creative design made the process both educational and enjoyable. I’m particularly pleased with how the interactivity turned the simulation into a fun, engaging experience.
Looking ahead, I would like to enhance the simulation by introducing additional rulesets or elements, such as safe zones or zombie types with varying behaviors. Adding a graph to track population changes over time would also provide users with a clearer understanding of the dynamics at play. These improvements would further expand the educational and aesthetic appeal of the project. Furthermore, I could switch from a grid of cells to other structures that better resemble real-life scenarios.

Week 10 – Fabrik by Dachi

Sketch: p5.js Web Editor | Fabrik

Inspiration

The development of this fabric simulation was mainly influenced by the two provided topics outlined in Daniel Shiffman’s “The Nature of Code.” Cloth simulations represent a perfect convergence of multiple physics concepts, making them an ideal platform for exploring forces, constraints, and collision dynamics. What makes cloth particularly fascinating is its ability to demonstrate complex emergent behavior through the interaction of simple forces. While many physics simulations deal with discrete objects, cloth presents the unique challenge of simulating a continuous, flexible surface that must respond naturally to both external forces and its own internal constraints. This complexity makes cloth simulation a particularly challenging and rewarding subject in game development, as it requires careful consideration of both physical accuracy and computational efficiency.

Development Process

The development of this simulation followed an iterative approach, building complexity gradually to ensure stability at each stage. The foundation began with a simple grid of particles connected by spring constraints, establishing the basic structure of the cloth. This was followed by the implementation of mouse interactions, allowing users to grab and manipulate the cloth directly. The addition of a rock object introduced collision dynamics, creating opportunities for more complex interactions. Throughout development, considerable time was spent fine-tuning the physical properties – adjusting stiffness, damping, and grab radius parameters until the cloth behaved naturally. Performance optimization was a constant consideration, leading to the implementation of particle limiting systems during grab interactions. The final stage involved adding velocity-based interactions to create more dynamic and realistic behavior when throwing or quickly manipulating the cloth.

How It Works

At its core, the simulation operates on a particle system where each point in the cloth is connected to its neighbors through spring constraints. The cloth grabbing mechanism works by detecting particles within a specified radius of the mouse position and creating dynamic constraints between these points and the mouse location. These constraints maintain the relative positions of grabbed particles, allowing the cloth to deform naturally when pulled. A separate interaction mode for the rock object is activated by holding the ‘R’ key, creating a single stiff constraint for precise control, with velocity applied upon release to enable throwing mechanics. The physics simulation uses a constraint-based approach for stable cloth behavior, with distance-based stiffness calculations providing natural-feeling grab mechanics and appropriate velocity transfer for realistic momentum.
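
The grid-of-particles structure described above can be sketched in a few lines. The snippet below is an illustrative setup assuming Matter.js (which the grab code further down appears to use); the dimensions, spacing, and stiffness values are placeholders rather than the project’s actual configuration:

// illustrative cloth setup: a cols x rows grid of small circle bodies
// connected to their left and upper neighbours by spring-like constraints
const { Engine, Bodies, Constraint, Composite } = Matter;

const engine = Engine.create();
const cols = 20, rows = 15, spacing = 15;
const particles = [];

for (let i = 0; i < cols; i++) {
  particles[i] = [];
  for (let j = 0; j < rows; j++) {
    // small, light bodies act as cloth particles; the top row is pinned in place
    const p = Bodies.circle(100 + i * spacing, 50 + j * spacing, 2, {
      isStatic: j === 0,
      frictionAir: 0.05,
    });
    particles[i][j] = p;
    Composite.add(engine.world, p);

    // horizontal constraint to the left neighbour
    if (i > 0) {
      Composite.add(engine.world, Constraint.create({
        bodyA: particles[i - 1][j], bodyB: p, length: spacing, stiffness: 0.9,
      }));
    }
    // vertical constraint to the upper neighbour
    if (j > 0) {
      Composite.add(engine.world, Constraint.create({
        bodyA: particles[i][j - 1], bodyB: p, length: spacing, stiffness: 0.9,
      }));
    }
  }
}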

Code I am proud of

The particle grabbing system stands out as the most sophisticated portion of the codebase. It sorts particles by their distance from the mouse and applies distance-based stiffness calculations. Here’s the core implementation:
// Array to store particles within grab radius
let grabbableParticles = [];

// Scan all cloth particles
for (let i = 0; i < cloth.cols; i++) {
    for (let j = 0; j < cloth.rows; j++) {
        let particle = cloth.particles[i][j];
        if (!particle.isStatic) {  // Skip fixed particles
            // Calculate distance from mouse to particle
            let d = dist(mouseX, mouseY, particle.position.x, particle.position.y);
            if (d < DRAG_CONFIG.GRAB_RADIUS) {
                // Store particle info if within grab radius
                grabbableParticles.push({
                    particle: particle,
                    distance: d,
                    offset: {
                        // Store initial offset from mouse to maintain relative positions
                        x: particle.position.x - mouseX,
                        y: particle.position.y - mouseY
                    }
                });
            }
        }
    }
}

// Sort particles by distance to mouse (closest first)
grabbableParticles.sort((a, b) => a.distance - b.distance);
// Limit number of grabbed particles
grabbableParticles = grabbableParticles.slice(0, DRAG_CONFIG.MAX_GRAB_POINTS);

// Only proceed if we have enough particles for natural grab
if (grabbableParticles.length >= DRAG_CONFIG.MIN_POINTS) {
    grabbableParticles.forEach(({particle, distance, offset}) => {
        // Calculate stiffness based on distance (closer = stiffer)
        let constraintStiffness = DRAG_CONFIG.STIFFNESS * (1 - distance / DRAG_CONFIG.GRAB_RADIUS);
        
        // Create constraint between mouse and particle
        let constraint = Constraint.create({
            pointA: { x: mouseX, y: mouseY },  // Anchor at mouse position
            bodyB: particle,                   // Connect to particle
            stiffness: constraintStiffness,    // Distance-based stiffness
            damping: DRAG_CONFIG.DAMPING,      // Reduce oscillation
            length: distance * 0.5             // Allow some slack based on distance
        });
        
        // Store constraint and particle info
        mouseConstraints.push(constraint);
        draggedParticles.add(particle);
        initialGrabOffsets.set(particle.id, offset);
        Composite.add(engine.world, constraint);
        
        // Stop particle's current motion
        Body.setVelocity(particle, { x: 0, y: 0 });
    });
}

This system maintains a minimum number of grab points to ensure stable behavior while limiting the maximum to prevent performance issues. The stiffness of each constraint is calculated based on the particle’s distance from the grab point, creating a more realistic deformation pattern where closer particles are more strongly influenced by the mouse movement.

Challenges

While performance optimization was addressed through careful limiting of active constraints, the primary challenge was in achieving authentic cloth behavior. Real fabric exhibits complex properties that proved difficult to replicate – it stretches but maintains its shape, folds naturally along stress lines, and responds to forces with varying degrees of resistance depending on the direction of the force. The initial implementation used uniform spring constants throughout the cloth, resulting in a rubber-like behavior that felt artificial and bouncy. Achieving natural draping behavior required extensive experimentation with different constraint configurations, ultimately leading to a system where horizontal and vertical constraints had different properties than diagonal ones. The way cloth bunches and folds was another significant challenge – early versions would either stretch indefinitely or resist folding altogether. This was solved by tuning constraint lengths and stiffness values, allowing the cloth to maintain its overall structure while still being able to fold naturally. The grab mechanics also required considerable refinement to feel natural – initial versions would either grab too rigidly, causing the cloth to behave like a solid sheet, or too loosely, resulting in unrealistic, pointy stretching as if the fabric were tearing. The solution involved implementing distance-based stiffness calculations and maintaining relative positions between grabbed particles, creating more natural deformation patterns during interaction.

Reflection and Future Considerations

The current implementation successfully demonstrates complex physics interactions in an accessible and intuitive way, but there remain numerous opportunities for enhancement. Future development could incorporate air resistance for more realistic cloth movement, along with self-collision detection to enable proper folding behavior. The addition of tear mechanics would introduce another layer of physical simulation, allowing the cloth to react more realistically to extreme forces. From a performance perspective, implementing spatial partitioning for collision detection and utilizing Web Workers for physics calculations could significantly improve efficiency, especially when dealing with larger cloth sizes. The interactive aspects could be expanded by implementing multiple cloth layers, cutting mechanics, and advanced texture mapping and shading systems. There’s also significant potential for educational applications, such as adding visualizations of forces and constraints, creating interactive tutorials about physics concepts, and implementing different material properties for comparison. Additionally, the current implementation has no depth, because the underlying physics library is inherently 2D. For depth-based collisions (which is what happens in the real world), we would need a 3D physics library.
These enhancements would further strengthen the project’s value as both a technical demonstration and an educational tool, illustrating how complex physical behaviors can be effectively simulated through carefully crafted rules and constraints.

Week 9 – Elison by Dachi

Sketch: p5.js Web Editor | Brindle butterkase

Inspiration

The project emerges from a fascination with Avatar: The Last Airbender’s representation of the four elements and their unique bending styles. Craig Reynolds’ Boids algorithm provided the perfect foundation to bring these elements to life through code. Each element in Avatar demonstrates distinct movement patterns that could be translated into flocking behaviors: water’s flowing movements, fire’s aggressive bursts, earth’s solid formations, and air’s spiral patterns.
The four elements offered different ways to explore collective motion: water’s fluid cohesion, fire’s upward turbulence, earth’s gravitational clustering, and air’s connected patterns. While the original Boids algorithm focused on simulating flocks of birds, adapting it to represent these elemental movements created an interesting technical challenge that pushed the boundaries of what the algorithm could achieve.

Process

The development started by building the core Boids algorithm and gradually shaping it to capture each element’s unique characteristics. Water proved to be the ideal starting point, as its flowing nature aligned well with traditional flocking behavior. I experimented with different parameter combinations for cohesion, alignment, and separation until the movement felt naturally fluid.
Fire came next, requiring significant modifications to the base algorithm. Adding upward forces and increasing separation helped create the energetic, spreading behavior characteristic of flames. The particle system was developed during this phase, as additional visual elements were needed to capture fire’s dynamic nature.
Earth presented an interesting challenge in making the movement feel solid and deliberate. This led to implementing stronger cohesion forces and slower movement speeds, making the boids cluster together like moving stones. Air was perhaps the most technically challenging, requiring the implementation of Perlin noise to create unpredictable yet connected movement patterns.
The transition system, which allows smooth morphing between elements, was the final major challenge. This involved careful consideration of how parameters should interpolate and how visual elements should blend. Through iterative testing and refinement, I managed to find somewhat balanced visuals with unique patterns.

How It Works

The system operates on two main components: the boid behavior system and the particle effects system. Each boid follows three basic rules – alignment, cohesion, and separation – but the strength of these rules varies depending on the current element. For example, water boids maintain moderate values across all three rules, creating smooth, coordinated movement. Fire boids have high separation and low cohesion, causing them to spread out while moving upward.
The particle system adds visual richness to each element. Water particles drift downward with slight horizontal movement, while fire particles rise with random flickering. Earth particles maintain longer lifespans and move more predictably, and air particles follow noise-based patterns that create swirling effects.
The transition system smoothly blends between elements by interpolating parameters and visual properties. This includes not just the boid behavior parameters, but also particle characteristics, colors, and shapes. The system uses linear interpolation to gradually shift from one element’s properties to another, ensuring smooth visual and behavioral transitions.
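
The parameter interpolation at the heart of the transition system can be sketched as a simple per-key lerp between element presets. The parameter objects and function below are illustrative, not the project’s actual code:

// illustrative element presets; the real project stores its own parameter sets
const waterParams = { cohesion: 1.0, alignment: 1.0, separation: 1.0, maxSpeed: 2.5 };
const fireParams  = { cohesion: 0.3, alignment: 0.8, separation: 1.8, maxSpeed: 3.5 };

// blend two parameter sets key by key; t runs from 0 to 1 over the transition
function blendParams(from, to, t) {
  const out = {};
  for (const key in from) {
    out[key] = lerp(from[key], to[key], t); // p5's linear interpolation
  }
  return out;
}

// e.g. halfway through a water-to-fire transition:
// let current = blendParams(waterParams, fireParams, 0.5);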

Code I’m Proud Of

switch(this.element) {
  case elementParams.fire:
    // fire particles drift upward and flicker sideways
    this.pos.y -= 1;
    this.vel.x += random(-0.1, 0.1);
    break;
  case elementParams.air:
    // air particles follow a smooth noise field for swirling, wind-like motion
    let time = (frameCount + this.offset) * 0.01;
    let noiseX = smoothNoise(this.pos.x * 0.006, this.pos.y * 0.006, time);
    let noiseY = smoothNoise(this.pos.x * 0.006, this.pos.y * 0.006, time + 100);
    this.vel.add(createVector(noiseX * 0.15, noiseY * 0.15));
    this.vel.limit(1.5);
    break;
}

This code efficiently handles the unique behavior of each element’s particles while remaining clean and maintainable. The fire particles rise and flicker naturally, while air particles follow smooth, noise-based patterns that create convincing wind-like movements.

Challenges

Performance optimization proved to be one of the biggest challenges. With hundreds of boids and particles active at once, maintaining smooth animation required careful optimization of the force calculations and particle management. I implemented efficient distance calculations and particle lifecycle management to keep the system running smoothly.
Creating convincing transitions between elements was another significant challenge. Moving from the rapid, dispersed movement of air to the slow, clustered movement of earth initially created jarring transitions. The solution involved creating a multi-layered transition system that handled both behavioral and visual properties gradually.
Balancing the elements’ distinct characteristics while maintaining a cohesive feel required extensive experimentation with parameters. Each element needed to feel unique while still being part of the same system. This involved finding the right parameter ranges that could create distinct behaviors without breaking the overall unity of the visualization.

Reflections and Future Considerations

The project successfully captures the essence of each element while maintaining smooth transitions between them. The combination of flocking behavior and particle effects creates an engaging visualization that responds well to user interaction. However, there’s still room for improvement and expansion.
Future technical improvements could include implementing spatial partitioning for better performance with larger boid counts, adding WebGL rendering for improved graphics, and creating more complex particle effects. The behavior system could be enhanced with influence mechanics where fire and water cancel out each other and other elements interact in various ways.
Adding procedural audio based on boid behavior could create a more immersive experience. The modular design of the current system makes these expansions feasible while maintaining the core aesthetic that makes the visualization engaging.
The project has taught me valuable lessons about optimizing particle systems, managing complex transitions, and creating natural-looking movement through code.
Throughout the development process, I gained a deeper appreciation for both the complexity of natural phenomena and the elegance of the algorithms we use to simulate them.

Week 8 – Black Hole Vehicles by Dachi

Sketch: p5.js Web Editor | black hole

Concept

This space simulation project evolved from the foundation of the vehicle sketch code provided on WordPress for the current weekly objective, transforming the basic principles of object movement and forces into a more planetary-scale simulation. The original vehicle concept served as inspiration for implementing celestial bodies that respond to gravitational forces. By adapting the core mechanics of velocity and acceleration from the vehicle example, I developed a more complex system that models the behavior of various celestial objects interacting with a central black hole. The simulation aims to create an immersive experience that, while not strictly scientifically accurate, captures the wonder and dynamic nature of cosmic interactions.

Process

The development began with establishing the CelestialBody class as the center of the simulation. This class handles the physics calculations and rendering for all space objects, including planets, stars, comets, and the central black hole. I implemented Newton’s law of universal gravitation to create realistic orbital mechanics, though with modified constants to ensure visually appealing movement within the canvas constraints.
The black hole visualization required special attention to create a convincing representation of its extreme gravitational effects. I developed an accretion disk system using separate particle objects that orbit the black hole, complete with temperature-based coloring to simulate the intense energy of matter approaching the event horizon. The background starfield and nebula effects were added to create depth and atmosphere in the simulation.
The implementation process involved several iterations to fine-tune the visual effects and physics calculations. I spent a lot of time on the creation of the particle system for the accretion disk, which needed to balance performance with visual fidelity. The addition of comet trails and star glows helped to create a more dynamic and engaging visual experience.
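
The gravity calculation behind the orbital mechanics can be sketched as follows. This assumes a CelestialBody with a p5.Vector position, a mass property, and an applyForce method; the constant and the clamping range are tuned for visuals, as described above, and are illustrative rather than the project’s actual numbers:

// illustrative gravitational attraction between two bodies, based on F = G * m1 * m2 / d^2
const G = 6.0; // tuned for visual appeal rather than physical accuracy

function attract(attractor, body) {
  let force = p5.Vector.sub(attractor.pos, body.pos); // points from body toward attractor
  let d = constrain(force.mag(), 20, 300);            // clamp distance to avoid extreme forces
  let strength = (G * attractor.mass * body.mass) / (d * d);
  force.setMag(strength);
  body.applyForce(force); // assumed method: divides by mass and adds to acceleration
}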

Challenges

One of the primary challenges was balancing realistic physics with visual appeal. True gravitational forces would result in either extremely slow movement or very quick collisions, so finding the right constants and limits for the simulation required careful tuning. Another significant challenge was creating convincing visual effects for the black hole’s event horizon and gravitational lensing without overwhelming the system’s performance.
The implementation of the accretion disk presented its own challenges, particularly in managing particle behavior and ensuring smooth orbital motion while maintaining good performance with hundreds of particles. Creating a visually striking distortion effect around the black hole without impacting the frame rate was also difficult. I spent a lot of time on the gravitational lensing component but, despite this, could not get it to work the way I imagined. However, that is beyond the scope of the weekly assignment, and it could be something I work on over a bigger timeframe.

Code I’m Proud Of

The following section creates multiple layers of distortion to simulate gravitational lensing:
for (let i = 20; i > 0; i--) {
    let radius = this.radius * (i * 0.7);
    let alpha = map(i, 0, 20, 100, 0);
    
    for (let angle = 0; angle < TWO_PI; angle += 0.05) {
        let time = frameCount * 0.02;
        let xOff = cos(angle + time) * radius;
        let yOff = sin(angle + time) * radius;
        
        let distortion1 = noise(xOff * 0.01, yOff * 0.01, time) * 20;
        let distortion2 = noise(xOff * 0.02, yOff * 0.02, time + 1000) * 15;
        let finalDistortion = distortion1 + distortion2;
        
        let spiralFactor = (sin(angle * 3 + time) * cos(angle * 2 + time * 0.5)) * radius * 0.1;
        // ... the distorted point is then drawn using these offsets and the layer's alpha (omitted)
    }
}

This code combines Perlin noise with circular motion to create a dynamic, organic-looking distortion field that suggests the warping of space-time around the black hole. The layered approach with varying alpha values creates a sense of depth and intensity that enhances the overall visual effect. The addition of the spiral factor creates a more complex and realistic representation of the gravitational distortion.

Reflection and Future Considerations

The project successfully achieves its goal of creating an engaging and visually impressive space simulation. The interaction between celestial bodies and the central black hole creates emergent behaviors that can be both predictable and surprising, making the simulation entertaining to watch. The visual effects, particularly around the black hole, effectively convey the sense of powerful gravitational forces at work.
For future iterations, several enhancements could be considered. Implementing relativistic effects could make the simulation more scientifically accurate, though this would need to be balanced against performance and visual clarity. Adding user interaction capabilities, such as allowing viewers to create new celestial bodies or adjust gravitational constants in real-time, could make the simulation more engaging and educational.
Another potential improvement would be the addition of collision detection and handling between celestial bodies, which could lead to interesting events like the formation of new bodies or the creation of debris fields. The visual effects could also be enhanced with WebGL shaders to create more sophisticated gravitational lensing and accretion disk effects while potentially improving performance.
The addition of sound effects and music could enhance the immersive experience, perhaps with dynamic audio that responds to the movement and interactions of celestial bodies. A more sophisticated particle system could be implemented to simulate solar winds, cosmic radiation, and other space phenomena, further enriching the visual experience.
Additionally, implementing a system to generate and track interesting events in the simulation could provide educational value, helping viewers understand concepts like orbital mechanics and the behavior of matter around black holes.

Week 8 – Mujo Reflection by Dachi

Listening to the lecture about MUJO, I was quite moved by how this multimedia performance piece explores the concept of impermanence through multiple artistic dimensions. The work masterfully integrates dance, projection mapping, and sound in the desert landscape to create a profound meditation on the impermanent nature of existence.
The decision to use desert dunes as both stage and canvas is particularly fascinating. The natural formation and erosion of sand dunes serves as a perfect metaphor for the piece’s central theme of impermanence, mirroring the way human experiences and emotions constantly shift and transform. The digital projections that create abstract dunes over real ones cleverly amplify this concept, creating a dialogue between the natural and the digital.
What makes MUJO especially compelling is its dual existence as both a live desert performance and a multi-channel installation. The installation version demonstrates how site-specific art can be thoughtfully adapted for different contexts while maintaining its core message. The multi-channel approach in the installation allows for a more fragmented and intimate exploration of the body’s relationship with elemental forces.
The collaboration between choreographer Kiori Kawai and multimedia artist Aaron Sherwood shows significant effort. The dancers’ movements, as they climb and descend the dunes, physically embody the struggle with constant change, while the immersive soundscape and visuals reinforce this theme. The technical aspects – from projection mapping to sound design – don’t merely serve as technicalities but actively participate in the narrative.
The work draws fascinating parallels between the impermanence of natural phenomena and human existence. Just as sand particles come together to form dunes only to be reshaped by wind, the piece suggests our bodies and thoughts are similarly temporary mediums. This Buddhist-influenced perspective on impermanence is expressed not just conceptually but through every artistic choice in the performance.
Additionally, having the opportunity to ask questions about their direct experience was very helpful, as we were able to see not only the steps they took but also the kinds of hindrances they were challenged with throughout. Learning and hearing about how they overcame those obstacles, whether technological limitations or artistic ones, was very interesting.

(https://www.aaron-sherwood.com/works/mujo/)

Midterm – Painterize by Dachi

 

Sketch: (won’t work without my server, explained later in code)

Timelapse:

SVG Print:

Digital Prints:

(This one is the same as the SVG version, without the edge-detection algorithm and simplification)

Concept Inspiration

As a technology enthusiast with a keen interest in machine learning, I’ve been fascinated by the recent advancements in generative AI, particularly in the realm of image generation. While I don’t have the expertise or the timeframe to create a generative AI model from scratch, I saw an exciting opportunity to explore the possibilities of generative art by incorporating existing AI image generation tools.

My goal was to create a smooth, integrated experience that combines the power of AI-generated images with classic artistic styles. The idea of applying different painter themes to AI-generated images came to mind as a way to blend cutting-edge technology with traditional art forms. For my initial experiment, I chose to focus on the distinctive style of Vincent van Gogh, known for his bold colors and expressive brushstrokes.

Development Process

The development process consisted of two main components:

  1. Backend Development: A Node.js server using Express was created to handle communication with the AI API. This server receives requests from the frontend, interacts with the API to generate images, and serves these images back to the client.
  2. Frontend Development: The user interface and image processing were implemented using p5.js. This includes the input form for text prompts, display of generated images, application of the Van Gogh effect, and SVG extraction based on edge detection algorithm.

Initially, I attempted to implement everything in p5.js, but API security constraints necessitated the creation of a separate backend.

Implementation Details

The application works as follows:

  1. The user enters a text prompt in the web interface.
  2. The frontend sends a request to the Node.js server.
  3. The server communicates with the StarryAI API to generate an image.
  4. The generated image is saved on the server and its path is sent back to the frontend.
  5. The frontend displays the generated image.
  6. The user can apply the Van Gogh effect, which uses a custom algorithm to create a painterly style.
  7. The user is able to export the image in PNG format with or without the Van Gogh effect.
  8. The user is also able to export two different kinds of SVG (simplified and even more simplified).
  9. The SVG extraction for pen plotting is done through an edge detection algorithm whose sensitivity the user can calibrate.
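
The request flow in steps 2–4 maps onto a small backend route. Below is a hypothetical sketch assuming Express and a helper generateImage() that wraps the StarryAI calls; both the route name and the helper are assumptions, not the project’s actual code:

// hypothetical Express route for steps 2-4; generateImage() is an assumed helper
// that calls the StarryAI API, polls until the image is ready, and saves it to disk
const express = require('express');
const app = express();
app.use(express.json());

app.post('/generate', async (req, res) => {
  try {
    const { prompt } = req.body;                   // text prompt sent by the p5.js frontend
    const imagePath = await generateImage(prompt); // resolves with the saved image's path
    res.json({ imagePath });                       // frontend loads the image from this path
  } catch (err) {
    res.status(500).json({ error: 'Image generation failed' });
  }
});

app.listen(3000);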

A key component of the project is the Van Gogh effect algorithm:

The effect mimics Van Gogh’s style using Poisson disc sampling and a swirling line algorithm. Here is the significant code:

// Class for Poisson disc sampling
class PoissonDiscSampler {
  constructor() {
    this.r = model.pointr;
    this.k = 50;  // Number of attempts to find a valid sample before rejecting
    this.grid = [];
    this.w = this.r / Math.sqrt(2);  // Cell size for spatial subdivision
    this.active = [];  // List of active samples
    this.ordered = [];  // List of all samples in order of creation
    
    // Use image dimensions instead of canvas dimensions
    this.cols = floor(generatedImage.width / this.w);
    this.rows = floor(generatedImage.height / this.w);
    
    // Initialize grid
    for (let i = 0; i < this.cols * this.rows; i++) {
      this.grid[i] = undefined;
    }
    
    // Add the first sample point (center of the image)
    let x = generatedImage.width / 2;
    let y = generatedImage.height / 2;
    let i = floor(x / this.w);
    let j = floor(y / this.w);
    let pos = createVector(x, y);
    this.grid[i + j * this.cols] = pos;
    this.active.push(pos);
    this.ordered.push(pos);
    
    // Generate samples
    while (this.ordered.length < model.pointcount && this.active.length > 0) {
      let randIndex = floor(random(this.active.length));
      pos = this.active[randIndex];
      let found = false;
      for (let n = 0; n < this.k; n++) {
        // Generate a random sample point
        let sample = p5.Vector.random2D();
        let m = random(this.r, 2 * this.r);
        sample.setMag(m);
        sample.add(pos);
        
        let col = floor(sample.x / this.w);
        let row = floor(sample.y / this.w);
        
        // Check if the sample is within the image boundaries
        if (col > -1 && row > -1 && col < this.cols && row < this.rows && 
            sample.x >= 0 && sample.x < generatedImage.width && 
            sample.y >= 0 && sample.y < generatedImage.height && 
            !this.grid[col + row * this.cols]) {
          let ok = true;
          // Check neighboring cells for proximity
          for (let i = -1; i <= 1; i++) {
            for (let j = -1; j <= 1; j++) {
              let index = (col + i) + (row + j) * this.cols;
              let neighbor = this.grid[index];
              if (neighbor) {
                let d = p5.Vector.dist(sample, neighbor);
                if (d < this.r) {
                  ok = false;
                  break;
                }
              }
            }
            if (!ok) break;
          }
          if (ok) {
            found = true;
            this.grid[col + row * this.cols] = sample;
            this.active.push(sample);
            this.ordered.push(sample);
            break;
          }
        }
      }
      if (!found) {
        this.active.splice(randIndex, 1);
      }
      
      // Stop if we've reached the desired point count
      if (this.ordered.length >= model.pointcount) {
        break;
      }
    }
  }
}

// LineMom class for managing line objects
class LineMom {
  constructor(pointcloud) {
    this.lineObjects = [];
    this.lineCount = pointcloud.length;
    this.randomZ = random(10000);
    for (let i = 0; i < pointcloud.length; i++) {
      if (pointcloud[i].x < -model.linelength || pointcloud[i].y < -model.linelength ||
          pointcloud[i].x > width + model.linelength || pointcloud[i].y > height + model.linelength) {
        continue;
      }
      this.lineObjects[i] = new LineObject(pointcloud[i], this.randomZ);
    }
  }
  
  render(canvas) {
    for (let i = 0; i < this.lineCount; i++) {
      if (this.lineObjects[i]) {
        this.lineObjects[i].render(canvas);
      }
    }
  }
}

Another key component of the project was SVG extraction based on edge detection.

  1. The image is downscaled for faster processing.
  2. Edge detection is performed on the image using a simple algorithm that compares the brightness of each pixel to the average brightness of its 3×3 neighborhood. If the difference is above a threshold, the pixel is considered an edge.
  3. The algorithm traces paths along the edges by starting at an unvisited edge pixel and following the edges until no more unvisited edge pixels are found or the path becomes too long.
  4. The traced paths are simplified using the Ramer-Douglas-Peucker algorithm, which removes points that don’t contribute significantly to the overall shape while preserving the most important points.
  5. The simplified paths are converted into SVG path elements and combined into a complete SVG document.
  6. The SVG is saved as a file that can be used for plotting or further editing.

This approach extracts the main outlines and features of the image as a simplified SVG representation.

// Function to export a simplified SVG based on edge detection
function exportSimpleSVG() {
  if (!generatedImage) {
    console.error('No image generated yet');
    return;
  }

  // Downscale the image for faster processing
  let scaleFactor = 0.5;
  let img = createImage(generatedImage.width * scaleFactor, generatedImage.height * scaleFactor);
  img.copy(generatedImage, 0, 0, generatedImage.width, generatedImage.height, 0, 0, img.width, img.height);

  // Detect edges in the image
  let edges = detectEdges(img);
  edges.loadPixels();

  let paths = [];
  let visited = new Array(img.width * img.height).fill(false);

  // Trace paths along the edges
  for (let x = 0; x < img.width; x++) {
    for (let y = 0; y < img.height; y++) {
      if (!visited[y * img.width + x] && brightness(edges.get(x, y)) > 0) {
        let path = tracePath(edges, x, y, visited);
        if (path.length > 5) { // Ignore very short paths
          paths.push(simplifyPath(path, 1)); // Simplify the path
        }
      }
    }
  }

  // ... the traced paths are then converted into SVG path elements and saved (omitted)
}

// Function to detect edges in an image
function detectEdges(img) {
  img.loadPixels(); //load pixels of input image
  let edges = createImage(img.width, img.height); //new image for storing
  edges.loadPixels();

  // Simple edge detection algorithm
  for (let x = 1; x < img.width - 1; x++) { // for each pixel, excluding the border
    for (let y = 1; y < img.height - 1; y++) {
      let sum = 0;
      for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
          let idx = 4 * ((y + dy) * img.width + (x + dx));
          sum += img.pixels[idx];
        }
      }
      let avg = sum / 9; //calculate avg brightness of 3x3 neighborhood
      let idx = 4 * (y * img.width + x);
      edges.pixels[idx] = edges.pixels[idx + 1] = edges.pixels[idx + 2] = 
        abs(img.pixels[idx] - avg) > 1 ? 255 : 0; // threshold of 1; tweak to adjust edge sensitivity
      edges.pixels[idx + 3] = 255; // if the difference between a pixel's brightness and the neighborhood average exceeds the threshold, it is marked as an edge; the result is a binary image where edges are white and non-edges are black
    }
  }
  edges.updatePixels();
  return edges;
}

// Function to trace a path along edges
function tracePath(edges, startX, startY, visited) {
  let path = [];
  let x = startX;
  let y = startY;
  let direction = 0; // 0: right, 1: down, 2: left, 3: up

  while (true) {
    path.push({x, y});
    visited[y * edges.width + x] = true;

    let found = false;
    for (let i = 0; i < 4; i++) { //It continues tracing until it can't find an unvisited edge pixel 
      let newDirection = (direction + i) % 4;
      let [dx, dy] = [[1, 0], [0, 1], [-1, 0], [0, -1]][newDirection];
      let newX = x + dx;
      let newY = y + dy;

      if (newX >= 0 && newX < edges.width && newY >= 0 && newY < edges.height &&
          !visited[newY * edges.width + newX] && brightness(edges.get(newX, newY)) > 0) {
        x = newX;
        y = newY;
        direction = newDirection;
        found = true;
        break;
      }
    }

    if (!found || path.length > 500) break; // Stop if no unvisited neighbors or path is too long
  }

  return path;
}

//Function to simplify a path using the Ramer-Douglas-Peucker algorithm The key idea behind this algorithm is that it preserves the most important points of the path (those that deviate the most from a straight line) while removing points that don't contribute significantly to the overall shape.
function simplifyPath(path, tolerance) {
  if (path.length < 3) return path; //If the path has fewer than 3 points, it can't be simplified further, so we return it as is.

  function pointLineDistance(point, lineStart, lineEnd) { //This function calculates the perpendicular distance from a point to a line segment. It's used to determine how far a point is from the line formed by the start and end points of the current path segment.
    let dx = lineEnd.x - lineStart.x;
    let dy = lineEnd.y - lineStart.y;
    let u = ((point.x - lineStart.x) * dx + (point.y - lineStart.y) * dy) / (dx * dx + dy * dy);
    u = constrain(u, 0, 1);
    let x = lineStart.x + u * dx;
    let y = lineStart.y + u * dy;
    return dist(point.x, point.y, x, y);
  }

  //This loop iterates through all points (except the first and last) to find the point that's farthest from the line formed by the first and last points of the path.
  let maxDistance = 0;
  let index = 0; 
  for (let i = 1; i < path.length - 1; i++) {
    let distance = pointLineDistance(path[i], path[0], path[path.length - 1]);
    if (distance > maxDistance) {
      index = i;
      maxDistance = distance;
    }
  }

  if (maxDistance > tolerance) { //split and recursively simplify each
    let leftPath = simplifyPath(path.slice(0, index + 1), tolerance);
    let rightPath = simplifyPath(path.slice(index), tolerance);
    return leftPath.slice(0, -1).concat(rightPath);
  } else {
    return [path[0], path[path.length - 1]];
  }
}

Challenges

The main challenges encountered during this project were:

  1. Implementing secure API communication: API security constraints led to the development of a separate backend, which added complexity to the project architecture.
  2. Managing asynchronous operations in the image generation process: The AI image generation process is not instantaneous, which required implementing a Promise-based waiting mechanism in the backend (a hypothetical sketch of this polling loop follows the list). Here’s how it works:
    • When the server receives a request to generate an image, it initiates the process with the StarryAI API.
    • The API responds with a creation ID, but the image isn’t ready immediately.
    • The server then enters a polling loop, repeatedly checking the status of the image generation process:

    • This loop continues until the image is ready or an error occurs.
    • Once the image is ready, it’s downloaded and saved on the server.
    • Finally, the image path is sent back to the frontend.
    • This process ensures that the frontend doesn’t hang while waiting for the image, but it also means managing potential timeout issues and providing appropriate feedback to the user.
  3. Integrating the AI image generation with the Van Gogh effect seamlessly: Ensuring that the generated image could be smoothly processed by the Van Gogh effect algorithm required careful handling of image data.
  4. Ensuring smooth user experience: Managing the state of the application across image generation and styling, and providing appropriate feedback to the user during potentially long wait times, was crucial for a good user experience.
  5. Developing an edge detection algorithm for pen plotting:
    • Adjusting the threshold value for edge detection was important, as it affects the level of detail captured in the resulting SVG file. Setting the threshold too low would result in an overly complex SVG, while setting it too high would oversimplify the image.
    • Ensuring that the custom edge detection algorithm produced satisfactory results across different input images was also a consideration, as images vary in contrast and detail. Initially, I had problems with border pixels but later excluded them.
    • Integrating the edge detection algorithm seamlessly into the existing image processing pipeline and ensuring compatibility with the path simplification step (Ramer-Douglas-Peucker algorithm) was another challenge that required careful design and testing.
  6. Image generation: I experimented with different image generation models provided by StarryAI, from default to fantasy to anime. Eventually I settled on the detailed Illustration model, which is perfect for SVG extraction because its cartoonish appearance provides more distinct lines, and it also works well for the Van Gogh effect due to its bold colors and more simplified nature compared to more realistic images.
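
As promised above, here is a hypothetical sketch of the polling loop from challenge 2, assuming Node 18+ (global fetch) and a response shape of { status, imageUrl }; the real StarryAI endpoint and field names may differ:

// hypothetical polling loop; the endpoint URL and response fields are assumptions
async function waitForImage(creationId, apiKey) {
  const url = `https://api.starryai.com/creations/${creationId}`; // illustrative URL
  while (true) {
    const response = await fetch(url, { headers: { 'X-API-Key': apiKey } });
    const data = await response.json();
    if (data.status === 'completed') return data.imageUrl; // image is ready
    if (data.status === 'failed') throw new Error('Image generation failed');
    await new Promise((resolve) => setTimeout(resolve, 3000)); // wait 3 s before checking again
  }
}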

Reflection

This project provided valuable experience in several areas:

  1. Working with external APIs and handling asynchronous operations
  2. Working with full-stack approach with Node.js and p5.js
  3. Integrating different technologies (AI image generation and artistic styling) into a cohesive application
  4. Implementing algorithms for edge detection.

I am quite happy with the result, and the plotted image also works well stylistically. Although it is different from the initial painter effect, it provides another physical dimension to the project, which is just as important.

Future Improvements:

  1. Implementing additional artistic styles
  2. Refining the user interface for a better user experience
  3. Combining art styles with edge detection for more customizable SVG extraction.
  4. Hosting the site online to keep the project running without my intervention. This would also require some kind of subscription for the image generation API, because the current one is capped at around 100 requests for the current model.

Week 5 – Painterize by Dachi

Sketch: (won’t work without my server for now, so it is pasted just for code and demonstration purposes)

Image based on prompt “Designing interactive media project about art generation” with Van Gogh effect.

Concept Inspiration

As a technology enthusiast with a keen interest in machine learning, I’ve been fascinated by the recent advancements in generative AI, particularly in the realm of image generation. While I don’t have the expertise to create a generative AI model from scratch, I saw an exciting opportunity to explore the possibilities of generative art by incorporating existing AI image generation tools.

My goal was to create a smooth, integrated experience that combines the power of AI-generated images with classic artistic styles. The idea of applying different painter themes to AI-generated images came to mind as a way to blend cutting-edge technology with traditional art forms. For my initial experiment, I chose to focus on the distinctive style of Vincent van Gogh, known for his bold colors and expressive brushstrokes.

Development Process

The development process consisted of two main components:

  1. Backend Development: A Node.js server using Express was created to handle communication with the StarryAI API. This server receives requests from the frontend, interacts with the API to generate images, and serves these images back to the client.
  2. Frontend Development: The user interface and image processing were implemented using p5.js. This includes the input form for text prompts, display of generated images, and application of the Van Gogh effect.

Initially, I attempted to implement everything in p5.js, but API security constraints necessitated the creation of a separate backend.

Implementation Details

The application works as follows:

  1. The user enters a text prompt in the web interface.
  2. The frontend sends a request to the Node.js server.
  3. The server communicates with the StarryAI API to generate an image.
  4. The generated image is saved on the server and its path is sent back to the frontend.
  5. The frontend displays the generated image.
  6. The user can apply the Van Gogh effect, which uses a custom algorithm to create a painterly style.

A key component of the project is the Van Gogh effect algorithm:

 

// Function to apply the Van Gogh effect to the generated image
function applyVanGoghEffect() {
  if (!generatedImage) {
    statusText.html('Please generate an image first');
    return;
  }

  vanGoghEffect = true;
  statusText.html('Applying Van Gogh effect...');

  // Prepare the image for processing
  generatedImage.loadPixels();

  // Create Poisson disc sampler and line objects
  poisson = new PoissonDiscSampler();
  lines = new LineMom(poisson.ordered);

  // Set up canvas for drawing the effect
  background(model.backgroundbrightness);
  strokeWeight(model.linewidth);
  noFill();

  redraw();  // Force a redraw to apply the effect

  statusText.html('Van Gogh effect applied');
}

This function applies a custom effect that mimics Van Gogh’s style using Poisson disc sampling and a swirling line algorithm.

Challenges

The main challenges encountered during this project were:

  1. Implementing secure API communication: API security constraints led to the development of a separate backend, which added complexity to the project architecture.
  2. Managing asynchronous operations in the image generation process: The AI image generation process is not instantaneous, which required implementing a waiting mechanism in the backend. Here’s how it works:
    • When the server receives a request to generate an image, it initiates the process with the StarryAI API.
    • The API responds with a creation ID, but the image isn’t ready immediately.
    • The server then enters a polling loop, repeatedly checking the status of the image generation process:

    • This loop continues until the image is ready or an error occurs.
    • Once the image is ready, it’s downloaded and saved on the server.
    • Finally, the image path is sent back to the frontend.
    • This process ensures that the frontend doesn’t hang while waiting for the image, but it also means managing potential timeout issues and providing appropriate feedback to the user.

3. Integrating the AI image generation with the Van Gogh effect seamlessly: Ensuring that the generated image could be smoothly processed by the Van Gogh effect algorithm required careful handling of image data.

4. Ensuring smooth user experience: Managing the state of the application across image generation and styling, and providing appropriate feedback to the user during potentially long wait times, was crucial for a good user experience.

Reflection

This project provided valuable experience in several areas:

  1. Working with external APIs and handling asynchronous operations
  2. Developing full-stack applications with Node.js and p5.js
  3. Integrating different technologies (AI image generation and artistic styling) into a cohesive application

While the result is a functional prototype rather than a polished product, it successfully demonstrates the concept of combining AI-generated images with artistic post-processing.

Future Improvements

Potential areas for future development include:

  1. Implementing additional artistic styles
  2. Refining the user interface for a better user experience
  3. Adding functionality to save generated artworks
  4. Optimizing the integration between image generation and styling for better performance
  5. Allowing user customization of effect parameters

These improvements could enhance the application’s functionality and user engagement.

References:

Coding Train

p5.js Web Editor | curl flowfield lineobjects image (p5js.org)