Final Project update #2 – Wildfire

Updates:

For this week’s update, I have added temperature as a parameter. If the temperature is not ideal, the fire won’t propagate. Similarly, if the temperature is ideal, the wildfire will propagate rapidly.
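One way a temperature parameter like this can gate fire spread is by mapping it to a propagation probability. The sketch below is a minimal illustration of that idea, not the actual project code; names like `spreadProbability` and the threshold values are my own assumptions.

```javascript
// Hypothetical helper: map temperature (in °C) to a fire-spread probability.
// Below a minimum threshold the fire cannot propagate at all; at or above an
// "ideal" temperature it spreads at full strength, with a linear ramp between.
function spreadProbability(tempC, minTemp = 15, idealTemp = 40) {
  if (tempC < minTemp) return 0;   // too cold: no spread
  if (tempC >= idealTemp) return 1; // ideal or hotter: rapid spread
  return (tempC - minTemp) / (idealTemp - minTemp); // linear ramp in between
}

// A burning cell ignites a neighbour when a random draw falls under that probability.
function ignites(tempC, roll) {
  return roll < spreadProbability(tempC);
}
```

With this shape, adding the other planned parameters (wind, vegetation, slope) later just means multiplying in more factors.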

As a next step, I will be adding all the secondary maps:
– Heat map
– Vegetation type
– Wind
– Elevation (height and slope of the land – mountains, hills…)

I will be adding more ways to monitor data, such as the proportions of terrain, water, burned land, etc.

I will be refining more colors to give it a natural look.

https://editor.p5js.org/bdr/full/ShjdWF8uN

Blog Post Reflection

In Professor Neil Leach’s enlightening lecture, ‘Alien Intelligence – Intro to AI for Designers,’ the fusion of AI and architectural design unfolds. Exploring AI’s impact on architecture, Leach highlights its potential through case studies. He introduces the visionary concept of the Spiral Neural Network, envisioning a revolutionary neural architecture. Leach’s discourse evokes the transformative influence of AI, igniting a curiosity to harness its power for innovative, sustainable, and efficient architectural solutions, and echoing a future where technology and creativity converge in unprecedented ways.

Week 11: Cellular Automata

Inspiration


This is a variation of The Coding Train’s project on a 3D grid. Users engage by placing live cells via mouse click, adjusting speed with a slider, and observing evolving patterns. It combines interactivity, 3D visualization, and computational rules, fostering an immersive exploration.
Let’s dive into this 3D cellular automaton project and check out its unique block design. Imagine a virtual world made up of tiny cubes – that’s our grid! Each cube, or ‘block,’ represents a cell in this three-dimensional space.

When you click your mouse on the canvas, you’re activating certain blocks within the grid. These activated blocks turn white and become part of the evolving pattern. It’s like you’re playing a creative role in shaping this digital world!

// This function draws an active block as a white cube
function drawActiveBlock(x, y, z) {
  push();
  translate(
    x * resolution - width / 2,
    y * resolution - height / 2,
    z * resolution - (stacks / 2) * resolution
  );
  fill(255, 150);
  box(resolution);
  pop();
}

The 'computeNext' function does all the behind-the-scenes work. It's like the brains of our project! This function uses the rules of Conway's Game of Life to decide the fate of each block based on its neighbors. If a block has too few or too many active neighbors, it 'dies' and becomes inactive. But if it has just the right number of neighbors, it 'lives' and becomes active.
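To make that concrete, here is a simplified, stand-alone sketch of the kind of work `computeNext` performs: count each cell’s active neighbours in 3D, then apply the birth/survival rules. This is my own illustrative version using Conway-style counts (survive on 2–3, born on 3) on a cubic grid of nested arrays; the actual project’s grid layout and rule counts may differ.

```javascript
// Count a cell's active neighbours in a cubic grid of 0s and 1s (no wrap-around).
function countNeighbors(grid, x, y, z) {
  const n = grid.length;
  let count = 0;
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++)
      for (let dz = -1; dz <= 1; dz++) {
        if (dx === 0 && dy === 0 && dz === 0) continue; // skip the cell itself
        const nx = x + dx, ny = y + dy, nz = z + dz;
        if (nx >= 0 && nx < n && ny >= 0 && ny < n && nz >= 0 && nz < n)
          count += grid[nx][ny][nz];
      }
  return count;
}

// Build the next generation from the current one.
function computeNext(grid, survive = [2, 3], birth = [3]) {
  const n = grid.length;
  const next = Array.from({ length: n }, () =>
    Array.from({ length: n }, () => new Array(n).fill(0)));
  for (let x = 0; x < n; x++)
    for (let y = 0; y < n; y++)
      for (let z = 0; z < n; z++) {
        const neighbors = countNeighbors(grid, x, y, z);
        next[x][y][z] = grid[x][y][z]
          ? (survive.includes(neighbors) ? 1 : 0) // survival rule
          : (birth.includes(neighbors) ? 1 : 0);  // birth rule
      }
  return next;
}
```

Passing the rule counts as parameters makes it easy to experiment with 3D-specific rule sets, which often behave better than Conway’s 2D counts in three dimensions.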

Final Project Update: many worlds

Progress Overview

Since my initial blog post outlining the ambitious concept of simulating multiverses and timelines, I have delved deep into the realms of coding and artistic expression using p5.js. My journey has been both challenging and exhilarating, as I work to bring the intricate theories of multiverses, timeline divergence, and the butterfly effect to life.

Implementing Gravitational Points and Line Formation

A significant milestone in my project has been the implementation of gravitational points on the canvas. These points act as origins for particles that radiate outward, forming random line figures reminiscent of strings. The concept is to simulate the idea of timelines diverging from a central point, influenced by random attractors placed throughout the canvas. This design choice aligns beautifully with the core theories driving my project, particularly the notion of intertwined timelines.
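An attractor of this kind is often implemented as an inverse-square pull on each particle. The snippet below is a hedged, p5-free sketch of that pattern, not the project’s code; the class name, `strength` constant, and distance clamp are my own choices.

```javascript
// Hypothetical attractor in the spirit of the gravitational points described above.
class Attractor {
  constructor(x, y, strength = 50) {
    this.x = x;
    this.y = y;
    this.strength = strength;
  }

  // Inverse-square attraction toward this point. The squared distance is
  // clamped so the force stays finite when a particle passes very close.
  forceOn(px, py) {
    const dx = this.x - px, dy = this.y - py;
    const distSq = Math.max(dx * dx + dy * dy, 25); // clamp at distance 5
    const mag = this.strength / distSq;
    const dist = Math.sqrt(distSq);
    return { fx: (dx / dist) * mag, fy: (dy / dist) * mag };
  }
}
```

Summing the forces from several randomly placed attractors per particle, per frame, produces exactly the diverging-then-curving line figures described above.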

Attraction Between Lines

To enhance the visual appeal and to resonate more deeply with the underlying theories, I introduced an attraction mechanism between lines. This feature ensures that the lines not only diverge but also converge, creating a dynamic and visually stunning representation of timelines that are constantly interacting with one another.

Challenges and Solutions

Computational Intensity and Optimization

One of the major challenges faced during development has been the computational intensity required by the project. Initially, this limitation restricted me to working with a smaller canvas size. However, recognizing the need to expand and enhance the user experience, I am currently focused on optimizing the processes. The goal is to enable a larger canvas size without compromising the performance or the intricate details of the simulation.

Looking Ahead

Enhancing User Interaction

The next phase of the project will involve refining the user interaction methodology. While the current design allows for user influence through clicking and dragging, I plan to explore more intuitive and engaging ways for users to interact with the timeline network.

User Experience and Visual Aesthetics

In parallel with the technical optimizations, a key focus will remain on ensuring that the canvas is not only visually captivating but also intuitive for users. This includes fine-tuning the parameters like the thickness and color of the timelines and their interaction dynamics.

Demo

Conclusion

This project is a journey through the complexities of physics and the beauty of visual arts. As I continue to tackle the technical challenges and enhance the user experience, I am ever more excited about the potential of this simulation to provide a unique and thought-provoking exploration of multiverses and timelines. Stay tuned for more updates as the project evolves!

Final Project Progress

Current Stage + Reflection:

There have been many changes to my initial idea, and for now I’ve settled on the sketch below:

There are still many aspects of the sketch that I want to change, such as:

  • Limiting how long the branches grow (I don’t like how long they currently get because the tree loses its original shape)
  • Adjusting the particles so that they are limited to a certain area (i.e. only having them roam around at the top of the canvas like stars)
  • Modifying the fractal tree so that its thickness varies (i.e. the branches are thinner, while the trunk is thicker)
  • Implementing audio so that it either plays one track that responds to the growth of the flowers, or plays different snippets of sound from varying traditional instruments per mouse click so that they form a harmony as the user generates more flowers on the canvas
  • Making the initial placement of the flowers less random

These were the key characteristics I wanted to include prior to settling on this sketch:

  • gradually draw patterns (i.e. each mouse click = each new drawn line/curve) that are correlating with the triggered audio sound at each mouse click.
  • have one pattern drawn fully and completed before another pattern is drawn.
  • incorporate the randomness/generative aspect of the designs while still keeping Korean traditional patterns’ general principles.
  • incorporate colors.

Process:

Here are the many different stages I went through:

    1. Playing with generative patterns. –> The main theme I wanted to experiment with was creating generative artwork that showcases the beauty of Korean traditional art, such as its patterns and colors, so I played around with cellular automata and random generation of these patterns.

(at mouse click, a new pattern will show up and create a pattern onto the canvas)
(click the canvas and cells will draw a traditional Korean pattern)
(click on the canvas multiple times to make the sketch more clear/brighter)
(each time you refresh the code it’ll generate random patterns)

2. Incorporating audio into my sketch. –> I wanted to implement a function where the user doesn’t just manipulate and control the visuals (art) but also the audio, so I decided to trigger the creation of new patterns and the playing of audio simultaneously at mouse press.

(Press the canvas to play the audio and generate patterns)

(The flowers are floating upwards on their own, but once you keep the mouse pressed onto the canvas, the flowers will speed up and play the audio)

(Basically the same as above, but the color palette is limited to warm colors and the flowers are now pulled downwards by gravity)

3. Playing with mouse click and generating new patterns. –> I still felt like I wanted to give the user more freedom in terms of manipulating the locations of the patterns that appear, rather than having them be in clumps like the ones above, so I decided to generate one full pattern per mouse click.

I was also contemplating between generating ripples, flowers, or traditional patterns per mouse click, each of which would trigger a Korean traditional audio track. I also began to envision a more concrete and complete sketch starting at this point, and I was sure I wanted: a fractal tree, a randomly generated pattern of some sort, and an audio aspect.
(Here’s a ripples one.)

(Here’s a flowers one.)
(This one generated flowers and a tree at mouse click, but because the tree was too complex, the sketch was lagging too much.)

Because of the lag I experienced above, I decided to limit the number of branches so that the tree won’t interfere with the rest of the sketch.
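A branch limit like this usually means capping the recursion depth (and stopping once branches get too short). Here is a small stand-alone sketch of that idea; the function and parameter names are mine, not from the actual sketch.

```javascript
// Illustrative recursive branch generator with a hard depth cap, the kind of
// limit described above to keep a fractal tree from lagging the sketch.
// Each branch forks into two children that are 0.67x as long.
function buildBranches(length, depth, maxDepth, branches = []) {
  if (depth >= maxDepth || length < 2) return branches; // stop: too deep or too short
  branches.push({ length, depth });
  buildBranches(length * 0.67, depth + 1, maxDepth, branches);
  buildBranches(length * 0.67, depth + 1, maxDepth, branches);
  return branches;
}
```

Because branch count doubles with every level, lowering `maxDepth` by even one level halves the drawing work per frame.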

4. Implementing particles as well as steering, fleeing, wandering, etc. behaviors. –> I thought it’d be fun to have the user recreate a specific, significant Korean traditional painting that was used by royalty, and because the painting shows a night sky, I remembered the particle system sketch I created a while ago and thought it’d be fun to incorporate the particles as the “stars” in the sketch.

(The particles are wandering to the right side of the canvas, and they’re also steering away from the mouse position.)

5. Final current stage. –> Now that I had a clear idea of which elements (tree, particles, flower patterns, background image) and skills (particle system, fractals, generative art) I’d use for my sketch, I set out to create a demo sketch to test everything before combining it all together.
(At mouse click, the flowers are generated.)

I also used this image for the background image and this hex color code website for the Korean traditional color palette.

I can’t wait to expand my sketch and come up with a final design this week!

Final Project Update

Concept

The project is a simulation of an ecosystem featuring three types of entities: boids, predators, and apex predators. The goal is to maintain balance in the ecosystem by adjusting the attributes of these entities. The simulation allows the user to influence the survival and evolution of the creatures.

Sketch

Mechanics

Each entity type has its own behaviors and attributes:

– Boids: These are the prey in the ecosystem. They flock together and reproduce.
– Predators: These entities seek out and consume boids. They also reproduce.
– Apex Predators: The top of the food chain. They seek out and consume predators and also reproduce.

Stats

The entities inhabit a 2D space. The user can interact with the simulation by adjusting the attributes of the entities using sliders. These attributes include speed, maximum force, and reproduction rate. Each entity type has a reproduction method. When an entity reproduces, a new entity of the same type is added to the ecosystem.
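The slider-driven reproduction mechanic could be sketched as a per-frame probability check. The code below is a minimal illustration under that assumption; names like `reproductionRate` are mine, and the real sketch may gate reproduction differently (e.g. on food eaten).

```javascript
// Minimal sketch of the reproduction step: each entity has a chance per frame
// (set by a slider) of spawning a new entity of the same type at its position.
function stepReproduction(entities, reproductionRate, random = Math.random) {
  const offspring = [];
  for (const e of entities) {
    if (random() < reproductionRate) {
      offspring.push({ type: e.type, x: e.x, y: e.y }); // same type as parent
    }
  }
  return entities.concat(offspring);
}
```

Injecting the random source as a parameter makes the balance behaviour easy to test deterministically, which helps when tuning the slider ranges mentioned below.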

Expected Challenges

– Balancing: The main challenge is to maintain balance in the ecosystem. If the predators are too strong, the boids might die out. If the boids reproduce too quickly, they might overpopulate the ecosystem. It is the user’s objective to find the right balance between the stats; however, the sliders need to have appropriate ranges of values to make sense in the context of the environment, which requires further experimentation.

Next Steps

– Improve User Interaction: Allow the user to adjust more attributes of the entities. This could include things like the lifespan of the entities, the rate at which they get hungry.
– Visual Improvements: Make the UI, as well as the creatures, more finalized and polished.

Final Project Update – Abdelrahman Mallasi

Key Developments:

  1. Face Detection and Isolation: Utilizing ml5.js’s FaceAPI, the sketch can now detect and isolate the user’s face. The surrounding area is rendered white, enhancing the clarity and focus of our visual effects.
  2. Pixel Manipulation for Face Distortion: I’ve implemented a pixel manipulation technique to create a melting effect on the detected face area, adding a facial distortion quality to represent hallucinatory experiences.
  3. Dynamic Flocking Systems: When the user’s face moves to the right half of the screen, a black flocking system activates, while the left half activates a red flocking system. These boids, as well as the user’s distorted face, leave a trace behind them as they move. This creates a haunting visual effect which contributes to the disturbing nature of the hallucinations.
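The melting effect from point 2 could work something like the following simplified, p5-free pass over a grayscale pixel array: each column is shifted downward by a small random offset, smearing the image. This is my own stand-in for the project’s pixel manipulation, with illustrative names throughout.

```javascript
// Illustrative "melting" pass: shift each column of a grayscale pixel grid
// down by a random amount, leaving white (255) behind at the top.
function meltColumns(pixels, w, h, maxShift, random = Math.random) {
  const out = new Array(w * h).fill(255); // white background, like the sketch
  for (let x = 0; x < w; x++) {
    const shift = Math.floor(random() * (maxShift + 1)); // per-column offset
    for (let y = h - 1; y >= 0; y--) {
      const ty = y + shift;
      if (ty < h) out[ty * w + x] = pixels[y * w + x]; // drop rows off the bottom
    }
  }
  return out;
}
```

In a p5.js sketch, the same idea would operate on the `pixels` array of a graphics buffer (stepping by 4 for RGBA), applied only inside the detected face bounds.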

Next Steps: A feature I want to add is the ability to detect when the user smiles. This functionality will be tied to an interactive element that transforms the visual environment, marking a positive shift in the hallucinatory experience.

Challenges: Implementing the integration and distortion of the user’s face was time-consuming. I initially ran into a lot of errors using the FaceAPI feature, and it was difficult navigating and learning a new library like ml5.js. Furthermore, implementing the facial distortion was difficult since I had no prior experience with pixel manipulation.

Final Project Proposal – Abdelrahman Mallasi

Concept:

My final project is an interactive art piece designed to visually represent the phenomena of hypnagogic and hypnopompic hallucinations. These hallucinations occur in the transitional states between wakefulness and sleep. Hypnagogic hallucinations happen as one drifts off to sleep, while hypnopompic occur upon waking. Fascinatingly, these vivid, dream-like experiences emerge from the brain without any external stimuli or chemicals.

I chose this topic due to my fascination with these mind-states and the capacity of our brains to generate alternate states of consciousness with surreal visuals and experiences. I drew inspiration from Casey Reas, an artist known for his generative artworks. Below are some examples from his work “Untitled Film Stills, Series 5”, showcasing facial distortions and dream-like states.

Project Description:

I’m envisioning the p5.js sketch having the user’s webcam capture their image. The user’s face will then be distorted, and effects will be triggered by specific user actions and expressions. These effects might include particle systems, flocking systems, or fractals.

The goal is to create an immersive experience that entertains but also educates the audience about these unique states of consciousness.

Challenges:

– For the effects triggered by the user’s actions, I’m unsure which actions to include.
– There’s also a fear of the project not looking cohesive if too many unrelated elements, like flocking systems and fractals, are implemented.

Cellular Automata – Week #10

https://editor.p5js.org/oae233/sketches/raawVHP82

Concept / Idea

For this assignment, I really struggled to come up with a concept. I also kept getting stuck for a long time just playing around with the Game of Life demos online, ahahah. I found some of them very interesting (like the Gosper glider gun and the concept of gliders in general). I thought about doing something related to gliders, which are basically setups of cells that form a loop and move across the canvas diagonally, but eventually I felt that visually it wasn’t what I wanted to do. I ended up just loading the Game of Life code from the p5.js website (https://p5js.org/examples/simulate-game-of-life.html) into a sketch of mine and playing around with it. Eventually, I came across some settings I liked.

The initial setup draws this beautiful erupting pattern that eventually descends into chaos. I played around with the rules to create a version that never reaches stability and is in constant motion (I basically added a rule that a dead cell also comes alive if it has 4 neighbors), and then added another rule that turns cells with 7 neighbors into a 3rd color. I like the idea of playing with 3 colors in most of my sketches. Finally, the faded background and blur canvas filter help bring the effect together.

For interactivity, I added the ability for users to change the 3 colors of the main sketch, a button to reset the animation, and a slider to control the frame rate.

Some code I want to highlight:

   

if      ((board[x][y] == 1) && (neighbors <  2)) next[x][y] = 0; // loneliness
else if ((board[x][y] == 1) && (neighbors >  3)) next[x][y] = 0; // overpopulation
else if ((board[x][y] == 2) && (neighbors >  3)) next[x][y] = 0; // overpopulation
else if ((board[x][y] == 2) && (neighbors <  2)) next[x][y] = 0; // loneliness
else if ((board[x][y] == 0) && (neighbors == 3)) next[x][y] = 1; // standard birth
else if ((board[x][y] == 0) && (neighbors == 4)) next[x][y] = 1; // extra birth rule (constant motion)
else if ((board[x][y] == 0) && (neighbors == 7)) next[x][y] = 2; // 3rd-color rule

These are all the rules I used.
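For experimenting with further rule tweaks, the chain above can be wrapped in a single pure function. This is my own refactor sketch (I assume `neighbors` counts any non-dead cell, which the original sketch may handle differently):

```javascript
// Stand-alone version of the modified Game of Life rules above
// (0 = dead, 1 = alive, 2 = third colour).
function nextState(state, neighbors) {
  if (state === 1 && neighbors < 2) return 0;   // loneliness
  if (state === 1 && neighbors > 3) return 0;   // overpopulation
  if (state === 2 && neighbors > 3) return 0;   // overpopulation
  if (state === 2 && neighbors < 2) return 0;   // loneliness
  if (state === 0 && neighbors === 3) return 1; // standard birth
  if (state === 0 && neighbors === 4) return 1; // extra birth rule: constant motion
  if (state === 0 && neighbors === 7) return 2; // crowded cells get the 3rd colour
  return state;                                 // otherwise unchanged
}
```

Isolating the rules this way makes it easy to try new variants (or expose them as user-selectable presets, per the Future work section) without touching the grid-iteration code.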

Future work:

I want to add more interactivity to the project, maybe have a couple of different patterns people can choose from or have users be able to draw on the canvas as well.

Physics Playground – Matter.js – Week #9

https://editor.p5js.org/oae233/sketches/Ci6eLCTM_

Concept / Idea

For this assignment, I wanted to create something like an interactive playground where users can play around with objects and manipulate them to create/build interesting compositions.

I have 3 types of shapes: triangles that float upwards, circles that fall downwards, and squares that are gravity-neutral. Users can move the shapes around, using their forces to hold them together in interesting places. You can generate any one of these shapes at any time, and either expand or contract them. The force acting on a shape changes proportionally to the change in its size/mass.

 

if (this.body.position.x > zone1.x - 40 && this.body.position.y > zone1.y - 40 &&
    this.body.position.x < zone1.x + 40 && this.body.position.y < zone1.y + 40 &&
    mouseX > zone1.x - 40 && mouseY > zone1.y - 40 &&
    mouseX < zone1.x + 40 && mouseY < zone1.y + 40) {
  // body and mouse are both in zone 1: grow by 1% per frame
  Body.scale(this.body, 1.01, 1.01);
  this.r *= 1.01;
} else if (this.body.position.x > zone2.x - 40 && this.body.position.y > zone2.y - 40 &&
           this.body.position.x < zone2.x + 40 && this.body.position.y < zone2.y + 40 &&
           mouseX > zone2.x - 40 && mouseY > zone2.y - 40 &&
           mouseX < zone2.x + 40 && mouseY < zone2.y + 40) {
  // body and mouse are both in zone 2: shrink by 1% per frame
  Body.scale(this.body, 0.99, 0.99);
  this.r *= 0.99;
}
}

 

This is how I check whether both the mouse and the object are in the designated active area, and then scale the object accordingly.
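One possible cleanup, shown here as a sketch with hypothetical names, is to factor that repeated bounds check into a helper so each zone is a single readable condition:

```javascript
// Hypothetical helper: is the point (px, py) inside a zone's 80x80 active square?
function inZone(px, py, zone, half = 40) {
  return px > zone.x - half && px < zone.x + half &&
         py > zone.y - half && py < zone.y + half;
}

// Both the body's position and the mouse must sit inside the same zone.
// Returns the per-frame scale factor (1 means no scaling).
function scaleFactorFor(body, mouse, zone1, zone2) {
  if (inZone(body.x, body.y, zone1) && inZone(mouse.x, mouse.y, zone1)) return 1.01; // expand
  if (inZone(body.x, body.y, zone2) && inZone(mouse.x, mouse.y, zone2)) return 0.99; // contract
  return 1;
}
```

In the Matter.js sketch, the returned factor would feed both `Body.scale` and the display radius `this.r`, keeping the two in sync by construction.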

Future work:

I’d love to add more features, like giving the user the ability to switch the gravity/force acting upon an object. I’m thinking this could be done with two baskets, one upside down and one upright; when you put an object in one and click the mouse, it flips the sign of the force acting on the object.