Final Project – Consciousness Canvas by Abdelrahman Mallasi

Concept

In my final project, I delve into the mysterious realm of hypnagogic and hypnopompic hallucinations – vivid, dream-like experiences that occur during the transition between wakefulness and sleep. These phenomena, emerging spontaneously from the brain without external stimuli, have long fascinated me. They raise profound questions about the brain’s capacity to generate alternate states of consciousness filled with surreal visuals.

My interest in these hallucinations, particularly their elusive nature and the lack of complete understanding about their causes, inspired me to create an interactive art piece. I chose this topic due to my fascination with these mind-states and the capacity of our brains to generate alternate states of consciousness with surreal visuals and experiences. I drew inspiration from Casey REAS, an artist known for his generative artworks. Below are some examples of his work “Untitled Film Stills, Series 5,” showcasing facial distortions and dream-like states.


Experts are still exploring what exactly triggers these hallucinations. As noted by the Sleep Foundation, “Visual hypnagogic hallucinations often involve moving shapes, colors, and images, similar to looking into a kaleidoscope,” an effect I aimed to replicate with the dynamic movement of boids in my project. In addition, auditory hallucinations, as mentioned by VeryWellHealth, typically involve background noises, which I have tried to represent through the integration of sound corresponding to each intensity level in the project.

In conceptualizing and creating this project, I embraced certain qualities that may not conventionally align with typical aesthetic standards. The final outcome of the project might not appear as the most visually appealing or aesthetically organized work. However, this mirrors the inherent nature of hallucinations – they are often messy, unorganized, and disconnected. Hallucinations, especially those experienced at the edges of sleep, can be chaotic and disjointed, reflecting a mind that is transitioning between states of consciousness.

Images and User Testing

 

IM Showcase Documentation

 

Implementation Details and Code Snippets

At the onset, users are prompted to choose an intensity level, which dictates the pace and density of the visual elements that follow. Once the webcam is activated using ml5.js, the program isolates the user’s face, creating a distinct square around it. This face area is then subjected to pixel manipulation, achieving a melting effect that symbolizes the distortion characteristic of hallucinations.

Key features of the implementation include:

  • Face Detection with ml5.js: Utilizing ml5.js’s FaceAPI, the sketch identifies the user’s face in real-time through the webcam.
    // typical ml5 FaceAPI options; these exact values are an assumption
    const options = { withLandmarks: true, withDescriptors: false };

    faceapi = ml5.faceApi(video, options, modelReady); // initialize FaceAPI with the video element, options, and the modelReady callback

    function modelReady() {
      faceapi.detect(gotFaces); // start the face detection process
    }

    function gotFaces(error, results) {
      if (error) {
        console.error(error);
        return;
      }
      detections = results;     // store the face detection results
      faceapi.detect(gotFaces); // call detect again for continuous detection
    }
  • Distortion Effect:
    • Pixelation Effect: The region of the user’s face, identified by the face detection coordinates, undergoes a pixelation process. Here, I average the colors of pixels within small blocks (about 10×10 pixels each) and then recolor these blocks with the average color. This technique results in a pixelated appearance, making the facial features more abstract.
    • Melting Effect: To enhance the hallucinatory experience, I applied a melting effect to the pixels within the face area. This effect is achieved by shifting pixels downwards at varying speeds. I use Perlin noise to help create an organic, fluid motion, making the distortion seem natural and less uniform.
    // in the draw() function

    // capture the area of the face detected by the ml5.js FaceAPI
    image(video.get(_x, _y, _width, _height), _x, _y, _width, _height);

    // apply pixelation and melting effects within the face area
    let face = get(_x, _y, _width, _height);
    face.loadPixels();

    // pixelation effect: average the colors in each block of pixels,
    // then recolor the whole block with that average
    let blockSize = 10; // ~10x10-pixel blocks
    for (let y = 0; y < face.height; y += blockSize) {
      for (let x = 0; x < face.width; x += blockSize) {
        let r = 0, g = 0, b = 0, count = 0;
        for (let by = y; by < min(y + blockSize, face.height); by++) {
          for (let bx = x; bx < min(x + blockSize, face.width); bx++) {
            let i = (bx + by * face.width) * 4;
            r += face.pixels[i];
            g += face.pixels[i + 1];
            b += face.pixels[i + 2];
            count++;
          }
        }
        fill(r / count, g / count, b / count);
        noStroke();
        rect(_x + x, _y + y, blockSize, blockSize); // recolor the block
      }
    }

    // melting effect: shift horizontal lines of pixels by an offset
    // determined by Perlin noise
    for (let y = 0; y < face.height; y++) {
      let offset = floor(noise(y * 0.1, millis() * 0.005) * 50);
      copy(face, 0, y, face.width, 1, _x + offset, _y + y, face.width, 1);
    }
  • Boid and Flock Classes: The core of the dynamic flocking system lies in the creation and management of boid objects. Each boid is an autonomous agent that exhibits behaviors like separation, alignment, and cohesion (a generic sketch of the cohesion step appears after this list).
    In selecting the shape and movement of the boids, I chose a triangular form pointing in the direction of their movement. This design choice was made to evoke the unsettling feeling of a worm infestation, contributing to the overall creepy and surreal atmosphere of the project.

    show(col) {
      let angle = this.velocity.heading(); // point in the direction of motion
      fill(col);
      stroke(0);
      push();
      translate(this.position.x, this.position.y);
      rotate(angle);

      beginShape(); // draw the triangle
      vertex(this.r * 2, 0);
      vertex(-this.r * 2, -this.r);
      vertex(-this.r * 2, this.r);
      endShape(CLOSE);

      pop();
    }
    
  • Intensity Levels: By adjusting parameters like velocity, force magnitudes, and the number of boids created, I varied the dynamics and sound for each intensity level of the hallucination simulation. The code below shows the Medium settings, for instance.
    let btnMed = createButton('Medium');
    btnMed.mousePressed(() => {
      maxBoids = 150; // change the number of boids

      // ensure the user selects only one level at a time
      btnLow.remove();
      btnMed.remove();
      btnHigh.remove();

      initializeFlock();  // start the flock system only after the user selects a level
      soundMedium.loop(); // play the corresponding sound
    });

    // in the Boid class
    // done for each intensity
    behaviorIntensity() {
      if (maxBoids === 150) { // Medium intensity
        this.maxspeed = 3;
        this.maxforce = 0.04;
        this.perceptionRadius = 75;
      }
    }
    
  • Tracing: The boids, as well as the user’s distorted face, leave a trace behind them as they move. This creates a haunting visual effect that contributes to the disturbing nature of the hallucinations. The effect is achieved by not calling a traditional background function in the sketch, so the previous frame’s drawings are never cleared, allowing a persistent visual trail that adds to the hallucinatory quality of the piece. I experimented with alternative methods to create a trailing effect, but found that slight, fading trails did not deliver the intense, lingering impact I sought. The decision to forego a full webcam feed was crucial in preserving this effect.
  • Integrating Sound: For each intensity level, I integrated different auditory effects. These sounds play a crucial role in immersing the user in the experience, with each intensity level featuring a sound that complements the visual elements.
  • User Interactivity: The user’s position relative to the screen – left or right – changes the color of the boids, directly involving the user in the creation process. The intensity level selection further personalizes the experience, representing the varying nature of hallucinations among individuals.
    if (_x + _width / 2 < width / 2) { // face on the left half
      flockColor = color(255, 0, 0);   // red flock
    } else {                           // face on the right half
      flockColor = color(100, 100, 100); // dark gray flock
    }

    // run the flock with the determined color
    flock.run(flockColor);

    The embedded sketch doesn’t work here but here’s a link to the sketch.
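Since the boids exhibit separation, alignment, and cohesion, here is a generic sketch of the cohesion step in Nature of Code style, reusing the property names (perceptionRadius, maxspeed, maxforce) from the intensity settings above. It is illustrative, not the project’s exact code; alignment and separation follow the same pattern.

cohere(boids) {
  let sum = createVector(0, 0);
  let count = 0;
  for (let other of boids) {
    let d = p5.Vector.dist(this.position, other.position);
    if (other !== this && d < this.perceptionRadius) {
      sum.add(other.position); // accumulate neighbor positions
      count++;
    }
  }
  if (count === 0) return createVector(0, 0);
  sum.div(count); // average position of the neighbors
  let desired = p5.Vector.sub(sum, this.position);
  desired.setMag(this.maxspeed);
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce); // cap the steering force
  return steer;
}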

Challenges Faced

  • Integrating ml5.js for Face Detection: I initially faced numerous errors in implementing face detection using the ml5.js library.
  • Pixel Manipulation for Facial Distortion: Having no prior background in pixel manipulation, this aspect of the project was both challenging and fun.
  • Optimization Issues: In earlier iterations, I aimed to allow users to control the colors and shapes of the boids. However, this significantly impacted the sketch’s performance, resulting in heavy lag. I tried various approaches to mitigate this:
    • Reducing Color Options: Initially, I reduced the range of colors available for selection, hoping that fewer options would ease the load. The sketch was still not performing optimally.
    • Limiting User Control: I then experimented with allowing users to choose either the color or the shape of the boids, but not both. However, this didn’t help either.
    • Decreasing the Number of Boids: I also experimented with reducing the number of boids. However, this approach had a downside: fewer boids meant less dynamic and complex flocking behavior. The interaction between boids influences their movement, and reducing their numbers took away from the visuals.
  • Eventually, I decided to shift the focus from user-controlled aesthetics to user-selected intensity levels. This change allowed for a dynamic and engaging experience without lagging.

Aspects I’m Proud Of

  • Successfully integrating ml5’s face detection elements
  • Being able to distort the face by manipulating pixels
  • Being able to change the colors of the boids by the user’s position on the screen
  • Introducing interactivity by creating buttons that customize the user’s experience
  • Remaining flexible and thinking of other alternative solutions when running into issues.

Future Improvements

Looking ahead, I aim to revisit the idea of user-selected colors and shapes for the boids. I believe that with further optimization and refinement, this feature could greatly enhance the interactivity and visual appeal of the project, making it an even more immersive experience.

I also plan to add a starter screen which will provide users with an introduction to the project, offering instructions and a description of what to expect. This would make the project more user-friendly. Due to time constraints, this feature couldn’t be included.

References

Sleep Foundation: Hypnagogic Hallucinations: Unveiling the Mystery of Waking Dreams

VeryWellHealth: https://www.verywellhealth.com/what-causes-sleep-related-hallucinations-3014744

 

Final Project Update – Abdelrahman Mallasi

Key Developments:

  1. Face Detection and Isolation: Utilizing ml5.js’s FaceAPI, the sketch can now detect and isolate the user’s face. The surrounding area is rendered white, enhancing the clarity and focus of our visual effects.
  2. Pixel Manipulation for Face Distortion: I’ve implemented a pixel manipulation technique to create a melting effect on the detected face area, adding a facial distortion quality to represent hallucinatory experiences.
  3. Dynamic Flocking Systems: When the user’s face moves to the right half of the screen, a black flocking system activates, while the left half activates a red flocking system. These boids, as well as the user’s distorted face, leave a trace behind them as they move. This creates a haunting visual effect that contributes to the disturbing nature of the hallucinations.

Embedded Code

Next Steps: A feature I want to add is the ability to detect when the user smiles. This functionality will be tied to an interactive element that transforms the visual environment, marking a positive shift in the hallucinatory experience.

Challenges: Implementing the integration and distortion of the user’s face was time-consuming. I initially ran into a lot of errors using the FaceAPI feature, and it was difficult navigating and learning a new workspace like ml5.js. Furthermore, implementing the facial distortion was difficult since I had no prior experience with pixel manipulation.

Final Project Proposal – Abdelrahman Mallasi

Concept:

My final project is an interactive art piece designed to visually represent the phenomena of hypnagogic and hypnopompic hallucinations. These hallucinations occur in the transitional states between wakefulness and sleep. Hypnagogic hallucinations happen as one drifts off to sleep, while hypnopompic occur upon waking. Fascinatingly, these vivid, dream-like experiences emerge from the brain without any external stimuli or chemicals.

I chose this topic due to my fascination with these mind-states and the capacity of our brains to generate alternate states of consciousness with surreal visuals and experiences. I drew inspiration from Casey REAS, an artist known for his generative artworks. Below are some examples of his work “Untitled Film Stills, Series 5,” showcasing facial distortions and dream-like states.

Project Description:

I’m envisioning the p5.js sketch having the user’s webcam capture their image. The user’s face will then be distorted, and further effects will be triggered by specific user actions and expressions. These effects might include particle systems, flocking systems, or fractals.

The goal is to create an immersive experience that entertains but also educates the audience about these unique states of consciousness.

Challenges:

– For the effects triggered by the user’s actions and expressions, I’m unsure which ones to include
– There’s also a fear of the project not looking cohesive if too many unrelated elements are implemented, like flocking systems and fractals.

Week #11 Assignment: The Artist’s Choice by Abdelrahman Mallasi

Concept & Inspiration:

This project is inspired by Paint, the drawing application on Windows, with which I had an unhealthy obsession during my childhood. This nostalgia, combined with the concepts of Conway’s Game of Life, led to the creation of this sketch. It’s a digital canvas where users can draw cells, change their colors, and then watch as they evolve following the principles of cellular automata. It results in a constantly changing tapestry of color and form.

This sketch is an interactive playground that combines drawing and the rules of cellular evolution. Users can draw on the grid by dragging their mouse, activating cells in any pattern they desire. Pressing the spacebar changes the colors of all active cells to random colors, adding an element of surprise and vibrancy to the evolving patterns. The sketch applies the Game of Life rules when the mouse is pressed, and the frame rate was decreased, allowing users to see the evolution of the cells. The freedom to control the position, color, and evolution of cells makes for an engaging and personal experience.
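For context, the wiring described above might look like this minimal sketch; the frame rate value and the exact trigger are assumptions rather than the project’s verbatim code:

function setup() {
  createCanvas(600, 600);
  frameRate(10); // slowed down so the evolution of the cells is visible
  // grid/board initialization omitted
}

function draw() {
  // drawing of the grid omitted
  if (mouseIsPressed) {
    generateNextBoard(); // apply the Game of Life rules while the mouse is held
  }
}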

Code Walkthrough:

function mouseDragged() {
  let x = floor(mouseX / w);
  let y = floor(mouseY / w);
  if (x >= 0 && x < columns && y >= 0 && y < rows) {
    board[x][y].state = 1; // Activate the cell
    board[x][y].color = [random(255), random(255), random(255)]; // Assign a random color
  }
}

This function is the heart of the sketch’s interactivity. As the user drags the mouse across the canvas, it activates cells (sets their state to 1) and assigns them a random color. This interaction is what allows the user to paint on the digital canvas.

function keyPressed() {
  if (key === ' ') {
    for (let i = 0; i < columns; i++) {
      for (let j = 0; j < rows; j++) {
        if (board[i][j].state === 1) {
          board[i][j].color = [random(255), random(255), random(255)];
        }
      }
    }
  } 
}

When the spacebar is pressed, this function kicks into action, changing the color of every alive cell to a new random color.

function generateNextBoard() {
  let next = create2DArray(columns, rows);

// looping through each cell to apply Game of Life rules
  for (let x = 0; x < columns; x++) {
    for (let y = 0; y < rows; y++) {
       // counting alive neighbors
      let neighbors = 0;
      for (let i = -1; i <= 1; i++) {
        for (let j = -1; j <= 1; j++) {
          if (i === 0 && j === 0) {
            // skip the current cell itself
            continue;
          }
          let col = (x + i + columns) % columns; //edge wrapping
          let row = (y + j + rows) % rows; // edge wrapping
          neighbors += board[col][row].state;
        }
      }

      // Rules of Life defined below

    }
  }

  // swap new board with the old
  board = next;
}

This function is invoked each time the mouse is pressed. It starts by creating a new grid called next, which will hold the next generation of cells. The function then loops through each cell in the current board to apply the rules of the Game of Life. For each cell, it counts the number of living neighbors. Finally, the board is updated to be the next grid, which moves the simulation forward one generation.

if (board[x][y].state == 1 && (neighbors < 2 || neighbors > 3)) {
  next[x][y] = new Cell(x, y, w, 0); // cell dies from under- or overpopulation
} else if (board[x][y].state == 0 && neighbors == 3) {
  // cell becomes alive
  next[x][y] = new Cell(x, y, w, 1);
  next[x][y].color = [random(255), random(255), random(255)]; // new cell gets a random color
} else {
  next[x][y] = new Cell(x, y, w, board[x][y].state); // cell stays in its current state
  next[x][y].color = board[x][y].color; // keep the same color
}

These lines encapsulate Conway’s Game of Life rules within the sketch. They determine the fate of each cell based on its neighbors: a dead cell comes to life if it has exactly 3 living neighbors; a live cell dies of overpopulation if it has more than 3 neighbors or of underpopulation if it has fewer than 2; otherwise, the cell stays in its current state.
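generateNextBoard() also relies on a create2DArray helper that isn’t shown here; a minimal sketch of what it presumably looks like:

function create2DArray(cols, rows) {
  // build a cols-by-rows array whose slots are later filled with Cell objects
  let arr = new Array(cols);
  for (let i = 0; i < cols; i++) {
    arr[i] = new Array(rows);
  }
  return arr;
}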

Embedded Code

Reflections and Future Ideas:

Playing with this sketch was an enjoyable experience. Tweaking the rules and observing their visual impact was fascinating.

Looking ahead, I’m intrigued by the idea of adding an auditory dimension to this project. Imagine if each cell produced a unique sound when it came to life, creating a melody that evolves with the pattern. While I’m not yet sure how feasible this is, the idea of combining visual and auditory elements in the CA universe is exciting to explore.

Week #10 Assignment: The Volcano by Abdelrahman Mallasi

Concept

This project aims to simulate a volcanic eruption using Matter.js for physics. Particles representing lava are ejected with a mouse press. These particles are subject to gravity and collisions. Each particle changes color upon collision, enhancing the visual feedback of the simulation and visually indicating interactions between particles and their environment. The user can also add wind forces by pressing the spacebar (a sketch of this appears below).

The environment includes a triangular volcano and a ground set as static bodies in a physics world. Particles are initialized with an initial force applied to mimic the explosiveness of a volcanic eruption.
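The spacebar wind force mentioned above isn’t part of the highlighted snippet; here is a hedged sketch of how it might be applied, assuming a global Matter.js engine and that the lava particles are the non-static circle bodies in the world:

function keyPressed() {
  if (key === ' ') {
    // gather every body currently in the physics world
    const bodies = Matter.Composite.allBodies(engine.world);
    for (const body of bodies) {
      if (!body.isStatic && body.circleRadius) {
        // a small horizontal impulse to mimic a gust of wind
        Matter.Body.applyForce(body, body.position, { x: 0.0005, y: 0 });
      }
    }
  }
}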

Embedded Code

Link to Sketch

Highlight of Code

Events.on(engine, 'collisionStart', function(event) {
    event.pairs.forEach(pair => {
 
        const { bodyA, bodyB } = pair;

        if (bodyA.render && bodyA.circleRadius) bodyA.render.fillStyle = color(random(255), random(255), random(255));
        if (bodyB.render && bodyB.circleRadius) bodyB.render.fillStyle = color(random(255), random(255), random(255));
    });
});

The above code sits in the setup function and is responsible for the color-changing collisions. Let’s break it down:

  • Events.on(engine, 'collisionStart', function(event)): Sets up a ‘listener’ on the physics engine to react when two bodies collide
  • event.pairs.forEach(pair =>…): Iterates over each pair of colliding bodies in the collision event
  • const { bodyA, bodyB } = pair: Extracts the two colliding bodies from each pair.
  • if (bodyA.render && bodyA.circleRadius) bodyA.render.fillStyle = color(random(255), random(255), random(255)): bodyA’s color is changed to a new random color.
  • if (bodyB.render && bodyB.circleRadius) bodyB.render.fillStyle = color(random(255), random(255), random(255)): bodyB’s color is changed to a new random color.

Reflections and Future Additions

  • Working with Matter.js made the project both easier and more difficult to implement. It was quicker to use the predefined physics functions rather than hardcoding them from scratch. However, it was difficult getting used to another workspace with a whole new set of functions and elements.
  • Adding acoustic elements: It would be exciting to have each collision create a unique sound, turning the simulation into a symphony of sorts. This auditory layer would provide a more multi-sensory experience.

Week #9 Assignment: The Dancing Nightsky by Abdelrahman Mallasi

Concept

This project creates an interactive audio-visual experience representing a Dancing Night Sky. The concept was realized by simulating a night sky where each star (dot) moves in response to music, creating a visual dance that mirrors the song’s rhythm and melody. The chosen song for this project was “This Is What Space Feels Like” by JVKE, which aptly talks about space, aligning with the theme of a dancing night sky.

To achieve this, the visual properties of the dots – representing stars – were programmed to change in response to the music. The movement, force, and size of each dot varied according to the intensity and frequency of the music, due to the implementation of Fast Fourier Transform (FFT) analysis. This project showcases the properties of flocking behavior through the use of separation and alignment algorithms.

Highlight of Code

let spectrum = fft.analyze();
let bass = fft.getEnergy("bass");
let treble = fft.getEnergy("treble");

// Mapping the music to visual properties
for (let boid of flock.boids) {
    boid.maxspeed = map(treble, 0, 255, 2, 8);
    boid.maxforce = map(bass, 0, 255, 0.05, 5);
    boid.r = map(bass, 0, 255, 3, 6);
}
  • The analyze() method computes the frequency spectrum of the song, resulting in an array where each value represents the amplitude of a specific frequency band
  • The getEnergy() method returns the amplitude of the bass and treble frequencies of the song
  • Then, the amplitude values for bass and treble frequencies are mapped to certain properties of the boids, where the treble affects the speed and the bass affects the force and the size.
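For context, these calls assume a p5.sound setup along the following lines; the canvas size and Flock construction are illustrative assumptions:

let song, fft, flock;

function preload() {
  // the compressed file mentioned below; swap in "Original Song.mp3" for full quality
  song = loadSound("Compressed Song.mp3");
}

function setup() {
  createCanvas(600, 400); // size is an assumption
  fft = new p5.FFT();     // analyzes whatever plays through p5.sound
  flock = new Flock();    // the flocking system (class not shown here)
  song.loop();
}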

Embedded sketch

Link to Sketch

Reflections & Future Ideas

  • I intentionally wanted to pick a song with musical pauses to effectively demonstrate how the music affects the stars. However, I believe there could be another song that could better demonstrate the effect.
  • I had to compress the song to make the sketch load faster. However, the compression lowers the quality of the sound and leads to less pronounced effects on the boids. The original uncompressed song file is still uploaded in the sketch. If you’re patient and would like to see the sketch at its finest, simply replace “Compressed Song.mp3” with “Original Song.mp3” in the loadSound() call.
  • In future iterations of the project, the aim is to enhance the visual representation of the stars to make them appear more star-like rather than simple dots. Additionally, an exciting feature to be explored is the creation of trails behind each star, simulating shooting stars and adding a more visually appealing effect.

Alien Intelligence Talk Reflection – Abdelrahman Mallasi

A future where AI replaces various occupations pushes us to shift our focus from capitalistic, economically driven values to our inherent human qualities. This shift places greater emphasis on interpersonal and soft skills than on technical abilities that are likely to be automated. This leads to an existential question: how will we define our identities in the absence of conventional professions? We may end up needing to seek alternative ways to derive a sense of achievement and intelligence, or perhaps reassess our dependence on such feelings for happiness. Ultimately, as Professor Leach mentioned, AI can act as a mirror to understand humans.

 

Week #8 Assignment: Flower Heliotropism by Abdelrahman Mallasi

Concept

The concept of this project revolves around simulating heliotropism, the ability of certain plants to grow and orient themselves in response to the direction of sunlight. This phenomenon is primarily controlled by the plant hormone auxin, which regulates cell elongation and growth. When light shines on one side of a plant, it triggers a redistribution of auxin, causing the plant to grow more on the shaded side. This growth pattern leads to the plant bending or turning towards the light source. Sunflowers are an example of plants that exhibit heliotropism, as they follow the sun’s path across the sky during the day.

In this project, I aim to represent heliotropism through an interactive simulation. I created a visual display where flowers grow towards the sun, which is controlled by the mouse cursor. As each flower reaches a specific height, it blooms, showcasing the transition from growth to flowering. Each flower has its own maximum height and growth rate, randomly generated each time the sketch runs. The project exhibits properties of autonomous agents through seeking behavior, with the sun acting as the target and the plants as the vehicles (a generic sketch of this steering step follows).
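The seeking behavior mentioned above follows the classic Nature of Code pattern; this sketch is illustrative rather than the project’s actual code:

function seek(vehicle, target) {
  // desired velocity points from the vehicle straight at the target
  let desired = p5.Vector.sub(target, vehicle.position);
  desired.setMag(vehicle.maxspeed);
  // steering force = desired velocity minus current velocity
  let steer = p5.Vector.sub(desired, vehicle.velocity);
  steer.limit(vehicle.maxforce); // cap the steering force
  return steer;
}

Here, the sun at the mouse position plays the role of the target, and each plant the role of the vehicle.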

Highlight of Code

grow() {
  if (this.height < this.maxHeightForBlossom) {
    this.height += this.growthRate;
  }
}

blossom() {
  if (this.height >= this.maxHeightForBlossom) {
    fill(255, 0, 255);
    stroke(255, 0, 255);
    ellipse(this.root.x, this.root.y - this.height - 20, 20, 20);
  }
}

These two methods belong to the Plant class.

grow() represents the process of the plant growing over time. The if statement checks whether the current height of the plant (this.height) is less than the maximum height for blossoming (this.maxHeightForBlossom). If so, the plant’s height is increased by the growth rate (this.height += this.growthRate). This means that the plant grows a certain amount with each frame of the animation until it reaches the maximum height.

blossom() is responsible for checking whether the plant has reached the height for blossoming and displaying a flower if it has. Similar to before, it checks whether this.height is greater than or equal to this.maxHeightForBlossom. If so, it draws a flower at the root, adjusted for the plant’s height (this.root.y - this.height - 20). The 20 positions the flower slightly above the top of the plant stem.

Embedded sketch

Link to Sketch

Future Ideas

  • The initial idea for this project was to depict realistic plant growth with the root firmly anchored at the bottom of the canvas, creating a curving stem as the plant grows towards the sun. However, this proved to be a challenge to implement.

Midterm Final: Rising Tide of Change by Abdelrahman Mallasi

Concept

In today’s age, climate change remains one of our most pressing concerns, and our role as humans in this issue is undeniable. This generative art project aims to convey this message, highlighting our influence on our environment.

The dynamic waves in the visualization, expanding outwards when the mouse is pressed, are symbolic of the rising sea levels. According to NASA, sea level rise is mainly due to two factors related to global warming: from “melting ice sheets and glaciers, and the expansion of seawater as it warms.” The rise in sea levels is at an unprecedented scale. According to Climate.org, the average sea level has risen 8–9 inches since 1880, and the rate has more than doubled from 0.06 inches per year throughout the previous century to 0.14 inches per year over 2006–2015 alone. Rising sea levels can damage important local structures like roads, bridges, and power plants. Almost everything, from transportation to waste management, is at risk due to higher sea levels.


[Figure: satellite sea level data, 1993–present. Data source: satellite sea level observations. Credit: NASA’s Goddard Space Flight Center]

At the heart of the piece lies a yellow circle – the sun, representing hope and the promise of a new day. Placing the sun in the center implies that despite the dangers and challenges, there’s a core of hope and a chance for a new beginning. This balances the alarming nature of the design with optimism.

The colors of the waves transition from blue to green to brown, encompassing the elements of water, vegetation, and the desert, paying homage to the UAE’s natural landscapes as it hosts the 2023 COP convention. COP, or Conference of the Parties, is an annual global conference under the United Nations that brings together nations to discuss and advance international responses to climate change.

The waves do not move in a smooth, organized, calm manner. Instead, they overlap, some have bigger amplitudes than others, and their speed increases over time. This utilizes oscillations and creates a chaotic motion, representing the uncertain reality of climate change. Each time the sketch runs, the radius and the number of waves change. This variability represents the unpredictable and changing nature of our environment, highlighting that the state of our climate constantly evolves. I’ve also added a drag force to the waves to create a realistic undercurrent effect (a sketch of such a force follows below).
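Drag forces of this kind typically take the Nature of Code form: pointing opposite to the velocity, with magnitude proportional to speed squared. A minimal sketch, with the coefficient as an assumed value:

function dragForce(velocity) {
  let c = 0.05;                    // drag coefficient (assumed)
  let drag = velocity.copy();
  drag.normalize();
  drag.mult(-1);                   // drag acts opposite to motion
  drag.mult(c * velocity.magSq()); // magnitude scales with speed squared
  return drag;
}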

In my opinion, the most important aspect of this piece is its interactivity. The fact that the user, through their control of the mouse, directly influences the expansion of the waves and the size of the sun highlights the significant role humans play in the climate crisis. Whether we’re causing the sea levels to rise or projecting more hope for the future, the power is quite literally in our hands. This concept is further reinforced with the text embedded within the sketch “The future is in your hands”, emphasizing that every action counts.

The design of this project was partly inspired by this animation by @mr_praline on Instagram.

Embedded Code

Link to Sketch

Pen Plotter

Highlight of Code

function setup() {
 ....
  // Define the start and end colors for the waves.
  let colorStart = color("#38A7DD");
  let colorEnd = color("#745807");

  // Loop to create individual wave objects and push them to the waves array.
  for (let i = 0; i < numWaves; i++) {
    // Get a value between 0 and 1, mapping the current wave number i
    let lerpAmt = map(i, 0, numWaves, 0, 1);

    // Interpolate between the start and end colors based on the mapped value.
    let waveColor = lerpColor(colorStart, colorEnd, lerpAmt);

    waves.push(new Wave(i, centerRadius + i * 5, i % 2 === 0 ? 0.01 : -0.01, i / 10, waveColor));
  }
}

I like how I used the lerpColor function to create a nice gradient between the colors of the waves. First I defined a starting color and an ending color, then mapped the value of i (the index of the current wave, between 0 and the total number of waves) to a value between 0 and 1, and used that value in the lerpColor function.

I’m also proud of how I used the parameters of the wave function, especially the second and third parameters:

– centerRadius + i * 5: this sets the radius of the wave being slightly bigger than the previous one by an increment of 5 units.

– i % 2 === 0 ? 0.01 : -0.01: this ternary is equivalent to “if i % 2 === 0, return 0.01; otherwise return -0.01”. It checks whether i is even or odd: even values result in a direction of 0.01 and odd values give -0.01, ensuring alternating directions for the waves.

Progress, Mess-Ups, and Challenges

In the first iterations of this project, the waves had a more uniform, predictable, and smoother motion, with no sense of randomness. Since then, I’ve introduced an element of randomness, more chaotic movement, and a drag force, all to mimic the reality of climate change. Below is how it looked before.

I tried to make the sun and waves shrink if the mouse is released, but it didn’t work out. I tried different versions, but they all seemed to fail, and I’m not sure how to fix it. The sketch below demonstrates the issue.

Manipulating and keeping track of all the different parameters in order to make the sketch’s animation flow seamlessly was one of the most challenging and confusing parts of this project.

Future Ideas

  • It could be cool if the waves and sun had a breathing effect
  • Introduce aquatic creatures as random walkers
  • I wanted the piece to reinforce its homage to the UAE. An idea is to outline UAE landmarks at the bottom of the sketch, perhaps the Burj Khalifa.
  • Emphasize the desert more by creating sand particles being blown off the dunes, maybe through the emitters we learned in Particle Systems

References

NASA article: https://climate.nasa.gov/vital-signs/sea-level/#:~:text=Global%20sea%20levels%20are%20rising,of%20seawater%20as%20it%20warms.

Climate.org: https://www.climate.gov/news-features/understanding-climate/climate-change-global-sea-level#:~:text=Global%20average%20sea%20level%20has,4%20inches)%20above%201993%20levels.

Midterm Progress #2: The Rising Tide of Change by Abdelrahman Mallasi

Concept:

In today’s age, climate change remains one of our most pressing concerns, and our role as humans in this issue is undeniable. This generative art project aims to convey this message, highlighting our influence on our environment.

The dynamic waves in the visualization, expanding outwards when the mouse is pressed, are symbolic of the rising sea levels. At the heart of the piece lies a yellow circle – the sun, representing hope and the promise of a new day. Placing the sun in the center implies that despite the dangers and challenges, there’s a core of hope and a chance for a new beginning. This balances the alarming nature of the design with optimism.

The colors of the waves transition from blue to green to brown, encompassing the elements of water, vegetation, and the desert, paying homage to the UAE’s natural landscapes as it hosts the 2023 COP convention. COP, or Conference of the Parties, is an annual global conference under the United Nations that brings together nations to discuss and advance international responses to climate change.

The waves do not move in a smooth, organized, calm manner. Instead, they are overlapping, some have bigger amplitudes than others and their speed increases over time. This creates a chaotic motion, representing the uncertain and messy reality of climate change.

In my opinion, the most important aspect of this piece is its interactivity. The fact that the user, through their control of the mouse, directly influences the expansion of the waves and the size of the sun highlights the significant role humans play in the climate crisis. Whether we’re causing the sea levels to rise or projecting more hope for the future, the power is quite literally in our hands. This concept is further reinforced with the text embedded within the sketch “The future is in your hands”, emphasizing that every action counts.

The design of this project was partly inspired by this animation by @mr_praline on Instagram.

Embedded Code

 

Future Direction:

1) I want to enhance the immersive experience by integrating sound – maybe a gentle lapping of waves or the whistles of wind.
2) I want the piece to reinforce its homage to the UAE. An idea is to outline UAE landmarks at the bottom of the sketch, perhaps the Burj Khalifa.
3) Emphasize the desert more by creating sand particles being blown off the dunes, maybe through the emitters we learned in Particle Systems
4) Add a drag force to the waves to represent water undercurrents
5) Maybe introduce aquatic creatures as random walkers