Introduction
I was stuck for ideas for a while. I originally thought about replicating something from the nature around me, then I thought back to the previous features I enjoyed most: Conway’s Game of Life, for its simplicity; Boids, because I found the movement satisfying; and creating 3D simulations. The problem was that I didn’t know where to start, so I began searching online for other interactive installations, hoping to find something I liked, such as other teamLab installations or some of the previous installations we viewed in class.
Then, as I began to think about these features more deeply, I remembered a class where we used ml5.js to create a sketch with hand recognition. I immediately knew I wanted to use that in whatever I ended up creating.
Inspiration
During my search I found an installation called Pulse Topology by the artist Rafael Lozano-Hemmer. It’s a large array of pulsing LED lightbulbs hanging from the ceiling at varying heights, so that it resembles a noise map. In some of the videos he shared, I did not like the rate at which the bulbs pulsed; it felt too aggressive, when visually the curves suggest something calmer.

Pulse Island was made by the same artist (I had no idea this was made by the same person until I began searching for an image while writing this). I liked the rate of pulsing in that installation more.

In my search for hand gesture recognition examples I stumbled upon this GitHub repo. It wasn’t made in p5; instead it was made using Three.js and something else. It used gesture recognition to rotate objects, zoom in and out, and cycle between different variations.


Seeing gesture recognition used to rotate an object reminded me of a 2013 SpaceX video showing gesture-based design. However, after watching that video and interacting with the GitHub repo above, I decided I wouldn’t use gestures as just another way to rotate an object because, ironically, it felt unnatural. I am, and I imagine most people are, so used to a keyboard or mouse/trackpad for rotating something in 3D that anything else felt slower and less precise.
In the end I decided I liked the topology installation and wanted something like it in my sketch, but rather than lights hanging from the ceiling I wanted something more minimalistic: dots at ground level. Rather than pulsing, they would move up and down, with their brightness changing accordingly. Secondly, gesture recognition is nice at first, but an effect I found much more exciting was seeing your hands in 3D space inside the sketch. A few VR headsets, such as the Meta Quest 3, have hand tracking that lets you see your hands in a virtual simulation, and I wanted to get as close to that as I could using only the webcam, hoping that maybe the ml5.js library could be enough.

Meta Quest 3 Hands
Sketch
Watch the sketch demo on YouTube
I couldn’t get the embedded sketch to work properly because of a bug when putting it into the web editor.
Milestones
Milestone 1: Programming hands with ml5.js in a 3D space, but the result was flat.

I tried to use the depth estimation method in ml5.js to have the hands move back and forth. This got closer to the result I wanted, but it couldn’t handle rotation of the hand.
I needed to find an alternative library that could handle this. Since it was my first time doing this, I needed help from AI to program it, but I finally found something that worked.
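For the record, the kind of depth heuristic I was experimenting with can be sketched as plain math: a 2D tracker’s landmarks shrink with distance, so the apparent hand span can be inverted into a rough z estimate. Everything below is my own illustrative sketch, not the ml5.js API; the function names and the reference span are made up:

```javascript
// Rough depth estimate from 2D hand landmarks (hypothetical helper,
// not part of ml5.js). Idea: the apparent distance between two stable
// keypoints shrinks roughly in proportion to 1 / depth.

// Euclidean distance between two 2D keypoints {x, y}.
function span(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Estimate depth in arbitrary units. `refSpan` is the wrist-to-middle-MCP
// distance measured once, when the hand is at a known reference depth `refZ`.
function estimateDepth(wrist, middleMcp, refSpan, refZ) {
  const s = span(wrist, middleMcp);
  if (s === 0) return Infinity;   // degenerate frame, hand not visible
  return refZ * (refSpan / s);    // half the span means twice the depth
}

// Example: at the reference pose the span was 100 px at depth 50.
// If the span drops to 50 px, the hand reads as twice as far away.
const z = estimateDepth({x: 0, y: 0}, {x: 50, y: 0}, 100, 50);
console.log(z); // 100
```

This also makes the failure mode visible: tilting the hand shrinks the apparent span exactly as moving it away does, so the estimate drifts under rotation.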
Milestone 2: MediaPipe Hand Landmarker

It’s hard to tell from an image alone, but with orbitControl() you can see that the hand does move back and forth.
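As a rough sketch of how the landmarker’s output lands in the sketch: MediaPipe returns normalized landmarks with x and y in [0, 1] (y pointing down) plus a relative z, which needs remapping into p5’s WEBGL space, where the origin sits at the canvas center. The mirroring and the `depthScale` constant below are my own tuning choices, not anything MediaPipe prescribes:

```javascript
// Map a MediaPipe hand landmark (normalized x, y in [0, 1], y pointing
// down, z roughly proportional to depth) into p5 WEBGL coordinates,
// where the origin is at the canvas center.
function landmarkToP5(lm, canvasW, canvasH, depthScale) {
  return {
    x: (0.5 - lm.x) * canvasW,  // mirror so the sketch behaves like a mirror
    y: (lm.y - 0.5) * canvasH,  // recenter vertically; y still points down
    z: -lm.z * depthScale       // scale relative depth into scene units
  };
}

// Example: a landmark at the center of the frame maps to the origin,
// with its depth scaled up into scene units.
const p = landmarkToP5({x: 0.5, y: 0.5, z: -0.25}, 640, 480, 400);
console.log(p); // { x: 0, y: 0, z: 100 }
```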
Milestone 3: Topology Map

I found that, combined, there were performance issues. I realized it was because rendering and calculations were done on the CPU, but if I could use a shader it would perform better.
I got help from AI to create a shader that achieves the same effect without hurting performance.
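The height-and-brightness animation can be prototyped on the CPU before moving it into the shader. Below is a simplified stand-in: layered sine waves take the place of the noise function, and the constants are illustrative, not the ones from my sketch:

```javascript
// CPU reference for the per-dot animation that later moved into a
// vertex shader. Layered sine waves stand in for 2D noise here.
function dotHeight(x, y, t) {
  return 0.6 * Math.sin(0.05 * x + t)
       + 0.4 * Math.sin(0.07 * y + 1.3 * t);  // range roughly [-1, 1]
}

// Brightness follows height: higher dots glow brighter.
function dotBrightness(h) {
  return 0.5 * (h + 1);                       // map [-1, 1] -> [0, 1]
}

// Sample one grid point at time t = 0: both sine terms are 0,
// so the dot sits at mid-height and mid-brightness.
const b = dotBrightness(dotHeight(0, 0, 0));
console.log(b); // 0.5
```

In the shader version the same per-dot math runs in the vertex shader, so the CPU only updates a time uniform once per frame instead of touching every grid point.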
Milestone 4: Bringing both together

The hands cause indentations in the topology, like those pin-screen toys. Again, it’s hard to tell from an image alone.
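A minimal sketch of the pin-screen idea, assuming each hand landmark presses a smooth dent into the base height; the Gaussian falloff and the constants are my own choices, not the exact shader code:

```javascript
// Pin-screen effect: every hand landmark subtracts a smooth, bell-shaped
// dent from the grid height, deepest directly under the landmark.
function dented(baseHeight, gridX, gridY, handPoints, radius, depth) {
  let h = baseHeight;
  for (const p of handPoints) {
    const d2 = (gridX - p.x) ** 2 + (gridY - p.y) ** 2;
    h -= depth * Math.exp(-d2 / (2 * radius * radius));
  }
  return h;
}

// A grid point directly under a landmark is pushed down by the full depth.
const h = dented(1, 0, 0, [{x: 0, y: 0}], 30, 0.5);
console.log(h); // 0.5
```

In the shader version the landmark positions go in as a uniform array, so the same falloff runs per-vertex on the GPU.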
Pending Features
So far I’ve completed what seems like most of the technical implementation, but there is one more feature I want to add: boids. I want a clenched fist to repel the boids, while an open hand with extended fingers attracts some of them, so they fly between your fingers and maybe twirl around them.
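A sketch of how that could work, under my own assumptions: an openness score decides the sign of the force, and the force falls off with squared distance. None of this is implemented yet; the names and constants are hypothetical:

```javascript
// Hand openness: mean fingertip distance from the palm, normalized by
// hand size, so it reads ~0 for a fist and ~1 for an open hand.
function openness(palm, fingertips, handSize) {
  const total = fingertips.reduce(
    (sum, t) => sum + Math.hypot(t.x - palm.x, t.y - palm.y), 0);
  return total / (fingertips.length * handSize);
}

// Force on a boid: attract when the hand is open (openness > 0.5),
// repel otherwise; magnitude falls off with squared distance.
function handForce(boid, hand, open, strength) {
  const dx = hand.x - boid.x, dy = hand.y - boid.y;
  const d2 = Math.max(dx * dx + dy * dy, 1);  // avoid division by zero
  const sign = open > 0.5 ? 1 : -1;
  return { fx: sign * strength * dx / d2, fy: sign * strength * dy / d2 };
}

// A fist to the right of the boid pushes it left (negative fx).
const f = handForce({x: 0, y: 0}, {x: 10, y: 0}, 0.1, 100);
console.log(f.fx); // -10
```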
I will also need to change the hand from a skeleton to a mesh, or some kind of model that more closely resembles a hand, like the ones seen in the VR headset shown earlier: something semi-transparent, but with the lighting from the topology below affecting the hand.
Reflection
I’m happy with how things have turned out, but there’s still much more work to be done. I did need to use AI for a couple of things: optimizing the topology grid, and switching from ml5.js to the MediaPipe Hand Landmarker, since it can output an estimate of the hand in xyz coordinates, whereas ml5.js could only give me x and y.