The whole reductionism vs. emergence thing really clicked for me this week, especially after coding that self-avoiding creature.
Flake talks about how knowing what a single ant does won’t tell you anything about what an ant colony can do. That’s exactly what I experienced with my sketch. Each point in my creature just checks “can I move here without hitting myself?” Super simple rule. When you watch it grow, though, you get these weird tentacle formations and organic shapes that I definitely didn’t program. The complexity just emerges from repeating that one simple check.
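Stripped down, the rule is small enough to fit in a few lines. Here’s a quick standalone Python toy of the idea (not the actual sketch code; the grid size, the growth_chance knob, and the grow_creature name are all just stand-ins):

```python
import random

# Toy version of the creature: a set of occupied grid cells, where each active
# "tip" repeats one check before growing: can I move here without hitting myself?
def grow_creature(steps=2000, grid=80, growth_chance=0.5):
    start = (grid // 2, grid // 2)
    occupied = {start}   # every cell the creature already fills
    tips = [start]       # growth points that are still alive

    for _ in range(steps):
        if not tips:     # every tip is boxed in, so the creature freezes
            break
        x, y = random.choice(tips)
        if random.random() > growth_chance:
            continue     # this tip skips a turn (the growth-probability knob)
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < grid and 0 <= y + dy < grid]
        # the one simple rule: only step into cells the creature doesn't occupy
        free = [cell for cell in neighbors if cell not in occupied]
        if free:
            new_cell = random.choice(free)
            occupied.add(new_cell)
            tips.append(new_cell)
        else:
            tips.remove((x, y))   # trapped tip: it just stops growing
    return occupied

if __name__ == "__main__":
    body = grow_creature()
    print(f"creature ended up with {len(body)} cells")
```

There’s no shape logic anywhere in there, just that one check repeated thousands of times, and the branches and dead ends fall out of it.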
What got me thinking was his point about nature using the same rules everywhere. The collision detection in my code works the same way for anything avoiding anything else. Planets avoiding black holes, people maintaining personal space in crowds, cells not overlapping during growth. One rule, infinite applications.
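The check itself is tiny and completely agnostic about what it’s checking. Something like this (the too_close name and the numbers are invented for the example):

```python
import math

# One generic rule: is this mover too close to any obstacle?
# Nothing in here knows whether the points are planets, people, or cells.
def too_close(position, obstacles, min_distance):
    px, py = position
    return any(math.hypot(px - ox, py - oy) < min_distance
               for ox, oy in obstacles)

print(too_close((0, 0), [(3, 4)], 6))   # True: personal space violated in a crowd
print(too_close((0, 0), [(3, 4)], 2))   # False: two cells with room to spare
```

Swap in different coordinates and a different min_distance and the story changes, but the rule never does.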
The part about computers blurring the line between theory and experimentation felt super relevant too. When I was debugging my creature, I was literally experimenting with artificial life. Tweaking the growth probability or cell size and watching how behavior changes feels more like running biology experiments than writing traditional code. You make a hypothesis (“smaller cells will create more detailed shapes”), run it, observe what actually happens (nope, just slower performance), adjust and repeat.
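That loop is concrete enough to write down. Here’s a rough sketch of it, reusing the grow_creature toy from above (the run_experiment harness and the growth_chance test values are made up, and final size is just one easy thing to measure):

```python
# Hypothesis -> run -> observe, like a tiny lab notebook.
# Assumes grow_creature from the toy sketch earlier.
def run_experiment(growth_chances, trials=10, steps=2000):
    for chance in growth_chances:
        sizes = [len(grow_creature(steps=steps, growth_chance=chance))
                 for _ in range(trials)]
        avg = sum(sizes) / len(sizes)
        print(f"growth_chance={chance:.2f} -> average final size {avg:7.1f} cells")

# Prediction first, then the measurement gets the final word.
run_experiment([0.25, 0.50, 0.90])
```

Whatever numbers come out, the shape of the process is the same: prediction, trial, measurement, repeat.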
I’m curious about the “critical region” he mentions between static patterns and chaos. My creature usually grows until it gets trapped and can’t find new space. That feels like hitting some kind of critical point where the system freezes. If I added a little randomness that let it occasionally break its own rule, could it escape those dead ends? And would it still count as “self-avoiding” then?
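Here’s roughly what that could look like, again as a toy rather than the real sketch (the escape_chance knob is completely made up):

```python
import random

# Same toy growth loop as before, plus a small chance that a trapped tip
# "cheats" and steps onto a cell the creature already occupies.
def grow_with_escape(steps=2000, grid=80, escape_chance=0.02):
    start = (grid // 2, grid // 2)
    occupied = {start}
    tips = [start]

    for _ in range(steps):
        if not tips:
            break
        x, y = random.choice(tips)
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < grid and 0 <= y + dy < grid]
        free = [cell for cell in neighbors if cell not in occupied]
        if free:
            new_cell = random.choice(free)          # normal self-avoiding move
        elif neighbors and random.random() < escape_chance:
            new_cell = random.choice(neighbors)     # rule broken: step onto itself
        else:
            tips.remove((x, y))                     # stay strict: this tip dies
            continue
        occupied.add(new_cell)
        tips.append(new_cell)
    return occupied
```

With escape_chance at zero it’s the same strict rule as before; anything above zero trades a little of that purity for the chance to keep growing, which is exactly the trade-off I’m not sure about.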