
Working with emergence

Alexey Yurchenko edited this page Jan 3, 2026 · 1 revision

Emergence can be seen when you consider rules as agents.

Imagine a class, some students in front of a blackboard.

They are all scribbling on it. Right now, the students are all working on their own part of the board. Everything is more or less clear to you, an observer -- all neat, disentangled, cause-and-effect clearly traced. You can tell very easily who caused the change on that piece of the board, and who did it on this one.

Each student is executing their own little algorithm, but all follow the same general procedure: they look at their bit of the board, do their calculation, and write the result back, maybe erasing something beforehand.
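That per-student procedure -- look, calculate, write back -- can be sketched in a few lines of Python. Everything here (the board as a dict, the region names, the calculations) is invented for illustration; none of it is Wirewright's actual machinery:

```python
# Illustrative sketch only: the board as a shared dict, each student as a
# tiny look-calculate-write procedure. All names are made up.

board = {"alice_area": 3, "bob_area": 10}

def alice_step(board):
    value = board["alice_area"]    # look at her bit of the board
    result = value * 2             # do her calculation
    board["alice_area"] = result   # write back, erasing the old value

def bob_step(board):
    value = board["bob_area"]      # Bob's region is disjoint from Alice's,
    board["bob_area"] = value - 1  # so cause and effect stay easy to trace

alice_step(board)
bob_step(board)
print(board)  # {'alice_area': 6, 'bob_area': 9}
```

Because the regions are disjoint, you can still attribute every change to exactly one student.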

Now, let the students form groups. They will cooperate: each student helps the others in their group whenever they see a problem on the blackboard that they can solve.

It now becomes harder and harder to tell who is solving the problem. In group 1, it's not Alice, nor Bob: they're both solving the problem. Group 1 is solving the problem. If you watch how the problem gets solved, you'll see behavior that is much more complex and nonlinear. This assumes, of course, that Alice and Bob aren't chained to or dependent on each other: they don't wait for each other to finish. It's not that Alice writes an equation, Bob solves it, and then Alice writes another one. It's that both do their thing on their bit of the board, simultaneously, entangled.

Mapping back: students are rules, a student writing on the board is rule application, the board is the world, and groups of students operating on the same bit of the board are rule systems. Rule systems, in Wirewright's parlance, produce emergence.
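Here's a minimal sketch of that mapping, with made-up names: a rule as a (pattern, action) pair, the world as a dict, a rule system as a list of rules applied each tick. This is not Wirewright's real API, just the shape of the idea:

```python
# Illustrative sketch only: a rule is a (pattern, action) pair, the world is
# shared state, and a rule system is a list of rules applied each tick.

world = {"x": 1}

# Two rules that happen to work on the same bit of the world -- a rule system.
double_odd = (lambda w: w["x"] % 2 == 1,          # pattern
              lambda w: w.update(x=w["x"] * 2))   # action
inc_small  = (lambda w: w["x"] < 10,
              lambda w: w.update(x=w["x"] + 1))

rule_system = [double_odd, inc_small]

def tick(world, rules):
    # One tick: every rule whose pattern the world excites gets applied.
    for pattern, action in rules:
        if pattern(world):
            action(world)

for _ in range(3):
    tick(world, rule_system)
print(world["x"])  # 14
```

Neither rule alone produces that trajectory (1 → 3 → 7 → 14); it comes out of the pair's interleaving on the same value, which is the entanglement the analogy describes.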

You can get fairly complex behavior out of rule systems -- behavior that is hard, sometimes impossible, to analyze: to split into "signal sources", to disentangle. This is especially true if you are an observer inside the world: a little character living on the board where Alice and Bob are busy solving their problem.

Effectively, emergence is what the execution of a distributed algorithm by a rule system looks like "from the inside".

The amount of surprise increases with the number of rules, because the number of ways the rules can interact with each other on the same bit of the board increases, and with it the potential for rule applications. The more interaction there is, the more of the neighboring state space is explored; and since rules tend to cluster in state space, this will likely trigger even more rule applications, and so on.

Rule systems can form spontaneously, because -- and this is where the board analogy falls apart -- rule agents, humph, don't have agency! They're not like Alice and Bob, moving their arms intentionally and thinking out loud. Their application depends on what's written on the board -- whether that excites the rule's "sensory organ", its pattern, or not. Some rules are negative: they look for what doesn't excite their pattern, so to speak, and work on that.
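Here's a toy sketch of that pattern-driven application, including a negative rule. The board layout and rule logic are invented for illustration, not taken from Wirewright:

```python
# Illustrative sketch only: rules fire purely on what's written on the board.
# fill_rule is a positive rule (a zero cell excites its pattern); reset_rule
# is a negative rule (it fires when NO cell excites that pattern).

board = {"cells": [0, 5, 0, 7]}

def fill_rule(board):
    # Positive: look for a cell matching the pattern "is zero" and fill it.
    for i, v in enumerate(board["cells"]):
        if v == 0:
            board["cells"][i] = 1
            return True
    return False

def reset_rule(board):
    # Negative: fires only when the pattern finds nothing left to match.
    if all(v != 0 for v in board["cells"]):
        board["cells"] = [0] * len(board["cells"])
        return True
    return False

for _ in range(3):
    fill_rule(board)
    reset_rule(board)
print(board["cells"])  # [1, 0, 0, 0]
```

Nobody "decided" to wipe the board; the reset rule fired spontaneously the moment the state stopped exciting the zero-cell pattern.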

Another way the analogy falls short is that with rules, proximity is semantic, not spatial as it is with the blackboard. It's not that Alice and Bob write on the same piece of the board; it's that their sensory organs happen to be in such a configuration at this tick of time that the parts of the board they perceive overlap. Today they overlap, tomorrow they don't. Or maybe even by the next tick, as both rush to that part of the board to update it.
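Semantic proximity can be sketched as the overlap of what two patterns perceive at a given tick. The keys and worlds below are invented for illustration:

```python
# Illustrative sketch only: proximity between two rules is the overlap of
# what their patterns perceive this tick, not any spatial position.

def perceived(world, pattern_keys):
    # The bit of the world a rule's "sensory organ" covers right now.
    return {k for k in pattern_keys if k in world}

alice_pattern = {"heat", "light"}
bob_pattern   = {"heat", "sound"}

world_today    = {"heat": 3, "light": 1}
world_tomorrow = {"light": 1, "sound": 2}

overlap_today    = perceived(world_today, alice_pattern) & perceived(world_today, bob_pattern)
overlap_tomorrow = perceived(world_tomorrow, alice_pattern) & perceived(world_tomorrow, bob_pattern)
print(overlap_today, overlap_tomorrow)  # {'heat'} set()
```

Today the two rules are "close" (they both perceive `heat`); tomorrow the world has changed and they aren't close at all, even though neither rule moved anywhere.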
