Here is a follow-up to my last 2 posts about Open Ended Evolution. This time I would like to talk about energy arrays as a solution to 2 issues of the simulation:
- The environment’s map cannot be modified
- The individual agents move at random – they cannot decide where to go.
Introducing energy arrays is also a nice way to generalize the interface (as in Interface Theory of Perception) of the agents. It allows an agent to potentially detect numerous actions with only a few sensors. It goes like this:
Imagine that there are different ways for an agent to emit energy in the simulation. By emitting light, sound, heat, smells, other kinds of vibrations… it does not matter what we call them; what is important is that these forms of energy have different values for the following properties: speed of transmission (how fast it travels from here to there), inverse-square law (how fast the intensity decreases with distance) and dissipation (how long it takes to disappear from a specific point). In reality these values are linked, but in a simulation we don’t need to follow the laws of physics, so we just decide the values ourselves.
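To make this concrete, here is a minimal sketch (in Python, with names I made up for this post) of what a form of energy could look like: a small object holding only those 3 properties, with one instance per form. The numbers are arbitrary examples, chosen freely rather than derived from physics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Energy:
    """One form of energy, defined only by how it spreads and fades."""
    speed: float        # cells traveled per iteration
    falloff: float      # exponent for intensity decrease with distance
    dissipation: float  # fraction of intensity lost per iteration at a point

# Arbitrary example values; the simulation is free to pick anything here,
# with no obligation to match real-world physics.
SOUND = Energy(speed=4.0, falloff=2.0, dissipation=0.5)
HEAT  = Energy(speed=1.0, falloff=2.0, dissipation=0.1)
SMELL = Energy(speed=0.5, falloff=1.0, dissipation=0.05)

FORMS = [SOUND, HEAT, SMELL]
```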
Everything an agent does (moving, eating, mating, dying or just standing there) emits energy in different forms. For example, say you have 3 forms of energy and represent them with an array [w1,w2,w3]. Each cell of the map (environment) has such an array. A specific individual doing a specific action in a cell adds a set of values to that cell’s energy array, which then propagates to the neighboring cells according to the properties of each form of energy. For example, a lion eating might emit a lot of sound, a bit of heat and a strong smell: [+5,+1,+3]. These values are decided by the genes of the individual, so each “species” will have different values for each action. And if you remember, the concept of “species” is just an emergent property of the simulation, so really each individual of a species might have slightly different values for the array of each action.
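Here is one possible sketch of those mechanics, assuming a simple grid where each cell holds one intensity per form. The dissipation and spread rates are invented numbers, and a real implementation could propagate energy in a more refined way:

```python
# Hypothetical sketch: each cell holds an energy array [w1, w2, w3].
# An action deposits its genome-defined emission vector into the agent's
# cell; each iteration, intensities dissipate and leak to the 4 neighbors.

GRID_W, GRID_H = 8, 8
N_FORMS = 3                       # e.g. sound, heat, smell
DISSIPATION = [0.5, 0.1, 0.05]    # fraction lost per iteration, per form
SPREAD = [0.2, 0.1, 0.05]         # fraction sent to each neighbor (<= 0.25)

# energy[y][x] is the energy array of cell (x, y)
energy = [[[0.0] * N_FORMS for _ in range(GRID_W)] for _ in range(GRID_H)]

def emit(x, y, emission):
    """Add an action's emission vector to the cell's energy array."""
    for f in range(N_FORMS):
        energy[y][x][f] += emission[f]

def step():
    """One propagation step: dissipate in place, then leak to neighbors."""
    nxt = [[[0.0] * N_FORMS for _ in range(GRID_W)] for _ in range(GRID_H)]
    for y in range(GRID_H):
        for x in range(GRID_W):
            for f in range(N_FORMS):
                w = energy[y][x][f] * (1.0 - DISSIPATION[f])
                leak = w * SPREAD[f]          # amount sent to each neighbor
                nxt[y][x][f] += w - 4 * leak  # what stays in place
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
                        nxt[ny][nx][f] += leak
    energy[:] = nxt

emit(3, 3, [5.0, 1.0, 3.0])  # a lion eating: loud, a bit warm, smelly
step()
```

Here the lion’s emission vector [+5,+1,+3] is deposited in one cell, then each step dissipates part of it and leaks some to the 4 neighbors: a crude stand-in for speed of transmission and the inverse-square law.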
Now let’s solve the 2 issues mentioned earlier.
Making the environment modifiable
Each form of energy has 3 properties: speed of transmission, inverse-square law and dissipation. The values of these properties are different for each form of energy. But we can also make these values differ between regions of the environment: after all, sound and light behave differently in water, air or ground.
Even better, we can allow the agents to change these values, which is equivalent to modifying the environment. In the real world, if you’re a spider, you can build a web that will transmit vibrations to you in a fast and reliable way. Or you can make a hole in the ground, to make yourself invisible to others. This is what the modifiable energy properties allow us to do in the simulation.
Now if an agent’s speed per iteration depends on its genes but also on modifiable environmental properties, it becomes possible for a prey to slow down its predator by modifying the environment, or for a predator to trap its prey. The equivalent of a squid inking a predator, or an antlion trapping ants. Which leads us to the next point:
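As a toy illustration of that idea (the names and the drag model are my own invention, not a fixed design), a cell could carry a modifiable property that divides an agent’s genetic speed:

```python
# Hypothetical sketch: a per-cell environmental property that agents can
# modify. Effective speed mixes genetic speed with the cell's "drag",
# so a prey can slow a predator down by raising the drag of a cell.

class Cell:
    def __init__(self):
        self.drag = 1.0  # 1.0 = normal terrain; higher values slow agents

def effective_speed(genetic_speed, cell):
    """Speed per iteration: genes modulated by the local environment."""
    return genetic_speed / cell.drag

cell = Cell()
fast_predator = 3.0
print(effective_speed(fast_predator, cell))  # 3.0 on normal terrain

cell.drag = 3.0  # the prey "inks" the cell, tripling its drag
print(effective_speed(fast_predator, cell))  # now only 1.0
```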
Giving agency to the agents
We don’t want our agents to move simply at random, and we want them to be able to choose whether or not to modify the environment. Energy arrays offer a solution. Back to the example: if you have 3 forms of energy, your agents can have at most 3 types of sensors (eye, ear and nose, for example). Say that each sensor takes values from the 4 neighboring cells (front, back, left, right) and transforms them into a 2D vector (coordinates: [x = right – left, y = front – back]).
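That transformation is tiny to write down; a sketch, using the exact coordinates defined above:

```python
# One sensor reading: compare a single energy form's intensity in the
# 4 neighboring cells and collapse it into a 2D direction vector.

def sense(front, back, left, right):
    """Return [x, y] = [right - left, front - back]."""
    return [right - left, front - back]

# A sound source ahead and slightly to the right yields a vector
# pointing in that direction:
print(sense(front=4.0, back=1.0, left=0.0, right=2.0))  # [2.0, 3.0]
```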
The sensor map/perceptual interface that we defined 2 posts ago can be rebuilt by adding these new sensor types and mapping them to motion actions: if the vector for sound points in one direction, go in the opposite direction, for example. This map is also encoded in genes, so the motion is not an individual choice; but now our agents are no longer moving at random. We can also add “modification actions”: if the sensors read these values, apply that modification to the environment.
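One hypothetical way to encode such a map in genes: give each sensor a signed weight and move along the weighted sum of sensor vectors, so a negative weight means “flee what this sensor detects”:

```python
# Hypothetical sketch of a gene-encoded reaction map: each sensor's
# [x, y] vector is scaled by a genome-defined weight and summed.

def move_vector(sensor_vectors, weights):
    """Combine per-sensor [x, y] vectors using gene weights."""
    x = sum(w * v[0] for v, w in zip(sensor_vectors, weights))
    y = sum(w * v[1] for v, w in zip(sensor_vectors, weights))
    return [x, y]

ear  = [2.0, 3.0]    # sound ahead and to the right
nose = [-1.0, 0.0]   # smell to the left
genes = [-1.0, 0.5]  # flee sound, mildly approach smell
print(move_vector([ear, nose], genes))  # [-2.5, -3.0]
```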
Note that sensors cost energy, and if you can sense a large range of values with a given sensor, it will cost you a lot of energy. Not only must you earn that energy by attacking and eating other agents, but the energy you spend “sensing” around is dissipated into the environment, making you more detectable to potential predators. In short, having lots of precise sensors is not a viable solution. Instead you must settle for a heuristic that is “good enough”, but never perfect (local fitness).
The implementation of energy arrays and properties requires little effort: in terms of programming, only one new class “energy” with 3 variables and 3 instances, plus some modifications to existing classes. But the benefits are huge: our agents now have many more potential behaviors (hunting, hiding, building traps, running away, defense mechanisms, even indirect communication between agents, which may lead to group behavior), all of that in a rather simple simulation, still based on perceptual interfaces. We also have much more potential for creating environmental niches, since the environment itself can be modified by agents. A big regret is that, visually speaking, it still just looks like little squares moving on a white screen – you have to observe really closely to understand what is going on, and what the agents are doing may not be obvious. Is it doing random stuff? Is it building a trap?
One serious concern could be that too much is possible in this simulation. With so many possibilities, how can you be sure that meaningful interactions like eating and mating will ever appear, or be maintained between agents? A first element of an answer is that we start simple: at the beginning of the simulation there is only one type of agent, with no sensors and no actions at all. Every increase in complexity comes from random mutations, so complex agents will only survive if they can actually interact with the rest of the environment. A second element is that a “species” of agents cannot drift too far from what already exists. If you suddenly change your actions or the way they emit energy into the environment, you might confuse predators, but you will also confuse your potential mates and lose the precious advantages they bring (like genome diversity and a reduced cost of producing offspring). Furthermore, as explained 2 posts ago, a species that is “too efficient” is not viable in the long term and will disappear.
Next time I could talk about how generalized perceptual interfaces might lead to sexual dimorphism, or, much better, give the first results of the actual simulation.