For some obscure reason I stumbled upon this paper the other day: “The user-interface theory of perception: Natural selection drives true perception to swift extinction” Download PDF here.
It’s 26 pages, published in 2009, but it’s worth the read; both the contents and the writing are great.
To summarize what interests me today, the author claims that these two commonly accepted statements are false:
– A goal of perception is to estimate true properties of the world.
– Evolution has shaped our senses to reach this goal.
These statements feel intuitively true, but the author convincingly argues that:
– A goal of perception is to simplify the world.
– Evolution favors fitness, which can be (and most probably is) different from “exactitude”.
I feel a strong link between these claims and my previous post about OEE. If you remember, in my imaginary world where light can become grass, there is no hardwired definition of species, and therefore two individuals meeting each other can never be sure of each other’s exact identity, including “species”, strengths and weaknesses. They can have access to specific properties through their sensors, but must rely on heuristics to guide their behaviour. One heuristic could be “slow animals usually have less energy than me, therefore I should attack and eat them”. But this is not an optimal rule; you could well meet a new individual which is slow so as to save energy for reproduction, and has more energy than you. You will attack them and die. But your heuristic just has to be true “most of the time” for you to survive.
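A minimal sketch of that heuristic, to make the failure mode concrete. The `Individual` class and its attributes are my own illustrative assumptions, not anything from the paper: only speed is observable, energy stays hidden.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    speed: float   # observable through sensors
    energy: float  # hidden: true strength is never directly accessible

def should_attack(me: Individual, other_speed: float) -> bool:
    """Heuristic: slow animals usually have less energy than me,
    so attack anything slower. Only the observable speed is used."""
    return other_speed < me.speed

me = Individual(speed=5.0, energy=10.0)
usual_prey = Individual(speed=2.0, energy=3.0)  # heuristic is right here
trap = Individual(speed=2.0, energy=50.0)       # slow to save energy for reproduction

print(should_attack(me, usual_prey.speed))  # True: attack pays off
print(should_attack(me, trap.speed))        # True: attack anyway, and die
```

The rule is wrong on `trap`, but as long as individuals like `usual_prey` are the common case, carriers of the heuristic still survive on average.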
The paper, which is not about OEE at all but about the real world, says this on page 2:
“(1)[these solutions] are, in general, only local maxima of fitness. (2) […] the fitness function depends not just on one factor, but on numerous factors, including the costs of classification errors, the time and energy required to compute a category, and the specific properties of predators, prey and mates in a particular niche. Furthermore, (3) the solutions depend critically on what adaptive structures the organism already has: It can be less costly to co-opt an existing structure for a new purpose than to evolve de novo a structure that might better solve the problem.”
You might recognize this as exactly the argument from my previous post. To achieve OEE, we want local fitness inside niches (1 and 2); we want evolution to be directed (3). For that, I introduced a simulated world where individuals do not have access to the exact, direct “identity” of others (2): what we may call, following this paper, a “perceptual interface”, which simplifies the world without representing it faithfully, and which can lead to terrible errors.
Why would perceptual interfaces be a key to OEE?
In most simulations that I have seen, an individual from species A can recognize any individual from species B, or from its own species A, with absolute certainty.
I suspect that often, this is hardcoded inside the program: “if x.species = A then …”. Even if B undergoes a series of mutations increasing its fitness, A might be able to keep up by developing corresponding counter-mutations – *because there is no choice*. A eats B. If B becomes “stronger” (more energy storage), only the strongest members of A will survive and reproduce, making the entire group of A stronger. If some members of B become weaker through mutation, they will die.
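Here is what that hardcoded recognition looks like as a sketch (class and attribute names are mine, for illustration): the species label is directly readable, so no mutation of B can ever hide it from A.

```python
from dataclasses import dataclass

@dataclass
class Animal:
    species: str   # directly readable by everyone: no perceptual interface
    energy: float

def is_food(predator: Animal, other: Animal) -> bool:
    # A recognizes B with absolute certainty, whatever B's traits are.
    return predator.species == "A" and other.species == "B"

a = Animal("A", 10.0)
weak_b = Animal("B", 1.0)
strong_b = Animal("B", 100.0)

print(is_food(a, weak_b))    # True
print(is_food(a, strong_b))  # True: getting stronger changes nothing
```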
Play the same scenario with a perceptual interface: A only detects and eats individuals that have a maximum energy storage of X. Usually these individuals are from species B. If some B mutate to get stronger, as far as A is concerned, they stop being food. They are not recognized as “B”. To survive, A might mutate to store more than X energy AND detect the new value of energy corresponding to B, but any other mutation is equally likely to help the survival of A: maybe detecting only lower levels of energy would work, if there are weak species around. Maybe exchanging the energy sensor for a speed sensor would help detecting Bs again, or any other species.
What if B becomes weaker? As far as A is concerned, B also stops being food, because A’s sensors can only detect a certain range of energy. Not only does B have several ways to “win” over A, but A also has several ways to survive despite B’s adaptations: by adapting to find B again, or by changing its food source.
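A minimal sketch of this perceptual-interface rule. I assume a sensor that only registers energy within a fixed band; the band limits and names are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Animal:
    species: str   # exists in the world, but is invisible to sensors
    energy: float

# A's energy sensor only registers readings inside this band.
SENSOR_MIN, SENSOR_MAX = 5.0, 20.0  # SENSOR_MAX plays the role of "X"

def is_food(other: Animal) -> bool:
    # A eats whatever its sensor registers; the species label is never read.
    return SENSOR_MIN <= other.energy <= SENSOR_MAX

b = Animal("B", 15.0)
stronger_b = Animal("B", 30.0)  # mutated to store more energy
weaker_b = Animal("B", 2.0)     # mutated to store less energy

print(is_food(b))           # True: still registered as food
print(is_food(stronger_b))  # False: as far as A is concerned, no longer "B"
print(is_food(weaker_b))    # False: escapes detection from below
```

Both mutations move B out of A’s perceptual interface, even though nothing about B’s “true identity” changed.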
You might object that the real world does not work this way. A cat will chase mice even if they get slower.
Or will they? Quite a lot of animals have actually evolved so as not to be detected by their predators, using tactics involving slow motion, whether that means moving slower in general (like sloths) or in specific situations (playing dead).
In simulated worlds, going faster / becoming stronger is usually the best way to “win” at evolution.
By introducing perceptual interfaces, we allow the interplay between individuals or “species” to be much richer and more original. What is the limit? If you have ever heard of an OEE simulation with perceptual interfaces, I would be very happy to hear about it. All the papers I found about simulated perceptual interfaces were purely about game theory.
In one or two posts, I will talk about how to make my model more fun and general, by overcoming some current shortcomings in a programmatically elegant way. I’m not only theory-talking, I’m implementing too, but slowly.