States of information and attractors

Today I will just publish several old drafts that were lying around. I updated them a bit. There is no specific goal to this post, sorry!

INFORMATION

There are so many definitions of information, sometimes contradicting one another. I wonder why it is so difficult to agree on what it is and how it is used.

The other day at an AI conference, someone gave a one-hour talk about the future of a currently popular AI approach, and said the word information several dozen times.
He did not define it once. I was burning to ask: what do you mean by information? Half of your hypotheses sound like they rely on one definition, and the other half on a totally different one. Is your information something that exists in the world, or something that we can or must create by ourselves? Is it something that gets inside me through my senses, or something external I can only interact with? These questions are important, because depending on the answers, information can come in finite or infinite quantities, be absolute or relative, perfect or error-ridden, etc.

What would a naive view of information look like? Maybe something like this.

The world is full of information. Or rather, the world is full of potential information, of which only a small part can be extracted and actually used by agents.

For example, the softness of a pillow could inform you of its resting qualities. This is potential information. To access this potential information, you could touch the pillow: the interaction between the environment (the pillow’s stuffing) and your sensors (touch sensors in your finger) allows you to extract some of that information and maybe use it. Your interaction with the pillow is a loop: you apply an increasing force on the pillow, the stuffing applies a resisting force on your finger, and a small part of the energy is dissipated into the environment. It is really the interaction loop between the environment and you, the interplay of applied force and resisting force, that transforms the potential information into extracted information for you to use. The potential information in the whole world is absolute, infinite, and perfect. The information extracted from an interaction, on the other hand, is relative to the dynamics of your sensors, finite, and noisy.
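To make the loop concrete, here is a minimal Python sketch. It is entirely my own toy model (the stiffness constant, the probe depths, and the sensor noise are invented): the stuffing is reduced to a plain linear spring, and the agent extracts an estimate of the pillow’s softness by pressing it and reading a noisy touch sensor.

```python
import random

def pillow_resisting_force(displacement, stiffness=2.0):
    """The environment's side of the loop: the stuffing pushes back
    (modeled here as a plain linear spring, F = k * x)."""
    return stiffness * displacement

def touch_sensor(force, noise=0.1):
    """The agent's side of the loop: a touch sensor reads the resisting
    force, imperfectly. Extracted information is finite and noisy."""
    return force + random.gauss(0.0, noise)

def probe_softness(presses=10):
    """Run the interaction loop: press a little deeper each time, read
    the sensor, and estimate stiffness from the force/displacement
    ratio. Potential information becomes extracted information."""
    estimates = []
    for step in range(1, presses + 1):
        displacement = 0.1 * step
        felt = touch_sensor(pillow_resisting_force(displacement))
        estimates.append(felt / displacement)
    return sum(estimates) / len(estimates)

print(f"estimated stiffness: {probe_softness():.2f}")  # ~2.0, plus noise
```

Note how the estimate is relative (it depends on the sensor’s noise), finite (ten presses’ worth), and noisy, while the “true” stiffness sits in the environment regardless.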

Not all the information extracted by an agent is actually used by it. By “using information”, I mean that the agent’s dynamics are changed by acquiring that information, compared to the same agent without it. If the behavior of an agent after extracting one type of information is exactly the same as the behavior of that agent without that information, we can conclude that the information is not used. If you act exactly the same (e.g. taking a nap) after touching a pillow as after merely waving your hand in the direction of the pillow, I conclude that you don’t actually care whether the pillow is soft or not. You take the nap anyway.
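That definition suggests a simple behavioral test. In the sketch below (two hypothetical napper functions of my own invention), the extracted softness counts as “used” only by the agent whose behavior actually varies with it.

```python
def napper_ignoring_softness(softness):
    """Extracts the information (the argument arrives) but never reads
    it: behavior is identical whatever the pillow feels like."""
    return "take a nap"

def napper_using_softness(softness):
    """Behavior branches on the extracted information, so it is used."""
    return "take a nap" if softness < 3.0 else "shop for a softer pillow"

for softness in (1.0, 5.0):
    print(f"softness {softness}: ignorer -> {napper_ignoring_softness(softness)}, "
          f"user -> {napper_using_softness(softness)}")
# The ignorer's output never varies with softness: by the definition
# above, that information was extracted but not used.
```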

But say that you were actually checking whether you needed a new pillow, registered that information, and took a nap independently of your conclusions. This is information that is extracted and stored. It is expected to change your behavior (buying a pillow), albeit not immediately.

Stored information can change your behavior in very drastic ways. You could be a one-year-old child, touching the pillow because you don’t know that pillows are usually soft. The next time you see a pillow, you might deduce that it is soft without having to touch it. In short: learning. It is a form of deeply stored information. You don’t have to remind yourself that pillows are soft (the way you have to remind yourself to buy a new pillow). It simply diffuses into all your future interactions with pillows.

The other day, a labmate’s talk made me aware of a deeper form of information storage: DNA. You won’t be surprised if I say that DNA stores information about you, but my labmate turned that around. DNA stores information about your environment. And you are just the storage… That really stuck with me!

A problem with artificial agents might be that they are often simulated in virtual worlds where there is not a lot of potential information. Potential information lies primarily in “hard laws”: the laws of physics (and chemistry, optics, etc.). Virtual worlds are typically poor in potential information, because it takes time and computational power to simulate lots of hard laws and their interactions in detail.
An artificial agent embodied in the real world, a robot, would have tons of potential information to extract, use, store, and learn from. But it is even more difficult to build real agents than virtual agents. A virtual agent needs one line of code to read an audio file; a robot needs actual electrical parts (microphones, cables, CPU), appropriate wiring and positioning of the microphones, plus the same line of code as its virtual counterpart.

When we build virtual agents, we usually decide what kinds of laws they will use, simulate a simplified version of these laws, and give the agents the sensors that we think are appropriate. This already takes time and effort, and yet it imposes terrible limits on virtual Artificial Intelligence (AI).
Furthermore, there is little space for “soft laws”. These are laws that are often right but sometimes wrong: they have limited validity in time or in space, or can be changed by the agent itself. Soft laws may well be the reason why living beings had to evolve learning abilities; if all events can easily be summed up with hard laws, then there is no need for adaptive behavior. Just store everything your agent will ever have to do inside its DNA. (This “hardwire everything!” approach is very common in AI!)
But soft laws can often serve as heuristics for hard laws: simplifications that work well enough to be useful. That’s why it is very advantageous to store something like an “ability to learn” inside your agent’s DNA, rather than hardwire everything, as the toy example below tries to show.
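Here is a minimal sketch of that trade-off, entirely my own construction (the payoff scheme, the flip probability, and the learning rate are all invented for illustration): a world governed by one “soft law” that occasionally flips, a hardwired agent that bakes the initially correct answer into its “DNA”, and a crude learner that tracks recent payoffs.

```python
import random

random.seed(0)

def hardwired(memory):
    """'Store everything in DNA': always take the action that was
    correct when the agent was built."""
    return 0

def learner(memory):
    """Mostly pick the action with the better recent payoff, with a
    little exploration so it can notice when the law flips."""
    if random.random() < 0.1:
        return random.randint(0, 1)
    return 0 if memory[0] >= memory[1] else 1

def run(choose, steps=1000, flip_prob=0.01):
    """A world with one 'soft law': action `good` pays off, but the
    law occasionally flips. Returns the agent's total payoff."""
    good, total, memory = 0, 0, [0.0, 0.0]
    for _ in range(steps):
        if random.random() < flip_prob:
            good = 1 - good                    # the soft law changes
        action = choose(memory)
        reward = 1 if action == good else 0
        total += reward
        # running payoff estimate for the chosen action
        memory[action] = 0.9 * memory[action] + 0.1 * reward
    return total

print("hardwired agent:", run(hardwired))  # fine only while the law holds
print("learning agent: ", run(learner))    # re-adapts after each flip
```

Under a pure hard law (flip_prob=0) the hardwired agent wins, since it never wastes moves exploring; once the law starts flipping, the learner usually comes out ahead.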

DYNAMICS

Agents interact dynamically with their environment. These dynamics typically have attractors. An attractor is a state that a system tends to be “attracted” to. Consider bipedal locomotion: there are a lot of possible things that we can do by moving our two legs, but we typically either walk or run. Walking and running are attractors in human locomotion. If you push a system out of one of its attractors, it can go back to this attractor state, fall into another, or just behave chaotically. In other words, an attractor represents a kind of balance in the dynamics of the system.

Imagine a simpler case: a ball rolling on an endless floor. This ball has an infinite supply of energy and never stops rolling. Its state is defined by its position on the floor. If there is a bowl-shaped depression in the floor, the ball can fall into it and roll around without ever getting out. If you push the ball a little, it can keep rolling inside the depression or get out. The depression is an attractor for the dynamical system “ball rolling on uneven floor”: the coordinates of the ball tend to stay within a small set of values. If the floor is completely flat, the ball will roll forever: there is no attractor.
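That picture is easy to simulate. Below is a rough Python sketch with made-up parameters: a floor with a single bowl-shaped (Gaussian) depression. Unlike the idealized ball above, I add a touch of friction, standing in for energy dissipated into the environment, so that trajectories inside the basin actually settle instead of oscillating forever.

```python
import math

def slope(x, depth=1.0, width=1.0):
    """h'(x) for a floor of height h(x) = -depth * exp(-(x/width)**2):
    a single bowl-shaped depression centered at x = 0."""
    return depth * (2 * x / width**2) * math.exp(-(x / width) ** 2)

def roll(x, v, bumpy=True, friction=0.3, steps=2000, dt=0.01):
    """Crude Euler integration of the rolling ball. The friction term
    is what lets trajectories settle to the bottom of the depression."""
    for _ in range(steps):
        a = (-slope(x) if bumpy else 0.0) - friction * v  # downhill pull + drag
        v += a * dt
        x += v * dt
    return x

print(f"released in the basin: x -> {roll(0.8, 0.0):+.2f}")  # settles near 0
print(f"pushed hard:           x -> {roll(0.8, 3.0):+.2f}")  # escapes the basin
print(f"flat, frictionless:    x -> {roll(0.8, 0.5, bumpy=False, friction=0.0):+.2f}")
# On a flat, frictionless floor the ball never settles: no attractor.
```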

What do attractors have to do with AI? Let’s go back to locomotion. Human babies start without the walking attractor. Adults can walk. Somewhere in between, an attractor has appeared, or rather, has been learned. How do humans, as dynamical systems, modify our dynamical landscape to create an attractor? Can learning be reduced to creating attractors where there were none? Humans and other animals are, in a very real sense, creatures of habit. In the first years of our lives, we learn to associate the right behaviors with the right cues. Not only our brain, but the rest of our body as well. When you walk, you don’t have to think about how to move your legs: you mostly think about where to go and when to pause or stop. As adults, we can go entire days without doing anything completely new, simply by switching to the right habit at the right moment. And it is very hard to break an ingrained habit. There are actually hints that habits cannot be broken in the human brain, only replaced.

So habits can be seen as attractors. The dynamical system they refer to is the system of our interactions with the environment. As we live, we extract information from our environment. If we find a particularly good way to extract a type of information, we’re likely to repeat it again and again, and build a habit. So one way to try to build powerful AIs is to look for ways to build useful attractors in artificial agents. Of course, when a baby learns to walk, it isn’t only the brain that changes. Newborn babies don’t walk, not only because they don’t know how, but also because their bodies simply don’t allow it. Learning to walk requires your body and brain to change on a medium-term scale (development) before you can actually work out the mechanics of walking on a short-term scale (learning).
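As an aside, the question “can learning be reduced to creating attractors where there were none?” already has one classic concrete instance in AI: the Hopfield network, in which a one-shot Hebbian learning rule literally carves a stored pattern into the attractors of the network dynamics. This is not something the post depends on, just an illustration; the pattern size and the number of flipped units below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "habit" to store: one binary pattern of +/-1 unit states.
pattern = rng.choice([-1, 1], size=32)

# One-shot Hebbian learning: the outer product of the pattern with itself
# carves it into the weights, making it an attractor of the dynamics.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Perturb the stored state: flip a quarter of the units.
state = pattern.copy()
flipped = rng.choice(pattern.size, size=8, replace=False)
state[flipped] *= -1

# Run the dynamics: each update pulls the state back toward the attractor.
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print("pattern recovered:", np.array_equal(state, pattern))  # True
```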

So how do we build attractors? They could be there from the very beginning (innate); they could be built by changes in your own body’s dynamics (development) or by changes in the way you extract information (learning). My opinion is that most attractors are a mix of these three influences. Walking certainly is. We have “innate” reflexes like kicking, we develop others like trying to stand and keep our balance, and after trial and error they combine into the right amount of kicking and balancing, and off we go. An interesting path for AI would be to integrate these three time scales to build attractors for locomotion, object manipulation, visual processing, and everything else that can be reduced to a combination of nicely steered habits.

MODALITIES AND OVERLAP

I have been talking about attractors, but I did not really define the dynamical system they refer to. Ideally, we would take all possible inputs and outputs of the agent as parameters to describe the system, but the sheer complexity of the task makes it virtually impossible. So let’s consider simple systems. We can, for example, consider the values of one type of sensor (touch, vision...) and one corresponding motor output (contraction of the muscles in your finger, your eye...).

Until now, I have mainly talked about interactions between one type of sensory input and the corresponding potential information sources in the environment. The values successively taken by the sensory input can be described as trajectories on an imaginary landscape. This landscape may have attractors; each attractor is a set of trajectories representing a state of balance in the system. Evolution, development and learning help us build attractors in our sensory landscapes. Extracting information then becomes much simpler: attractors guide our interactions so that we extract the kind of information that we need.
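To close, here is one last toy sketch (again my own construction) of such a one-sensor, one-motor system: a gaze-tracking loop whose successive sensor values trace a trajectory that falls straight into a fixed-point attractor at zero error.

```python
def track(target=5.0, gaze=0.0, gain=0.3, steps=12):
    """One sensor, one motor: the sensor reports how far the target is
    from the current gaze, the motor shifts the gaze by a fraction of
    that error. The sensory trajectory converges to a fixed point
    (error 0): an attractor of the coupled sensor-motor system."""
    trajectory = []
    for _ in range(steps):
        error = target - gaze   # sensory input
        gaze += gain * error    # motor output
        trajectory.append(round(error, 3))
    return trajectory

print(track())  # errors shrink geometrically toward the attractor at 0
```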

Well, that was a rather strange and disorganized post, but I wanted to get rid of all those drafts. See you next time!
