The Emperor’s New Mind – Chinese room and stability

I recently started reading this 1989 book by Sir Roger Penrose: The Emperor’s New Mind. The curious thing is that although I disagree with a lot of what is written (the book is a bit old), the reasoning is fascinating to follow. So instead of feeling frustrated (looking at you, book on mirror neurons…), I feel stimulated by these disagreements.

Today I would like to think about the Chinese room thought experiment. It was devised by John Searle, but I never really understood why it was interesting until I read Penrose’s presentation of it.

Summary of the experiment as I understand it: you’re an English speaker who cannot speak Chinese. You are in a room where you receive a set of Chinese characters arranged to form a question. You must arrange another set of Chinese characters so as to answer that question. You also have a set of rules, written in English, that allows you to perform the task while still not understanding a single word of Chinese. These rules are algorithmic, so you can execute them “mechanically” without actually having to think about anything. To a native Chinese speaker standing outside the room, devising the questions and receiving your answers, it is exactly as if you could really speak Chinese.

The question is: can we truthfully say that you actually speak and understand Chinese, just because you act as if you did? In Artificial Intelligence terms: can we say that a robot is intelligent, or understands something, just because it runs algorithms that make it act as if it were intelligent? The book pairs this with another question: are there tasks that cannot be reduced to algorithms, and that will therefore never be within reach of an algorithmic approach to AI?

From here on, I will lay out my own thoughts on the subject.

First of all, what would a non-algorithmic approach to something look like? Since I was little, long before I learned about computer programming, I have always tended to execute things in an algorithmic way (“do thing T1 until condition C1 is met. Then do T2. If condition C2, stop. Alternate T3 and T4 in exact proportions.”). This really is my default approach to anything from sweeping the floor to eating cookies, so the question of “non-algorithmic” processing is a hard one for me. My current answer (illustrated with a toy sketch right after the list) looks something like this:

  • Algorithms take digital inputs
  • The structure of a given algorithm is extremely stable: it does not change (although one that did would make for a very interesting algorithm indeed)
  • Processing can be “parallel-like” but never truly parallel
  • Therefore algorithms have a finite number of states
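
To make this concrete, here is a toy sketch of what I mean by “algorithmic”, with a hypothetical floor-sweeping task standing in for T1 and C1 (all the names here are made up for the example):

```python
# Digital input, a fixed control structure, strictly sequential steps,
# and (for any given input) a finite set of reachable states.

def sweep_floor(crumb_count: int) -> int:
    swept = 0
    while crumb_count > 0:   # "do T1 until condition C1 is met"
        crumb_count -= 1     # T1: sweep one crumb, one step at a time
        swept += 1
    return swept             # the structure of the procedure never changes

print(sweep_floor(5))  # -> 5, and the same input always produces the same run
```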

So what would an extreme non-algorithm look like?

  • Analog inputs
  • An unstable structure, that is, a structure that is changed by the very flow of inputs through it
  • “Extremely parallel” processing
  • Therefore, an infinite number of states

This kind of processing system would have interesting properties. For example, the fact that the passage of information itself changes the structure of the system creates a form of memory, which influences the system without ever needing to be “retrieved” (as it would with an algorithm). In addition, if you allow true parallel processing, any “piece of information” currently transiting through the system can be influenced directly (multiplied, added, etc.) or indirectly (through changes in the structure of the system) by all the other information currently transiting through it.
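
Here is a minimal sketch of such a system, assuming a Hebbian-style plasticity rule (my own choice; nothing above commits to a particular mechanism). Every signal that passes through the network rewrites the weights, so the “memory” is simply whatever the structure has become:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(0.0, 0.5, (n, n))  # the "structure": recurrent weights
x = np.zeros(n)                   # current (analog, continuous) activity

def step(inp, eta=0.05):
    """Pass one analog input through the network. The activity it causes
    rewrites W, so the traffic itself reshapes the structure: a memory
    that is never "retrieved", it just biases all future processing."""
    global x, W
    x = np.tanh(W @ x + inp)
    W += eta * np.outer(x, x)  # Hebbian-style update (an assumption)
    return x

for _ in range(20):
    step(rng.normal(0.0, 1.0, n))  # a stream of analog inputs
```

Of course, a simulation like this only emulates parallelism one step at a time; the point here is the structural plasticity, not true parallel processing.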

But now we have a new problem. How can such an unstable system produce anything interesting? If similar inputs always produce completely different behaviour (output) from your system, it is basically useless (though certainly artistic in some way). Imagine an animal that would sometimes eat food, sometimes avoid that same food, sometimes spin in circles in the presence of food, and so on. The species would not survive a single day, and you would certainly not call it intelligent. The same goes for a robot.

So we do need some kind of stability, and we need it to emerge from that unstable system. What a powerful system that would be! Patently unstable when you look at it from up close, yet showing elements of stability when you zoom out. By the way, it is probably obvious by now, but networks are an easy way of building a system that exhibits the 4 non-algorithmic properties I listed. This was not completely intentional (I really wanted to find something more general), but anyway: let’s go with networks.

Back to stability: stability emerging from fundamental instability sounds a little bit like magic to me, and I don’t like that. But I think we have the same problem with our own brain: it is a network of neurons undergoing constant change. New neurons are born, some die, the structure changes all the time. Yet your image of yourself, what you think of as your own identity, is relatively stable. Your daily behaviour is mostly stable and predictable. Something must be “passed along” despite all these changes in your brain. Maybe it is the fact that most changes are quite slow? But that answer is very unsatisfying to me.
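
Here is a deliberately crude numerical sketch of that zoomed-out stability. Every unit is individually unstable (constant noise, plus random death and replacement), and the weak homeostatic decay is my hand-picked stand-in for whatever mechanism actually passes stability along:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(0.0, 1.0, n)  # 200 "neurons"

for t in range(1001):
    x += rng.normal(0.0, 0.5, n)                 # every unit gets kicked around
    dead = rng.random(n) < 0.01                  # ~1% of units die each step...
    x[dead] = rng.normal(0.0, 1.0, dead.sum())   # ...and are replaced by new ones
    x *= 0.9                                     # weak homeostatic decay (my assumption)
    if t % 250 == 0:
        print(f"t={t:4d}  one unit: {x[0]:+.2f}  population mean: {x.mean():+.3f}")
```

Zoomed in, x[0] jumps around with a standard deviation near 1; zoomed out, the population mean barely moves. The unsatisfying part is exactly the one above: I had to put the stabilising mechanism in by hand, which is what makes the next idea tempting.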

One way to find a creative answer to this specific kind of difficult question is to use artificial evolution, and I think “evolving stability from instability” would be an awesome, and totally doable, project (I sketch a possible starting point at the end of this post). I might actually take it on as a side project, because it sounds super interesting to me and, of course, it relates to my research. From here I would like to go back to the Chinese room experiment. My answer to the question “would you say that the person in the room understands Chinese?” was “yes” a few months ago. Now my answer is: “No, but who built the algorithm in the first place?”

I think the original problem leaves out a very important part of the experiment. The system is not complete if you do not include the person who built the algorithm in the first place. If you include them, then yes, without a doubt, the system “English speaker + algorithm + person who built the algorithm” can and does understand Chinese. I rarely have original ideas, so I suppose someone has said this before me. But what if we want to take the algorithm’s programmer out of the system? After all, an AI that is intelligent only if you take the AI programmer into consideration is neither A nor I. So let’s assume that the Chinese room algorithm was evolved using artificial evolution. Let’s leave aside the details and focus on the results. Now you have, on one hand, a system (algorithm + English speaker) which evolved to speak Chinese like a Chinese person, and on the other hand a Chinese person who is also the product of (natural) evolution.

We have two possibilities. Possibility 1: the artificial and the natural system perform the same task of answering Chinese questions in Chinese, but in different ways. Then we must accept that there are 2 ways to speak Chinese (the system’s way, and the Chinese way), and one will necessarily be more efficient than the other. So either “understanding Chinese” is not the most efficient way to speak Chinese, which is not very likely, or the system stumbled upon an inefficient way to speak Chinese without actually understanding it, in which case we can repeat the artificial evolution process as many times as necessary, with as many parameters as we want, until we find the “most efficient” way to speak Chinese (that is, understanding it). That leads us to Possibility 2: the artificial and the natural system perform the same task of answering Chinese questions in Chinese, in the same way. Of course the exact mechanisms cannot be identical, as the two systems are different, but at a slightly zoomed-out level, both systems take a Chinese question as input, understand it, and answer in Chinese.

My point is that if you evolved to solve a task, you will likely solve it in a very efficient way. And if you solve the task in the most efficient way possible, who can claim that they “understand” the task better than you? It seems to me that this is not even the question anymore. On a related note, what do you think has the greater potential for learning and evolution: a stable system with everything neatly arranged in “if… then” branches, or a system born to create relative stability from instability? It seems to me that one of those is pretty close to the very definitions of “evolution” and “learning”.
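
As promised above, here is a rough sketch of how the “evolving stability from instability” side project could start. Everything in it is an assumption made for illustration: the genome is just a flat vector of initial weights, the plasticity rule is the Hebbian-style one from the earlier sketch, the loop is a bare-bones (mu + lambda) scheme, and every parameter is an arbitrary placeholder. The fitness simply rewards genomes whose final state stays similar across slightly perturbed replays of the same input stream:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

def fitness(genome, trials=5, steps=30):
    """Score a genome (a flat vector of initial weights) by how similar the
    network's final state stays across slightly perturbed replays of one
    input stream, while the weights keep being rewritten by the traffic."""
    base = rng.normal(0.0, 1.0, (steps, n))
    finals = []
    for _ in range(trials):
        W = genome.reshape(n, n).copy()
        x = np.zeros(n)
        for inp in base + rng.normal(0.0, 0.1, (steps, n)):  # similar inputs
            x = np.tanh(W @ x + inp)
            W += 0.05 * np.outer(x, x)   # the structure stays unstable
        finals.append(x)
    return -np.var(np.array(finals), axis=0).mean()  # low spread = high fitness

pop = [rng.normal(0.0, 0.5, n * n) for _ in range(20)]
for gen in range(10):
    pop.sort(key=fitness, reverse=True)  # keep the genomes that stabilise best...
    pop = pop[:10] + [p + rng.normal(0.0, 0.1, n * n) for p in pop[:10]]  # ...and mutate them
```

The fitness is noisy here because the input stream is redrawn at every evaluation; a real run would want a fixed test battery. But even this toy version states the question cleanly: can selection alone discover the stabilising mechanism I had to hard-code above?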
