In the discussion over the nature of mathematics, one interesting side-argument that recently arose was over the idea of "understanding." What does it take to really understand something? The answer is not as simple as you might think; in fact, it is a hotly debated topic in artificial intelligence (AI) circles. J.R. Searle gave the following argument - called the Chinese room argument - against what is called Strong AI. I wrote a post about this on my personal blog two years ago, and some of the following is excerpted from there.
Imagine a monolingual English speaker/reader in a room. The person has on a table an instruction booklet, a pen, and paper. Notes written in Chinese are then passed into the room. The instruction booklet tells the person things like, "If you see Chinese character X on one slip of paper and Chinese character Y on another slip of paper, write Chinese character Z on your pad." Chinese speakers outside the room label the slips going in 'stories' and 'questions' and the slips coming out 'answers to questions.' The instruction manual can be as sophisticated as you'd like. The question is: does our English speaker/reader - who speaks and reads only English - understand the Chinese, i.e. the details of the story and the associated questions and answers? Searle says no. To Searle, the room together with the English speaker/reader is a computer: you can run as sophisticated a program as you'd like, but it cannot understand Chinese regardless of the program. As such, Searle claims no program can be constitutive of understanding.
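To make the purely syntactic character of the room concrete, here's a little sketch of the kind of rule-following the instruction booklet demands. The characters and rules are invented placeholders; the point is that the program manipulates opaque tokens and nowhere attaches any meaning to them.

```python
# A minimal sketch of the Chinese room's "instruction booklet": a lookup
# table mapping pairs of input symbols to an output symbol. The specific
# characters and rules below are arbitrary placeholders.

RULEBOOK = {
    ("你", "好"): "吗",   # "if you see X and Y on the slips, write Z"
    ("天", "气"): "冷",
}

def follow_booklet(slip_one: str, slip_two: str) -> str:
    """Return whatever character the booklet dictates, or a default mark."""
    return RULEBOOK.get((slip_one, slip_two), "？")

# The operator can produce plausible-looking "answers" without any notion
# of what the symbols mean -- the meaning lives entirely outside the room.
print(follow_booklet("你", "好"))
```

The operator's behavior is fully determined by the table, which is exactly Searle's point: however elaborate the table gets, nothing in the procedure requires (or produces) comprehension.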
There have been critiques of Searle's argument, and the first one that comes to mind is adaptability. In a sense, for example, one might say that a spam filter "learns" what is spam and what is not and thus "understands" spam. Likewise, linguistic programs can "learn" language. Is there not an element of understanding inherent in learning?
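As a rough illustration of what that "learning" amounts to in practice, here's a toy spam scorer: it simply counts which words appear more often in labeled spam than in labeled ham. The training examples and scoring rule are made up for the sketch; real filters use more sophisticated statistics, but the "understanding" is still statistical association rather than comprehension.

```python
from collections import Counter

# Toy spam filter: "learning" here is nothing more than tallying word
# frequencies from labeled examples. (Training data is invented.)
spam_words = Counter("win free money now claim your free prize".split())
ham_words = Counter("meeting notes attached please see the agenda".split())

def spam_score(message: str) -> float:
    """Fraction of words that were seen more often in spam than in ham."""
    words = message.lower().split()
    spammy = sum(1 for w in words if spam_words[w] > ham_words[w])
    return spammy / len(words) if words else 0.0

print(spam_score("claim your free prize now"))  # high score
print(spam_score("notes from the meeting"))     # low score
```

Whether updating word counts deserves to be called "understanding" is, of course, exactly the question at issue.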
Now, one point to remember here is that written Chinese is logographic rather than phonetic, and so should remain completely indecipherable to a native English speaker without formal training. Another take on it is the systems approach, in which one argues that while the man is not intelligent, the system (i.e., the entire room) is. Take, for example, the human brain: one could argue that no individual neuron is intelligent, yet when they are combined within the human brain they constitute an intelligent system (i.e., the human).
The only problem with the systems approach in this particular case is that the room isn't necessarily an interconnected system. The person and the papers are, but in the end the actual processing boils down to the neurons in the person's brain. So I'm not sure I buy that argument, particularly since it implies that a sufficiently complex robot would be intelligent, which suggests we likely would already have created AI. It does bring up a very interesting question, though: if a systems approach is the answer, at what point does a system become complex enough to exhibit true intelligence? This is an interesting question related to emergence. But where does emergence occur?
For our English speaker to truly understand the ideas behind the Chinese symbols, they need to be translated into English (or someone needs to explain them to him, which is essentially the same thing). So does translation then produce understanding? Again, no. The English speaker can still execute the program, and he or she at least understands more of the symbols, but this is akin to recoding the program into the native language of the operating system (the English speaker). And there are plenty of things written in English that I don't understand: I may understand the individual words, but not the way in which they are combined. For a rudimentary example of this, imagine I'm a spy who has intercepted the coded message "The jackal screams at midnight." I know what all of those words mean individually, but I have absolutely no idea of their context. It seems to me, then, that true understanding requires context. As it turns out, Aerts, Broekaert, and Gabora considered this about ten years ago. More recently, Svozil has made inroads into this exact process by proposing a context translation principle.
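To see the gap between word-level translation and contextual understanding, here's a small sketch: each word of the intercepted message gets a dictionary gloss, yet the glosses say nothing about what the sentence signifies as a prearranged signal. The glosses (and the signal's supposed meaning) are invented for illustration.

```python
# Word-by-word "translation": every token gets a dictionary gloss, but the
# contextual meaning of the whole message is nowhere in the lookup.
# (The glosses below are invented for illustration.)

GLOSSES = {
    "jackal": "a wild canine of Africa and Asia",
    "screams": "cries out loudly",
    "midnight": "twelve o'clock at night",
}

def gloss_message(message: str) -> list:
    """Replace each known word with its dictionary gloss."""
    return [GLOSSES.get(w.strip(".").lower(), w) for w in message.split()]

print(gloss_message("The jackal screams at midnight."))
# We can gloss every word, yet nothing here tells us the sentence is a
# signal meaning, say, "the operation begins tonight."
```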
So where does that leave us? It seems that there is an important connection between the notion of true "understanding" and emergence & complexity.