Tuesday, July 18, 2006

the emotion machine

Marvin Minsky's new book, The Emotion Machine, is due for release in November this year. You can download a draft copy from the link. I've read his earlier book, Society of Mind, and it has long been a favourite. Minsky is a painstakingly clear writer who, IMO, makes heroic efforts to communicate.

Recently, I was very interested to find that some of Minsky's ideas have been challenged by Rodney Brooks (here). I have ordered a copy of Brooks's book, Flesh and Machines, from Amazon to find out more. Brooks's claim is that Minsky erred in not putting the concepts of situatedness and embodiment onto the AI research agenda.

One of Minsky's long-standing claims is that common sense is very hard to explain or program. Here is an excerpt from a recent interview (thanks to Al Upton for the link):
Back when I was writing The Society of Mind, we worked for a couple of years on making a computer understand a simple children's story: "Mary was invited to Jack's party. She wondered if he would like a kite." If you ask the question "Why did Mary wonder about a kite?" everybody knows the answer -- it's probably a birthday party, and if she's going that means she has been invited, and everybody who is invited has to bring a present, and it has to be a present for a young boy, so it has to be something boys like, and boys like certain kinds of toys like bats and balls and kites. You have to know all of that to answer the question. We managed to make a little database and got the program to understand some simple questions. But we tried it on another story and it didn't know what to do. Some of us concluded that you'd have to know a couple million things before you could make a machine do some common-sense thinking.
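The chain of inferences Minsky walks through (party → birthday → present → boy's toy → kite) can be sketched as a tiny forward-chaining rule system. This is only a toy illustration under my own made-up facts and rules, not Minsky's actual program or database:

```python
# Toy forward-chaining inference over a hand-built fact base.
# All facts and rules here are illustrative assumptions.

facts = {("party", "likely_birthday"), ("invited", "brings_present"),
         ("jack", "young_boy")}

# Each rule: if all premises are in the fact set, add the conclusion.
rules = [
    ({("party", "likely_birthday"), ("invited", "brings_present")},
     ("mary", "needs_present")),
    ({("mary", "needs_present"), ("jack", "young_boy")},
     ("present", "boys_toy")),
    ({("present", "boys_toy")}, ("kite", "plausible_gift")),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

conclusions = forward_chain(facts, rules)
print(("kite", "plausible_gift") in conclusions)  # True
```

The point of the toy is how quickly it breaks: each new story needs new facts and rules, which is exactly the "couple million things" problem Minsky describes.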
He goes on to explain that emotions enable us to swap between different modes of thinking depending on the situation:

The main idea in the book is what I call resourcefulness. Unless you understand something in several different ways, you are likely to get stuck. So the first thing in the book is that you have got to have different ways of describing things. I made up a word for it: "panalogy." When you represent something, you should represent it in several different ways, so that you can switch from one to another without thinking.

The second thing is that you should have several ways to think. The trouble with AI is that each person says they're going to make a system based on statistical inference or genetic algorithms, or whatever, and each system is good for some problems but not for most others. The reason for the title The Emotion Machine is that we have these things called emotions, and people think of them as mysterious additions to rational thinking. My view is that an emotional state is a different way of thinking.

When you're angry, you give up your long-range planning and you think more quickly. You are changing the set of resources you activate. A machine is going to need a hundred ways to think. And we happen to have a hundred names for emotions, but not for ways to think. So the book discusses about 20 different directions people can go in their thinking. But they need to have extra meta-knowledge about which way of thinking is appropriate in each situation.
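Minsky's picture of an emotion as a different way of thinking, where the meta-knowledge picks which resources to activate, can be sketched very roughly. Everything here (the emotion names, resource names, and the situation mapping) is my own illustrative assumption, not taken from the book:

```python
# Toy sketch of "an emotional state is a different way of thinking":
# each emotion activates a different set of mental resources.
# All names are illustrative assumptions, not from the book.

RESOURCE_SETS = {
    "calm":    {"long_range_planning", "deliberation", "memory_search"},
    "angry":   {"fast_reaction", "threat_focus"},  # planning is dropped
    "curious": {"exploration", "analogy", "memory_search"},
}

# Meta-knowledge: which way of thinking suits which situation.
SITUATION_TO_EMOTION = {
    "routine": "calm",
    "threat":  "angry",
    "novelty": "curious",
}

def active_resources(situation):
    """Pick a thinking mode for the situation and return its resources."""
    emotion = SITUATION_TO_EMOTION.get(situation, "calm")
    return RESOURCE_SETS[emotion]

print(sorted(active_resources("threat")))  # ['fast_reaction', 'threat_focus']
```

Note how switching to "angry" drops long_range_planning entirely, which is the trade Minsky describes: you think more quickly by changing which resources are active.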

Minsky also expresses disappointment at "how few people have been working on higher-level theories of how thinking works", complaining that too many "people look around to see what field is currently popular, and then waste their lives on that. If it's popular, then to my mind you don't want to work on it."
