Tuesday, July 04, 2006

materialist world view: Minsky

As an atheist I ask myself: Does world view (materialist or idealist) matter? Is it worth making waves about?

When I think of the outstanding teachers I know who are motivated by religion or secular humanism to do the hard yards of helping disadvantaged students, then, from that perspective, I am inclined to go quiet, to turn the atheist cheek. Many people do good deeds and are motivated partly by religion. I don't doubt that.

But I don't want to be trapped in the present. When I think of history and the future, and of those working in the present to create the future, then the materialist world view becomes very important.

The history seems very obvious. Our modern view of the world (astronomy, evolution, superiority of capitalism to feudalism etc.) arose in opposition to preordained sacred religious opinion.

There are no sacred concepts. There are no gods. As Christopher Hitchens said in concluding an amazing conversation with his brother: "They want a happy ending - that's their problem".

In the first few chapters of his book, The Society of Mind, Marvin Minsky outlines his materialist world view in simple, compelling language.

The point is this. How could Minsky even embark on the enterprise of Artificial Intelligence, of starting the process of building machines that can think, if he wasn't a materialist to begin with? Well, thankfully, we have reached that point in history. It seems reasonable to say that in order to explore the view wholeheartedly you need to embrace it wholeheartedly. The philosophical argument (that the materialist world view is better) has to be won as a starting condition for scientific progress to be made.

How can matter think? That is the task Minsky and other AI researchers have set themselves.

They have not succeeded yet. Why? Minsky's response is that "we need better theories about how thinking works" (1.2). That is a useful research path.

In science, we take complex things and break them down into simpler things in order to explain how they work. This method works and accounts for a lot of human progress in understanding and changing our world and our view of the world - astronomy, evolution, medicine. Simplification produces enlightenment (1.3).

Galileo and Newton learnt a lot by studying simple things like pendulums, weights, mirrors and prisms. Minsky's and Papert's approach to AI has been to try to understand how a child learns to build with blocks. They tried to build machines that could stack blocks, and through this process claim to have learnt a lot about how "societies of mind" develop: how a complex, intelligent mind could be made up of myriad simpler agents (2.5).
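To make the idea concrete for myself, here is a rough toy sketch, roughly along the lines of the Builder example in chapter 2. It is my own illustration, not Minsky's or Papert's actual program; the little Agent class and the agent names are just assumptions made for the sake of the sketch.

# A toy sketch of the "society of mind" idea: a BUILDER agent that knows
# nothing about blocks itself, but gets its competence by delegating to
# simpler sub-agents, each of which is also simple.

class Agent:
    """A named agent that acts directly (if a leaf) or calls sub-agents in order."""
    def __init__(self, name, subagents=None, action=None):
        self.name = name
        self.subagents = subagents or []
        self.action = action  # leaf behaviour, if any

    def run(self, world):
        if self.action:
            self.action(world)      # a leaf agent acts on the world directly
        for sub in self.subagents:
            sub.run(world)          # a higher agent just delegates

# Leaf agents: each knows one trivial thing about the toy world.
find = Agent("FIND", action=lambda w: w.append("found a block"))
get  = Agent("GET",  action=lambda w: w.append("picked it up"))
put  = Agent("PUT",  action=lambda w: w.append("placed it on the tower"))

# ADD knows nothing except which sub-agents to call, and in what order.
add = Agent("ADD", subagents=[find, get, put])

# BUILDER stacks three blocks simply by calling ADD three times.
builder = Agent("BUILDER", subagents=[add, add, add])

world = []
builder.run(world)
print(world)   # nine simple steps, none of which "understands" building

The point of the toy is that BUILDER contains no knowledge about blocks at all; it only knows which agents to call. The apparent competence comes from the organisation of simple parts rather than from any single clever part, which is the picture Minsky is sketching.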

We lose touch with simple things. We have amnesia about our infancy (1.4). If we didn't forget and if we didn't become bored with simple things then we wouldn't move on. One problem arising from this forgetfulness is that it is hard for us to reconstruct how we learn simple things. Common sense is very complex.

How do we know that we understand something? How could a machine know that it understood something? When that knowledge becomes more or less automatic (1.6).

The purpose of fashionable terms like "holistic" and "gestalt" is to anaesthetise a sense of ignorance, to create a sense of understanding where there is none (2.3).

Some people are offended by the suggestion that they are machines. Machines are lifeless, mechanical. This is partly because the word "machine" is becoming out of date. Modern machines are far more intelligent than the simple machines of the past, and that trend is accelerating (2.6).

We can already imagine a future (e.g. Blade Runner) where a machine might be confused about its rights, or about whether it is human or machine.
