Comparing what computers can do (play chess) with what humans can do but computers cannot yet do (make a bed, read a book, babysit) gives us insight into human intelligence. Computer programs lack commonsense knowledge, e.g. when someone says, "a package is tied up with string", this carries "obvious" facts about the nature of string and packages (e.g. with string you can pull a thing but not push it)
Computer programs are not aware of their own goals: whether they have been achieved, at what quality, or at what cost. Computer programs are also less resourceful than humans when "stuck", e.g. they don't reason by analogy
What Do We Mean by Common Sense?
Minsky provides an extensive account of the common sense knowledge involved in answering a phone call. We aren't normally aware of how much we know
He introduces a new word, panalogy, meaning "parallel analogy". Consider:
"Charles gave Joan the book"
- Physical Realm - book moves from Charles to Joan
- Social Realm - is Charles generous or hoping to ingratiate himself?
- Dominion Realm - Joan now controls the book
Here we have three meanings of the word "give". Our brains may structurally connect analogous items of knowledge from different realms (points of view), which might explain how we can switch between these meanings easily, without even being conscious of it
Multiple meanings are sometimes seen as a defect because of their ambiguities. The panalogy concept reframes them as a strength
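One way to picture a panalogy is as a data structure: a single event linked to parallel interpretations in several realms, so that switching point of view is a cheap lookup rather than a fresh analysis. A minimal sketch (the dictionary layout and function names are illustrative, not from Minsky):

```python
# Sketch of a "panalogy": one event, parallel interpretations in several
# realms, switchable by a cheap lookup rather than re-analysis.

panalogy = {
    "give(Charles, Joan, book)": {
        "physical": "the book moves from Charles's hands to Joan's",
        "social":   "Charles may be generous, or hoping to ingratiate himself",
        "dominion": "Joan now controls the book",
    }
}

def interpret(event: str, realm: str) -> str:
    """Switch realms (points of view) without re-parsing the event."""
    return panalogy[event][realm]

print(interpret("give(Charles, Joan, book)", "dominion"))
# → Joan now controls the book
```

The ambiguity of "give" becomes a feature here: the same key reaches all three readings, which is the reframing the panalogy concept suggests.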
Commonsense knowledge is hard to categorise. Minsky favours a scheme based on the kinds of thinking that can be applied to it:
- Positive expertise - knowing which methods work
- Negative expertise - knowing which actions to avoid because they fail or cause harm
- Debugging skills - knowing alternatives when usual methods fail
- Adaptive skills - how to adapt old knowledge to new situations
Building intelligent machines gets stuck because we lack ways to overcome problems like:
The Optimisation Paradox: the better a system already works, the harder it is to improve
The Investment Principle: reliance on existing processes makes it hard to develop alternatives
The Complexity Barrier: changing complex systems has unexpected side effects
At any rate, good new ways to represent knowledge are usually not quickly and widely adopted:
- you need new skills to work with them efficiently
- such skills take time to acquire, and performance will probably worsen during the changeover period
We first need to evolve ways to protect against changes that cause bad side effects. An excellent method is to split the system into parts that can evolve more independently, e.g. organs
Our Amnesia of Infancy leads us to develop simplistic views of what memories are and how they work. Minsky argues for a goal-based organisation (by accomplishment) rather than a descriptive organisation (a database with matching):
- What kinds of goals might this item serve?
- In which situations might it be relevant?
- How has it been applied in the past?
- What are its most likely side effects?
- How much will it cost to use it?
- What are its common exceptions and bugs?
SOURCES AND LINKS
- Was it learned from a reliable source?
- Is it likely to be outdated soon?
- Which other people are likely to know it?
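Goal-based organisation could be sketched as memory items indexed by the goals they serve, rather than by descriptive keys. A toy illustration, where the fields loosely mirror the questions above (the class, item names, and cost values are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    name: str
    goals_served: list   # what kinds of goals might this item serve?
    situations: list     # in which situations might it be relevant?
    cost: float          # how much will it cost to use it?
    known_bugs: list = field(default_factory=list)  # common exceptions and bugs

items = [
    MemoryItem("use a knife", ["cut bread", "open package"], ["kitchen"], 1.0),
    MemoryItem("use scissors", ["cut string", "open package"], ["office"], 0.5),
]

def retrieve(goal: str):
    """Goal-based retrieval: index memories by accomplishment, not description."""
    matches = [i for i in items if goal in i.goals_served]
    return sorted(matches, key=lambda i: i.cost)  # prefer cheaper methods first

print([i.name for i in retrieve("open package")])
# → ['use scissors', 'use a knife']
```

Retrieval here answers "what serves this goal, and at what cost?" directly, which a purely descriptive database-and-match scheme would have to reconstruct at query time.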
Intentions and Goals
What is "self-control", responsibility, or intention? Moralists, psychiatrists, and jurists argue about this.
Sometimes a goal can seem like a physical force, hard to resist, even though part of us does not want to do it. The goal may conflict with our high level values. There is no reason to expect that all our goals should be consistent.
Psychology words don't meaningfully describe goals; they just pass the meaning on to another word that itself needs explaining (want, motive, desire, purpose, aim, hope, aspire, yearn, crave). We need to talk about the underlying machinery:
"A system will seem to have a goal when it persists at applying different techniques until the present situation changes into a certain other condition."

Motives and goals could be explained as consisting of these three things:
- Aim - a description of a possible future situation
- Resourcefulness - methods to reduce the difference between the present situation and that future situation
- Persistence - keep applying those methods
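These three ingredients map naturally onto a difference-reduction loop, much like classic means-ends analysis. A toy sketch, assuming for simplicity that a "situation" is just a number and methods are functions on it (all details invented for illustration):

```python
def pursue(aim, situation, methods, max_tries=100):
    """A system 'has a goal' when it persists (Persistence) at applying
    different techniques (Resourcefulness) until the present situation
    matches a described future one (Aim)."""
    for _ in range(max_tries):
        if situation == aim:            # aim achieved: stop persisting
            return situation
        # resourcefulness: pick whichever method most reduces the difference
        best = min(methods, key=lambda m: abs(aim - m(situation)))
        new = best(situation)
        if abs(aim - new) >= abs(aim - situation):
            break                       # no method helps any more: give up
        situation = new
    return situation

# two available techniques: a big step and a small step toward the aim
print(pursue(aim=10, situation=0, methods=[lambda s: s + 3, lambda s: s + 1]))
# → 10
```

Note that the "goal" is nowhere stored as a feeling or a word; it emerges from the loop's persistence, which is exactly the point of talking about machinery rather than vocabulary.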
When you hear a story you react most to how it differs from what you expected. Some names for this include accommodation, adaptation, acclimatization, habituation, becoming accustomed
Our eyes normally make small motions, which help maintain an image. Our perceptual systems mainly react to change.
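Reacting to change rather than to absolute level can be sketched as a detector that habituates: it responds to the difference between each input and a running expectation, so a constant stimulus fades from attention (a toy model; the update rule and rate are illustrative assumptions):

```python
def habituating_detector(signal, rate=0.5):
    """Respond to change, not to level: output the difference between each
    input and a running expectation that gradually adapts toward it."""
    expectation = signal[0]
    responses = []
    for x in signal:
        responses.append(x - expectation)        # react to the surprise
        expectation += rate * (x - expectation)  # become accustomed
    return responses

# a sudden step followed by a constant level: the response spikes, then fades
print(habituating_detector([0, 0, 10, 10, 10]))
```

The spike at the step and the decay afterwards are one simple reading of "accommodation, adaptation, acclimatization, habituation, becoming accustomed".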
Roger Schank (Tell Me a Story, 1990) has conjectured that representing events as stories may be one of our principal ways to learn and remember
When people say, "I used my free will to make that decision", this is roughly the same as saying, "some process stopped my deliberations and made me adopt what seemed best at the moment". "Free will" is not a process we use to make a decision, but one we use to stop other processes! "My decision was free" amounts to "I don't want to know what decided me"
Reasoning by Analogy
Minsky identifies reasoning by analogy as one of the main methods by which we solve new problems. Why does analogy work so well? According to Douglas Lenat, it is because there is a lot of common causality in the world.
Positive vs. Negative Expertise
We see things as positive because we censor or suppress other processes that would see them as unpleasant.
Minsky simulates a dialogue with a teacher who believes in positive reinforcement and small steps. Such an approach is not bad in itself, but it is limited:
- difficult tasks almost always involve episodes of distress and discomfort
- reinforcement can lead to rigidity and a lack of adaptation
- other processes may fail when the "normal way" is abandoned
- the development of higher-level managerial resources is put on hold