When I dig below the hype, and below both the positive and the critical evaluations of ChatGPT, what I discover is that AI thought leaders disagree and argue with each other a lot. As well as the disagreements about the reliability of ChatGPT, there is also the issue of the different types of AI. The ascendancy of Deep Learning in the public consciousness is a relatively recent phenomenon in the 60-plus-year history of AI. As part of my research I found the need to clarify a broad bunch of terms that refer to very different approaches. A fundamental part of understanding AI is knowing the different types of AI.
Aside for a future article: underlying these different approaches were, originally, different ideas about how the human mind and/or brain works.
What is AI?
Artificial Intelligence is the field of developing computers and robots that are capable of behaving in ways that both mimic and go beyond human capabilities. An AI can do discovery, inference and reasoning just like a human. I stress robots because they, too, have been very much sidelined in the current ChatGPT hype.
The difference between AI and AGI, which also introduces the terms narrow and weak AI
The phrase “Artificial Intelligence” originated with John McCarthy at the Dartmouth Conference in 1956. In the beginning there was only AI: the goal of building a machine that we couldn’t tell apart from a human. This was Alan Turing’s imitation game. In 1950 Turing published “Computing Machinery and Intelligence”, which presented his Imitation Game challenge, later known as the Turing Test. The Turing Test has now become outdated, but that is a side issue for the present article.
Imitating humans in all or most respects turned out to be very hard to do. Over time, as researchers built machines that could perform a particular sub-task that humans could do, such as playing chess, labelling images correctly or driving cars, those sub-goals retained the AI label. So new labels, AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence), were invented to renew the original goal of imitating or surpassing all of human intelligence.
An alternative could have been to use the terms narrow AI or weak AI. Narrow AI is a good descriptor for an AI that does one narrow task. Weak AI is a similar term: an AI that implements a limited part of the mind.
What is Machine Learning (ML)?
Some experts refer to their work as Machine Learning. For me this needs clarification because there is overlap in the usage of ML and AI. ML is a subset of AI but in what way?
ML means that the machine is learning (duh!). It learns by example, in some cases from lots of examples, and it improves its performance with training over time. This is a different type of coding to the one I am used to, where you write code to do something, that something is predetermined by the code, and it doesn’t change over time. With machine learning, we use algorithms that have the ability to learn.
There are lots of these algorithms; see this GeeksforGeeks page for a comprehensive list. One simple example is linear regression. The machine can take as input a series of points on a graph and fit a straight line of best fit to those points. To do this you would need the data (the points), the linear regression algorithm and some Python code to do the work. I provide links to a couple of beginner's hands-on AI courses in the references section below that take you through this process.
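As a rough illustration of those three ingredients, here is a minimal sketch using scikit-learn. The points are made-up numbers scattered around the line y = 2x + 1, not data from any of the courses below.

```python
# A minimal line-fitting sketch: invented data, the linear regression algorithm,
# and a little Python to do the work.
import numpy as np
from sklearn.linear_model import LinearRegression

# The data: x values and noisy y values scattered around y = 2x + 1
x = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # one column, one row per point
y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])

# The algorithm: fit the straight line of best fit to the points
model = LinearRegression()
model.fit(x, y)

print("slope:", model.coef_[0])          # roughly 2
print("intercept:", model.intercept_)    # roughly 1
print("prediction for x = 6:", model.predict([[6.0]])[0])
```

The "learning" here is just the fitting step: the slope and intercept are not written into the code, they come out of the data.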
Phrases that go with ML: data, big data (especially for Deep Learning, which is a subset of ML), self-learning, statistical models, self-correction, can only use structured and semi-structured data.
I've been searching for a relatively simple example of making Machine Learning (making rather than doing ML) that is suitable for middle school students. Most people are currently and excitedly focused on the doing, but my belief is that to understand it deeply you need to make it.
Rodney Brooks's illustration of the early machine learning device built by Donald Michie provides an entertaining introduction to the topic. Michie built a machine that could learn to play the game of tic-tac-toe (Noughts and Crosses in British English) from 304 matchboxes, small rectangular boxes which were the containers for matches, and which had an outer cover and a sliding inner box to hold the matches. He put a label on one end of each of these sliding boxes, and carefully filled them with precise numbers of colored beads. With the help of a human operator, mindlessly following some simple rules, he had a machine that could not only play tic-tac-toe but could learn to get better at it. He called his machine MENACE, for Matchbox Educable Noughts And Crosses Engine, and published a report on it in 1961.
Brooks references a 1962 Scientific American article by Martin Gardner which illustrated the concepts with a simpler version to play hexapawn, three white chess pawns against three black chess pawns on a three by three chessboard. As in chess, a pawn may move forward one space into an empty square, or capture an enemy pawn by moving diagonally forward one space. If you get a pawn to the last row, you win. You also win if you capture all the enemy's pawns, or if the enemy cannot move.
Impressively, TheGamer has coded MENACE in Scratch, here, and puttering has coded hexapawn in Scratch, here.
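The details of Michie's box labels and bead numbers are in Brooks's post. What follows is a much-simplified Python sketch of the underlying idea, invented for this article rather than taken from Michie: each board position gets a "box" of beads, moves are drawn in proportion to the beads, and the beads are adjusted after each game.

```python
# A much-simplified sketch of the MENACE bead idea, not a faithful reconstruction:
# one "box" per raw board position (ignoring the symmetry reductions that got Michie
# down to 304 matchboxes) and a flat three beads per move to start with.
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

boxes = {}  # board state (tuple of 9 cells) -> {move index: bead count}

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def menace_move(board):
    """Draw a move with probability proportional to its bead count."""
    state = tuple(board)
    if state not in boxes:
        boxes[state] = {m: 3 for m in legal_moves(board)}
    moves, beads = zip(*boxes[state].items())
    return state, random.choices(moves, weights=beads)[0]

def play_game():
    """The learner plays X against a random opponent; returns its choices and the result."""
    board = [' '] * 9
    history = []  # (state, move) pairs chosen by the learner
    player = 'X'
    while True:
        if player == 'X':
            state, move = menace_move(board)
            history.append((state, move))
        else:
            move = random.choice(legal_moves(board))
        board[move] = player
        if winner(board) or not legal_moves(board):
            return history, winner(board)
        player = 'O' if player == 'X' else 'X'

def reinforce(history, result):
    """The 'human operator' step: add beads after a win, remove one after a loss."""
    for state, move in history:
        if result == 'X':
            boxes[state][move] += 3
        elif result == 'O':
            boxes[state][move] = max(1, boxes[state][move] - 1)

wins = 0
for _ in range(5000):
    history, result = play_game()
    reinforce(history, result)
    wins += (result == 'X')
print(f"won {wins} of 5000 games against a random opponent")
```

Run for a few thousand games, the bead counts drift towards the better moves, and that drift is the whole of the "learning" in this scheme, which is what makes it such a good making project.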
What AI isn’t Machine Learning?
You can have forms of AI that don’t learn over time. Symbolic AI, Traditional robotics and Behaviour-based robotics could all fit this category. They are programmed; they do some human-like stuff but don’t change or improve over time without human reprogramming. They are still important but are sidelined at the moment due to the LLM (Large Language Model) hype. A little more on this below.
This diagram, found on the web, is missing the two robotic forms of AI and the Neuro-Symbolic hybrids.

What is Deep Learning?
To repeat, Machine learning (ML) is a subfield of AI that uses algorithms trained on data to produce adaptable models that can perform a variety of complex tasks.
Deep learning is a subset of machine learning that uses several layers within neural networks to do some of the most complex ML tasks, sometimes without any human intervention. But sometimes there is human intervention, as in the case of Reinforcement Learning. I won't go into detail about Deep Learning here because it is long as well as smelly. I do provide a link to a detailed version in the references below.
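To make "several layers" a little more concrete, here is a tiny sketch using scikit-learn's MLPClassifier on an invented toy dataset. The layer sizes and dataset are arbitrary choices for illustration; real Deep Learning systems are enormously bigger, but the layered idea is the same.

```python
# A small sketch, not a full Deep Learning system: a neural network with two hidden
# layers learning a toy two-class problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy data: two interleaving half-moons that a straight line cannot separate
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Deep" here just means more than one layer of units between input and output
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen points:", net.score(X_test, y_test))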
Four main types of AI
Following Rodney Brooks, there are at least four types of AI. They are, along with approximate start dates:
- Symbolic (1956) aka Good Old Fashioned AI (GOFAI). According to Herb Simon, symbols represent the external world; thought consists of expanding, breaking up, and reforming symbol structures; and intelligence is nothing more than the ability to process symbols. This was the originally dominant form of AI (from the 1956 Dartmouth Conference), but the enormous effort over decades by Doug Lenat’s Cyc Project failed, so it has now become marginalised by the successes of Deep Learning.
- Neural networks (1954, 1960, 1969, 1986, 2006) aka Connectionism, which evolved into Deep Learning (lots of different start dates here since it has died and then returned from the dead a few times). Deep Learning gets all the media attention these days.
- Traditional robotics (1968)
- Behaviour-based robotics (1985) aka embodied or situated AI, or insect-inspired AI! (my term)
One more important thing. Some authors, notably Gary Marcus, say that Neuro-Symbolic hybrids are the way forward to robust, reliable AI.
Here's my crude Venn diagram of the different types of AI: ML = Machine Learning; DL = Deep Learning; S = Symbolic AI; H = neuro-symbolic hybrid; TR = Traditional Robotics; BR = Behavioural Robotics.
To understand the current deficiencies of the AI debate / hype it’s necessary to look at the strengths and weaknesses of these different types. Rodney Brooks does evaluate them, in his 2018 blog post referenced below, against these criteria: Composition, Grounding, Spatial, Sentience and Ambiguity.
AI development has had a tortured, zig-zag history. Another fascinating way to view it is through the influences and underlying belief systems / philosophies of the AI founders and developers. If we build our machines in our own image, then what is that image?
REFERENCES
Brooks, Rodney. Steps Towards Super Intelligence, 1. How we got here (2018)
Brooks, Rodney. Machine Learning Explained (2017)
Marcus, Gary. The next decade in AI: Four Steps Towards Robust Artificial Intelligence (2020)
Deep Learning (DL) vs Machine Learning (ML): A Comparative Guide
Excellent, wide-ranging explanations of ML and DL
The 10 Best Machine Learning Algorithms for Data Science Beginners
Linear Regression to fit points to a straight line is their number one
Your first machine learning project in python step by step
Free introductory hands-on course on Deep Learning