First Attempts at Artificial Intelligence: Symbolic AI

The field of AI is considered to have its origin in the publication of British mathematician Alan Turing’s (1912–1954) paper “Computing Machinery and Intelligence” (1950). The term itself was coined six years later by mathematician and computer scientist John McCarthy (b. 1927) at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI and is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of logically definable rules. Just as geometry can be built from a finite set of axioms and primitive objects such as points and lines, so symbolicists, following rationalist philosophers such as Ludwig Wittgenstein (1889–1951) and Alfred North Whitehead (1861–1947), posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects.
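The symbolic hypothesis is easiest to see in toy form. The sketch below is an invented illustration rather than anything drawn from an actual symbolic-AI system: it treats “thoughts” as lists of symbols and “thinking” as the repeated application of a small set of rewrite rules, with both the symbols and the rules made up for the example.

# A minimal sketch of the symbolic hypothesis: "thoughts" are lists of
# symbols, and "thinking" is the repeated application of rewrite rules.
# The symbols and rules here are invented for illustration only.

RULES = {
    # (pair of symbols) -> replacement: combine primitives into complex concepts
    ("HAS_FEATHERS", "FLIES"): "BIRD",
    ("BIRD", "SINGS"): "SONGBIRD",
}

def rewrite(symbols):
    """Apply the first matching rule to an adjacent pair of symbols."""
    for i in range(len(symbols) - 1):
        pair = (symbols[i], symbols[i + 1])
        if pair in RULES:
            return symbols[:i] + [RULES[pair]] + symbols[i + 2:]
    return symbols  # no rule applies

def think(symbols):
    """Rewrite until no further rule applies."""
    while True:
        new = rewrite(symbols)
        if new == symbols:
            return symbols
        symbols = new

print(think(["HAS_FEATHERS", "FLIES", "SINGS"]))  # ['SONGBIRD']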

Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols combined according to certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought. Symbolic AI met with immediate success in areas in which problems could be easily described using a limited domain of objects that operate in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four-square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves in search of a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving.
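The look-ahead described above is typically implemented as minimax search over a tree of moves and countermoves. The sketch below illustrates the core idea with a tiny hand-built game tree; the tree, the scores, and the function names are assumptions made for the example, and a real chess program would generate positions from the rules of chess and cut the search off at a fixed depth.

# A minimal sketch of minimax game-tree search. The miniature game tree
# below is a hypothetical illustration, not a chess position.

# Each position maps to its legal successor positions; leaf positions map
# to a score from the first player's point of view.
TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": -2, "b1": 1, "b2": 5}

def minimax(position, maximizing):
    """Return the best score achievable from this position with best play."""
    if position in SCORES:                      # terminal position
        return SCORES[position]
    values = [minimax(c, not maximizing) for c in TREE[position]]
    return max(values) if maximizing else min(values)

def best_move(position):
    """Pick the move whose subtree guarantees the best outcome."""
    return max(TREE[position], key=lambda c: minimax(c, maximizing=False))

print(best_move("start"))  # 'b': the opponent can only hold the score to 1 there, versus -2 after 'a'

The program prefers move "b" because even the opponent's best reply leaves a score of 1, whereas after "a" the opponent can force a score of -2; exhaustive look-ahead of this kind is exactly what the restricted world of chess makes possible.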

Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two-year-old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise, since they lack knowledge outside that domain, knowledge that most human “experts” possess in the form of what is known as common sense. Humans make use of general knowledge, the millions of things that are known and applied to a situation, both consciously and subconsciously. It is now clear to AI researchers that the set of primitive facts necessary for representing human knowledge, should such a set exist at all, is exceedingly large.
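Brittleness can be made concrete with a toy rule-based diagnoser, sketched below purely as an illustration (the rules and findings are invented, not taken from any real system): inside its tiny rule base it answers readily, but one step outside it returns nothing, because it has no common-sense knowledge to fall back on.

# A toy rule-based "expert" used only to illustrate brittleness.
RULES = [
    # (set of required findings, conclusion)
    ({"fever", "cough"}, "likely flu"),
    ({"sneezing", "itchy eyes"}, "likely allergies"),
]

def diagnose(findings):
    for required, conclusion in RULES:
        if required <= findings:   # all required findings are present
            return conclusion
    return None                    # no rule matches: the system simply breaks down

print(diagnose({"fever", "cough", "headache"}))  # 'likely flu'
print(diagnose({"broken arm"}))                  # None -- nothing outside its narrow domain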

Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition, is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that human experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a “degenerating research program,” by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should alternatives become available. This prediction has proven fairly accurate. By 2000 the once-dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat’s Cyc (pronounced “psych”). Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts.

Lenat plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, by conversing, reading, and applying logical rules to detect patterns or inconsistencies in the data it is given. Initially conceived in 1984 as a ten-year initiative, Cyc has not yet shown convincing evidence of extended independent learning.
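The kind of reasoning Lenat envisions, applying logical rules to a base of facts to derive new facts and flag inconsistencies, can be sketched as simple forward chaining. The example below is a simplified illustration: the facts, the single transitivity rule, and the tuple encoding are assumptions made for the sketch, and Cyc’s actual representation language (CycL) and inference machinery are far richer.

# A minimal sketch of forward chaining over a fact base: derive new facts
# from a rule and flag explicit contradictions. The facts and rule are
# invented illustrations, not actual Cyc (CycL) content.

FACTS = {("isa", "Tweety", "Bird"), ("isa", "Bird", "Animal")}

def forward_chain(facts):
    """Rule: isa is transitive -- from (isa, x, y) and (isa, y, z), derive (isa, x, z)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {("isa", a, d)
               for (_, a, b) in derived
               for (_, c, d) in derived
               if b == c and ("isa", a, d) not in derived}
        if new:
            derived |= new
            changed = True
    return derived

def contradictions(facts):
    """Return facts whose explicit negation is also asserted."""
    return [f for f in facts if ("not",) + f in facts]

facts = forward_chain(FACTS)
print(("isa", "Tweety", "Animal") in facts)                       # True -- a derived fact
print(contradictions(facts | {("not", "isa", "Tweety", "Animal")}))  # flags the contradicted fact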