Can a machine ever be regarded as intelligent? The British mathematician and theoretical computer scientist Alan Turing proposed in 1950 what he called the "imitation game." The person performing the test sits in a room with two computer terminals at which questions can be typed. One is connected to a room where a human responds to the questions, and the other to a computer that generates the responses. The tester engages in a lengthy conversation with both concerning any topic, such as the weather, sports, politics, mathematics, and so on, and then decides which responder is the human and which the computer.
Turing proposed that the computer could be regarded as intelligent once it was no longer possible to distinguish between the two any more reliably than by guessing--that is, once the tester correctly identified the human respondent only 50 percent of the time. This is now known as "Turing's test" and is commonly regarded as a sufficient practical criterion for judging a machine to be intelligent in the human pattern. Consider a fragment such as:
Question: Are you able to tell a lie?
Answer: Yes, I am.
Question: Are you self-aware?
Answer: But of course.
Question: Do you have a soul?
Answer: Please explain what a soul is.
Such an exchange would not in itself be enough to settle the issue, for these are obvious questions for a programmer to anticipate and make provisions for. At the very least, a machine would have not only to claim self-consciousness but also to defend the claim capably in order to pass the Turing test. At this point there is no device that can come close to approximating human behaviour this well. Whether it will ever be possible is a question open to argument--one that some suggest can only be settled by the machines, acting on their own behalf. While the production of a machine that can behave in a way indistinguishable from a human (social intelligence) is regarded by some as the ultimate goal of research in this field, there are also other more practical and more immediate goals.
The most important of these shorter-term projects have to do with knowledge-based machines, which carry out tasks using processes that in some ways resemble human thinking but that so far remain profoundly different from it. A machine whose purpose is the analysis of knowledge is far easier to build than one that could pass Turing's test. There are four kinds of tasks for which current machines are commonly programmed: simulations, expert tasks, inference tasks, and design tasks.
One of the most popular computer games is a program to simulate the controls of an aircraft. The player can practice flying and landing at various airports under safe conditions, where a crash signifies only the end of the game, not the end of the pilot's life. The aircraft industry has long had specialized machines for this purpose, and with the help of computers, these are becoming more realistic. Some occupy an entire room and come complete with a cabin at the end of a long rotating arm capable of both motion and acceleration. No matter how elaborate they are, such simulators are cheaper and safer than employing a genuine aeroplane in the same exercises.
Simulations are also used in the design of expensive components or systems. Once again, the aircraft industry is an important user of such devices and programs. For example, it is now possible to run a graphical simulation of a wind tunnel and picture the stresses on an airframe using a computer. Expensive though such machines are, they are cheaper than building the wind tunnel itself, and far less expensive than testing a prototype of the plane.
Medical schools have found it difficult to obtain cadavers on which students can practice surgical technique. Artificial cadavers connected to a computerized analyser allow safe practice of many types of operations and provide a detailed summary for the instructor afterwards. The military also uses war games, or simulations, to train personnel in order to avoid unnecessary risk to human lives.
Indeed, wherever the cost in money or lives of testing a technique or machine is very great, simulations can be used to reduce the risk. The goal is to approach realism as closely as possible without subjecting learners to any real risks except those of the failures necessary for learning.
Perhaps the best-known and most successful examples of computers performing expert tasks are in the field of medical diagnosis. There have been several programs, including the ones known as CADUCEUS, CASNET and MYCIN, that have performed at the level of human diagnosticians. The idea behind such systems is to create a very large data base of diseases and their symptoms together with probabilities that the two will be associated, and the suggested treatments together with their success rates, side effects and contra-indications.
Such a program uses a search scheme to take a list of symptoms provided by a doctor and suggest tests that can be performed to narrow down the possible causes. Once the results of a series of tests have also been entered, a probable diagnosis is made and treatment suggested. At each stage of the testing regimen, the medical practitioner is given a list of the possible diagnoses still "in the running" together with the probability that each is the correct one. During treatment, the program can be updated with patient responses and provide expert assistance with drug dosages and alternate treatments.
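In rough outline, the narrowing process might be sketched as follows. This Python fragment is not taken from CADUCEUS, CASNET, or MYCIN; the diseases, symptoms, and probabilities are invented for illustration, and a real system would add test selection, treatment data, and far more careful probability handling.

```python
# Minimal sketch of diagnostic narrowing over a small, invented knowledge base.
# Each disease maps to the probability that a given symptom appears with it.
KNOWLEDGE_BASE = {
    "influenza":   {"fever": 0.90, "cough": 0.80, "rash": 0.05},
    "measles":     {"fever": 0.85, "cough": 0.50, "rash": 0.95},
    "common cold": {"fever": 0.30, "cough": 0.90, "rash": 0.01},
}

def rank_diagnoses(observed_symptoms):
    """Return the candidates still 'in the running', most probable first."""
    scores = {}
    for disease, symptom_probs in KNOWLEDGE_BASE.items():
        score = 1.0
        for symptom in observed_symptoms:
            # Symptoms not listed for a disease are treated as unlikely (0.01).
            score *= symptom_probs.get(symptom, 0.01)
        scores[disease] = score
    total = sum(scores.values()) or 1.0
    # Normalise so the results read as rough relative probabilities.
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for disease, prob in rank_diagnoses(["fever", "rash"]):
        print(f"{disease}: {prob:.2f}")
```

Entering further test results would simply extend the list of observed symptoms and re-rank the candidates, which is the loop the paragraph above describes.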
Similar software is available in other fields, including the law, metals and minerals prospecting, and chemistry. It is from the base of such knowledge devices that hypertext and ultimately Metalibrary systems are gradually growing.
Although inference tasks overlap the expert tasks and will one day merge with them, they are somewhat different in concept. Here, the major data base on which the tasks operate is not so much a pool of facts as a history of the success or failure of previous decisions made by the system. Moreover, the program is designed not so much for the analysis of data as to follow a collection of rules.
Consider, for example, a program designed to play chess. There are two kinds of rules that must be made available to the program. The first are the rules of the game, wherein the program is instructed how to move the board pieces legally. The second group consists of a set of rules of thumb, which are also called heuristics. These are collections of general ideas about the overall strategies that work best at various stages of the game. Standard opening sequences, together with such ideas as controlling the centre of the board and when to trade pieces for advantage, are all among the chess heuristics. A set of sample games completes the system, and this collection is added to by the machine as it plays.
The chess program uses the board rules, the heuristics, and the history, together with brute force computational methods that can examine tens of thousands of combinations that may arise from any of the possible legal moves at a given time. The actual move made by the program is based on what generates the best possibilities two, three or even ten moves ahead. A human chess player does not work in this fashion, but employs a broader and subtler array of heuristics for making such decisions. Even the masters of the game do not try to envision all the possibilities more than a couple of moves ahead of the current position, but play for strategic advantage based on experience.
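A compressed sketch of such a lookahead appears below. It is not the method of any particular chess machine: the Game interface (legal_moves, apply, evaluate) is assumed for illustration, and serious programs add refinements such as alpha-beta pruning, opening books, and deeper selective search.

```python
# Toy minimax lookahead: examine legal moves several plies ahead and keep the
# line with the best heuristic score. The Game interface is hypothetical.

def minimax(game, depth, maximizing):
    """Return (score, move) for the best line found within 'depth' plies."""
    moves = game.legal_moves()
    if depth == 0 or not moves:
        # The heuristic evaluation stands in for rules of thumb such as
        # material balance and control of the centre of the board.
        return game.evaluate(), None
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        score, _ = minimax(game.apply(move), depth - 1, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```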
Even though chess-playing machines are now capable of playing well enough to defeat a world champion, the type of machine logic used here is very low level. It is based entirely on fast computational ability and does not even approximate human thinking. Thus, it does not have human intelligence, even though it can achieve some of the same results.
The approach taken by other programs designed to simulate intelligence, such as EURISKO, developed at Stanford, is rather different from that of the chess-playing ones. It relies more on heuristics and less on computational speed, and it is capable of developing logical lines of analysis, suggesting new heuristics, and rating these with other heuristics. It can also develop competing heuristics and remove defective or parasitic ones. EURISKO has been used to solve problems in computer programming, mathematics, games, circuit design, and various engineering applications.
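The flavour of heuristics judging other heuristics can be suggested with a toy sketch. This is not EURISKO's actual machinery; the rules, the "worth" bookkeeping, and the pruning threshold below are all invented.

```python
# Toy sketch of a heuristic pool that is rated, and pruned, by a meta-heuristic.

class Heuristic:
    def __init__(self, name, advise):
        self.name = name        # short label for the rule of thumb
        self.advise = advise    # function: task description -> True/False advice
        self.worth = 0.5        # running estimate of how useful the rule has been

heuristics = [
    Heuristic("try-small-cases-first", lambda task: task["size"] < 10),
    Heuristic("exploit-symmetry", lambda task: task.get("symmetric", False)),
]

def record_outcome(task, advice_helped):
    """Credit heuristics whose advice applied and helped; debit the others."""
    for h in heuristics:
        if h.advise(task):
            h.worth += 0.1 if advice_helped else -0.1

def prune_pool(threshold=0.2):
    """Meta-heuristic: discard rules whose track record has fallen too low."""
    heuristics[:] = [h for h in heuristics if h.worth > threshold]
```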
This approach may have great potential, for the abilities to devise and test competing models of the universe being studied and to make logical inferences based on both data and decision history are both essential to anything that will be called artificial intelligence. Such an approach is also a better simulation of human thinking than the purely computational one, even though it too depends on computational ability for its success.
Design tasks also overlap the other two, but they are important enough to discuss separately. Drawing on a knowledge base, and sometimes using rules for analysis and inference, computers are already being used to assist in the design both of manufactured products and of the machines that make them. They are increasingly being employed to develop new designs for more complicated devices such as three-dimensional integrated circuits. It is a short step from this point to the successful design of more powerful computers using software alone. A computer could then design better design software for installation in the next machine, and the history of the first designer could be downloaded as the initial data base for the second. Thus, computers could eventually design their successors' hardware and software, and each machine in the sequence would be smaller, faster, and a better designer. In theory, the process could continue, with ever more intricate machines being built in this fashion, until the processing power and memory reached and exceeded that of the human brain. Numerous such elaborate processors, working in parallel, would be required to control all the functions of the Metalibrary. One task of an automated designer would be to monitor the available computer technology and continue the process in order to improve its own capabilities. Perhaps with some robotic help, these improvements could be automatic, achieved without human intervention.
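The underlying idea of automated design as a search guided by an accumulating history can be pictured with a toy sketch. The "design parameter," its scoring function, and the perturbation step below are all invented; the self-designing chain of machines described above remains speculative.

```python
# Toy sketch of design as search: repeatedly propose a small change and keep
# it if an (invented) figure of merit improves. Real design tools draw on
# knowledge bases and inference rules rather than blind perturbation.
import random

def score(design):
    # Hypothetical figure of merit, peaking at a design value of 42.
    return -(design - 42) ** 2

def automated_design(start=0.0, steps=200):
    best, history = start, []
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)   # propose a small variation
        if score(candidate) > score(best):         # keep it only if it is better
            best = candidate
        history.append(best)                       # decision history a later run could build on
    return best, history

if __name__ == "__main__":
    final, _ = automated_design()
    print(f"best design parameter found: {final:.2f}")
```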
At some point along this trail, enough will also become known about the chemical construction of large molecules to design new ones, and these new molecules might in turn be programmed to design others. Some researchers have suggested that people may one day be able to employ virus-sized machines (i.e., another form of nanotechnology) for such tasks as studying brain functions neuron by neuron, locating and repairing arteries blocked by strokes, and eliminating specific toxins, bacteria and viruses from the body. AIDS, herpes, and other retroviruses that go dormant and hide inside certain cells for extended periods might be eliminated from the body by such means.
Japan for a time made artificial intelligence (AI) research a national priority in an effort to secure a lead in computer technology for so-called fifth-generation machines. Similar work has also been undertaken at various universities in North America and Europe. This research has been given high priority by government funding agencies, the military, and private foundations. As a result, those making research commitments in areas relating to AI have little difficulty securing monetary support.
Problems of language translation have also provided one of the strongest motivations for the Japanese involvement with these projects, for one of their goals has been machines that can translate to and from Japanese and other languages in both spoken and written form. At first, it was thought that only a machine capable of this would be worthy of the label "fifth-generation." However, such problems have a variety of full and partial solutions in software alone. Such programs will be employed by telephone companies to allow verbal and written communication between speakers of different languages, and for the deaf or blind. They will also be used by cable companies to convert the closed captions transmitted with their programs into the language of the viewer's choice. Whether later systems in this category will be thought of as artificially intelligent in any new sense of the term remains to be seen.
On the hardware scene, attention is focusing on parallel processing, in an effort to break the von Neumann bottleneck associated with traditional sequential processing. Machines that rely on a central processing unit must execute instructions from a stored program one at a time in sequence--a technique suggested by the mathematician John von Neumann in the 1940s. Even at the limits of today's fastest experimental processors, such machines are limited to speeds under a billion instructions per second (BIPS). If a problem can be broken down into many parts, each handled simultaneously by a different processor (i.e., parallel processing), the overall rate can go up by many orders of magnitude. Supposing that, say, a PowerPC chip runs at 1 BIPS by itself, a computer with 10,000 of these working simultaneously would execute 10,000 BIPS, or 10 trillion instructions per second (TIPS). Even this machine would have only a small fraction of the power of a human brain, but if it were reduced to a single chip, and 10 of these were in turn paralleled, the resulting device would be up to 100 TIPS. The last figure may be close to that of the brain.
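The principle can be illustrated with a small Python sketch. The workload below is invented, but it shows a problem being split into independent chunks that are processed simultaneously and then recombined, which is the essence of the parallel approach described above.

```python
# Minimal sketch of parallel processing: divide the work among several
# worker processes and combine their partial results afterwards.
from multiprocessing import Pool

def process_chunk(chunk):
    """Each worker handles its own slice of the problem independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = len(data) // 8
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=8) as pool:        # eight "processors" working at once
        partial_results = pool.map(process_chunk, chunks)
    print(sum(partial_results))            # recombine the partial answers
```

How much the overall rate actually rises depends on how cleanly the problem decomposes; work that cannot be divided still runs at the speed of a single processor.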
Of course, new hardware demands new types of software. Traditional AI work has been done in the programming languages known as LISP and PROLOG, but lately the Smalltalk and Prometheus notations have gained some credibility in this field. In order to work on a multiply paralleled machine, a language must be modular and able to schedule its processing both sequentially and simultaneously. Notations such as Modula-2 (designed to replace Pascal) have this capability, and perhaps the new machines will initially be programmed in some common descendant of these current programming notations. Of course, devices that will be used extensively as design tools for other machines and to simulate intelligence must be capable of programming themselves, and of devising the languages in which to do so. Ultimately, it may not be necessary to have many human programmers or human-readable notations, for the machines (in theory) will be capable of translating voice or other requests into programs and then executing these without further human intervention.
As indicated, the ultimate goals of artificial intelligence research extend beyond computational and design tasks to the understanding and emulation of the behaviour of the human brain. There are two paths down which this research may lead, and these are examined in the next two sections. The first path, seen as an ultimate goal by some researchers, is probably the more difficult. The second path, a somewhat more short-term solution, may be easier to accomplish.
Profile On ... Technology
What is required to build an expert, or knowledge-based system?
o An acknowledged human expert at performing the task must be available.
o The performance of the human expert must be based on special knowledge and the application of techniques.
o The expert must be able to explain the special knowledge and techniques.
o The rules used by the expert must each be capable of controlling decisions for large data sets and combinations of situations.
o The boundaries of the application in question must be clearly defined.
o The use of the system must improve the performance of the expert.
o The expert must remain available, if not as the system operator, then as the consultant to the operator.
What situations are not good candidates for expert systems?
o Those requiring the application of common sense.
o Those involving open-ended questions.
o Those with large numbers of special cases and subtleties (e.g., language processing).
o Those in which a belief system or world view is a factor in producing a decision.
o Those that involve the generation of new ideas from data, rather than the application of existing ideas to data.
Name | Use | Developer |
---|---|---|
NOAH | Robotics planning | University College, Santa Cruz |
MOLGEN | Molecular genetics work | Rand Corporation |
CADUCEUS | Medical diagnostics | University of Pittsburgh |
MYCIN, PUFF | Medical diagnostics | Stanford University |
DENDRAL | Chemical data analysis | Stanford University |
PROSPECTOR | Geological data analysis | SRI |
ELAS | Analysis of oil well logs | AMOCO |
MACSYMA | Symbolic mathematics | MIT |
SPERIL | Earthquake damage | Purdue University
IDT | Computer fault diagnosis | Stanford University/IBM |
CRITTER | Digital circuit analysis | Rutgers University |
EMYCIN, AGE | Expert system construction | Stanford University |
ROSIE | Expert system construction | Rand Corporation |
VISIONS | Image processing | University of Massachusetts
BATTLE | Weapons in battle | Naval Research Laboratory
EURISKO | Learning from experience | Stanford University |
RAYDEX | Radiology assistant | Rutgers University |
TECH | Naval Task force analysis | Rand Corporation |
OP-PLANNER | Mission planning | Jet Propulsion Lab |
SYM | Circuit Design | MIT |