Review of A Practical Guide to Designing Expert Systems (1984)
Reluctantly, I must admit that this is a good book. Weiss and Kulikowski have admirably delivered what they promise: a simple, proven-effective means for building prototype expert systems. The authors have considerable experience and speak with authority. Their points concerning diverse problems, such as selecting applications and knowledge acquisition, and strategic issues, such as controlling questioning, are clear and useful. What I most like about this book is that it is not pretentious. It deals only with what the authors understand best about expert systems, and all of that is presented simply, with good examples. The book steers clear of academic arguments about knowledge representation, and this simplification seems appropriate for a practical engineer's guide.

As a basic guide for designing expert systems, the book offers the classification model as a common theme for describing how certain expert programs solve problems. A classification expert system is one that selects an output from a pre-enumerated list of possible solutions built into the program. Weiss and Kulikowski present this model in a simple way, describing CASNET, PROSPECTOR, DART/DASD, and similar systems as examples. Problem definition, elements of knowledge, and uncertain reasoning are treated concisely. The brief discussion of traditional problem-solving methods, such as decision theory, is valuable. EXPERT, a production rule language, is illustrated by a hypothetical car diagnosis problem as well as a model for serum protein interpretation. Of particular interest is a description of the ELAS system for oil well log analysis, which integrates EXPERT with traditional analysis programs. The book concludes with an interesting, down-to-earth essay on the state of the art and consideration of the future.

But for all its good sense and clear exposition, the book has two important limitations.
First, the classification model presented here is weakly developed; it applies only to the simplest problems. Much more is known about classification from studies of human problem solving. The authors ignore cognitive science studies altogether and so leave out basic ideas that are relevant to designing expert systems. Even more serious, the authors advocate a rule-based programming style that I am afraid may become the FORTRAN of knowledge engineering. So much knowledge is left implicit or is redundantly coded that modifications and extensions to the program will be expensive, just like maintaining FORTRAN programs. If we want to make knowledge engineering an efficient, well-structured enterprise, we can only hope that approaches like those used in EMYCIN, EXPERT, and OPS5 will soon die out.

Examples from this book make my point. I will consider the classification model first. It is noteworthy that the two AI researchers who first described expert systems in terms of classification, Clancey (Clancey, 1984) and Chandrasekaran (Chandrasekaran, 1984), both had experience with pattern recognition research in electrical engineering. Some of the most informative parts of Designing Expert Systems relate expert system research to pattern recognition and decision analysis. What is lacking in this analysis is similar attention to the other fork of the evolutionary tree: studies of human problem solving in cognitive science. After all, the patterns of an expert system are not linear discrimination functions; they are concepts. Research concerning the nature of memory and the learning of categories is relevant for designing expert systems. In particular, the hierarchical structure of knowledge, the nature of schemas as stereotypes, and the hypothesis formation process all have a bearing on how we design an expert system.
Certainly, in the language of EXPERT, Weiss and Kulikowski have taken a big step beyond EMYCIN by structuring knowledge in terms of findings, hypotheses, and different kinds of rules relating them. They list three kinds of rules: finding-to-finding, finding-to-hypothesis, and hypothesis-to-hypothesis. Thus, the classification nature of the problem-solving method is revealed as a mapping of findings onto hypotheses. Moreover, Weiss and Kulikowski describe search of this knowledge network independently, so inference knowledge is not mixed with process knowledge. But their analysis stops here. Weiss and Kulikowski are right to put forth the classification model as a scheme for structuring expert knowledge, but they have made no attempt to relate it to what is known about experiential human knowledge.

Further analysis shows that there are common relations that underlie the rules (Clancey, 1984). For example, findings are related to each other by definition, qualitative abstraction, and generalization. Knowing this provides a basis for acquiring, documenting, and explaining finding-to-finding rules. Besides asking the expert, "Do you have any way to conclude about F from other findings?" the knowledge engineer could also ask, "Do you know subtypes of F?" or "Given this numeric finding, do you speak in terms of qualitative ranges?" Similarly, hypotheses are related by subtype or cause. Rather than considering car failure diagnoses (an example developed in the book) as a simple linear list, the knowledge engineer can start with the assumption that the expert organizes his knowledge as a hierarchy of diagnoses.

The classification model can be further refined in several ways. First, a distinction can be made between heuristic classification and simple classification by direct matching of features (as in botany and zoology). The pre-specified solutions in expert systems are often stereotypic descriptions, not patterns of necessary and sufficient features.
This has important implications for knowledge acquisition and ensuring robustness in dealing with noisy data. Second, emphasizing rule implication alone, Weiss and Kulikowski fail to mention

THE AI MAGAZINE, Winter 1985. AI Magazine Volume 5 Number 4 (1984) (© AAAI)
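The three kinds of rules discussed above (finding-to-finding, finding-to-hypothesis, and hypothesis-to-hypothesis) can be sketched in miniature. The following Python fragment is purely illustrative; the findings, hypotheses, thresholds, and confidence weights are invented for this sketch and do not come from EXPERT or the book:

```python
# A toy classification model with one rule of each kind.
# All names and numbers here are hypothetical, for illustration only.

findings = {"temperature_f": 103.0}
hypotheses = {}

# Finding-to-finding rule: qualitative abstraction of a numeric finding
# ("Given this numeric finding, do you speak in terms of qualitative ranges?")
if findings["temperature_f"] > 100.4:
    findings["fever"] = True

# Finding-to-hypothesis rule: heuristic association with a pre-enumerated
# solution, carrying a confidence weight rather than a logical proof
if findings.get("fever"):
    hypotheses["infection"] = 0.6

# Hypothesis-to-hypothesis rule: refinement within a hierarchy of diagnoses,
# moving from a general hypothesis to a more specific subtype
if hypotheses.get("infection", 0) > 0.5:
    hypotheses["bacterial_infection"] = 0.4

print(hypotheses)
```

Even in this toy form, the classification character is visible: the program can only select among the hypotheses pre-enumerated in its rules, which is exactly the property the classification model captures, and the confidence weights mark the rules as heuristic associations rather than necessary and sufficient conditions.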