Editor: Alison Kidd
ISBN 0306424541 (boards)
Author: Anna Hart
ISBN 185091091X (boards)
Back in the 1980s, no-one had heard of Requirements Engineering, and indeed when I first did (from a respected co-author of mine ...) I wondered how you could possibly call anything as simple as writing down a requirement 'engineering'. But I digress. Back in the 1980s, when primitive beings with hand-axes, simple earthenware pottery and 'expert systems' ruled the planet, phrases like 'Knowledge Elicitation' and 'Knowledge Acquisition' were common - indeed, they were all the fashion. In those far-off days, hominids with large brains held the belief that knowledge could be extracted from people's heads and bottled up in 'knowledge-based systems' which exhibited 'artificial intelligence'. However quaint (or cringe-making, if you were close enough to it) this superstition may now seem, the fact is that a large number of sophisticated techniques were developed, or more likely purloined from established disciplines such as psychology and human-computer interaction, for finding out what was inside people's heads. Requirements Elicitation existed before Requirements Engineering!
Since I have already made one admission of my age, let me make another. I wrote a book review of Alison Kidd's excellent Knowledge Acquisition for Expert Systems in 1989. For curiosity's sake, here it is. I will then attempt a retrospective review and a brief comparison of Kidd and Hart.
The big danger with books containing chapters by different authors is that they frequently become disjointed, written in a hotchpotch of styles and full of contradictions. [Some things never change!] It is pleasant to find a really well-edited book on a subject which is much in need of a sound scholarly approach, and some practical guidance. [Hmm, somewhat pious tone.] It is comparatively short, at 189 pages, including at least a page of references in each of the eight chapters. But for the attributions in the chapter headings, it looks like the work of a single author. Each chapter manages to introduce a practical problem in knowledge acquisition, briefly reviews past work, and describes, authoritatively, the most modern approaches to a solution.

Alison Kidd, apart from placing her editorial stamp on every page, has written a crisp introductory framework for the book. Characteristically, she poses some tough questions, like 'What is the relationship between knowledge and language?' and 'What constitutes a theory of human problem solving?'. Kidd dismisses the Feigenbaum analogy (mining jewels of knowledge out of the experts' heads) as misguided. No such simple correspondence exists between knowledge and the things experts say when interviewed. Instead, knowledge acquisition is a process of interpretation: it depends on the knowledge engineer's own model of the domain, of reality. [A precursor to the term 'requirements engineer'...]
Two Dutch authors, Breuker and Wielinga, describe a systematic and principled approach for acquiring knowledge. It is worlds apart from the naively enthusiastic candy-box approach to artificial intelligence. [Mmm, thank goodness, I was a bit sceptical even then.] Their approach has been implemented as an interactive support tool and it is well explained. [Is that all? My original review must have been ruthlessly chopped by the editor.]
Other chapters describe in turn how to make a qualitative structural description of some mechanism, how to extract knowledge from transcripts of interviews -- a dreadful task -- and how to model an expert's conceptual structures.
Leslie and Nancy Johnson discuss what they call teachback interviewing. When you can teach a topic back to the expert who just explained it to you, and he [oops!] agrees with your version, then you share the same concept. This is certainly more secure than a simple question and answer interview.
Repertory grids and hierarchies of concepts are simply and clearly explained in the next two chapters [by Mildred Shaw & Brian Gaines, and by John Gammack respectively]. These are useful and powerful techniques, and every knowledge engineer should at least know about them.
Finally, Anna Hart looks at the controversial topic of rule induction. If you discover that most pensioners are bald or have grey hair, can you make a general rule with any predictive value? Hart wittily deflates some of the more outrageous claims [Ah yes, I'm sure I gave more examples here], while discussing the topic in enough detail for the knowledge engineer thinking of using an induction package.
This book is a welcome breath of fresh air in a murky subject. It is suitable as a general introduction for students and would-be researchers, and helpful as background for industrial projects. The only negative aspect of the book is the high price. [It cost $39.50, which must be $70 or more at 2003 prices; and it's still in print at $78.50.]
The next step is to ask whether progress has been made: for instance, whether a classic book has actually been superseded.
The one thing that is noticeably missing from all this is any notion of collaboration between experts, and hence of any techniques such as workshops to elicit process knowledge from several people at once. Apart from that, don't we get the impression that this is a rich and varied set of approaches, probably considerably better than the usual round of interviews and jumping-to-conclusions that we know today? So, although a dozen or so years is rather a short time to identify a classic conclusively, I suggest that Kidd is well worth revisiting.
The aspect of Kidd's book that is as dated as beehive hairdos and 12" black vinyl records is of course the emphasis on expert systems themselves. What makes this an unusually good book on the matter is the careful and skeptical attitude of most of the authors. For example:
Have we extracted all the knowledge possible from the transcripts? We do not know. ... Although we believe in our judgement, it is completely intuitive and we cannot be dogmatic. We are even less confident about knowledge that may be implicit... "Commonsense" knowledge, general problem-solving strategies, "deep" knowledge of the .. foundations of the domain .. are likely examples of the sort of knowledge that a human expert may not think to mention. [Fox et al, p85]
Tacit Knowledge is perhaps the key issue that made the AI enterprise unworkable; the (unstated!) assumption that expertise could be elicited by interview and captured into hard-and-fast rules was simply wrong. So it is interesting to see that by 1986-7 the foundations were already cracking.
However, what Feigenbaum terms knowledge engineering, the reduction of a large body of knowledge to a precise set of facts and rules, has already become a major bottleneck impeding the application of ESs [expert systems] in new domains. We need to understand more about the nature of expertise in itself and to be able to apply this knowledge to the elicitation of expertise in specific domains. [Shaw and Gaines, p109]
In other words, it was already proving unexpectedly slow and difficult to acquire, extract, elicit, codify the knowledge from experts' heads into rules. Somehow, it just wasn't working too well.
The methodology described is very easy to implement and run. Experts enjoy using the system and need far less attention from a knowledge engineer. .. It is essential that the expert become familiar with the [approach] .. by experience .. we put great emphasis on the need to be able to validate any technique for knowledge engineering. It is not sufficient to show that reasonable expert systems can be developed. One must attempt to evaluate the accuracy and completeness of the knowledge transfer. This is not an easy task ... [Shaw and Gaines, p130]
This is fascinating. The authors are one moment full of childish enthusiasm for their new gadget; the next, they accidentally admit that experts themselves need to become familiar with the elicitation techniques by non-verbal means! Then they pour cold water all over the enthusiasm of others by insisting, quite rightly, on systematic validation -- which turns out to be just as impossible as elicitation.
Induction is consistent and unbiased, although it probably uses only one form of reasoning. Rules are relatively easy to understand... [Hart, p187]
Rules are nice if you can get them. If you can prime your induction engine with a good set of examples covering all eventualities, the engine is guaranteed to come up with good rules. Otherwise, alas, it is guaranteed to come up with garbage, making over-generalised assumptions and risky extrapolations from limited data. To her credit, Hart does say that they must be validated "preferably with both documented examples and the expert."
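The over-generalisation Hart warns about is easy to reproduce. Here is a minimal sketch (plain Python, with invented data -- this is an illustration, not Hart's algorithm) of a naive one-rule inducer: fed only the pensioners we happened to interview, it cheerfully induces 'pensioner implies grey hair', a rule with no predictive value beyond the sample.

```python
from collections import Counter

def induce_rule(examples, feature, target):
    """Naive one-rule induction: for each value of `feature`,
    predict the majority value of `target` seen in the examples."""
    by_value = {}
    for ex in examples:
        by_value.setdefault(ex[feature], []).append(ex[target])
    return {value: Counter(targets).most_common(1)[0][0]
            for value, targets in by_value.items()}

# A biased sample: every pensioner in the interviews had grey hair or none.
examples = [
    {"pensioner": True,  "hair": "grey"},
    {"pensioner": True,  "hair": "grey"},
    {"pensioner": True,  "hair": "bald"},
    {"pensioner": True,  "hair": "grey"},
    {"pensioner": False, "hair": "brown"},
]

rule = induce_rule(examples, "pensioner", "hair")
print(rule)  # {True: 'grey', False: 'brown'}
# The induced 'rule' -- pensioner implies grey hair -- is an artefact of
# the limited sample, which is why validation against documented examples
# and the expert matters.
```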
So, in one way Kidd documents one of the great failures of software engineering, but in another she puts together a powerful set of techniques for getting the most from interviews.
The fact that Anna Hart's book had the same title shows that the subject was immensely fashionable in software engineering circles, at least as hot as all things Object and UML today. Her book is less interesting to us now; it has a narrower range and a less reflective outlook. Its heart is a tutorial on rule induction, though there's a good and helpful chapter on Repertory Grids (easier than Kidd's) and introductions to reasoning and probability theory as well as a simple discussion of interviewing. There's also a chapter on the qualities required of a knowledge (aka requirements) engineer, including 'empathy and patience', 'persistence', 'tact and diplomacy', 'logicality', and 'domain knowledge' - though Hart admits this last is "not essential". Apart from the sprinkling of AI terms, the chapter would do fine today as an introduction to RE.
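For readers who have never met a repertory grid: the expert rates a set of elements against his or her own bipolar constructs on a numeric scale, and analysing the grid shows which elements the expert construes as similar. A minimal sketch, with an entirely invented grid of plant components (not an example from either book):

```python
# Hypothetical repertory grid: elements rated 1-5 against the expert's
# own bipolar constructs (1 = left pole, 5 = right pole).
elements = ["pump", "valve", "sensor", "controller"]
constructs = ["mechanical-electronic", "cheap-expensive", "common-rare"]
grid = {
    "pump":       [1, 2, 1],
    "valve":      [1, 1, 2],
    "sensor":     [5, 3, 3],
    "controller": [5, 5, 4],
}

def distance(a, b):
    """City-block distance between two elements' rating rows."""
    return sum(abs(x - y) for x, y in zip(grid[a], grid[b]))

# Which pairs of elements does the expert construe as most alike?
pairs = sorted(
    (distance(a, b), a, b)
    for i, a in enumerate(elements) for b in elements[i + 1:]
)
for d, a, b in pairs:
    print(f"{a:10s} vs {b:10s}: distance {d}")
# Small distances (pump/valve) flag elements the expert sees as similar;
# large ones (pump/controller) mark contrasts worth probing in interview.
```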
Finally, do I stand by the review written by my earlier self? Well, yes.
© Ian Alexander 1989 and 2003-4