Knowledge elicitation is usually directed at normal or primary business processes, those which directly contribute towards users' goals. However, exception cases often outnumber 'normal' cases, can drive project cost, and can cause system failures if not handled.
This paper describes an experiment to see whether scenarios could effectively drive the search for exceptions. Experienced engineers asked to specify a small system found up to three times as many exceptions by searching all scenario steps as by examining user requirements.
Knowledge elicitation tends naturally to describe primary business processes, those which give the results that users want to achieve in normal circumstances. These are often described in more or less concrete scenarios or use cases, or more traditionally in flow diagrams.
But progress towards normal or primary user goals can be interrupted by external events, operator errors, or component faults. These sources create exceptions which must be handled to prevent system failures [7, 9, 10]. There is no presumption that exceptions are caused or solved by software, though this is possible.
Systems thus have secondary exception-handling goals, namely to restore normal business processes or at least to stop processes that have gone astray, and hence avoid possibly disastrous and costly operational failures. The identification of exceptions is therefore an important part of the knowledge elicitation process.
The Scenario Plus approach [1, 2, 3, 4] aims to model business processes with a structure of goals, an approach shared with some object-oriented methods [5, 8]. Each goal can be interpreted as a task or activity of some kind; in particular, the leaf goals correspond to individual scenario steps.
Exceptions create branches in the structure; each exception is created by an exception event, leading to a high-level exception-handling goal. The leaf goals in an exception branch form an exception-handling scenario, often a sequence of practical subgoals. Scenario Plus uses a DOORS database [6] to permit symbolic execution of business processes. The 'executed' sequences can be recorded as test scripts, drawn as UML swimlanes diagrams automatically, replayed as desired, and linked to user and system requirements.
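As an illustrative sketch of this goal structure (the class and field names below are invented for the example; Scenario Plus itself holds these structures in a DOORS database), a goal tree with an exception-handling branch might be modelled as:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    """One node in the goal structure; leaf goals are scenario steps."""
    name: str
    subgoals: List["Goal"] = field(default_factory=list)
    exception_event: Optional[str] = None  # set on exception-handling branches

    def leaf_steps(self) -> List["Goal"]:
        # Leaf goals correspond to individual scenario steps
        if not self.subgoals:
            return [self]
        return [leaf for g in self.subgoals for leaf in g.leaf_steps()]

secure = Goal("Secure the house", [
    Goal("HH sets alarm"),
    Goal("HH closes door"),
    Goal("Remind HH to set alarm",
         exception_event="HH forgets to set alarm"),
])
print([g.name for g in secure.leaf_steps()])
# → ['HH sets alarm', 'HH closes door', 'Remind HH to set alarm']
```

Here the event attached to a branch marks it as exception-handling; collecting the leaves yields the scenario steps that drive both symbolic execution and the exception search.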
Experience with Scenario Plus suggests that there are often more exception goals than 'normal' or primary goals. Graham and Maiden have argued that exceptions are of central importance in process modelling [7, 9, 10]. This leads to the question: how can exceptions be identified most efficiently?
It seemed plausible that a well-organized structure showing the relationship of scenarios to each other - e.g., two scenarios may share an initial leg, then diverge - would be effective in finding exceptions. As this question was quite complex, it was reformulated as:
'Would a scenario structure of discrete user tasks help to find more exceptions than a list of user requirements?'
Groups of systems engineers on tools training courses were given a brief refresher course in the basic concepts of process modelling including the problem and solution domains, stakeholders, scenarios, activities and exceptions.
An example project was introduced, and the groups collectively identified lists of users to interview, business objectives, and user requirements. Each group then conducted the control procedure followed by the main experimental procedure. A separate control group omitted the Control procedure, so that its effect on the Main procedure could be assessed.
4.1 Control Procedure
The group was instructed to study the list of requirements and individually, without discussion, to write a list of possible exceptions. Each engineer wrote a list which was collected for analysis. Enough time was allowed for everyone to finish writing.
4.2 Main Experimental Procedure
The group interactively prepared a set of scenarios for the example project, and organized them into a Dewey-numbered non-redundant structure. The group was instructed to step through the scenarios, asking 'What could go wrong here?' to generate possible exceptions for each step. Each engineer wrote a list of exceptions under the scenario step numbers. The lists were collected for analysis. Enough time was allowed for everyone to finish writing.
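A minimal sketch of the Dewey numbering used to organize the scenario steps (the function and the nested-pair data format here are invented for illustration):

```python
def dewey_number(steps, prefix=""):
    """Assign Dewey-style numbers (1, 1.1, 1.2, 2, ...) to nested scenario steps.

    Each step is a (name, children) pair; returns a flat list of
    (number, name) pairs in document order.
    """
    numbered = []
    for i, (name, children) in enumerate(steps, start=1):
        num = f"{prefix}.{i}" if prefix else str(i)
        numbered.append((num, name))
        numbered.extend(dewey_number(children, num))  # recurse into substeps
    return numbered

# A fragment of the example project's scenarios (see Table 2)
scenarios = [
    ("Secure the house", [("HH sets alarm", []),
                          ("HH closes door", []),
                          ("HH locks door", [])]),
    ("Watch (alternatives)", [("detects nothing", []),
                              ("detects animal/insect", []),
                              ("detects burglar", [])]),
]
for num, name in dewey_number(scenarios):
    print(num, name)
```

Each engineer then lists candidate exceptions under these step numbers, which makes the later merging and counting of lists straightforward.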
4.3 Example Project
The chosen example project was an electronic household protection system, sold by an alarm company to generate profits and annual income: hence a call centre was required.
A model answer was prepared with knowledge of all the exception cases identified by the groups. The principal stakeholders are listed with convenient abbreviations in Table 1; the scenarios and exceptions identified in the model answer are listed in Table 2. The groups were asked not to consider installation and maintenance scenarios, which would be expected to generate additional exceptions; these were not considered further.
Class of Stakeholder      | Interest in System
------------------------- | ------------------------------
Householder (HH)          | property protected, feel safe
Burglar (B)               | not be illegally injured
Alarm Company (AC)        | sell systems, annual income
Installation Engineer (I) | install new domestic systems
Maintenance Engineer (M)  | maintain systems effectively
Centre Operator (CO)      | be able to do job effectively
Guard (G)                 | be called out appropriately

Table 1: Stakeholders
The analysis yielded a total of 29 plausible operationally important exceptions (Table 2). These are classified by their primary causes into Types: System failures (S), Human errors (H), human Overload (O), or Deception/Attack (D).
Scenario Steps            | Related Exceptions            | Type
------------------------- | ----------------------------- | ----
1 Secure the house        |                               |
1.1 HH sets alarm         | 1 HH forgets to set           | H
1.2 HH closes door        | 3 HH fails to close           | H
1.3 HH locks door         | 4 HH fails to lock            | H
2 Watch (alternatives)    |                               |
2.1 detects nothing       | 5 fault generates false alarm | S
2.2 detects animal/insect | 6 fail to discriminate        | S
2.3 detects burglar       | 7 fail to detect intrusion    | S
3 Unsecure the house      |                               |
3.1 HH unlocks door       | -                             |
3.2 HH opens door         | -                             |
3.3 HH unsets alarm       | 15 HH forgets to unset        | H
4 Respond to intrusion    |                               |
4.1 sounds alarm          | 18 bell fails                 | S
4.2 signals to Center     | 19 signalling fails           | S
4.3 CO calls HH           | 20 Center fails to respond    | S
4.4 HH disconfirms        | 24 CO has wrong HH details    | O
4.5 CO calls G            | 26 G is busy                  | O
4.6 G checks house        | 27 G conducts the burglary    | D
4.7 CO unsets alarm       | 29* CO fails to unset         | D

Table 2: Model Answer Exceptions
These exceptions are not necessarily all the responsibility of the alarm company which is modelling its business processes. For example, the householder can create a false alarm by forgetting a deactivation code, or invite a burglary by failing to lock the door. However, the alarm company has at least two good commercial reasons to consider exceptions whatever their cause.
Firstly, the alarm company wants to have satisfied customers, i.e. householders. The better the alarm system in the broad sense - including domestic alarm, call centre, and daily operations - serves the customers' needs, the more successful the alarm company will be. If that means correcting their mistakes without blame, that is business.
Secondly, from a practical viewpoint, good system design can mitigate such problems, provided they are identified and prioritised. The alarm company therefore has a business opportunity to address more customer problems than it formerly thought, so each new exception identified means a chance to increase its profits. From a marketing perspective, an exception is the inverse of a feature of a product or service.
The exceptions identified by each subject were checked and counted, discarding any that were badly formed. The mean and standard deviation were then computed for each group. A Chi² test was conducted, with the null hypothesis that the Main procedure's results did not differ from the Control procedure's. The test shows that the results were very unlikely to have been obtained by chance (Table 3): the two procedures had significantly different effects.
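The paper reports only group means, deviations, and p-values, not the raw per-subject counts or the exact form of the Chi² computation. The sketch below therefore uses invented counts and one common reading of the test, purely to make the analysis steps concrete:

```python
from statistics import mean, stdev

# Hypothetical per-subject exception counts -- NOT the experiment's raw data
control = [4, 7, 3, 5]    # exceptions found from the requirements list
main = [13, 10, 16, 14]   # exceptions found by stepping through scenarios

m_ctrl, s_ctrl = mean(control), stdev(control)
m_main, s_main = mean(main), stdev(main)
print(f"Control: {m_ctrl:.2f} ± {s_ctrl:.2f}, Main: {m_main:.2f} ± {s_main:.2f}")

# One plausible reading of the Chi² test: compare each Main count against
# the expectation that it equals the Control mean.
chi2_stat = sum((obs - m_ctrl) ** 2 / m_ctrl for obs in main)
# A p-value could then be obtained from, e.g.,
# scipy.stats.chi2.sf(chi2_stat, df=len(main) - 1)
print(f"Chi² statistic: {chi2_stat:.2f}")
```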
No. of Candidate Exceptions Found

Group | Size | Control Procedure | Main Procedure | Chi² Test | Improvement
----- | ---- | ----------------- | -------------- | --------- | -----------
A     | 12   | 4.75 ± 2.14       | 13.0 ± 3.61    | 5.6E-52   | × 2.73
B     | 5    | 2.8 ± 1.79        | 13.4 ± 5.32    | 2.1E-56   | × 4.78
A+B   | 17   | 4.18 ± 2.19       | 13.23 ± 4.07   | 1E-107    | × 3.17
Ctrl  | 8    | none              | 12.12 ± 5.82   | n/a       | n/a

Table 3: Unfiltered Results
The results achieved by individuals on the Main procedure were on average a factor of 3 better than those on the Control procedure. Subjects in the control group achieved results similar to the other groups despite the lack of a preceding Control procedure. This suggests that the Control procedure did not affect the results on the Main procedure.
The results were then analysed to identify how many of the Model Answer exceptions each subject had discovered (Table 4), omitting any implausible or out-of-domain suggestions and treating duplications more strictly.
No. of Plausible Exceptions Found

Group | Size | Control Procedure | Main Procedure | Chi² Test | Improvement
----- | ---- | ----------------- | -------------- | --------- | -----------
A     | 12   | 3.83 ± 1.89       | 7.58 ± 1.51    | 6.9E-15   | × 1.97
B     | 5    | 2.8 ± 1.49        | 7.6 ± 2.41     | 7.5E-12   | × 2.71
A+B   | 17   | 3.53 ± 1.81       | 7.59 ± 1.73    | 7.8E-24   | × 2.15
Ctrl  | 8    | none              | 9.0 ± 3.89     | n/a       | n/a

Table 4: Results Filtered by Model Answer
These are less dramatic than the unfiltered results, but still show that subjects individually identified twice as many Model Answer exceptions using the Main procedure as using the Control procedure.
This simple experiment suggests that engineers with varied backgrounds may benefit from a requirements engineering process that includes a scenario-based method for identifying possible exceptions.
It is never easy to obtain experimental time with busy professional subjects; training courses provide a reasonable sample of systems engineers interested in requirements engineering, and willing to participate in short experiments.
The example project was necessarily small; large projects, while more complex, may not have proportionally as many exceptions to consider. However, exceptions may be both more serious and harder to identify for large systems, so the approach could bring significant improvements.
7.1 Interpretation of Results
The experiment revealed a surprisingly large number of plausible exceptions - methods of causing system failure - for the apparently simple domain of a domestic burglar alarm and linked call centre. Only 8 of the 29 model answer exceptions seem primarily to concern failures of the hardware/software system.
Of the remainder, at least 10 (and perhaps as many as 14) involve human error or overload, though these could be mitigated by good system design. For example, good user interface design could help to reduce failures to set the alarm, so the allocation of exceptions to human and system cannot be precise.
A further 11 exceptions require deception, skill, or violence by the burglar together with conspiring call centre operators or guards. Methods that tend to focus attention on "system" issues could therefore fail to discover up to 21/29 (72%) of the Model Answer exceptions.
The experiment benefited from having experienced engineers as its subjects, avoiding the validity problems associated with extrapolating from students to practising engineers. Since all of the subjects found more possible exceptions when using scenario-driven search, the experiment suggests that the approach is robust with respect to individual experience. The smallest improvement for any individual in generating possible exceptions was 60%, while the average individual improved by a factor of about 3, though with wide variation (× 3.12 ± 2.77).
In one suggestive case, an individual failed to identify any exceptions using requirements-driven search, but found 13, close to the unfiltered average, using scenarios. This result offers a hint that a scenario-based process may be effective in helping possibly inexperienced stakeholders to identify exceptions.
Smaller gains were revealed when the data were filtered for plausibility (Table 4). However, the results were still decisive, with an average of over 7.5 exceptions discovered using scenario-driven search compared to 3.5 using requirements, a twofold improvement.
With this more careful analysis, it could be seen that some subjects had in fact found different exceptions using the two techniques. One subject found 8 of the Model Answer exceptions using requirements-driven search, but only 7 using scenario-driven search - for some reason not repeating 4 valid exceptions found earlier. The subject thus identified 11 of the Model Answer exceptions using both techniques. For individuals in Groups A and B, search using both techniques identified on average two more exceptions than did scenario-driven search alone. This suggests that it may be useful for individuals to apply both techniques (Table 5).
Technique used by Individual Modeller | Average No. of Exceptions Found
------------------------------------- | -------------------------------
RD Search                             | 3.53 ± 1.81
SD Search                             | 7.59 ± 1.73
Both Techniques                       | 9.59 ± 2.18

Table 5: Requirements-Directed versus Scenario-Directed Search as Techniques for Individual Process Modellers
However, the results also demonstrate that groups found more plausible exceptions (a total of 27) than any single individual did - the highest total for any individual was 11. Table 6 summarizes the effectiveness of the techniques for groups A and B combined. The number of individual hits for each technique is shown in parentheses.
Exceptions (Hits)          | System Failures | Human Errors | Human Overload | Deception/Attack | Totals
-------------------------- | --------------- | ------------ | -------------- | ---------------- | ------
found by RD Search         | 4 (15)          | 5 (18)       | 3 (7)          | 4 (19)           | 16
found by SD Search         | 7 (68)          | 7 (58)       | 3 (31)         | 10 (54)          | 27
found by both techniques   | 4               | 5            | 3              | 4                | 16
found by neither technique | 1               | 0            | 0              | 1                | 2
Totals                     | 8               | 7            | 3              | 11               | 29

Table 6: Requirements-Directed versus Scenario-Directed Search as Group Techniques
Where it is feasible for a group to work together, for instance in a process modelling workshop (such as Co-Operative Inquiry [2, 3], RAD/JAD [11], or SOMA [7]), the results in Table 6 suggest that Scenario-Directed search alone will probably be sufficient: Requirements-Directed search found no additional exceptions in any of the four exception type categories. Alternatively, as in the experiment, modellers may make separate lists and then merge them.
It is also notable that Scenario-Directed search produced roughly three to four times as many individual hits in each of the four exception type categories. This suggests that small groups should find the technique significantly more effective than Requirements-Directed search.
Scenario-Directed search was especially advantageous in the Deception/Attack category, suggesting that these more subtle exceptions could easily be missed by techniques that do not encourage people to try to think of exceptions to specific cases. Focussing on scenarios appears to give groups a better chance of thinking creatively, or as Edward de Bono calls it, thinking laterally.
Two exceptions in the model answer were not found by groups A or B. One was identified by an individual in the control group; the other by considering whether the scenarios were complete.
These results indicate the benefit of the more systematic approach, as well as the advantages of applying more minds to process modelling. The experimental subjects did not have time to review their materials in detail, nor were they able to discuss their findings with each other: both activities could be expected to identify more exceptions. A tool such as Scenario Plus which animates the scenarios helps people to identify missing or wrong steps in a process scenario. That too is a form of scenario-directed search.
7.2 Validity Threats
One possible validity threat is that the subjects might have benefited in some way from the initial requirements-driven search. A control group was therefore asked to carry out a scenario-driven search for exceptions without an initial requirements-driven search. Their mean score was 12.12 ± 5.82, within the other groups' combined 13.23 ± 4.07 range of error.
The groups were not formally matched for experience, but were effectively selected at random with respect to the experiment, while the groups' experience and age ranges appeared similar. It therefore seems likely that a requirements-driven search does not significantly improve scores on a subsequent scenario-driven search.
7.3 Impact on Process Modelling
This experiment helps to reinforce the view that an effective business modelling process should systematically identify the stakeholders, and capture problem descriptions from them using scenarios or use cases. This should be followed by organizing the scenarios into definite steps (activities or tasks). Then the stakeholders should be asked to identify possible candidate exceptions that might occur at each step.
Some of these may be trivial or unlikely, but among them will be important exception cases that would cause system failures unless handled adequately. It is wiser to adopt a generate-and-filter approach than to assume that the exceptions are obvious or all in "system" areas.
Revisiting the scenario structure once exception scenarios have been added is a valuable reasonableness check and may help to detect further scenarios and missing or wrong steps, including exception scenarios.
Scenario structure may also help people to identify exceptions within exceptions. For each exception, the stakeholders can be asked for a scenario that would handle it. These scenarios can in turn be used to guide the search for possible exceptions in the exception-handling procedures.
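This recursive search can be sketched as follows; the `ask` and `handler_for` callbacks are invented stand-ins for the stakeholder dialogue, and the depth limit simply keeps the recursion bounded:

```python
def elicit_exceptions(steps, ask, handler_for, depth=0, max_depth=2):
    """For each step ask 'What could go wrong here?'; each answer spawns an
    exception-handling scenario, which is itself searched recursively."""
    found = []
    if depth > max_depth:
        return found
    for step in steps:
        for exc in ask(step):
            found.append((step, exc))
            # Recurse into the handling scenario for this exception
            found.extend(elicit_exceptions(handler_for(exc), ask,
                                           handler_for, depth + 1, max_depth))
    return found

# Toy knowledge base for illustration only
risks = {
    "HH sets alarm": ["HH forgets to set"],
    "CO phones HH": ["line is busy"],
}
handlers = {"HH forgets to set": ["CO phones HH"]}

hits = elicit_exceptions(["HH sets alarm"],
                         ask=lambda s: risks.get(s, []),
                         handler_for=lambda e: handlers.get(e, []))
print(hits)
# → [('HH sets alarm', 'HH forgets to set'), ('CO phones HH', 'line is busy')]
```

In practice the `ask` step is performed by stakeholders in a workshop rather than looked up in a table, but the recursion over exception-handling scenarios is the same.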
Since the price of a missed exception can be a failure in operations, improved techniques for identifying exceptions early in a project may be valuable.
Similar techniques may perhaps be effective at finding other kinds of errors and omissions in business process models.
[1] Alexander, I., Supporting a Co-Operative Requirements Engineering Process, pp. 340-344, Workshop on the Requirements Engineering Process, Proceedings Tenth International Workshop on Database and Expert Systems Applications (DEXA 99), IEEE Computer Society, 1999
[2] Alexander, I., Engineering as a Co-operative Inquiry: A Framework. Requirements Engineering (1998) 3:130-137 (Springer-Verlag)
[3] Alexander, I., Migrating Towards Co-operative Requirements Engineering. CCEJ, 1999, 9, (1), pp. 17-22
[4] Alexander, I., Scenario Plus User Guide and Reference Manual, http://www.scenarioplus.org.uk, 1999
[5] Cockburn, A., Structuring Use Cases with Goals, JOOP, September and November 1997
[6] DOORS Reference Manual. Telelogic, http://www.telelogic.com
[7] Graham, I., Requirements Engineering and Rapid Development. Addison-Wesley, 1998
[8] Kendall, E., U. Palanivelan, S. Kalikivayi, M. Malkoun, Capturing and Structuring Goals: Analysis Patterns, European Pattern Languages of Programming (EuroPlop'98), Germany, July 1998
[9] Sutcliffe A.G., Maiden N.A.M., Minocha S. & Manuel D., 1998, 'Supporting Scenario-Based Requirements Engineering', IEEE Transactions on Software Engineering, 24(12), 1072-1088.
[10] Maiden N.A.M., 1998, 'SAVRE: Scenarios for Acquiring and Validating Requirements', Journal of Automated Software Engineering 5, 419-446.
[11] DSDM, 'The Dynamic Systems Development Method', http://www.avnet.co.uk/sqm/DSDM/ 1998