Modelling the Interplay of Conflicting Goals
with Use and Misuse Cases

Ian Alexander

Paper presented at REFSQ, Essen, 9th-10th September 2002
Proceedings of the Eighth International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'02), pp 145-152

Abstract. The path of projects is not always smooth. Conflicting goals can come from within an organisation, or may appear as external threats: goals may be friendly or hostile. Relationships between goals therefore need to go beyond conventional AND/OR inclusion. This position paper suggests an economical set of relationships to describe the interplay of goals. It gives examples of some general types of situation in which these relationships between goals help to define processes and requirements. The approach has already been applied in a trade-off study.

Introduction

Both goal modelling and use case modelling are well-tried approaches in the software domain, and both have been applied to some extent to business processes. Some authors such as Alistair Cockburn have clearly stated that a use case has a goal, named in its title, and that a use case model forms a goal hierarchy [Cockburn 2001]. This is a useful and powerful approach.

However, in business there are often conflicting goals, arising for many reasons.

These conflicts may prove damaging if not identified and presented for resolution: the first step in conflict negotiation is always to know what the conflict is about. An adequate goal model, whether of a business process or of the behaviour of a desired system, must therefore be capable of describing potential conflicts.

This paper illustrates a simple way of describing conflicts in a use case model. It represents legitimate goals as (the titles of) use cases; hostile goals as misuse cases; and interactions between goals as relationships of standard types (includes, extends, has exception), and just a few conflict analysis types (threatens, mitigates, prevents, restricts, aggravates, conflicts with).

Since the purpose of modelling a business process is to create shared understanding, it seems important that the concepts used should be simple and readily understood by people who are not analysts. I suggest that the small set of proposed concepts (goal, hostile goal, actor, hostile agent, threat, mitigation, prevention, making less likely, aggravation, and conflict) is familiar to business people, and can be explained in a few minutes.

The following sections introduce the use of these modelling concepts with simple examples of common types of conflict in business.

Direct Conflict of Subgoals

The simplest situation is perhaps where a goal is to be achieved through two subgoals, both considered desirable, which directly conflict and need to be traded off against each other. For example, to run a profitable E-business, one subgoal may involve simplifying and speeding customer access, while another may involve enforcing a strict security regime, including numerous steps to authenticate customers before allowing them access. Clearly a project or business that unknowingly held both these subgoals would be in constant difficulty. Note that describing such a situation as a conflict is not a claim that the problem is insoluble: rather, that there is a problem to be solved. Note also that there may well be underlying business goals, policies or strategies, such as to encourage customers to return, and to minimize fraud. These may not themselves visibly conflict, but may lead to conflicting use cases.

We can straightforwardly model this situation by treating the goal as a high-level use case and the subgoals as included use cases, following Cockburn. The UML (Unified Modeling Language) allows additional relationships to be defined as stereotypes. A bidirectional relationship, 'conflicts with', makes it easy to describe the business situation (Figure 1).

Figure 1: Directly Conflicting Goals
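
To make the pattern concrete for tool-minded readers, the following fragment sketches one possible representation in Python (the diagrams, not this code, are the paper's actual notation); the names UseCase, includes and conflicts_with are illustrative assumptions, not a published API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UseCase:
        title: str                                     # the goal, named in the title
        includes: List["UseCase"] = field(default_factory=list)
        conflicts_with: List["UseCase"] = field(default_factory=list)

    def declare_conflict(a: UseCase, b: UseCase) -> None:
        # 'conflicts with' is bidirectional, so record it at both ends
        a.conflicts_with.append(b)
        b.conflicts_with.append(a)

    # The e-business example: one parent goal, two included but conflicting subgoals
    parent = UseCase("Run a Profitable E-Business")
    access = UseCase("Simplify Customer Access")
    security = UseCase("Control Access Strictly")
    parent.includes += [access, security]
    declare_conflict(access, security)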

The Threat / Mitigation Cycle

A second common situation is where a business goal is threatened by a hostile goal desired by some hostile agent. This is familiar in the real world, where businesses compete and are threatened in many ways. An easily understood current example is the security of electronic commerce websites in the face of various forms of attack. A more traditional example is the physical security of a property against theft, using alarms and locks that increase in sophistication to counter the threat from increasingly skilled intruders.

Businesses respond to threats by specifying appropriate responses. Achieving such a response is a subgoal, which would not otherwise have been desired: if there were no thieves, we would not want to lock our property. The responding subgoal attempts to mitigate the threat, for example aiming to prevent a hacker from breaking into a website. The hostile agent may respond to the business's response, such as increased security, by developing more sophisticated attacks; each of these is a hostile subgoal. The business may in turn respond to these hostile subgoals by strengthening its own defences, for example with improved security measures.

This situation is a classic maximise/minimise (minimax) approach, similar to that used by Chess or Go players: White's best move is to work out Black's best move, and neutralise it. The threat/mitigation cycle can be represented in goal form quite directly by introducing stereotypes for 'threatens' and 'mitigates' relationships, and by labelling hostile goals as black 'Misuse Cases' [Sindre & Opdahl 2000, 2001]. The inverted colours provide an immediate visual and emotional cue of the negative nature of the hostile goals. Far from being frivolous, such visual metaphors are, one can argue along the lines of [Potts 2001], entirely helpful when used appropriately.

The general pattern is illustrated in Figure 2; there may of course be several subgoals on each side, rather than just one. The result is a zigzagging arms-race pattern of threats, mitigations, counter-threats and counter-mitigations.

Figure 2: Threat and Mitigation
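
The zigzag can likewise be captured in data. The sketch below (again Python, with invented goal names and an assumed single 'answers' link per goal) prints the arms race of Figure 2 as alternating 'threatens' and 'mitigates' relationships.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Goal:
        title: str
        hostile: bool = False              # True marks a misuse case (drawn black)
        answers: Optional["Goal"] = None   # the goal this one threatens or mitigates

    trade    = Goal("Trade Securely on the Web")
    break_in = Goal("Break into the Site", hostile=True, answers=trade)
    firewall = Goal("Install a Firewall", answers=break_in)
    tunnel   = Goal("Tunnel through an Open Port", hostile=True, answers=firewall)
    patches  = Goal("Close Unused Ports", answers=tunnel)

    # Print the zigzag of threats, mitigations, counter-threats, counter-mitigations
    for g in (break_in, firewall, tunnel, patches):
        verb = "threatens" if g.hostile else "mitigates"
        print(f"'{g.title}' {verb} '{g.answers.title}'")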

Different proposed mitigations, in the form of candidate security requirements, need to be traded off against each other: the cost of the subsystems (such as locks and firewalls) needed to implement them must be weighed against the benefits, measured by the likelihood of the threats they mitigate and the seriousness of the threatened damage should those threats actually materialize.
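
As a rough worked example of this trade-off, with wholly invented figures: the benefit of a mitigation can be estimated as the expected loss it removes (likelihood x damage x fraction mitigated) and compared with its implementation cost.

    # Invented figures: annual likelihood of the threat, damage if it
    # materializes, fraction of that damage the mitigation removes, and cost.
    candidates = [
        # (mitigation,       likelihood, damage,  mitigated, cost)
        ("Install firewall", 0.30,       100_000, 0.80,      5_000),
        ("Hire night guard", 0.05,       250_000, 0.90,     40_000),
    ]
    for name, p, damage, fraction, cost in candidates:
        benefit = p * damage * fraction        # expected loss removed per year
        print(f"{name}: benefit {benefit:,.0f} vs cost {cost:,} "
              f"-> {'worthwhile' if benefit > cost else 'not justified'}")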

The threat/mitigation cycle is obviously directly applicable to the elicitation of security requirements, where the hostile agents are typically human (though web spiders might be considered hostile agents also). However, by anthropomorphizing natural forces such as fire, storm, or market pressure as hostile agents, the same approach can be applied to help elicit and trade off a wide range of non-functional requirements. For example, the threat that a storm will destroy a bridge is real, but the likelihood diminishes rapidly with the power of the storm. In the UK, a hurricane with winds of 150 mph (240 km/h) is considered so rare that civil engineers accept the risk to bridges rather than design costly structures to resist such extreme winds. Mitigation is possible but unjustified. On the other hand, nuclear power stations are rightly designed to resist extreme storms.

Indirect Conflict through Mitigation / Aggravation

The third and last type of situation to be considered here is the kind of conflict where two subgoals, both felt to be desirable for some reason, have opposing effects on a hostile goal. One mitigates (completely or partially) the hostile goal's undesirable effects; the other aggravates them. This causes an indirect conflict, which may be less easy to detect and trade off than a direct one.

For example, the subgoal 'make the business secure' may mitigate the threat of intrusion, but it aggravates the implicit threat 'make access so difficult that customers go away'. Conversely, the subgoal 'make the business easily accessible to genuine customers' may aggravate threats to security. There are many instances in business where desired goals do not conflict as a matter of logic, but, given the limitations of current technology, force trade-offs in practice.

The resulting pattern is illustrated in Figure 3. Again, in a real situation it is likely that there are multiple conflicts and threats, not just one. Indeed, all the situations described here can occur together.

Figure 3: Mitigation and Aggravation
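
A tool could surface such indirect conflicts mechanically: any pair of subgoals in which one mitigates and the other aggravates the same hostile goal is a candidate trade-off. The following sketch (goal names invented) applies that rule to the e-business example.

    # Relationships as (subgoal, relation, hostile goal) triples; names invented.
    relations = [
        ("Control Access Strictly",  "mitigates",  "Break into the Site"),
        ("Simplify Customer Access", "aggravates", "Break into the Site"),
        ("Simplify Customer Access", "mitigates",  "Drive Customers Away"),
        ("Control Access Strictly",  "aggravates", "Drive Customers Away"),
    ]

    def indirect_conflicts(relations):
        # Two subgoals conflict indirectly if they pull in opposite
        # directions on the same hostile goal.
        by_threat = {}
        for goal, rel, threat in relations:
            by_threat.setdefault(threat, {"mitigates": set(), "aggravates": set()})
            by_threat[threat][rel].add(goal)
        for threat, d in by_threat.items():
            for a in d["mitigates"]:
                for b in d["aggravates"]:
                    if a != b:
                        yield (a, b, threat)

    for a, b, threat in indirect_conflicts(relations):
        print(f"'{a}' and '{b}' conflict indirectly over '{threat}'")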

Dealing with Threats and Obstacles

A Threat implies the presence of a consciously intended attack on a system or business, as indicated by the symbol for a Hostile Agent.

By extension, we can choose to treat non-sentient sources of risk as if they were Hostile Agents, and hence to treat the associated risks as Threats to be analysed in Misuse Cases. This treatment is both metaphoric and anthropomorphic; it may be useful for business and system modelling because the human mind has evolved in a social context to deal with intentional threats.

An Obstacle is like a Threat, but suggests passivity: an obstacle in the road is something we can avoid by going round it, or can remove if we notice it and choose to invest the effort to destroy it. Where the term Obstacle is used to mean the work of a Hostile Agent, it is a synonym for Threat. Otherwise it means something such as a system fault or a shortage of resources that would prevent system or business goals from being achieved.

Such Obstacles can be handled – by a metaphoric stretch – as Misuse Cases, though they will not usually contain many scenario steps. They can be said to block desired goals. (Their 'Hostile Agent' can at best be anthropomorphized as 'Captain Murphy', which is not especially convincing.)

Mitigation means taking action to make the outcome of a threat less severe, should the threat materialize. Mitigation could in principle be quantified (in a range above 0% and up to 100%), though one can question whether mitigations ever attain 100% success. This is not the only possible type of neutralizing action in response to threats (Figure 4).

Sometimes a threatened attack of known type can be prevented completely. For example, remote 'hacking' into a secure system can be prevented by providing no off-site access; a whole class of attacks becomes impossible (though other threats remain). Hence, a suitably chosen approach prevents a threat.

Since mitigates and prevents deal with reducing the severity of the effects of a threat, it is worth asking whether one can meaningfully model reducing the probability of a threat, for instance by making systems more reliable or more robust. It might seem logical to propose a relationship makes less likely, but this is problematic. First, it is hard to distinguish from mitigation. More seriously, the laws of probability do not really work for single events, for intended actions, or for unknown types of event.

For all these reasons, it may be safer to define a relationship where a goal restricts the scope for a threat to occur. For example, a security goal may be to block off all known trapdoors by applying all current security patches to a system. This does not prevent unknown threats, nor does it necessarily make them less probable, but it does restrict the number of ways in which an attack might succeed.

Only where a definite probability can be assigned, as through analysis of failure rates of components, would makes less likely be acceptable. This might occur when a Use Case model is provided to a subsystem contractor as part of the subsystem's specification, and the system has been designed to provide a desired level of safety and reliability.

Figure 4: Prevention, Restriction, and Mitigation of Threats
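
One way to keep this distinction honest in a model is to accept makes less likely only when a quantified probability accompanies it. The sketch below encodes that rule; the relation names follow this paper, while the goals, the check function and the figures are invented for illustration.

    # Neutralizing relations from Figure 4, plus the restricted use of
    # 'makes less likely'. Goal names and probabilities are invented.
    ALLOWED = {"prevents", "restricts", "mitigates", "makes_less_likely"}

    def check(goal, relation, threat, probability=None):
        assert relation in ALLOWED, f"unknown relation: {relation}"
        if relation == "makes_less_likely" and probability is None:
            # Per the argument above, only a quantified probability (e.g. from
            # component failure rates) justifies this relation.
            raise ValueError(f"'{goal}' makes '{threat}' less likely: "
                             "quantified probability required")
        return (goal, relation, threat, probability)

    check("Provide No Off-Site Access", "prevents", "Remote Hacking")
    check("Apply All Security Patches", "restricts", "Exploit a Known Trapdoor")
    check("Fit Duplex Brake Actuators", "makes_less_likely",
          "Brake Failure", probability=1e-6)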

Related Work

Goal modelling is nothing new, though perhaps its application within the framework of use cases is unfamiliar. Human-computer interface designers have used AND-OR goal trees and task models for many years (e.g. Carroll 1995).

Antón & Potts 1998 describe the use of goals to surface requirements, while van Lamsweerde 1998 considers how to deal with obstacles in a goal modelling framework (KAOS). In both cases, the idea is to consider the negative in order to elicit positive goals. This is taken further by Sindre & Opdahl 2000 and 2001 in their Misuse Case approach, which explicitly considers active opposition. However, they consider a very limited set of relationships between use and misuse cases.

A completely different approach to goal modelling, based on the work of Chung, Yu, and Mylopoulos, is the I-Star (i*) modelling language (e.g. Chung 2000), now extended into GRL (GRL 2002). This notation makes it easy to express soft goals such as the desire for safety or reliability, enabling non-functional requirements to be modelled at an early stage to support elicitation. However, it does not provide any of the context of time and immediate purpose that is provided naturally by scenario-based techniques such as use cases.

The relationship between scenarios and goals has been investigated by Ben Achour-Salinesi and others within the CREWS Esprit project (e.g. Tawbi 2000). In general, goal-directed structuring of requirements is increasingly seen as a more direct and more effective way of communicating users' needs.

Conflict analysis has been considered in various academic approaches, notably VORD (Viewpoint-Oriented Requirements Definition) which offers a complete method for identifying and resolving conflicting viewpoints from stakeholders (e.g. Kotonya 1997). However such methods (and even proper identification of stakeholders) remain rare in industry.

Risk analysis is increasingly being applied outside traditional safety-conscious areas such as aerospace (e.g. Toval 2002 examines Information Systems Security) to identify hazards, and hence to elicit goals to deal with those hazards. Ironically, this comes at a time when the limitations of those techniques are being called into question. For example, failure cases, modelled as use cases, are being explored in aerospace (Allenby & Kelly 2001) as a way of improving functional hazard analysis. The challenge is to identify new types of risk in new systems, whereas the focus of traditional safety case work is known risks and known mitigation approaches.

Discussion

A simple goal notation, such as this one based on use and misuse cases, seems capable of expressing a wide range of business relationships and dynamic situations. These may be closer to game-play (and military strategy) than to conventional business process models of the kind that emphasize rigid sequences of actions and decisions. The use case approach (handled more conventionally) is capable of describing sequential behaviour as primary, alternative, and exception scenarios [Alexander 2001] but is not limited to that.

Diagrams like those in this paper make clear in a non-technical way that there is not necessarily a 'right answer', nor a single definite sequence of events that is guaranteed to achieve a high-level business goal. Instead, the diagrams may be useful in helping business people and engineers come to grips with situations where different people justifiably hold differing views; where threats cannot be assumed to be totally neutralized; and where equally desirable subgoals cannot all simultaneously be satisfied. The diagrams were produced by extending a use case toolkit [Scenario Plus 2001].

I have already used this approach to explain some of the actual and potential conflicts in a railway trade-off workshop [Alexander 2002]. Once it became clear what the interactions were, the designers present in the workshop were able in a few minutes to propose three candidate solutions to a problem that had until then appeared intractable. None of the stakeholders in the workshop had previously been exposed to goal models or use cases.

The approach described here could be applied to a wide range of business situations. It is simple, appears to be effective, and introduces no new notations, though it does put together concepts and tools formerly used in different domains.

Challenges to the Approach

There is a definite tendency when framing Misuse Cases and thinking out countermeasures to arrive at goals which have little scenario content. Some goals are natural Use Cases, others are more like single steps (or very low-level Use Cases which probably do not justify detailed specification, such as 'Make a backup'). However, some examples such as 'Control Access Strictly' and 'Simplify Customer Access' are both important design goals and significant Use Cases worth spelling out as detailed scenarios, complete with alternative paths and exceptions.

Other products of considering the interplay of Use and Misuse Cases may most naturally be goals for the system analyst or designer, such as 'Devise safer ways of braking on slippery surfaces'; the equivalent goals for the system under design are either references to features such as 'Use Traction Control', or softgoals such as 'Brake safely', which are somewhat imperfect as classical Use Cases. Against this, it is appropriate to refer to a feature where it is expected by stakeholders – for instance, where Marketing intend to advertise a new car with details of its braking approach; and where features are already known from requirement/design trade-offs. There is no advantage in pretending that a purely functional approach can be taken in those parts of the model where the design approach is known in advance.

All of these difficulties stem from the fact that Use Cases are best at modelling simple functional goals. The Use Case paradigm deals less well with goals of other kinds, so there comes a point at which the analyst should consider other methods (such as i* and viewpoint analysis) to supplement the scenario-based approach.

Is there a value to a notation which expresses ideas that could sometimes be expressed some other way? There is no holy grail among notations, one that can express everything perfectly. A notation that does a good job on some cases may be beneficial. The Use Case idea is currently a powerful paradigm, with millions of followers. Something that helps people reason about goals, negative scenarios, softgoals, threats, exceptions and mitigating approaches, in a framework that they feel comfortable with, may well be useful in bringing goal and stakeholder modelling to a wider community.

References

Alexander 2001: Visualising Requirements in UML, article for Telelogic NewsByte, 2001, available at http://easyweb.easynet.co.uk/~iany/consultancy/reqts_in_uml/reqts_in_uml.htm

Alexander 2002: Initial Industrial Experience of Misuse Cases in Trade-Off Analysis, Proceedings of the IEEE Joint International Requirements Engineering Conference (RE'02), 9-13 September 2002, Essen, Germany

Allenby & Kelly 2001: Allenby, Karen and Tim Kelly, Deriving Safety Requirements Using Scenarios, Proceedings of the 5th International Symposium on Requirements Engineering, 27-31 August 2001, Toronto, Canada, 228-235

Antón & Potts 1998: Annie Antón and Colin Potts, The Use of Goals to Surface Requirements for Evolving Systems, in Proceedings of the 20th International Conference on Software Engineering (ICSE`98), Kyoto, Japan, pp. 157-166, 19-25 April 1998.

Carroll 1995: John M. Carroll (editor): Scenario-Based Design, Envisioning Work and Technology in System Development, Wiley 1995

Chung 2000: L. Chung, B.A. Nixon, E. Yu, J. Mylopoulos, Non-Functional Requirements in Software Engineering, Kluwer, 2000

Cockburn 2001: Alistair Cockburn, Writing Effective Use Cases, Addison-Wesley, 2001

GRL 2002: http://www.cs.toronto.edu/km/GRL/grl_syntax.html

Kotonya 1997: Gerald Kotonya and Ian Sommerville, Requirements Engineering, Processes and Techniques, Wiley 1997

Potts 2001: Colin Potts, Metaphors of Intent, Proc. 5th Intl Symposium on Requirements Engineering (RE'01), pp 31-38, 2001

Scenario Plus 2001: website (free Use Case toolkit for DOORS), http://www.scenarioplus.org.uk

Sindre & Opdahl 2000: Sindre, Guttorm and Andreas L. Opdahl, Eliciting Security Requirements by Misuse Cases, Proc. TOOLS Pacific 2000, pp 120-131, 20-23 November 2000

Sindre & Opdahl 2001: Sindre, Guttorm and Andreas L. Opdahl, Templates for Misuse Case Description, Proc. 7th Intl Workshop on Requirements Engineering, Foundation for Software Quality (REFSQ'2001), Interlaken, Switzerland, 4-5 June 2001

Tawbi 2000: Mustapha Tawbi, Fernando Velez, Carine Souveyet, Camille Ben Achour, Evaluating the CREWS-L'Ecritoire Requirements Elicitation Process, ICRE 2000, Fourth IEEE International Conference on Requirements Engineering, Schaumburg, Illinois, USA, June 2000

Toval 2002: Ambrosio Toval, Joaquín Nicolás, Begoña Moros, Fernando García, Requirements Reuse for Improving Information Systems Security: A Practitioner's Approach, REJ 2002 6:205-219

van Lamsweerde 1998: Axel van Lamsweerde, E. Letier, Integrating Obstacles in Goal-Driven Requirements Engineering, Proceedings ICSE'98 - 20th International Conference on Software Engineering, IEEE-ACM, Kyoto, 19-25 April 1998.
