Requirements and Testing:
Two Sides of the Same Coin
by Ian Alexander
Introduction
People who are used to thinking that requirements happen at the start of a project, and testing at the end, can be forgiven for feeling that these are unrelated activities. The old picture of the system life-cycle was of a mountain stream splashing its way down from one pool of activity to another, with waterfalls in between. Each activity more or less finished before the next started, and was conducted independently.
But actually the purpose of a project is to deliver a product to the people who need it (including those who will use it). Their needs are documented in the requirements, satisfied in the design, and demonstrated to have been satisfied by testing or other verification. From this point of view, both design and test activities are closely connected to requirements. All system engineering documents have a logical connection to the requirements. In a little more detail:
Requirements and Tests together form a model of the system under design. Some engineers argue that during development, they, together with their traceability links, actually are the first realization of the system.
Testable Requirements
At first glance, a requirement is just a piece of text, possibly with a 'shall' in the middle. But this isn't enough. Requirements are shared, so each requirement must have an identifier enabling people to refer to it uniquely. Product Managers also need to know the priority and status of each item: the requirement becomes a database record, with a set of fields or slots or attributes holding different pieces of information.
From the point of view of testing, the core question is how each requirement is to be verified. Some requirements practically define their own acceptance tests. For instance, 'The pilot reads the Altitude of the Aircraft.' Well, if the Altitude is correctly displayed, the test is passed. Many functions are apparently simple to test in this way, but a closer look raises questions:
For example, is the instrument to be readable at night? In a storm? Must it be accurate to 1 or 100 metres? How far away must the pilot be able to read it?
Questions of this kind lead directly to the idea of Acceptance Criteria for each requirement. Suzanne Robertson, in Mastering the Requirements Process (Addison-Wesley, 1999), calls this the 'Fit Criterion' that shows whether the system fits the user's needs. It is a rephrasing of the requirement for the purpose of testing. A requirement is not a static thing, but something that you sketch out in rough charcoal, then draw in more carefully, and finally paint in full detail.
Here, for example, is the Altitude requirement in a little more detail:
ID | Requirement Text | Justification | Priority | Status | Acceptance Criteria
UR-121 | The pilot reads the Altitude of the Aircraft. | Ground Avoidance; ability to obey Air Traffic Control commands to fly at specified Altitudes | Essential | Accepted | Altitude accurate to 100 m, legible from 1 m day or night
You may feel that the Requirement Text, the Justification, and the Acceptance Criteria are all versions of the real requirement, and you'd be right; they record the requirements process.
A requirement arranged like this immediately raises questions: should we talk about storms, or accuracy at different altitudes; and how should we measure those things? This is progress. Such questions can only be answered by a dialogue between developers and other stakeholders. The answers may lead to more requirements, or to more precise Acceptance Criteria and better tests: either way, to better quality.
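To make the record structure concrete, here is a minimal sketch in Python (purely illustrative; the field names follow the table above, not any tool's attribute scheme):

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        # One requirement held as a record with attributes; field names are illustrative.
        id: str
        text: str
        justification: str
        priority: str              # e.g. 'Essential'
        status: str                # e.g. 'Accepted'
        acceptance_criteria: str

    ur_121 = Requirement(
        id="UR-121",
        text="The pilot reads the Altitude of the Aircraft.",
        justification="Ground Avoidance; ability to obey Air Traffic Control "
                      "commands to fly at specified Altitudes",
        priority="Essential",
        status="Accepted",
        acceptance_criteria="Altitude accurate to 100 m, legible from 1 m day or night",
    )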
Other types of requirement can be more difficult to verify. For example, 'The Altimeter has a probability of failing of less than 1 in 10^6 years of operation.' Building a million instruments and running them in a test harness for a year is not an attractive option. Requirements on qualities like safety and reliability are often hard to prove.
Fortunately, test is not the only option. Alternatives include experience: if there is a suitable model of Altimeter that has already flown millions of hours, we can use that. If the device is made of components of known reliability, using a well-understood fabrication technique also of known reliability, we can calculate the resulting device reliability. Or if we know the possible failure modes, we can simulate their occurrence to derive the device's likely behaviour. Verification is more than just testing.
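As a rough sketch of the 'analysis' route (illustrative only; the figures are hypothetical and the assumptions of a series arrangement and independent failures would have to be justified in a real analysis):

    # Illustrative 'analysis' route: a device built from components in series, each
    # with a known constant failure rate (per year of operation). Figures are
    # hypothetical, and independence of failures is an assumption to be justified.
    component_failure_rates = [2e-8, 5e-8, 1e-8]          # failures per year

    device_failure_rate = sum(component_failure_rates)     # series system
    print(f"Device failure rate: {device_failure_rate:.1e} per year")
    print(f"Better than 1 failure in 10^6 years? {device_failure_rate < 1e-6}")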
We can therefore add a column to our table of requirements to show Verification Approach: its allowed values include Test, Analysis, Demonstration, Inspection, and Simulation. With a tool such as DOORS it is simple to provide a fixed list of allowed values (an Enumeration Attribute) and to check that all requirements have a reasonable value. In this and other ways, much of the verification work is completed before testing begins.
Verification Approach
Test
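Outside a tool like DOORS, the same check can be sketched in a few lines (illustrative only; the attribute and value names follow the article, not any tool's API):

    from enum import Enum

    class VerificationApproach(Enum):
        TEST = "Test"
        ANALYSIS = "Analysis"
        DEMONSTRATION = "Demonstration"
        INSPECTION = "Inspection"
        SIMULATION = "Simulation"

    # Hypothetical records: requirement id -> assigned verification approach (or None).
    verification = {"UR-121": "Test", "UR-122": "Analysis", "UR-123": None}

    allowed = {v.value for v in VerificationApproach}
    missing = [rid for rid, approach in verification.items() if approach not in allowed]
    print("Requirements lacking a valid Verification Approach:", missing)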
For software, the ultimate option is formal (mathematical) proof that the code is equivalent to its specification, which must be written in a precise language such as Z, SDL, LOTOS or VDM (see the review of Marc Frappier and Henri Habrias's Software Specification Methods, Springer 2001). Whereas testing shows that some ways of using the system do not fail, a proof demonstrates that the specification is met in all circumstances. This does not mean that the specification is what the stakeholders wanted. A good project has to achieve both kinds of conformance:
Complying with Requirements (by verification) results in a good quality product; complying with Stakeholders' needs (by validation) results in fitness for purpose.
Scenarios
So far we have talked as if tests verified individual requirements. However, this is not the whole story. Stakeholders want results, generally achieved through a sequence of steps called a Scenario. For example: 'The pilot collects the flight plan and boards the aircraft. The pilot sets the Altimeter to the altitude of the airport.'
Each step produces a small-scale result, corresponding only roughly with a requirement, because steps in several scenarios can call for the same thing (such as the ability to view an aircraft's altitude). Requirements are therefore often supplemented by a set of scenarios. Scenarios on complex projects are best managed with tool support, such as DOORS with Scenario Plus (http://www.scenarioplus.org.uk). For detailed system specification and analysis, scenarios are elaborated with tools such as Tau to create Unified Modeling Language (UML) Sequence and Activity Diagrams.

Scenarios give developers a direct insight into how stakeholders envisage using the system. Scenarios are generally realistic, because when you ask people what they need to do to get a result, they describe what is absolutely necessary.
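At bottom, a scenario is just an ordered list of steps, each of which can be traced to one or more requirements. A minimal sketch (the second requirement identifier is invented for illustration):

    # Hypothetical scenario: an ordered list of steps, each traced to the
    # requirement(s) it relies on (identifiers invented for illustration).
    pre_flight_scenario = [
        ("The pilot collects the flight plan and boards the aircraft.", []),
        ("The pilot sets the Altimeter to the altitude of the airport.", ["UR-122"]),
        ("The pilot reads the Altitude of the Aircraft.", ["UR-121"]),
    ]

    for step, requirement_ids in pre_flight_scenario:
        print(step, "->", requirement_ids or "(no specific requirement)")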
Use Cases, Test Cases
Small systems can often be specified by collecting up separate scenarios, and designing mechanisms to satisfy each one individually. By the same logic, such systems can be tested by running each scenario as a Test Case.
But for larger systems, scenarios often overlap. The resulting complexity can be handled by identifying core scenarios, and annotating these to form Use Cases. A Use Case provides for subsidiary scenarios to branch off each core scenario. Some branches offer alternatives; others deal with potential failures by describing how to respond to identified exception events. Such branches may or may not return to the core scenario.
Tests must be repeatable: they must give the same results every time they are run. Tests to be conducted 'by hand' therefore consist of straight-line sequences of steps, with absolutely no branches. (Test tools can sometimes deal with more complex specifications; see below.)
There are therefore many Test Cases for each complex Use Case. Testing should exercise every defined path (sequence of steps) and the response to every exception event, and should demonstrate that normal service can be resumed successfully after each recoverable exception. When, as commonly happens, many exceptions and variations are possible, there are combinatorially many possible Test Cases.
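The combinatorial point can be made concrete with a small sketch: represent a Use Case as a core scenario with branch points, then enumerate the straight-line paths, each of which becomes a candidate Test Case (the structure and step names are invented for illustration):

    from itertools import product

    # Hypothetical Use Case: a core scenario with branch points. At each branch
    # point the core step may be replaced by an alternative or exception sequence.
    use_case = [
        ["Pilot boards aircraft"],
        ["Pilot sets Altimeter to airport altitude",
         "Altimeter rejects setting; pilot re-enters value"],         # exception branch
        ["Pilot reads Altitude in daylight",
         "Pilot reads Altitude at night using instrument lighting"],  # alternative
    ]

    # Each combination of choices is one straight-line Test Case: 1 x 2 x 2 = 4 here.
    test_cases = list(product(*use_case))
    for number, case in enumerate(test_cases, start=1):
        print(f"Test Case {number}: " + " -> ".join(case))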
Testers have long known all this. They know that it is not enough simply to test each requirement; and equally, that it is impossible to test all conceivable combinations as Dijkstra famously pointed out, you can't even comprehensively test a simple multiplier that just multiplies two floating-point numbers together: it would take many thousands of years. Therefore the tester's problem is to do enough to have a reasonable chance of discovering errors, without the certainty that comprehensive testing would bring.
This means that when verification is by test, it is essential to select Test Cases intelligently. Good test design selects an economical set of Test Cases that cover the requirements in a realistic set of scenarios. Coverage is demonstrated by 'complete' traceability from test steps to scenario steps or requirements. A requirements traceability tool such as DOORS can quickly ascertain whether any requirement is not linked to any test at all (you simply filter on 'Does not have any Incoming Links of type "verifies"'). This helps in identifying areas that have not been considered for testing; but engineering knowledge is needed to ensure that the tests are sufficient.
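The filter described above amounts to a simple set difference; sketched here outside any tool (requirement and test identifiers are invented):

    # Hypothetical traceability data: each test 'verifies' one or more requirements.
    requirements = {"UR-121", "UR-122", "UR-123"}
    verifies_links = {
        "TC-01": {"UR-121"},
        "TC-02": {"UR-121", "UR-122"},
    }

    covered = set().union(*verifies_links.values())
    untested = sorted(requirements - covered)
    print("Requirements with no incoming 'verifies' link:", untested)   # ['UR-123']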
[Figure: Traceability between Requirements and Tests managed by DOORS. The requirement is on the left; five linked tests (residing in another document) are listed on the right.]
For example, testing that a pilot can read an Altimeter at a distance of 1m in normal daylight does not prove that the instrument will remain legible in direct sunlight; nor does it demonstrate that the Altimeter is accurate; but there will be a correct trace. Several tests may be needed to verify one requirement; conversely, one test may contribute to the verification of many requirements. Traceability tool support is therefore valuable but not a complete solution. Traceability does facilitate regression testing (retesting to show that a change has not impaired system operation) by helping to identify tests that might be affected by a changed requirement.
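The same links can be read in the other direction to support regression testing: given a changed requirement, list the tests that may need to be re-run (again a sketch, reusing the invented identifiers above):

    # Reading the hypothetical 'verifies' links in the other direction.
    verifies_links = {"TC-01": {"UR-121"}, "TC-02": {"UR-121", "UR-122"}}

    def tests_affected_by(requirement_id, links):
        # Tests linked to a changed requirement: candidates for re-running.
        return sorted(test for test, reqs in links.items() if requirement_id in reqs)

    print(tests_affected_by("UR-121", verifies_links))   # ['TC-01', 'TC-02']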
Other kinds of tool support may also be helpful.
It helps if all these tools are integrated, so that changes to requirements or tests become visible wherever they are used.
Summary
There is a close link in good system development between requirements and tests. Requirements should drive the whole development; other system engineering elements must trace back to requirements. The requirements must therefore be constructed from the start to support verification. Successful projects need an integrated systems engineering process, supported by integrated tools.
Requirements and Testing, when understood in the context of system development, are two sides of the same coin. They need to be managed together.
© Ian Alexander 2002