This chapter introduces the system-testing
strategy and methods that I have used and refined
over some twenty years. The methodology is presented
first in outline form; subsequent chapters provide
the detail needed to apply it. To facilitate
cross-reference, I use a capital-letter "M"
to make it clear when I am referring to the Methodology
described in this book.
Presenting the Methodology will require
that some terms be defined, but many of the terms
won't come to life until later in the book, so please
be patient. In addition to providing definitions,
I need to make two initial points to frame the Methodology
and its relationship to the chapters that follow:
- Applying the Methodology requires
individuals to have an in-depth understanding of
the users of the system to be tested, the business
environment in which the system will operate, and
the risks associated with the system. This subject
is the focus of Chapter 13: "Understanding
the Typical User."
- Steps in the Methodology are to
be applied throughout the entire software development
life cycle (not just during the system-test phase
of the project). Chapter 19: "Game Plan for
a System Test" discusses when to apply each
step in the Methodology in the context of the SDLC.
The objective of the Methodology is
to provide a framework for developing well-documented,
repeatable, data-dimensional system tests that
cover typical business flows. I'll start by
defining what I mean by the terms "well-documented,"
"repeatable," and "data-dimensional"
(I define what I mean by "typical business flows"
later in this chapter and in Chapter 13: "Understanding
the Typical User"). Once you become familiar
with my definitions, you will see that these terms
apply to characteristics needed for other types of
testing, not just system testing.
The techniques I advocate for achieving
these characteristics apply more broadly than
system testing alone, but two aspects of the Methodology
are specifically pertinent to system testing:
the architecture of system tests and the
story of the test. These two aspects are detailed
in Chapter 14: "Defining an Architecture of System
Tests" and Chapter 16: "The Story of the
Test," respectively. Let's look at the definitions.
A test is said to be well documented
if and only if
- the documentation for the test
clearly identifies the objective of the test, which
includes classifying the specific types of problems
that the test is designed to detect.
- the test documentation clearly
identifies the expected results of the test and
how to tell whether the test passed or failed by
comparing expected results with actual results.
- the documentation is sufficient
to allow someone other than the person who developed
the test to execute the test with exactly the same
results.
- the documentation is sufficient
to allow the test to be maintained successfully
by someone other than the person who developed the
test.
A test is repeatable if and
only if
- the test will produce exactly the
same results as long as the documented preconditions
of the test are met and the system capabilities
have not changed. This means that each time the
test is executed correctly, the expected results
and the actual results will match as long as the
precise version and configuration of the system
under test are exactly the same.
A test is data dimensional
if and only if
- variations of the test have been
analyzed and action has been taken to expand the
scope of the test to cover additional, important
data situations efficiently.
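To make these definitions concrete, here is a minimal
sketch (in Python, with invented names and fields) of
the kind of record that captures what a well-documented
test must state; it illustrates the information, not a
prescribed format:

    from dataclasses import dataclass

    @dataclass
    class SystemTestRecord:
        # Hypothetical record of what a well-documented test must state.
        test_id: str             # unique identification of the test
        objective: str           # the specific types of problems the test detects
        preconditions: str       # documented state required before execution
        expected_results: str    # basis for judging pass/fail against actual results
        maintenance_notes: str   # what a different maintainer would need to know

    record = SystemTestRecord(
        test_id="BF-ORDER-01",
        objective="Detect problems in the typical order-entry business flow",
        preconditions="Reference data loaded; no open orders for customer C-100",
        expected_results="Order appears in history with correct total and status",
        maintenance_notes="Order total depends on the price table in REF-PRICES",
    )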
Why are these characteristics so important
to testing? The value of developing well-documented
tests should be pretty clear from the definition.
Well-documented tests are designed to become the lasting,
intellectual property of the test team. Their value
in protecting against a clearly defined class of problems
is not tied to the specific developer of the test.
The developer of the test can move on to other jobs
without the team losing the ability to execute or maintain
the test. Even while the test developer remains with
the system-test team, a well-documented test (in the
way we have defined it) makes it possible for a less
experienced (and less costly) test-execution specialist
to execute the test after it has been fully debugged,
freeing up the test developer for more productive
tasks.
As for repeatability, achieving this
characteristic makes a test straightforward to execute
and re-execute since the test works exactly the same
way each time. This increases the likelihood that
the test will be executed correctly even by a relatively
inexperienced test-execution specialist or, even better,
by an automated test tool.[1]
But whether manual or automated, repeatability helps
to ensure that the test will have lasting value. Rather
than being a "one-shot throw-away," a repeatable
test is designed to be executed and re-executed multiple
times in the testing of a particular software release
and to become a regression test in the testing
of future software releases. Finding problems the
first time a repeatable test executes is only the
beginning of its value. The same test can be used
again to check out the fixes to the problems that
were found. After that, the test can be used to check
that further changes to the system do not inadvertently
destroy old, existing capabilities in the current
system under test, and in future releases of the system
under test as well.
As for the third characteristic, making
a test data dimensional is a more-bang-for-the-buck
item. It means that, as an integral part of developing
the test, possible variations are considered that
could extend the range of problems that the test protects
against. It is often the case that, while a test is
being developed, the test logic can be extended to
cover additional data variations with little cost
and without complicating ongoing maintenance. Sometimes
this involves making the test data-driven and sometimes
it doesn't.[2] We'll
discuss these characteristics (and how to achieve
them) in subsequent chapters.
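As one hedged illustration of the data-driven option,
the sketch below extends a single test's logic over
several data variations; place_order is an invented
stand-in for the system under test:

    # place_order stands in for the system under test; a real test
    # would drive the actual application here.
    def place_order(quantity: int, payment_method: str) -> str:
        return "confirmed"

    # The original test covered one typical case; a data-driven table
    # extends the same logic to additional important data situations
    # with little extra cost and no duplicated test logic.
    VARIATIONS = [
        (1,  "credit_card"),   # the original typical case
        (99, "credit_card"),   # upper edge of the typical operating range
        (1,  "invoice"),       # alternate payment path
    ]

    def test_place_order_variations():
        for quantity, payment_method in VARIATIONS:
            status = place_order(quantity, payment_method)
            assert status == "confirmed", (quantity, payment_method)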
Following are additional terms I use
in the Methodology, along with their definitions:
Basic-sanity test: A system
test designed to detect cases in which the system
has not been installed correctly and/or to quickly
identify problems that prevent fundamental capabilities
of the system from being accessed.
Administrative-functions system
test: A system test designed to find problems
that would prevent an administrative user from performing
work activities associated with such functions as
adding/changing/deleting reference data, system default
settings, security, and authorization permissions.
Installation/migration system test:
A system test designed to find problems that would
prevent the initial installation of the system from
being done correctly. Installations often include
migrating data from an earlier version of the system,
from a system being replaced by the system under test,
or from manual records.
Flow-thru system test: A system
test in which system components are strung together
and exercised in a complete and meaningful business
context, causing interfaces to occur naturally (rather
than simulating them) and ignoring any intermediate
results that are not visible to users. Flow-thru tests
are the result of implementing "test stories,"
described in Chapter 16: "The Story of the Test."
Flow-thru support-system test:
Also called a business-flow or business-process
support test, this is a system test whose objective
is to prepare the system-test environment (to create
data situations, for example) for specific flow-thru
tests that expect these conditions to exist at the
time that these flow-thru tests begin to execute.
Special-situations system test:
A system test that is designed to supplement one or
more other system tests by protecting against less
typical situations than are covered by the other test(s).
A high-risk special-situations system test
also protects against problems not covered by other
tests; however, it focuses on protecting against situations
that have been identified by risk analysis as having
highly disruptive (or worse) symptoms even though
the conditions under which the problems might be triggered
may not be particularly likely to occur.[3]
Higher-order system test:[4]
A system test that is designed to protect against
problems with performance and volume characteristics
of the system (as opposed to functional characteristics).
For example, a load test is a higher-order test designed
to detect problems that show up only when the system
is subjected to the level and types of traffic that
the system might realistically encounter in a production
operation. A volume test is designed to protect against
problems that show up only when the system is processing
the high amount of data that a system might typically
encounter in a production environment. A stress test
protects against problems that could occur when the
system is pressed beyond its design limits in terms
of traffic or data volume.
With these definitions in mind, we
are ready for the Methodology, which is presented
in the balance of this chapter and repeated as
gray-boxed excerpts in subsequent chapters. I
will amplify and explain aspects of the Methodology
in the chapters that follow.
Methodology for System Testing
Framework for the Methodology
Effective planning for an architecture
of well-documented, repeatable, data-dimensional system
tests begins with identifying a set of typical
business flows that reflect the usage of the system
to be tested. A typical business flow is a
description of how users will accomplish work through
a sequence of interactions that one or more users
(or other systems, devices, and so on) will have with
the system. Typical business flows (sometimes simply
referred to as "business flows") should
be identified and documented as part of the requirements/specification
process.[5]
In addition to understanding the business
flows of the user, the Methodology requires that risk
analyses be conducted at strategic points in the system
development life cycle. Understanding the risks associated
with the functionality of the system is necessary
so that appropriate priorities can be assigned for
developing the system tests in the architecture. Knowing
the relative importance of tests in the architecture
will influence the order in which they will be developed
(and may determine whether or not a system test will
be developed at all should a resource shortfall occur).
The Steps in the Methodology
1. Follow Steps 1.1 to 1.5 to define
an architecture of system tests for the system to
be tested. Each node in the architecture must be assigned
a unique identification.
1.1 Begin by allocating a node in
the system-test architecture for each business flow
identified and documented during the requirements/specification
process. Add a node for each of the basic-sanity
tests and higher-order tests. Be sure that there
are nodes in the architecture that correspond to
administrative functions and to installation/migration
(if not, add these nodes and decompose them to correspond
to the functionality for these areas specified in
the requirements).
1.2 For each node corresponding
to a business flow, decompose the node into three
sub-nodes: flow-thru, high-risk special situations,
and other special situations. (The high-risk special-situations
node is for system tests that are not part of a
typical business flow but are important based on
risk assessment.)
1.3 For the administrative-functions
and installation/migration nodes in the test architecture,
decompose each node into two nodes: a flow-thru
support test and a special-situations test (high-risk
special situations involving administrative functions
and installation/migration should be covered in
the flow-thru tests associated with business flows).
1.4 Go through the requirements
documents and systematically determine where in
the architecture of system tests each requirement
is covered. Document this coverage in a traceability
matrix. Coverage should be based on the following
criterion: The requirement is covered by a specific
system test if the system test would be responsible
for finding that the requirement is missing or erroneously
implemented, assuming that the system test fulfilled
its objective perfectly. If you cannot find a system
test that covers the requirement, define an additional
node in the architecture corresponding to the test
that covers the requirement. If any node in the
architecture appears too broad in scope for a single
system test, split the node into multiple system
tests. If you can conceive of any type of problem
that could occur with the system for which there
is not a test in the architecture responsible for
protecting against that type of problem, add a node
to the architecture representing a system test whose
objective would include finding that type of problem
(I call this type of placeholder system test a "worry
bead"). Keep iterating until all requirements
have been covered by one or more tests in the architecture
of tests, until the scope of each node appears manageable,
and until the architecture appears conceptually
complete from a problem-coverage standpoint.
1.5 Document the test objective
of each node that has not been further decomposed.
The identification of requirements and hypothesized
types of problems that are covered by a specific
system test should be included as part of the documentation
of the test objective.
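As an illustration of Steps 1.1 through 1.4 (using
Python purely as a notation, with invented node and
requirement names), a business-flow node can be
decomposed and requirement coverage recorded in a
traceability matrix roughly as follows:

    from dataclasses import dataclass, field

    @dataclass
    class TestNode:
        # A node in the architecture of system tests; names are illustrative.
        node_id: str
        description: str
        children: list["TestNode"] = field(default_factory=list)

    # Step 1.1: one node per business flow (plus sanity, higher-order, etc.).
    order_flow = TestNode("BF-01", "Order-entry business flow")

    # Step 1.2: decompose the business-flow node into its three sub-nodes.
    order_flow.children = [
        TestNode("BF-01.FT", "Flow-thru test"),
        TestNode("BF-01.HR", "High-risk special situations"),
        TestNode("BF-01.SS", "Other special situations"),
    ]

    # Step 1.4: the traceability matrix maps each requirement to the test
    # node(s) responsible for finding it missing or erroneously implemented.
    traceability: dict[str, list[str]] = {
        "REQ-101": ["BF-01.FT"],
        "REQ-102": ["BF-01.FT", "BF-01.HR"],
        "REQ-103": [],   # not yet covered: add a node (perhaps a "worry bead")
    }
    uncovered = [req for req, nodes in traceability.items() if not nodes]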
2. Prioritize the tests in the test
architecture and determine the sequence in which the
tests are to be designed and implemented.
3. For each basic-sanity test to be
developed, design and document a set of test cases
(action plus expected results) that can serve as part
of the entrance criteria into system testing. (See
the documentation principles described in Step 4.7.)
The objective of this test should be to detect situations
that indicate that the installation of the system
has not been done correctly and to quickly identify
problems that prevent fundamental capabilities of
the system from being accessed. If possible, these
tests should be designed to be independent of the
data environment at the start of the test and should
leave the data environment unchanged at the end of
the test (if a clean-up of the data environment is
necessary, you can make the clean-up part of the test).
This allows you to execute the tests immediately after
an installation of a new iteration of the system and,
if the installation appears to function properly,
to begin execution of other tests immediately after
the basic-sanity test has executed.
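One way to achieve that independence from the data
environment is sketched below; the system fixture and
its methods are hypothetical stand-ins for whatever
access the test harness actually provides:

    import uuid

    def test_basic_sanity(system):
        # `system` is a hypothetical handle to the installed system under test.
        # Capability 1: the system responds at all after installation.
        assert system.ping() == "ok"

        # Capability 2: a record can be created and read back. A unique key
        # makes the test independent of whatever data already exists.
        key = "sanity-" + uuid.uuid4().hex
        system.create_record(key, value="probe")
        assert system.read_record(key) == "probe"

        # Clean-up is part of the test, leaving the data environment unchanged.
        system.delete_record(key)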
4. For each system test to be developed
that corresponds to a business flow, follow Steps
4.1 through 4.7.
4.1 Construct the story of the test
-- that is, a story that illustrates how users will
execute a typical instance of the business flow
with typical data. The story of the test has three
parts:
- Preconditions: This is the "once-upon-a-time"
section of the story. It includes actions and/or
assumptions regarding the functional state of
the test environment at the time the test begins.
This includes the results of actions that may
have taken place before the system under test
is installed and operating (for example, it may
describe transactions that took place prior to
the data migration to the new system). As described
in the steps below, this part of the test may
be affected by the technique chosen to solve cycle-acceleration
and repeatability problems. Documentation for
this section of the test (see Step 4.7) should
introduce the characters in the story (and other
systems that may be involved in the test) and
describe the state of the system at the time the
body of the test begins.
- Body of the test: This is the
"plot" section of the story. The plot
begins after all the actions in the once-upon-a-time
section are complete. The plot describes the interactions
between the characters and the system under test
to get business done through the capabilities
of the system. These interactions compose the
test cases through which the plot unfolds. Each
test case must have a unique ID and must include
expected results for actions (see Step 4.2). As
test cases are added to cover specific requirements,
the mapping between the requirement, the test,
and the test case is recorded in the traceability
matrix. A test case is said to "cover a requirement"
if the test case protects against the requirement
being missing or erroneously implemented. The
identification of requirements and hypothesized
types of problems that are covered by specific
test cases in the system test should be included
as part of the documentation of the test (see
Step 4.7).
- Post-conditions: This is the
"happily-ever-after" section of the
test. It includes actions and/or descriptions
that affect how the test leaves the test environment
at the test's conclusion. (As described in the
steps below, this part of the test will be impacted
by the technique chosen to solve repeatability
problems.) Documentation for this section of the
test (see Step 4.7) should describe both the state
of the system at the time the test ends and whether
the test is serially rerunable and/or capable
of concurrent execution (see Step 4.4).
The story of the test becomes part
of the system-test documentation (see Step 4.7).
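To illustrate the three-part shape of a test story,
here is a skeletal sketch; the env object, its methods,
and the characters are all invented for the example:

    def test_order_entry_story(env):
        # Once upon a time (preconditions): Ana, a clerk, serves customer
        # C-100, whose account predates the system under test.
        env.require_customer("C-100")

        # The plot (body of the test), as uniquely identified test cases:
        # TC-01: Ana enters a typical order and it is confirmed.
        order_id = env.as_user("ana").place_order("C-100", item="widget", qty=2)
        assert env.order_status(order_id) == "confirmed"     # expected result

        # TC-02: the order appears in the customer's history.
        assert order_id in env.order_history("C-100")        # expected result

        # Happily ever after (post-conditions): cancel the order so the
        # test is serially rerunable.
        env.cancel_order(order_id)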
4.2 Identify what specific system
responses need to be observed to determine if the
actions performed by the characters worked as expected
(focus on the variables, not the values, at this
point). As part of the plot section of the test,
be sure to have the characters observe these and
only these results using capabilities provided
by the system wherever possible. Using as much information
on the implementation of the system as available
at this point in the life cycle, figure out how
the expected values should be determined and record
this information as part of the documentation of
the system test (see Step 4.7).
4.3 Solve cycle-acceleration problems.
Perhaps a test story is intended to illustrate events
in a business cycle that would normally take place
over an extended period of time. In order to go
through a complete cycle in the test story, you
may need to reach a threshold (for example, sell
out a sporting event) or move through events that
happen only at specific time intervals (daily, weekly,
monthly, or yearly, for example) in the real business
environment. These types of tests pose problems
because (obviously) it's unrealistic to spend a
year running a test that illustrates activity that
takes place over a period of a year (or similarly,
of a month, a week, or even a couple of days). In
the case of needing to reach a threshold in a test
story, find creative ways to perform the necessary
activities in a way that avoids unnecessary tedious
and redundant effort but still accomplishes the
objectives of the test.[6]
In the case of test stories that require cycle acceleration
with respect to time, if the software you are testing
has been designed to address this testing issue,
it may be possible to control the passage of time
parametrically or by changing data fields through
capabilities available to the system-test team.[7]
(Watch for this issue when reviewing system
designs for testability.)
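Assuming the system provides that kind of testability,
a sketch of parametric time control might look like
this, with an injectable clock the test advances
instead of waiting for real time to pass:

    import datetime

    class TestClock:
        # An injectable time source the system under test reads instead of
        # the operating-system clock (a testability feature, if present).
        def __init__(self, start: datetime.date):
            self.today = start

        def advance(self, days: int) -> None:
            self.today += datetime.timedelta(days=days)

    clock = TestClock(datetime.date(2024, 1, 31))
    # billing = BillingSystem(clock=clock)   # hypothetical injection point
    clock.advance(days=1)   # "month end" arrives in seconds, not in 31 days
    assert clock.today == datetime.date(2024, 2, 1)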
4.4 Solve repeatability problems
(so that the system test will execute exactly the
same way each time it is executed, assuming that
the software configuration of the system under test
remains invariant). In particular, solve the following
problems (a sketch of one common approach follows the list):
- The "remnants" problem:
Once you've executed the system test, the data
in the test environment will change; the results
of rerunning the test may be different, unless
you take action to clean up the "remnants"
of the last execution. In some cases, it does
not take much work to make the test serially rerunable;
in other cases, the data must be refreshed to
an initial state. In any case, find ways to address
the remnants problem.
- The "common-sandbox"
problem: You will probably be sharing the test-database
environment with other system testers. You need
to be sure that the impact of their testing will
not change the results of your testing (and vice
versa). In other words, tests should be pair-wise
non-interfering.
- The "self-competition"
problem: If two or more copies of the test are
to be run simultaneously (for load, performance,
or stress testing, for example), you need to be
sure that the test will not interfere with itself.
This is essentially the common-sandbox problem,
except that the test is sharing the environment
with itself. In some cases, it doesn't take much
work to make the test capable of concurrent execution.
For some system tests, it may be too costly or
too difficult to solve this problem completely.
If the latter is the case, the documentation should
indicate the restrictions or special issues associated
with concurrent execution of this test. Your method
for addressing these repeatability problems will
impact the once-upon-a-time section and/or the
happily-ever-after section and should be documented
there.
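Here is the sketch promised above: generating a
per-execution unique prefix for all data the test
creates, which addresses all three problems at the
cost of some clean-up. The system fixture and its
methods are hypothetical:

    import uuid

    def unique_prefix() -> str:
        # A fresh prefix for every execution (and every concurrent copy)
        # keeps each run's data disjoint, addressing the remnants,
        # common-sandbox, and self-competition problems at once.
        return "t" + uuid.uuid4().hex[:8]

    def test_create_customer(system):    # `system` is a hypothetical fixture
        name = unique_prefix() + "-customer"
        system.create_customer(name)
        assert system.customer_exists(name)
        system.delete_customer(name)     # happily-ever-after clean-up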
4.5 Determine if there are important
situations in the life cycle of the business flow
that could be in progress when the software of the
system under test is actually installed and operating.
If this is the case, design additional plot lines
with unique test-case IDs that correspond to these
transactions in progress. Reuse elements of the
plot that you have already designed in Steps 4.1
through 4.4 whenever doing so is useful. Enrichments
will generally need to be made to the test-data
environment in order to realize these plot lines.
For example, the once-upon-a-time section may require
that you have data in the test environment that
correspond to paper or electronic records that preceded
the installation/migration of the system under test.
Review cycle-acceleration issues and repeatability
issues for the new plot lines and make adjustments
as necessary.
4.6 Identify variations of the test
story that could expand the coverage of the system
test, possibly by changing the selection of data
used by the test. If appropriate, incorporate these
variations using the following techniques:
- Spread data variations among
multiple characters in the story.
- Add more characters to the test
story that exhibit the variations.
- Add test cases to the special-situations
test.
- Add one or more new tests to
the architecture of system tests that are
responsible for covering these variations.
In some cases, it may make sense
to parameterize the test so that you can cover a
variety of important data situations by assigning
different values to the parameters and rerunning
the test.[8] Parameterization
is generally preferable to "cloning" the
test logic, which complicates system-test maintenance.
If you parameterize the test but you decide that
an individual variation should be executed independently
of the test being developed, add a separate node
to the system-test architecture for that variation.
If you decide that a particular variation should
be part of the scope of the test but you will not
develop that variation immediately, make a note
of this in the documentation of the test objective.
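Where an automated framework is in use, parameterizing
the test can be as simple as the sketch below (shown
with pytest; place_order again stands in for the
debugged test logic):

    import pytest

    def place_order(quantity: int, payment_method: str) -> str:
        return "confirmed"   # stand-in for the debugged test logic

    # Values drawn from the important variations identified in Step 4.6;
    # one test body covers them all, with no cloned test logic.
    @pytest.mark.parametrize(
        ("quantity", "payment_method"),
        [
            (1,  "credit_card"),   # typical case
            (99, "credit_card"),   # upper edge of the operating range
            (1,  "invoice"),       # alternate payment path
        ],
    )
    def test_order_flow(quantity, payment_method):
        assert place_order(quantity, payment_method) == "confirmed"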
4.7 Develop the system-test documentation
for the test. Define the objective of the test by
describing the types of problems this test should
protect against and by identifying the specific
system requirements that are covered by the test.
A requirement is covered by a system test if the
system test contains test cases that would be responsible
for finding that the requirement is missing or erroneously
implemented, assuming that the system test fulfilled
its objective perfectly. Hypothesized problems are
covered by a system test if the system test contains
test cases that would protect against this problem
reaching a user, assuming that the system test fulfilled
its objective perfectly. Include any information
in the test documentation that would be needed if
someone other than you will maintain the test. Likewise,
include any information in the test documentation
that would be needed if someone other than you will
execute the system test. Document the three parts
of the story of the test, the technique for achieving
repeatability, the solution to the cycle-acceleration
problems, and parameters that would need to be changed
to realize important variations identified in Step
4.6. Select the values of data for the test using
typical values from the operating range. Document
the script of actions that implements the system
test (the details of the plot) and, based on the user
documentation (or equivalent information), document
the expected results (see Step 4.2). If there are variations
of the data that you consider important and within
the scope of this test but that are not implemented
in the current version of the test, document these
variations as an aspect of the test objective that
is not achieved in the current version.
5. Design and document the flow-thru
support tests for both the administrative-functions
node and the installation/migration node in the test
architecture. These are the tests that create the
data environment that provides the starting point
for the flow-thru tests (either through installation/migration
activity or by reference-data and security-data updates).
Use the once-upon-a-time sections of the flow-thru
tests as the basis for determining what belongs in
these flow-thru support tests.
6. For each high-risk special-situations
test that is to be developed, design and document
test cases whose objective is to protect against problems
associated with the high-risk special situations that
have not already been covered by other tests that
have been developed.
7. If the architecture includes higher-order
tests (load tests, for example) that need to be developed,
try to realize these tests by building on other tests
in the architecture that have already been fully debugged.
In order to avoid the need to maintain multiple versions
of the tests being reused, avoid cloning them. In
other words, create load tests, stress tests, and
performance tests by leveraging debugged business-flow
tests as a basis for representative, typical-user activity
on the system.
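A minimal sketch of that reuse follows, with an
invented flow function and an invented response-time
requirement; concurrent copies of the debugged
business flow approximate production traffic:

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def order_entry_flow(user_id: int) -> float:
        # The already-debugged business-flow test, reused rather than
        # cloned; the sleep stands in for real interactions with the system.
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.05))
        return time.perf_counter() - start

    # Drive N concurrent copies of the flow to approximate production load.
    N_USERS = 50
    with ThreadPoolExecutor(max_workers=N_USERS) as pool:
        latencies = list(pool.map(order_entry_flow, range(N_USERS)))

    assert max(latencies) < 2.0   # hypothetical response-time requirement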
8. For each special-situations test
that is to be developed, design and document test
cases whose objective is to protect against problems
associated with the special situations that have not
already been covered by other tests that have been
developed.
9. Once the system is available for
system testing, install the system, performing any
data migration called for in the installation instructions
provided by the development team. Use the flow-thru
support tests for both the administrative-functions
node and the installation/migration node in the test
architecture (see Step 5) to populate the data necessary
for the other system tests. Use the basic-sanity test(s)
as part of the entrance criteria to system testing.
10. Execute and debug each system
test developed in Steps 3 through 9. Compare actual
results with expected results for each test case in
the system test. Investigate each discrepancy found.
Based on the user documentation (or equivalent information),
determine if the discrepancy is due to a problem in
the software, in the documentation of the system under
test, or in the test itself. If the problem is with
the software or documentation, file a problem report.
If the problem is with the system test, fix the test
and repeat this step.
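A sketch of the comparison at the heart of this step,
with invented test-case IDs and results:

    # Hypothetical shapes: expected results per test-case ID, and the
    # actual results the latest execution produced.
    expected = {"TC-01": "confirmed", "TC-02": "order in history"}
    actual   = {"TC-01": "confirmed", "TC-02": "order missing"}

    discrepancies = [
        (case_id, want, actual.get(case_id))
        for case_id, want in expected.items()
        if actual.get(case_id) != want
    ]

    for case_id, want, got in discrepancies:
        # Each discrepancy is investigated against the user documentation
        # to decide whether to file a problem report or fix the test.
        print(f"{case_id}: expected {want!r}, got {got!r} -- investigate")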
11. For each data variation identified
in Step 4.6 that requires an additional execution
of the test with different parameters, repeat Step
10 with the data variation.
12. Execute the system tests that
have been designed and implemented in the preceding
steps over and over again exactly the same way for
each successive iteration of the system that reaches
the system-test team. Investigate each discrepancy
found. Based on the user documentation (or equivalent
information), determine if the discrepancy is due
to a problem in the software, in the documentation
of the system under test, or in the test itself. If
the problem is with the software or documentation,
file a problem report. If the problem is with the
system test, fix the test and repeat the test.
13. Enhance and maintain each system
test based on your own experience in executing the
test and also based on feedback about the system from
users (problems reported). For system defects that
reached users but that should have been found by this
test, improve the test so that it is capable of finding
the defects. Be sure to update the test documentation
to cite the specific problem reports that are now
covered by the test.
14. After the system-test exit criteria
are met and the software has been released to the
user organization(s), use measurements collected throughout
the process and during production operation to identify
lessons learned. Apply these lessons learned to achieve
continuous process improvement in the software development
life cycle.
Notes
[1]
Test automation and its relationship to the Methodology
are discussed in Chapter 18: "System-Testing
Tools."
[2]
Some authors use the term "data-driven"
for parameterized tests and table-driven tests, especially
when test automation is involved. With my definition
of data dimensionality, making a test data-driven
is one approach to achieving this characteristic but
it's not the only way.
[3]
Risk analysis is discussed in Chapter 13: "Understanding
the Typical User."
[4]
I first saw the term "Higher-Order Testing"
in [Myers, 1979]. Glenford Myers used the term "Higher-Order
Testing" to refer to all testing beyond module
testing (unit testing). Over the years, I've found
it convenient to reserve the term "Higher-Order
System Test" for tests that focus on performance
and volume as described in the definition above.
[5]
As described in Chapter 6: "The System-Test Oracle,"
variations exist among industries and companies on
how requirements and specifications are prepared,
what they are called, and the level of detail in each.
The term "requirements/specification process"
refers to the overall activity of developing detailed
requirements and functional specifications.
[6]
Reaching high thresholds is much less of an issue
when automated test tools are available, as we discuss
in Chapter 16: "The Story of the Test" and
Chapter 18: "System-Testing Tools."
[7]
Although it can be extremely important to software
testing, the capability to change times and dates
in an application may need to be surrounded by tight
security and management controls in a deployed application.
In some cases, changing a time or date in the processes
associated with a production application could be
bound by legal or regulatory guidelines.
[8]
As we discuss in Chapter 16: "The Story of the
Test," and in Chapter 18: "System-Testing
Tools," the option of parameterizing a test (as
an approach to achieving data-dimensionality) becomes
more attractive when automated tools are being used.