At a high level, these are the characteristics you should see in a testing strategy.
- There should be formal test planning.
- Each round of testing should have a clear objective and define what is in and out of scope (both set out by the test plan).
- Each test must explicitly define pre-conditions, execution steps and expected outcomes.
- Regression tests should have pre-defined priority levels.
- Testing must be shareable across a team of testers (there should be test deliverables).
- Test visibility should be high: what testing has been performed, and its results, should be visible and referable.
- Testers and developers must work efficiently as a team.
- Tests must not all be fine-grained. Fine-grained tests usefully reveal a lot of detail when they find a problem, but they take a long time to execute.
- The relationships between system functions must be borne out by the relationships between tests in a plan. Testing should be intelligent about these relationships; system function dependency is an essential part of test planning.
- New feature testing must prioritise effort where it is most needed (related to having pre-defined priority levels).
- Feature tests should go on to form the basis of regression tests.
Feature testing specifically tests the features being delivered (or that have been delivered), and is where most effort, in terms of detail, is spent. Test execution will be iterative, as will building up the tests that assure the feature is delivered correctly (set out by the feature's definition of done, which is one of the key inputs into the feature test plan).
Running these tests is often a box-ticking exercise, since the tests are prescriptive with well-defined outcomes.
Exploratory testing is the act of using the system in expected and unexpected ways, prioritising and concentrating on the feature/function under test, but spreading outwards to connected functions, where test effort and attention to detail become progressively lighter.
This is ad hoc and not prescriptive. It is through exploratory testing that unexpected behaviour can be discovered. Depending on the nature of the failure scenario found, it may be pertinent to create a functional test for the scenario and add it to the functional test suite so that it becomes part of ongoing regression tests. Where such a test is created during a development testing phase, it should subsequently be run as part of the iterative feature test cycles to show that the failure has been corrected.
Regression testing is made up of the feature/functional tests previously created when features were developed. They are categorised into several levels, usually high and low priority.
High priority tests are run with great frequency and are chosen because they test vulnerable or sensitive parts of the system, or provide sufficient test coverage to give confidence that the system is functioning correctly.
Low priority tests are run far less frequently. They test the system's operation in greater detail and are the most time consuming to run. For this reason it would be unusual to run low priority tests across the entire system; instead a targeted approach is taken, and the low priority tests directly related to a feature/function that has undergone change are run. Depending on the level of interconnection or functional dependency between the feature under test and a connected feature or function, the low priority tests of those features may also be run.
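As an illustration, here is a minimal sketch of how priority levels might be expressed if the feature tests were automated with pytest. The marker names (`high_priority`, `low_priority`, `billing`) are hypothetical, and custom markers would need registering in `pytest.ini` to avoid warnings:

```python
import pytest

@pytest.mark.high_priority
def test_invoice_total_is_sum_of_line_items():
    # High priority: covers a sensitive calculation, quick to check.
    assert sum([10, 20, 5]) == 35

@pytest.mark.low_priority
@pytest.mark.billing
def test_invoice_handles_zero_line_items():
    # Low priority: a detailed edge case, targeted when billing changes.
    assert sum([]) == 0
```

The frequent pack would then be `pytest -m high_priority`, while a targeted low priority run for a changed billing feature could be `pytest -m "low_priority and billing"`.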
The first thing to note is that there is a distinction between test planning and test execution.
- Test planning is an activity that precedes test execution, and in all probability won't be done by the test executors.
- Test execution is performed according to the plan.
As part of discovery and feature development planning, the test team should create a high level test plan. This helps set the scope and objectives for testing, and starts to inform the team as to the level of testing that will be required (and thus how long it will take) and the areas that will need to be tested.
It is extremely useful for test planning to have a thorough understanding of the relationships between system components and features. An excellent tool for this is a mind map of system component relationships.
There should be a complete system architecture diagram / relationship map that can be used as the basis for a relationship map of the important features potentially directly connected to the feature to be developed. Such a map will get increasingly sparse the further out it gets from the feature under test.
An example tool for creating such a map is MindMup.
During the feature development phase, feature tests are created and run iteratively. Exploratory testing is also carried out around the feature being developed, and periodically high priority regression tests are run.
A feature under development will usually consist of several components. These components will be developed and built up iteratively, and along with them the capability or function of each component will become increasingly well understood. Feature tests are therefore developed iteratively alongside the feature's components and added to the set, which is executed in an appropriate order.
Exploratory testing is carried out within and around the feature being tested, and it is therefore essential to understand the relationships between the various system features. As exploratory testing works outwards from the feature being tested, the level of detail reduces, so that exploratory testing stays focused on the feature.
Issues found during exploratory testing will usually be translated into a new feature test and added to the test suite.
High priority regression tests are so called because they cover vulnerable or sensitive areas of the system, or because they give a good indication as to the correctness of the system. It is important to run these tests periodically during a feature's development cycle, to ensure that the stability of the system has not changed and to maintain continued confidence in the correctness of the system.
Release acceptance testing is essentially a complete system regression test in which the area of change is prioritised. It is understood that to get to the release stage, the feature testing must have passed, and any high priority regression tests of functionally related parts of the system must also have passed. Release acceptance testing is therefore a due diligence test of the release build.
Some tests are generic and specific to the release cycle, but others may be identified and created during a feature's development, and would thus be specific to the product and/or feature being released.
Such generic release tests might be:
- That the correct release version number is being used
- That the install is successful on the test systems
- That the new feature is present in the release
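As a minimal sketch, such generic checks could themselves be automated. This assumes the product ships a command-line binary; the `myapp` name, its flags, the expected version string and the feature name are all hypothetical placeholders:

```python
import subprocess

EXPECTED_VERSION = "2.4.0"  # hypothetical release version


def test_release_version_number():
    # Check the build reports the correct release version.
    result = subprocess.run(["myapp", "--version"],
                            capture_output=True, text=True, check=True)
    assert EXPECTED_VERSION in result.stdout


def test_new_feature_is_present():
    # Check the released build advertises the new feature
    # (here via a hypothetical "--list-features" flag).
    result = subprocess.run(["myapp", "--list-features"],
                            capture_output=True, text=True, check=True)
    assert "bulk-export" in result.stdout  # hypothetical feature name
```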
It is important to run the feature tests and high priority regression tests and, depending on the feature or change being released, it may be deemed necessary to run the low priority regression tests to give a complete system test. This would normally be for a major feature release, but whether to run these tests should be an informed decision of the test team, based on information from the developers or pre-existing knowledge about the complexity of the system area affected by the change.
Tests must be created in a form that makes them easy to edit and accessible on a variety of platforms and operating systems. They should also be under revision control, which may affect the choice of tool used to create and manage tests.
Feature tests will be set out according to a test template (which may itself develop over time to meet the needs of the project). There are usually five main sections to a test:
- Objective
- Preconditions
- Scope (what is in and what is out)
- Execution steps
- Expected outcomes (related to the definition of done; it is the outcome of all test stories that determines whether the feature meets its definition of done).
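As a sketch of the template in practice, here is a hypothetical feature test written with pytest, with the five sections captured in the docstring. The `export_csv` function is a stand-in for the real system under test, defined inline so the sketch runs end to end:

```python
import csv


def export_csv(records, path):
    # Stand-in for the system under test, so the sketch runs end to end.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name"])
        writer.writeheader()
        writer.writerows(records)


def test_csv_export_includes_header_row(tmp_path):
    """
    Objective:         exported CSV files begin with a header row.
    Preconditions:     at least one record exists to export.
    Scope:             in: header content and ordering; out: cell encoding.
    Execution steps:   export the records to a file, read the first line.
    Expected outcome:  the first line names the columns, in order.
    """
    records = [{"id": 1, "name": "alpha"}]   # precondition data
    out_file = tmp_path / "export.csv"       # tmp_path is a built-in pytest fixture
    export_csv(records, out_file)
    assert out_file.read_text().splitlines()[0] == "id,name"
```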
An important precondition may be that a test is run after another test. This is usually the case where one test sets up important preconditions that several tests rely on, or where a test uses the result of a previous test as its precondition.
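In an automated suite this dependency can be made explicit. A minimal sketch, assuming pytest, where a shared fixture plays the role of the setup test that several tests rely on (the names and data shape are hypothetical):

```python
import pytest


@pytest.fixture(scope="module")
def registered_user():
    # Precondition shared by several tests: a user account exists.
    # A real suite would create this through the system under test.
    return {"username": "test-user", "active": True}


def test_active_user_can_log_in(registered_user):
    assert registered_user["active"]


def test_username_is_recorded(registered_user):
    assert registered_user["username"] == "test-user"
```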
Test data must be considered part of the test. Any test data required as a precondition of a test must be archived so that it may be re-instantiated to a guaranteed consistent state each time the test is run. If it is a database, the relevant records must be backed up and stored as a .zip (or similar) within the test pack (or linked to). The same applies to files on a disk or USB drive.
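A minimal sketch of re-instantiating such archived data, assuming the precondition records are stored as a zip within the test pack; the archive path is hypothetical:

```python
import shutil
import zipfile
from pathlib import Path

ARCHIVE = Path(__file__).parent / "fixtures" / "billing_db_records.zip"


def restore_test_data(workdir: Path) -> Path:
    # Unpack a pristine copy of the archived records for this run, so every
    # execution starts from the same guaranteed-consistent state.
    if workdir.exists():
        shutil.rmtree(workdir)  # discard state from any previous run
    workdir.mkdir(parents=True)
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(workdir)
    return workdir
```

Called from each test's setup, this guarantees the data is pristine on every run rather than carrying state over from the last execution.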
Tests are not generally written for exploratory testing: there is a danger of being prescriptive, creating a test pattern and not being exploratory at all. Where problems or faults are discovered during exploratory testing, a feature test will usually be written and added to the feature test set. This will aid the developers in understanding, reproducing and fixing the issue, and will also demonstrate to the test team that the issue has been fixed. This new feature test would subsequently end up as part of the regression pack, to guard against the issue reappearing at a later date.
A testing log must be maintained while performing exploratory testing. Even though the testing is exploratory, if an issue is found it is vital to have a documented record of what was and was not previously performed. The output of exploratory testing should be repeatable feature tests for all issues found.
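A minimal sketch of such a session log, assuming plain timestamped entries appended to a text file; the file name and entry wording are hypothetical:

```python
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("exploratory_session.log")


def note(action: str, observation: str) -> None:
    # Record what was done and what was seen, so that when an issue turns
    # up there is a referable trail of the steps that preceded it.
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with LOG.open("a") as f:
        f.write(f"{stamp}\t{action}\t{observation}\n")


note("opened invoice #42 while export was running", "UI froze for ~2s")
```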
Regression packs are made up of feature tests, split into two priorities.
If the feature tests are stored in a linkable archive, such as a git repository, then high and low priority regression packs can be created as a series of links to the relevant feature tests.
The high priority regression pack would link to all high priority feature tests, and is a single pack for the entire system. Such a regression pack is designed to have wide, even coverage and sufficient detail, and to be relatively quick to execute.
A low priority regression pack would be feature specific, so one would be created for each new feature under test, built up while the feature is developed. Such low priority regression packs will be archived so that they may be run as required: occasionally during feature development, but more likely during release testing.
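A minimal sketch of assembling such a pack, assuming the feature tests live as Markdown files in a git repository and each declares a "Priority: high" or "Priority: low" line; the directory layout and tag format are hypothetical:

```python
from pathlib import Path


def build_pack(repo_root: Path, priority: str) -> str:
    # Emit a Markdown pack that is simply a list of links into the repo,
    # so the pack never duplicates the feature tests themselves.
    lines = [f"# {priority.capitalize()} priority regression pack", ""]
    for test_file in sorted(repo_root.glob("features/**/*.md")):
        if f"Priority: {priority}" in test_file.read_text():
            rel = test_file.relative_to(repo_root)
            lines.append(f"- [{test_file.stem}]({rel})")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_pack(Path("."), "high"))
```

Because the pack is regenerated from the tests themselves, re-prioritising a feature test automatically moves it between packs.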