Chapter 2: Testing Throughout the Software Life Cycle
We will cover the following topics in this chapter:
- 2.1 Software Development Models (U)
- 2.2 Test Levels (U)
- 2.3 Test Types (U)
- 2.4 Maintenance Testing (U)
2.1 Software Development Models
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.
2.1.1 V-model (Sequential Development Model)
The V-model generally uses four test levels, corresponding to the four development levels:
- Component (unit) testing
- Integration testing
- System testing
- Acceptance testing
In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products such as business scenarios or use cases, requirements specifications, design documents and code produced during development are often the basis of testing in one or more test levels.
2.1.2 Iterative-incremental Development Models
Iterative-incremental development is the process of establishing requirements, designing, building, and testing a system in a series of short development cycles. Examples of iterative-incremental models are prototyping, Rapid Application Development (RAD), the Rational Unified Process (RUP) and agile development models.
A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.
2.1.3 Testing Within a Life Cycle Model
In any life cycle model, there are several characteristics of good testing:
- For every development activity there is a corresponding testing activity
- Each test level has test objectives specific to that level
- The analysis and design of tests for a given test level should begin during the corresponding development activity
- Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle
Test levels can be combined or reorganized depending on the nature of the project or the system architecture.
2.2 Test Levels
Based on the test objective (the main purpose of testing) and the test object (what is being tested), test levels can be classified as follows:
- 2.2.1 Component Testing (U)
- 2.2.2 Integration Testing (U)
- 2.2.3 System Testing (U)
- 2.2.4 Acceptance Testing (U)
In this section, we will discuss each of the above test levels.
2.2.1 Component Testing
Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
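As a sketch of how a stub and a driver might look in practice (the `order_total` component and the `TaxServiceStub` below are invented for illustration, not taken from the text):

```python
# A hypothetical component under test: it computes an order total but
# normally depends on an external tax service that is not available
# when the component is tested in isolation.
def order_total(prices, tax_service):
    subtotal = sum(prices)
    return subtotal + tax_service.tax_for(subtotal)

# Stub: stands in for the real tax service, returning a canned answer.
class TaxServiceStub:
    def tax_for(self, amount):
        return round(amount * 0.10, 2)  # fixed 10% rate for this test

# Driver: the test code that invokes the component and checks the result.
def test_order_total_applies_tax():
    total = order_total([10.0, 20.0], TaxServiceStub())
    assert total == 33.0

test_order_total_applies_tax()
```

The stub isolates the component from its dependency; the driver supplies the call that, in the finished system, would come from other code.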
Component testing may include testing of functionality and specific non-functional characteristics, such as resource behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, executing the component tests, correcting any issues, and iterating until the tests pass.
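A minimal test-first cycle might look like this in Python (the `slugify` function is a hypothetical example chosen for illustration):

```python
# Step 1: write the test first. At this point it fails, because
# slugify does not exist yet.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough code to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3: run the test; then refactor and repeat with the next test case.
test_slugify_lowercases_and_hyphenates()
```

Each cycle adds one small test and just enough code to satisfy it, so the growing test suite doubles as executable documentation of the component.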
The test basis (reference) for component testing includes: component requirements, design, and code.
Typical test objects tested during component testing are: components, programs, and database modules.
2.2.2 Integration Testing
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
- Component integration testing tests the interactions between software components and is done after component testing.
- System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing.
The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than “big bang”.
Testing of specific non-functional characteristics such as performance may be included in integration testing as well as functional testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B, they are interested in testing the communication between the modules, not the functionality of the individual modules, as that was done during component testing. Both functional and structural approaches may be used.
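For illustration, a component integration test might check only the call across the interface, using a mock in place of the already component-tested module B (the `publish_report` function and the `sender` interface below are hypothetical):

```python
from unittest import mock

# Module A (hypothetical): formats data and hands it to a sender module.
def publish_report(data, sender):
    body = ",".join(str(x) for x in data)
    sender.send(subject="daily report", body=body)

# Integration-focused check: verify the *interface* between A and the
# sender, not the sender's internals (those were component-tested).
def test_publish_passes_formatted_body_to_sender():
    sender = mock.Mock()
    publish_report([1, 2, 3], sender)
    sender.send.assert_called_once_with(subject="daily report", body="1,2,3")

test_publish_passes_formatted_body_to_sender()
```

The assertion is about what crosses the module boundary — the call, its parameters, and their format — which is exactly the scope of this integration stage.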
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.
The test basis (reference) for integration testing includes: software and system design, architecture, use cases, and workflows.
Typical test objects tested during integration testing are: subsystems, infrastructure, database implementation, interfaces, system configuration, and configuration data.
2.2.3 System Testing
System testing is concerned with the behavior of a whole system or product.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level test descriptions or models of system behavior, interactions with the operating system, and system resources.
System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based (white-box) techniques may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation. An independent test team often carries out system testing.
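As a sketch, a decision table for a simple business rule can be written down as data and exercised row by row (the discount rule below is invented for illustration):

```python
# Hypothetical business rule: the discount depends on membership status
# and order value.
def discount(is_member, order_value):
    if is_member and order_value >= 100:
        return 0.15
    if is_member:
        return 0.05
    if order_value >= 100:
        return 0.10
    return 0.0

# Decision table: each row is one combination of conditions together
# with the expected action; each row becomes a test case.
DECISION_TABLE = [
    # (is_member, order_value, expected_discount)
    (True,  150, 0.15),
    (True,   50, 0.05),
    (False, 150, 0.10),
    (False,  50, 0.0),
]

for is_member, value, expected in DECISION_TABLE:
    assert discount(is_member, value) == expected
```

Writing the combinations out as a table makes it easy to see that every combination of conditions described in the business rule has a corresponding test.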
The test basis (reference) for system testing includes: system and software requirement specifications, use cases, functional specifications, and risk analysis reports.
Typical test objects tested during system testing are: the system, user and operation manuals, and system configuration and configuration data.
2.2.4 Acceptance Testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing.
Typical forms of acceptance testing include the following:
User acceptance testing
Typically verifies the fitness for use of the system by business users.
Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
- Maintenance tasks
- Data load and migration tasks
- Testing of backup/restore
- Disaster recovery
- User management
- Periodic checks of security vulnerabilities
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.
Alpha and beta (field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing, for systems that are tested before and after being moved to a customer’s site.
The test basis (reference) for acceptance testing includes: use cases, user requirements, system requirements, business processes, and risk analysis reports.
Typical test objects tested during acceptance testing are: forms, reports, user procedures, business processes, and operational and maintenance processes.
2.3 Test Types
A group of test activities can be aimed at verifying the software system (or part of a system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be any of the following:
- A function to be performed by the software
- A non-functional quality characteristic, such as reliability and usability
- The structure or architecture of the software or system
- Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)
A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., a performance model, usability model, or security threat model), and functional testing (e.g., a process flow model, a state transition model or a plain-language specification).
2.3.1 Testing of Function (Functional Testing)
The functions that a system, subsystem or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what the system does.”
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.
2.3.2 Testing of Non-Functional Software Characteristics (Non-functional Testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing, and portability testing. It is the testing of “how” the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
2.3.3 Testing of Software Structure/Architecture (Structural Testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent to which a structure has been exercised by a test suite, expressed as a percentage of the items covered. If coverage is not 100%, then more tests may be designed to exercise the items that were missed, to increase coverage. Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
Structural testing approaches can also be applied at a system, system integration or acceptance testing levels (e.g., to business models or menu structures).
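A small illustration of decision coverage (the `shipping_fee` function is hypothetical): with a single decision, one test exercises only one of the two outcomes, giving 50% decision coverage; a second test covers the other outcome and brings coverage to 100%.

```python
# Hypothetical component with one decision (two outcomes: True / False).
def shipping_fee(weight_kg):
    if weight_kg > 10:      # the decision under measurement
        return 15.0
    return 5.0

# One test exercises only the False outcome: 50% decision coverage.
assert shipping_fee(2) == 5.0

# A second test exercises the True outcome: decision coverage reaches 100%.
assert shipping_fee(12) == 15.0
```

In practice a coverage tool, rather than manual inspection, reports which statements or decisions a test suite has exercised.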
2.3.4 Testing Related to Changes: Re-testing and Regression Testing
After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing (re-testing). Debugging (locating and fixing a defect) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
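A sketch of an automated regression suite using Python’s `unittest` module (the `apply_coupon` function is an invented example): the suite is re-run unchanged after every modification, so any defect introduced as a side effect of a change shows up as a failing test.

```python
import unittest

# Hypothetical function that has already shipped and is later modified.
def apply_coupon(price, code):
    if code == "SAVE10":
        return round(price * 0.90, 2)
    return price

# Regression suite: pins down behavior that must survive future changes.
class CouponRegressionTests(unittest.TestCase):
    def test_known_code_still_discounts(self):
        self.assertEqual(apply_coupon(100.0, "SAVE10"), 90.0)

    def test_unknown_code_still_charges_full_price(self):
        self.assertEqual(apply_coupon(100.0, "BOGUS"), 100.0)

# Run the whole suite programmatically, e.g. from a CI job.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CouponRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite runs without manual effort, it can be executed on every change, which is exactly the repetition pattern that makes regression testing worth automating.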
2.4 Maintenance Testing
Maintenance testing is testing of existing operational systems.
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement, of the software or system.
Modifications include planned enhancement changes, corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes regression testing of parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system, and the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis; it is used to help decide how much regression testing to do and may be used to determine the regression test suite.
Maintenance testing can be difficult if specifications are out of date or missing, or if testers with domain knowledge are not available.