Chapter 1: Introduction and Fundamentals of Testing
Introduction
In this course, Chapters 1 through 8 cover the basic concepts of software testing. Each chapter ends with a few practice questions and answers, which should help you prepare for various certification exams. Chapter 8 also covers software testing interview questions and answers.
Completing this course should help you start a career in testing and take your software testing knowledge to the next level.
Start learning now. We wish you all the best.
Note: The following notations indicate the learning objectives:
- (R) : Remember
- (U) : Understand
- (Y) : Apply
- (Z) : Analyze
In this Chapter we will discuss the following:
1.1 What is testing?
- 1.1a. Objectives of Testing (R)
- 1.1b. Testing Objectives at different stages of Software Development Life Cycle (U)
- 1.1c. Debugging vs. Testing (U)
1.2 Why is testing necessary?
- 1.2a. Software System Context (R)
- 1.2b. Causes of Defects (U)
- 1.2c. Role of Testing in Software Development, Maintenance and Operations (U)
- 1.2d. Testing and Quality (U)
- 1.2e. How much testing is enough? (U)
1.3 Principles of testing (U)
1.4 Fundamental Test Process
1.5 Psychology of Testing
1.6 Practice Questions and Answers
--------------------------------------------------------------------------------------------------------------------
1.1 What is Testing? (U)
Testing is the process of identifying defects in software; a defect is also called a bug. Testing can also be defined as the process of verifying and validating that a software program/product/application works as expected.
1.1a Objectives of Testing (R):
- Preventing a defect
- Finding a defect
- Gaining confidence in the level of quality
- Providing information for decision making
Designing tests early in the life cycle can help prevent defects from being introduced into the code.
1.1b Testing Objectives at different stages of Software Development Life Cycle (U)
Different viewpoints in testing take different objectives into account.
- Acceptance testing – to confirm that the system works as expected and to gain confidence in the quality of the software.
- Maintenance testing – to confirm that no new defects have been introduced while developing changes.
- Operations testing – to test the system characteristics such as reliability and availability.
1.1c Debugging vs. Testing (U)
Debugging:
Debugging is the process of finding, analyzing and removing the cause of a failure. It is a development activity, performed by developers.
Testing:
Testing is performed by testers. It is the activity of finding and reporting defects in software. Re-testing is done after a defect is fixed, to ensure that the fix resolves the failure.
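The difference can be sketched in code. The function, values and defect below are invented for illustration: a tester's check exposes a failure, debugging (a developer activity) locates and removes its cause, and re-testing confirms the fix.

```python
def discount_price(price, percent):
    """Buggy version: the developer mistakenly divided by 10 instead of 100."""
    return price - price * percent / 10  # defect (the cause of the failure)

# Testing: compare actual with expected and report the discrepancy.
actual = discount_price(200, 10)   # a 10% discount on 200 should give 180
expected = 180
print("PASS" if actual == expected else "FAIL")  # the test reports a failure

# Debugging: the developer analyzes the failure, finds the cause, removes it.
def discount_price_fixed(price, percent):
    return price - price * percent / 100  # cause removed

# Re-testing: confirm that the fix resolves the failure.
assert discount_price_fixed(200, 10) == 180
```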
1.2 Why is testing necessary?
Testing is necessary to ensure that the system works as expected and meets its requirements.
1.2a Software System Context (R)
Software systems are an integral part of life, from business applications (e.g., Insurance, Banking) to consumer products (e.g., motors, cars). People may have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, business reputation, and could even cause injury or death.
1.2b Causes of Defects (U)
A human being can make an error (mistake), which produces a defect (fault, bug) in the program code or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.
Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.
Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
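The error → defect → failure chain can be made concrete with a small sketch. The function and the mistake below are invented for illustration; note that the defect causes a failure only when the faulty code is executed with inputs that expose it.

```python
def average(numbers):
    # Defect: a human error introduced "len(numbers) - 1" instead of
    # "len(numbers)" (perhaps confused with a sample-variance formula).
    return sum(numbers) / (len(numbers) - 1)

# Executing the defect with a three-element list causes a failure:
# the function returns 6.0 where the correct average is 4.0.
print(average([2, 4, 6]))

# With a one-element list the same defect causes a different failure:
# division by zero (a crash) instead of a wrong result.
```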
1.2c Role of Testing in Software Development, Maintenance and Operations (U)
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.
Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.
1.2d Testing and Quality (U)
With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability).
Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases once those defects are fixed.
Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.
Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).
1.2e How much testing is enough? (U)
Deciding how much testing is enough should take account of the level of risk, including technical, safety, business risks, and project constraints such as time and budget.
Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.
1.3 Principles of Testing (U)
There are seven suggested testing principles. These are general guidelines that have evolved over the years.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
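A back-of-the-envelope calculation shows why. The figures below (field sizes, test throughput) are assumptions for illustration, not from the text:

```python
# One 32-bit integer input field already has 2**32 possible values.
inputs_32bit = 2 ** 32

# Two such independent fields multiply: every combination must be covered.
two_fields = inputs_32bit ** 2

# Assume a very optimistic automated throughput of one million tests/second.
tests_per_second = 1_000_000

seconds = two_fields / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"Exhaustively testing two 32-bit fields: about {years:.0f} years")
```

Even under these generous assumptions the run takes hundreds of thousands of years, which is why risk analysis and prioritization must guide the selection of tests instead.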
Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.
Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the user’s needs and expectations.
1.4 Fundamental Test Process (R)
The fundamental test process consists of the following main activities:
- Test planning and control
- Test analysis and design
- Test implementation and execution
- Evaluating exit criteria and reporting
- Test closure activities
Although these activities are logically sequential, they may overlap or take place concurrently, depending on the project and the context of the system. We will discuss these activities one by one.
1.4.1 Test Planning and control
Test planning is the activity of defining the objectives of testing and specifying the test activities needed to meet those objectives.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the objectives of the project. In order to control testing, the testing activities should be monitored throughout the project.
1.4.2 Test Analysis and Design
Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.
The test analysis and design activity has the following major tasks:
- Reviewing the test basis (such as requirements, software integrity level (risk level), risk analysis reports, architecture, design, interface specifications)
- Evaluating the testability of the test basis and test objects
- Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
- Designing and prioritizing high-level test cases
- Identifying necessary test data to support the test conditions and test cases
- Designing the test environment setup and identifying any required infrastructure and tools
- Creating bi-directional traceability between the test basis and test cases
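Bi-directional traceability from the last task can be sketched as a simple mapping. The requirement and test-case IDs below are hypothetical, chosen only to show how either direction supports impact analysis:

```python
# Forward direction: which test cases cover each requirement?
req_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Derive the reverse direction from the same data, so the two
# directions cannot drift out of sync.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# If REQ-001 changes, which tests must be revisited?
print(req_to_tests["REQ-001"])
# If TC-103 fails, which requirement is affected?
print(test_to_reqs["TC-103"])
```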
1.4.3 Test Implementation and Execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up, and the tests are run.
Test implementation and execution has the following major tasks:
- Finalizing, implementing and prioritizing test cases, including the identification of test data and test conditions
- Developing and prioritizing test procedures, creating test data, preparing test harnesses and writing automated test scripts
- Creating test suites from the test procedures for efficient test execution
- Verifying that the test environment has been set up correctly
- Verifying and updating bi-directional traceability between the test basis and test cases
- Executing test procedures either manually or by using test execution tools, according to the planned sequence
- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
- Comparing actual results with expected results
- Reporting discrepancies as incidents and analyzing them in order to establish their cause
- Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
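Several of these tasks (executing in a planned sequence, comparing actual with expected results, and logging outcomes) can be sketched in a few lines. The function under test and the test data below are invented for illustration:

```python
def add(a, b):
    """The (hypothetical) function under test."""
    return a + b

# Test cases in their planned execution sequence: (id, inputs, expected).
test_cases = [
    ("TC-01", (2, 3), 5),
    ("TC-02", (-1, 1), 0),
]

log = []  # execution log: outcome of each test
for test_id, args, expected in test_cases:
    actual = add(*args)
    # A FAIL here would be reported as an incident and analyzed for its cause.
    status = "PASS" if actual == expected else "FAIL"
    log.append((test_id, status, expected, actual))
    print(test_id, status)
```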
1.4.4 Evaluating Exit Criteria and Reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
- Checking test logs against the exit criteria specified in test planning
- Assessing if more tests are needed or if the exit criteria specified should be changed
- Writing a test summary report for stakeholders
1.4.5 Test Closure Activities
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure activities include the following major tasks:
- Checking which planned deliverables have been delivered
- Closing incident reports or raising change records for any that remain open.
- Documenting the acceptance of the system
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
- Handing over the testware to the maintenance organization
- Analyzing lessons learned to determine changes needed for future releases and projects
- Using the information gathered to improve test maturity
1.5 The Psychology of Testing (U)
Background
The mindset used while testing and reviewing is different from that used while developing software. With the right mindset, developers are able to test their own code, but this responsibility is typically separated out to a tester to help focus effort and provide additional benefits, such as an independent view by trained, professional testing resources. Independent testing may be carried out at any level of testing.
A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not a replacement for familiarity and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
- Tests designed by the person(s) who wrote the software under test (low level of independence)
- Tests designed by another person(s) (e.g., from the development team)
- Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
- Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)
People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.
Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.
If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.
The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:
- Start with collaboration rather than battles – remind everyone of the common goal of better quality systems
- Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it; for example, write objective and factual incident reports and review findings
- Try to understand how the other person feels and why they react as they do
- Confirm that the other person has understood what you have said and vice versa