Chapter 1. Testing Fundamentals

Testing: Testing is performed to avoid the effects of defects, which may arise from human mistakes or environmental conditions.
Testing is necessary:
  • To avoid the effects of defects.
  • To avoid failures of the software.
We have defects because humans by nature make mistakes, and certain conditions make mistakes more likely.
Humans make mistakes because of:
  • Time pressure (deadlines)
  • Complexity of the requirement or technology
  • Lack of experience or skill
  • Lack of information
  • Frequent changes
The effects of defects are:
  • Injury or death
  • Loss of time
  • Loss of money
  • Bad reputation
Environmental factors can also result in mistakes:
  • Pollution (e.g. mobile devices)
  • Radiation (e.g. electromagnetic radiation)
Error: A deviation detected at the same level/stage at which it was introduced.
Bug/Fault/Defect: A deviation identified by another person at a different stage.
Failure: A deviation identified by the client or end users.
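A minimal sketch of how these three terms relate, using a hypothetical area_of_rectangle function: a human mistake introduces a defect into the code, and executing the defective code produces a failure that the client or end user observes.

```python
# A human mistake during coding introduces a defect.
def area_of_rectangle(width, height):
    # Defect: the developer mistakenly typed '+' instead of '*'.
    return width + height

# Executing the defective code turns the defect into a failure
# that a client or end user can observe.
print(area_of_rectangle(3, 4))  # expected 12, actually prints 7 -- a failure
```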
Testing Principles
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
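A failing test demonstrates that a defect is present, but a passing suite says nothing about the inputs it never exercises. A minimal sketch, assuming a hypothetical safe_divide function that is specified to return 0 for a zero divisor:

```python
import unittest

def safe_divide(a, b):
    # Hypothetical function under test; the zero-divisor case is not handled.
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_normal_division(self):
        # Passes, but proves nothing about inputs that were not tried.
        self.assertEqual(safe_divide(10, 2), 5)

    def test_zero_divisor(self):
        # Raises ZeroDivisionError, so the test fails and shows a defect is present.
        self.assertEqual(safe_divide(10, 0), 0)

if __name__ == "__main__":
    unittest.main()
```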

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
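A back-of-the-envelope illustration of why exhaustive testing is infeasible: a function taking just three 32-bit integer parameters already has (2^32)^3 = 2^96 input combinations, before any preconditions are considered. The throughput figure below is an assumption for the sake of the arithmetic.

```python
# Input combinations for a function taking three 32-bit integer parameters.
combinations = (2 ** 32) ** 3           # 2**96 distinct input triples

tests_per_second = 1_000_000_000        # assumed: one billion tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:.3e} combinations -> ~{years:.3e} years to run them all")
```

Even at a billion tests per second, running every combination would take on the order of 10^12 years, which is why risk analysis and priorities are used to focus testing instead.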

Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
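One common way to counter the pesticide paradox is to vary the test data rather than replaying a fixed set, for example by generating fresh random inputs and checking general properties of the result. A sketch, assuming a hypothetical sort_numbers function under test:

```python
import random
from collections import Counter

def sort_numbers(values):
    # Hypothetical function under test.
    return sorted(values)

def test_sorting_with_fresh_data():
    # Fresh random inputs on every run exercise different parts of the code,
    # unlike a fixed test set that eventually stops finding new defects.
    for _ in range(100):
        data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        result = sort_numbers(data)
        assert all(a <= b for a, b in zip(result, result[1:]))  # output is ordered
        assert Counter(result) == Counter(data)                 # same elements, same counts

test_sorting_with_fresh_data()
print("100 randomized cases passed")
```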
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

Fundamental test process
The fundamental test process consists of the following main activities:

  • planning and control;
  • analysis and design;
  • implementation and execution;
  • evaluating exit criteria and reporting;
  • test closure activities.
Although logically sequential, the activities in the process may overlap or take place concurrently.
Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying the test activities needed to meet those objectives and that mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
Test planning and control tasks are defined in Chapter 5.
Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
Test analysis and design has the following major tasks:
  • Reviewing the test basis (such as requirements, architecture, design, interfaces).
  • Evaluating testability of the test basis and test objects.
  • Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
  • Designing and prioritizing test cases (a boundary value sketch follows this list).
  • Identifying necessary test data to support the test conditions and test cases.
  • Designing the test environment set-up and identifying any required infrastructure and tools.
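As a small illustration of turning a test condition into designed, prioritized test cases, suppose the test basis states that an age between 18 and 65 inclusive is valid. Boundary value analysis turns that single condition into a handful of concrete cases; the is_eligible_age function below is a hypothetical test object.

```python
def is_eligible_age(age):
    # Hypothetical test object implementing the condition 18 <= age <= 65.
    return 18 <= age <= 65

# Test cases derived from the condition by boundary value analysis:
# (input, expected result), prioritized around the boundaries.
test_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in test_cases:
    actual = is_eligible_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary cases pass")
```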
Test implementation and execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
  • Developing, implementing and prioritizing test cases.
  • Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
  • Creating test suites from the test procedures for efficient test execution.
  • Verifying that the test environment has been set up correctly.
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence.
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
  • Comparing actual results with expected results.
  • Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
  • Repeating test activities as a result of action taken for each discrepancy.
For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
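A sketch of what confirmation and regression tests might look like in practice, assuming a hypothetical apply_discount function whose defect (the discount being applied twice) has just been fixed:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function just fixed; the discount was previously applied twice.
    return round(price * (1 - percent / 100), 2)

class ConfirmationTest(unittest.TestCase):
    def test_previously_failing_case(self):
        # Confirmation testing: re-execute the exact test that exposed
        # the defect, to confirm the fix works.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class RegressionTest(unittest.TestCase):
    # Regression testing: behaviour in unchanged areas must be unaffected.
    def test_no_discount_unchanged(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_full_discount_unchanged(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

if __name__ == "__main__":
    unittest.main()
```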
Evaluating exit criteria and reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
  • Checking test logs against the exit criteria specified in test planning (a small sketch follows this list).
  • Assessing if more tests are needed or if the exit criteria specified should be changed.
  • Writing a test summary report for stakeholders.
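A minimal sketch of checking a test log against an exit criterion; the log contents and the 95% pass-rate threshold are assumptions for illustration.

```python
# Hypothetical test log: test case id -> outcome.
test_log = {
    "TC-001": "pass", "TC-002": "pass", "TC-003": "fail",
    "TC-004": "pass", "TC-005": "pass",
}

EXIT_CRITERION_PASS_RATE = 0.95  # assumed threshold from test planning

passed = sum(1 for outcome in test_log.values() if outcome == "pass")
pass_rate = passed / len(test_log)

print(f"pass rate: {pass_rate:.0%}")
if pass_rate >= EXIT_CRITERION_PASS_RATE:
    print("Exit criteria met: write the summary report and close the test level.")
else:
    print("Exit criteria not met: run more tests or revisit the criteria.")
```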
Test closure activities
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. For example, test closure activities occur when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure activities include the following major tasks:

  • Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
  • Handover of testware to the maintenance organization.
  • Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
Levels of independence in testing
  • Tests designed by the person(s) who wrote the software under test (low level of independence).
  • Tests designed by another person(s) (e.g. from the development team).
  • Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).
  • Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).