Testing phases and phrases glossary

No matter the IT methodology you’re using – agile, waterfall or iterative – the test phases for your projects and programmes will be the same; they simply align to different methodologies in different ways. So what are these test phases, and what tests does each phase incorporate? This Testing Phases and Phrases glossary tells you all you need to know.

Key Testing Phases

Test phases will align with whichever delivery methodology is selected (e.g. waterfall, agile or iterative), but the methodology does not change the types of testing that should be considered. This post therefore outlines the key test phases themselves, rather than describing how they align to different methodologies. Key test phases include:

Static Testing

Static testing can take several forms, but generally consists of requirements review and code review. During the requirements phase, testers pair with business analysts to assure the testability of requirements, using measures such as whether they are complete, singular and unambiguous. Code review may be carried out manually, or with code review tools, to ensure adherence to coding standards.
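
For illustration, a minimal automated static check might look like the sketch below. It uses Python’s standard-library ast module to flag functions that break a hypothetical coding standard (every function must have a docstring), without ever executing the code under review.

```python
import ast

def find_undocumented_functions(source: str) -> list[str]:
    """Return a description of each function that lacks a docstring."""
    tree = ast.parse(source)  # parse only; the code is never executed
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            violations.append(f"line {node.lineno}: function '{node.name}' has no docstring")
    return violations

sample = "def add(a, b):\n    return a + b\n"
print(find_undocumented_functions(sample))
# -> ["line 1: function 'add' has no docstring"]
```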

Unit & Unit Integration Testing (may be referred to as Component Testing)

Unit and unit integration testing are development activities. Each system component (unit of code) is tested in isolation by the developer to confirm the integrity of that code. A unit integration test then confirms that code tested in isolation integrates correctly with the wider code base.
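
As a rough sketch, a unit test in Python might look like the following; pytest is one common test runner, and the function under test, apply_discount, is purely illustrative.

```python
import pytest

# Hypothetical unit under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: the code is exercised in isolation, with no database,
# network or user interface involved.
def test_discount_reduces_price():
    assert apply_discount(100.0, 25.0) == 75.0

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```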

Functional Test Phases

Functional test phases prove that the application is functionally compliant with the requirements used to develop and deliver it: does the application – and do the functions within it – work as expected? Functional testing usually includes system testing, system integration testing, end to end testing and regression testing (other forms may include factory & site acceptance testing (FAT & SAT)).

System tests – are singular and test the function that is described by the requirement.

System integration test – ensures that the functions described by the requirements integrate to deliver the application.

End to end (e2e) test – ensures the application under test can integrate with both internal and external applications.

Regression testing – ensures that changes to an application do not impact areas of the existing application that have not been changed or updated (a short sketch follows this list).
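
As referenced above, here is a rough sketch of a regression test in Python with pytest; the format_invoice_reference function and its expected outputs are hypothetical. The suite is re-run after every release so that a failure flags an unintended impact on unchanged behaviour.

```python
import pytest

# Hypothetical existing function; it has not been changed in this release.
def format_invoice_reference(customer_id: int, sequence: int) -> str:
    return f"INV-{customer_id:05d}-{sequence:04d}"

# Regression suite: re-run against every release to confirm that changes
# elsewhere in the application have not altered this existing behaviour.
@pytest.mark.parametrize(
    "customer_id, sequence, expected",
    [
        (1, 1, "INV-00001-0001"),
        (42, 17, "INV-00042-0017"),
        (99999, 9999, "INV-99999-9999"),
    ],
)
def test_invoice_reference_is_unchanged(customer_id, sequence, expected):
    assert format_invoice_reference(customer_id, sequence) == expected
```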

User Acceptance Testing (UAT)

UAT ensures that both functional and non-functional requirements support user expectations, and that day-to-day activities are supported by the delivered application. Does the application support business processes? Does it perform well enough not to disrupt daily activities? Does it meet the business requirements, and are process changes or workarounds supported?

Non-Functional Test Phases

Non-functional test phases prove that the application complies with the non-functional requirements used to develop and deliver it: does the application perform as expected? Is it secure? Can it be supported by service owners and their teams? Non-functional testing generally takes the form of performance, security and operational acceptance testing.

Performance – does the application perform as expected? That is, can it support load (users), volume (data), stress (load and volume break point) and soak (availability over time) testing? A minimal load-test sketch follows this section.

Security – is the application secure against both internal and external threats? Do the code and infrastructure comply with the security standards described by company security documentation?

Operational acceptance testing (OAT) – ensures the application is service compliant and ready to release into a production environment. This phase covers supportability, recoverability and usability, amongst other OAT types.
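
For illustration, the sketch below shows the shape of a crude load test using only the Python standard library; the endpoint URL and load figures are hypothetical, and in practice a dedicated tool such as JMeter, Locust or k6 would normally be used.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/health"  # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def simulate_user() -> list[float]:
    """Issue a series of requests and record each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

# Apply concurrent load and summarise the observed response times.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(lambda _: simulate_user(), range(CONCURRENT_USERS)))

all_timings = [t for user in results for t in user]
print(f"requests: {len(all_timings)}, "
      f"mean: {sum(all_timings) / len(all_timings):.3f}s, "
      f"max: {max(all_timings):.3f}s")
```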

Key Testing Phrases

Acceptance testing: formal testing with respect to user needs, requirements and business processes, conducted to determine whether a system satisfies the acceptance criteria.

Baseline: a specification or software product that has been formally reviewed or agreed upon, that can be changed only through a formal change control process.

Black-box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system.

Component: a minimal software item that can be tested in isolation, often referred to as ‘unit’.

Component integration testing: testing performed to expose defects in the interfaces and interaction between integrated components exercised by a test suite.
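
A minimal sketch of what this can look like in Python (the parse_order and validate_order components are hypothetical): the test exercises the interface between two components, rather than either one alone.

```python
# Hypothetical components under integration.
def parse_order(raw: str) -> dict:
    item, qty = raw.split(",")
    return {"item": item.strip(), "quantity": int(qty)}

def validate_order(order: dict) -> bool:
    return bool(order["item"]) and order["quantity"] > 0

# Component integration test: the defect being hunted lives in the
# interface. Does validate_order accept what parse_order produces?
def test_parsed_order_is_accepted_by_validator():
    assert validate_order(parse_order("widget, 3")) is True
```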

Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function.

End to end (e2e) testing: ensures the application under test can integrate with both internal and external applications.

Entry criteria: the set of generic and specific conditions for permitting a process to go forward with a defined task e.g. test phase.

Exit criteria: the set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed.

Fail: a test is deemed to fail if its actual result does not match its expected result.

Functional testing: testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

Integration testing: testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.

Load testing: a test type concerned with measuring the behaviour of a component or system with increasing load. See also stress testing.

Negative testing: tests aimed at showing that a component or system does not work.

Non-functional testing: testing the attributes of a component or system that do not relate to functionality e.g. reliability, efficiency, usability, maintainability and portability.

Pass: a test is deemed to pass if its actual result matches its expected result.

Performance testing: the process of testing to determine the performance of a software product.

Phase test plan: a test plan that typically addresses one test phase. See also test plan.

Quality: the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance: the element of quality management focused on providing confidence that quality requirements will be fulfilled.

Regression testing: testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made.

Requirement: a condition or capability needed by a user to solve a problem or achieve an objective, that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

Risk-based testing: testing oriented towards exploring and providing information about product risks.

Root cause: an underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

Security testing: testing to determine the security of the software product.

Shift-left: the practice of testing throughout the delivery lifecycle, moving test activities as early (‘left’) as possible, because defects found early are easier and cheaper to fix.

Smoke test: a subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions work.
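
One common way to carve out such a subset with pytest markers is sketched below (the tests themselves are hypothetical); running `pytest -m smoke` then executes only the crucial checks.

```python
import pytest

# Crucial checks tagged as the smoke subset; register the marker in
# pytest.ini ("markers = smoke: crucial checks") to avoid warnings.
@pytest.mark.smoke
def test_application_starts():
    ...

@pytest.mark.smoke
def test_user_can_log_in():
    ...

# Not tagged: runs only in the full test pass, not the smoke run.
def test_rarely_used_report_export():
    ...
```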

Static testing: testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

Stress testing: testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. See also load testing.

Stub: a skeletal, or special-purpose implementation of a software component, used to develop or test a component that calls on or is otherwise dependent on it.
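
For illustration, a hand-rolled stub in Python might look like the sketch below; the exchange-rate service and convert function are hypothetical. The stub returns a canned answer so that the component which depends on it can be tested without the real service.

```python
class ExchangeRateServiceStub:
    """Skeletal stand-in: always returns a fixed rate, never calls a live API."""
    def get_rate(self, from_currency: str, to_currency: str) -> float:
        return 1.25  # canned response

# Component under test, which depends on an exchange-rate service.
def convert(amount: float, from_currency: str, to_currency: str, rates) -> float:
    return amount * rates.get_rate(from_currency, to_currency)

def test_convert_uses_supplied_rate():
    stub = ExchangeRateServiceStub()
    assert convert(100.0, "GBP", "USD", stub) == 125.0
```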

Suspension criteria: the criteria used to (temporarily) stop all or a portion of the testing activities on the test items.

System integration testing: testing the integration of systems and packages; testing interfaces to external organisations.

System testing: the process of testing an integrated system to verify that it meets specified requirements.

Test case: a set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
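
Those elements map naturally onto data. The sketch below records one hypothetical test case as a Python dataclass; all names and values are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    objective: str               # why the test exists
    preconditions: list[str]     # what must hold before execution
    inputs: dict                 # the input values to apply
    expected_result: str
    postconditions: list[str] = field(default_factory=list)

lockout_case = TestCase(
    objective="Verify the account locks after three failed logins (hypothetical REQ-042)",
    preconditions=["User account exists", "Account is not locked"],
    inputs={"username": "test.user", "wrong_password_attempts": 3},
    expected_result="Account is locked and a lockout message is shown",
    postconditions=["Account status is 'locked'"],
)
```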

Test condition: an item or event of a component or system that could be verified by one or more test cases e.g. a function, transaction, feature, quality attribute, or structural element.

Test cycle: execution of the test process against a single identifiable release of the test object.

Test data: data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

Test environment: an environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.

Test execution: the process of running a test on the component or system under test, producing actual results.

Test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test phase: a distinct set of test activities collected into a manageable phase of a project e.g. the execution activities of a test level.

Test plan: a document describing the scope, approach, resources and schedule of intended test activities.

Test policy: a high level document describing the principles, approach and major objectives of the organisation regarding testing.

Test script: commonly used to refer to a test procedure specification, especially an automated one.

Test strategy: a high-level description of the test levels to be performed and the testing within those levels for an organisation or programme.

Test suite: a set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Testing: the process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate they are fit for purpose and to detect defects.

Traceability: the ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Use case: a sequence of transactions in a dialogue between a user and the system with a tangible result.

User acceptance testing: See acceptance testing.

Validation: confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. Verification: confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

White-box testing: testing based on an analysis of the internal structure of the component or system.

Valcon’s Test Management capability delivers world-class ability to establish, assure and operate a continuous-testing capability that places quality at the centre of your full delivery lifecycle. To find out how we can help you, email [email protected].
