
ITIL Service Validation and Testing

According to ITIL, Service Validation and Testing is "The Process responsible for
Validation and Testing of a new or Changed IT Service. Service Validation and
Testing ensures that the IT Service matches its Design Specification and will
meet the needs of the Business."
Service Validation and Testing ensures Quality of Service (QoS). The specifications or requirements for Service quality are defined in the Service Design phase, and testing is the deciding factor for Service quality within Service Management. The main requirements for the successful testing of a Service are defined in Service Level Packages (SLPs).
The frameworks drawn up in the Release, Control and Validation (RCV) phase, which guide and support the implementation of quality requirements in the RCV process, are:

Service quality policy

Risk policy

Service Transition policy

Release policy

Change Management policy


A test strategy defines the organization of tests and the allocation of resources for the tests. Tests that reveal the Utility (usability) of a Service are structured tests. There are also tests that reveal adherence to Warranty targets and to the guaranteed Service level. Tests on contractual and regulatory requirements ensure corporate and IT governance.
Overview
When you implement a Service Change, it is necessary to evaluate its
effectiveness. The Change should fulfill user expectations to avoid instances of
mismatches between expectations and Service Delivery. To do this, you need to
validate and test the Service before you implement it to ensure its fitness for
use and value for money.
Because every organization aims to meet customer requirements by providing
quality Service, ITIL includes the Service Validation and Testing process. This
process helps you fulfill customer expectations for any new or changed Service
by providing assurance of quality.
You must remember that Testing and Validation is a key area within Service Management. In fact, lack of Testing and Validation might create an impression that the Service itself is inefficient. Consequently, it is important to implement best practices in this area and improve the overall Service Delivery.

Testing and Validation is a new process introduced in ITIL. Earlier guidance covered some aspects of Release testing within the Release Management process; ITIL now gives more detailed guidance. A major addition is the detail on the various testing stages during Service Transition and descriptions of their testing approaches.

5.1 Purpose, Goal and Objectives


Purpose
The purposes of Service Validation and Testing are:

Plan and implement a process that will provide objective evidence that
the Service Change will fulfill the customers' and stakeholders' business
needs and deliver the appropriate level of Service.

Provide quality assurance of the Service and its components

Identify and address all the issues, errors and Risks in Service Transition
Goal
The goal of Service Validation and Testing is to assure that the Service will
provide value to customers and fulfill their business requirements
Objectives
The objectives of Service Validation and Testing are:

Give confidence that a Release will create a new or changed Service or Service offering that delivers the expected outcomes and value for the customers within the projected costs, capacity and constraints

Check that a Service is "fit for purpose" - it will deliver the required performance with the desired constraints removed

Ensure that a Service is "fit for use" - it meets certain specifications under specified terms and conditions of use

Verify that the customer and stakeholder requirements for the new or changed Service are correctly defined, and rectify any errors or variances early in the Service Lifecycle, because this is considerably cheaper than fixing errors in production.

5.2. Scope of the Process and Value to Business


Scope of Service Validation and Testing

Deliver and maintain Services within a specified Warranty level and Service agreement

Ensure the quality and timely delivery of a Service that meets all the requirements of a successful Service Release

Understand the supplier, customer and partner requirements clearly, to validate and test the Service. Service Validation and Testing happens within a limit specified by the process and organization

Test the new or changed Service in the target Business Unit, Service Unit, deployment group or environment
Usually, a Service Provider is responsible for delivering, operating and/or maintaining customer or Service Assets at specified levels of Warranty under a Service agreement. To do this successfully, you need to coordinate with suppliers, customers and partners when validating and testing a Service.
You use Service Validation and Testing throughout the Service Lifecycle to ensure Service quality, meet customer requirements and deliver the Service successfully. To do this, you need to understand factors such as process interfaces and organizational interfaces, which will define the limits of the Service you have to test.
Testing directly supports the Release and Deployment process by ensuring that
appropriate and necessary tests are performed at various levels in the build
and deploy activities.
Testing is applicable to hardware, software or knowledge-based Services
developed in-house and includes the testing of new or changed Services and
Service components.
Value of Service Validation and Testing
If the new or changed Service meets all the requirements and has had a
successful Release, it has a positive effect on the business of the Service
Provider as well as the customer. On the other hand, if the Service fails to
deliver as expected, it might lead to loss in the business, revenue, reputation
and time.
When you implement Service Validation and Testing, it enables you to ensure
the expected quality and assure the customer that the new or changed service
will deliver the expected results. The assurance level, however, differs based
on the complexity of business requirements and the environment of
organizations.
Service Validation and Testing Policies

Service quality policy
o The organization's management defines quality. Service Strategy defines four quality parameters: level of excellence, value for money, conformance with specifications and meeting or exceeding expectations. The Service Provider has to meet one or more of these parameters to measure the quality of the Service. These parameters help evaluate how good the design of the Service is

Risk policy
o Each organization, customer, business or Service Unit has a different perspective of Risk and its management. Consequently, having a Risk policy in place ensures that the validation and testing team can control and minimize all Risk types, such as availability, security, continuity and capacity Risks

Service Transition policy
o Each organization should define a policy to help the senior management manage all Changes to Services through Service Transition. The Service Provider requires this policy to align Service Transition plans with business needs using a standard framework. This policy also includes principles that will guide you to handle all activities related to Service Changes with respect to Service Transition principles

Release policy
o The testing approach that a Service Provider will follow for a Service depends on the type and frequency of Release. Consequently, it is important that the organization define a Release policy to state the frequency and type of Releases, based on which it can use specific, reusable Test Models or automated testing to improve efficiency

Change Management policy
o The Change Management policy may allow substitution late in the Change schedule. If this is the case, more testing may be required to test the new combination that is going to be released. The additional testing defined in the testing policy will verify and validate the Service again to avoid any gaps with the requirements
Service Transition policy
o The Service Transition policy helps the senior management:

Define, document and approve a policy for Service Transition that all departments of an organization can implement

Manage all Changes to Services through Service Transition

Implement a standard framework for Service Transition, which constitutes reusable processes and systems, to improve efficiency and reduce variance in results

Adopt and introduce existing or new processes and systems in the organization that you can reuse to improve efficiency and effectiveness

Align Service Transition plans with business requirements and manage Service Changes to improve effectiveness and business value

Adopt and maintain relationships with stakeholders to understand their requirements for new or changed Services

Apply a controlled and disciplined process to manage Service Changes and Releases

Develop systems and processes for knowledge transfer, which helps you function correctly

Plan and design Release Packages to deliver Services effectively and at a low cost

Manage and apply corrections in the course of the transition

Manage all the resources across Service Transition to avoid delays

Validate the Service Change early in the Service Lifecycle to ensure that the Service Change meets all the expectations

Evaluate and assure that the new or changed Service functions and performs as expected

Plan and improve the quality of the new or changed Services during Service Transition

Testing Policy
The Testing policy usually reflects the requirements from Service Strategy.

Example of Policy Statements

Test library and reuse policy: because the nature of IT Service Management is repetitive, it benefits from reuse

Integrate testing into the project and Service Lifecycle

Adopt a Risk-based testing approach aimed at reducing Risk to the Service and the customer's business

Engage with customers, stakeholders, users and Service teams throughout the project and Service Lifecycle

Establish test measurement and monitoring systems to improve the efficiency and effectiveness of the Service Validation and Testing and Continual Service Improvement (CSI) processes

Automate the policy using automated testing tools and systems
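A Risk-based testing approach like the one in these policy statements can be sketched in code. The example below is illustrative only: the test names, likelihood and impact figures are assumptions, and the score is a simple likelihood-times-impact exposure.

```python
# Sketch of risk-based test prioritisation (illustrative names and figures,
# not ITIL-defined). Each test condition carries the likelihood and business
# impact of the failure it guards against; highest-risk tests run first.

def risk_score(likelihood: float, impact: float) -> float:
    """Simple risk exposure: probability of failure times business impact."""
    return likelihood * impact

def prioritise(tests: list[dict]) -> list[dict]:
    """Order test conditions so the highest-risk ones run first."""
    return sorted(tests,
                  key=lambda t: risk_score(t["likelihood"], t["impact"]),
                  reverse=True)

tests = [
    {"name": "availability failover", "likelihood": 0.2, "impact": 9},
    {"name": "report layout",         "likelihood": 0.5, "impact": 1},
    {"name": "security access",       "likelihood": 0.3, "impact": 8},
]

ordered = prioritise(tests)
print([t["name"] for t in ordered])
```

With the assumed figures, the security test scores highest (2.4) and runs before the availability and layout tests.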


Test Model
A Test Model includes a test plan and test scripts that define the testing
approach for testing each element, Release test conditions, predicted results
and test cycles. The details in the Test Model enable you to ensure that
whenever you repeat tests on a Service, you do so efficiently and effectively.
As a result, a Test Model should be well structured and enable you to:

Trace elements to the requirements or design stage

Audit tests

Change and maintain test elements

As you progress in the Service Design phase, you can use the updated Service Design and Release plan to define the specific requirements, validation and test conditions, test cases and mechanisms to be tested.
The Test Models, their objectives or target deliverables, and the basis for their test conditions are:

Service contract test model
o Objective: to validate that the customer can use the service to deliver a value proposition
o Test conditions based on: contract requirements; fit for purpose and fit for use criteria

Service requirements test model
o Objective: to validate that the service provider can deliver, or has delivered, the service required and expected by the customer
o Test conditions based on: service requirements and Service Acceptance Criteria

Service level test model
o Objective: to ensure that the service provider can deliver the service level requirements and that they can be met in the production environment, e.g. testing the response and fix time, availability, product delivery times and support services
o Test conditions based on: service level requirements, SLA, OLA

Service test model
o Objective: to ensure that the service provider is capable of delivering, operating and managing the new or changed service using the "as-designed" service model, which includes the resource model, cost model, integrated process model, capacity and performance model, etc.
o Test conditions based on: the Service Model

Operations test model
o Objective: to ensure that the Service Operations teams can operate and support the new or changed service/service component, including the service desk, IT operations, application management and technical management. It includes local IT support staff and business representatives responsible for IT service support and operations. There may be different models at different release/test levels, e.g. technology infrastructure, applications
o Test conditions based on: the Service Model, Service Operations standards, processes and plans

Deployment release test model
o Objective: to verify that the deployment team, tools and procedures can deploy the release package into a target deployment group or environment within the estimated timeframe, and to ensure that the release package contains all the service components required for deployment, e.g. by performing a configuration audit
o Test conditions based on: release and deployment design and plan

Deployment installation test model
o Objective: to test that the deployment team, tools and procedures can install the release package into a target environment within the estimated timeframe
o Test conditions based on: release and deployment design and plan

Deployment verification test model
o Objective: to test that a deployment has completed successfully and that all service assets and configurations are in place as planned and meet their quality criteria
o Test conditions based on: tests and audits of "actual service assets and configurations"
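The Test Models above can be represented as reusable, traceable structures. The sketch below is a minimal illustration (the class and field names are assumptions, not ITIL terms); it supports the trace-to-requirements and audit needs stated earlier.

```python
# Minimal sketch of a reusable Test Model: a named model plus test scripts,
# each traceable back to the requirement it validates. Class and field
# names are illustrative assumptions, not ITIL terminology.

from dataclasses import dataclass, field

@dataclass
class TestScript:
    script_id: str
    requirement_id: str   # traceability to the requirement/design element
    condition: str
    predicted_result: str

@dataclass
class TestModel:
    name: str
    deliverable: str      # objective / target deliverable
    based_on: str         # what the test conditions are derived from
    scripts: list[TestScript] = field(default_factory=list)

    def scripts_for_requirement(self, requirement_id: str) -> list[TestScript]:
        """Audit support: find every script that covers a given requirement."""
        return [s for s in self.scripts if s.requirement_id == requirement_id]

model = TestModel(
    name="Service level test model",
    deliverable="Service level requirements can be met in production",
    based_on="Service level requirements, SLA, OLA",
)
model.scripts.append(TestScript("TS-01", "SLR-7", "response time under load",
                                "95% of requests answered within 2s"))
print(len(model.scripts_for_requirement("SLR-7")))
```

A structure like this makes it cheap to audit which requirements have test coverage and to reuse the model in future Releases.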
The Service V-Model
This model helps you:

Meet customer requirements and ensure quality early in the Service Lifecycle

Provide a framework to organize the Configuration Item levels and their validation and testing activities across the Lifecycle stages

Map the types of tests to each development stage

The Service V-Model gives one example of the way testing levels in Service Transition can be mapped to their corresponding stages of Service requirements and design.
In the Service V-Model:

The left side of the diagram represents the Service requirements specification down to the detailed Service Design

The right side of the diagram represents the validation activities that you perform against the specifications defined on the left

Each stage on the left has a corresponding activity on the right. This means that you must start the Service Validation and acceptance test planning with the definition of Service requirements. The customer who signs off the Service requirements will also sign off the Service Acceptance Criteria (SAC) and test plan.
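The pairing of specification levels with validation levels can be captured as a simple mapping. The level names below follow common drawings of the Service V-Model and should be treated as illustrative labels rather than definitive ITIL wording.

```python
# Sketch of the Service V-Model pairing: each specification level on the
# left-hand side has a corresponding validation level on the right.
# Level names are illustrative, taken from common V-Model drawings.

V_MODEL = {
    "Define customer/business requirements":
        "Validate service packages, offerings and contracts",
    "Define service requirements": "Service acceptance test",
    "Design service solution":     "Service operational readiness test",
    "Design service release":      "Service release package test",
    "Develop service solution":    "Component and assembly test",
}

def validation_for(spec_level: str) -> str:
    """Return the test level that validates a given specification level."""
    return V_MODEL[spec_level]

print(validation_for("Define service requirements"))
```

The point of the structure is that acceptance test planning starts at the same time as requirements definition: the mapping exists before any build work begins.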
Validation and Testing Perspectives
Successful validation and testing focuses on the following question: Will the
Service deliver as required?
The answer to this question is based on the perspective of those who will do
the following activities for the Service

Use

Deliver

Deploy

Manage

Operate
Service Acceptance Testing
This starts with verifying Service requirements, which are, in turn, based on customer requirements. Customers, Service Providers and other stakeholders sign off the Service requirements, Acceptance Criteria and Service acceptance test plan before you start building the Service. Stakeholders can be:

Business customers or customer representatives

Users of the Service

Suppliers

Service Providers or Service Units


It is important to consider the perspective of all the roles involved in the
implementation of the Service Change. The different perspectives are:

Perspective of business users and customers

Perspective of the Service Provider

Perspective of end users within the customer's business

Perspective of Operations and Service Improvement


To start with, let us discuss the importance of the perspectives of business
users and customers.
Perspective of Business Users and Customers
The business users' and customers' perspectives are important to the success of the Service because they:

Allow you to have a defined way of measuring the acceptability of the Service

Help you plan for the resources and level of expertise required to undertake Service Acceptance
Perspective of the Service Provider

Feedback from the Service Provider helps you:

Be involved with the business prior to testing to avoid surprises during Service acceptance

Ensure the quality of the Service. This plays a key role in influencing Business Units about the quality, reliability and usability of the Service even before the Service goes live

Deliver and maintain robust acceptance test facilities per business requirements

Understand how the acceptance test fits into the business service or product development testing activity
Perspective of End Users Within the Customer's Business
You must do a User Acceptance Test after building the service. This ensures
that the customer checks and verifies the Service before accepting it. You must
test the Service in an environment that closely resembles the live operational
environment. The testing helps you determine whether the Service is meeting
the customer's expectations.
You must define the testing details and scope in the user test and UAT plans,
which the stakeholders should agree to at the start of the process.
The end users within the customer's business will:

Test the functional requirements of the Service to ensure that it meets their expectations

Perform tests on Service Management activities, such as the ability to use the Service Desk, respond to diagnostic scripts, Incident Management, Request Fulfilment and Change Request Management
You must ensure that you set the expectations of the customers at the outset of these tests, to avoid any early testing dissatisfaction. You must explain to the customers that this is just a test and that it is possible the Service will not perform as expected in all aspects.
Perspective of Operations and Service Improvement
The operations staff must ensure that they deliver all the IT staff requirements
to the customer before deployment. The feedback from the operations staff
helps you:

Set up technological facilities before delivering the new or changed service

Provide the relevant staff skills, knowledge and resources to support the Service after it goes live

Arrange for supporting processes and resources, such as the Service Desk and second- or third-line support

Consider business and IT continuity

Provide access to documentation and the Service Knowledge Management System (SKMS)
After knowing the perspectives of the various roles involved in the Service
implementation, you need to apply the appropriate testing model and
thoroughly test the Service by considering the various perspectives.

All the components and assets of the Service are tested separately using a
specific Test Model. Also, each component must have an associated acceptance
test in addition to the overall acceptance criteria of the Service.
All the Service Models and associated Service deliverables are supported by
their own reusable Test Model. You can use this Test Model during deployment
and in future testing. These Test Models help you ensure quality at an early
stage instead of waiting for feedback at the end.

5.5 Main Activities, Methods, Techniques and Relationship with RCV
Testing Process
The testing process involves several activities. However, these activities are
not performed in sequence. You can perform them in parallel.
Important activities in the testing process are:

Validation and test management: in this, you:
o Plan, control and report activities through the test stages of Service Transition
o Manage issues, mitigate Risks and implement Changes identified from the testing activities. These Changes can delay the implementation and create dependencies that you must manage

Plan and design test: you plan and design in the early Service Lifecycle phase. To plan and design, you need to:
o Plan resources such as hardware, networking and staff
o Identify business and customer requirements, such as raw material
o Plan for supporting Services and their support features, such as access and security
o Create schedules and get approval for them
o Define the timelines and place for Service Delivery
o Define financial requirements

Verify the test plan and test design: you need to verify the test plan and test design to ensure that:
o The Test Model delivers appropriate test coverage for the risk profile of the Service
o The Test Model covers the key integration aspects and interfaces
o The test scripts are accurate and complete

Prepare the test environment: you must define the design plan of the initial test environment. You can prepare the test environment by using the:
o Services of the build and test environment resource
o Release and Deployment processes

Perform tests: in this, you need to:
o Use manual or automated techniques and procedures to perform the tests
o Record the results of the tests. Even if the test fails, you must document the result along with the reason for the failure of the test
o Follow the test plan and scripts for testing, whenever possible
o Resolve and document the Incident or issue when part of the test fails. The person who is testing that part of the Service should resolve the issue and test it again
Evaluate exit criteria and report: when you evaluate the exit criteria and exit report, you need to:
o Compare the predicted results to the actual results
o Interpret the results in terms of Risk to the business, Risk to the Service Provider, or change in expected cost
o Collate the test metrics and summarize the results of the tests
o Evaluate the exit criteria based on the performance of the Service and client feedback. The Service must meet the customer's technology and quality requirements
o Ensure that Configuration Baselines have been recorded in the Configuration Management System (CMS)
Test clean-up and closure: in this, you need to:
o Clean or initialize the test environments
o Identify improvements that can be input to design, build, decision parameters or future testing policies and procedures
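The "evaluate exit criteria and report" activity can be sketched as a comparison of predicted and actual results with a documented reason for every failure. The field names below are illustrative assumptions, not ITIL-defined records.

```python
# Sketch of the "evaluate exit criteria and report" step: compare
# predicted with actual results and collate pass/fail metrics.
# Record field names are illustrative assumptions.

def evaluate(results: list[dict]) -> dict:
    """Compare predicted vs actual result per test and collate metrics."""
    passed = [r for r in results if r["actual"] == r["predicted"]]
    failed = [r for r in results if r["actual"] != r["predicted"]]
    return {
        "passed": len(passed),
        "failed": len(failed),
        # every failure must be documented with a reason, as the text requires
        "failure_reasons": {r["test"]: r.get("reason", "not documented")
                            for r in failed},
    }

report = evaluate([
    {"test": "login",   "predicted": "ok", "actual": "ok"},
    {"test": "restore", "predicted": "ok", "actual": "timeout",
     "reason": "backup media offline"},
])
print(report["passed"], report["failed"])
```

The collated summary is the kind of data you would then interpret in terms of Risk to the business and the Service Provider.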

Validation and Test Management
Includes the following activities:

Plan the test resources

Prioritize and schedule what is to be tested and when

Manage Incidents, Problems, errors, nonconformance, Risks and issues

Implement Changes to reduce errors going into production

Record Configuration Baselines

Collect, analyze, report and manage test metrics


Test Metrics
Test metrics are used to:

Measure the test process and manage and control the testing activities

Determine the progress of testing, the earned value and the outstanding testing. The Test Manager uses this data to estimate the duration of the testing

Help the management prioritize, schedule and manage Risk

Provide information to estimate and schedule future Releases
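A minimal sketch of the progress figures a Test Manager might derive from such metrics, assuming simple planned/executed/passed counts (the field names are illustrative assumptions):

```python
# Sketch of test-progress figures derived from test metrics:
# planned vs executed vs passed tests, and the outstanding testing.
# Field names are illustrative, not ITIL-defined.

def progress(planned: int, executed: int, passed: int) -> dict:
    """Summarise test progress for estimating remaining test duration."""
    return {
        "executed_pct": round(100 * executed / planned, 1),
        "pass_rate_pct": round(100 * passed / executed, 1) if executed else 0.0,
        "outstanding": planned - executed,
    }

print(progress(planned=200, executed=150, passed=135))
```

Figures like these feed the Test Manager's estimate of how long the remaining testing will take.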


Outputs of the Perform Tests Activity

Reports with details of testing along with cross-references to the Test Model, test cycles and conditions

Data for Problems, errors, issues, nonconformance and Risks that you have to resolve

Data for resolved Problems, Known Errors and related Changes

Sign-off document
Guidelines to Prepare the Test Environment

Use Service Management best practices to actively maintain and protect test environments. For significant Changes, consider whether the test data needs to be updated.

5.6 Triggers, Inputs, Outputs and Interfaces with Other Processes
Triggers
Triggers for the testing phase are a scheduled activity on a:

Release plan

Test plan

Quality assurance plan


Inputs

The Service Package
o Consists of a core Service Package and reusable components, in other words, supporting Services. It also defines Utilities and Warranties for the delivered Services. You can use the Service Package to map the requirements to a Service Level Package (SLP).

An SLP
o Plays a very important role in test planning and design because it provides Utility and Warranty based on the customer's requirements, assets and Patterns of Business Activity (PBAs)

Service Provider interface definitions
o Define the process interfaces and organizational interfaces that you can use to test the Service efficiently

The Service Design Package (SDP)
o Defines customer requirements for the Service along with the Service Model and the Service Operations plan. It includes:

Operation models, support resources, escalation procedures and critical situation-handling procedures

Capacity/resource model and plans, along with performance and availability aspects

Financial/economic/cost models with Total Cost of Ownership (TCO) and Total Cost of Utilization (TCU)

Service Management model, for example, an integrated process model as in ISO/IEC 20000

Design and interface specifications

Release and Deployment plans
o Define the order in which you need to deploy, build and install Release Units

Acceptance Criteria
o Consist of specific requirements for testing at all levels

Request for Change (RFC)
o Is a request for a new Service or a Change to an existing Service
Output

The output from the process is the test report that you need to forward to the
Evaluation team. This report includes all information relevant to the testing
phase, such as details of the testing environment and test results.
Based on the output, you can evaluate the Service only after a specific duration
of the Service going live. At that time, you can compare the predicted
performance with the actual performance of the Service. If the Service is
functioning as expected, the evaluation is successful. You then send a report to
Change Management along with a suggestion to remove Early Life Support
(ELS) from the Service and make it part of normal operations.
Interface with the Service Lifecycle
Service Validation and Testing supports all steps of the Release and
Deployment phase in Service Transition. You align the testing strategy to work
with all other Lifecycle stages to improve the quality of the Service. Some of
these interfaces are:

Service Design
o Ensures that you can test the designs. For example, you can test hardware, software, Service elements that you reuse, third-party access rights or delivered Service elements

Continual Service Improvement (CSI)
o Ensures that you work continuously to improve the Service and test plans

Service Operation
o Uses maintenance tests to ensure that Services are effective. You will need to maintain these tests to handle innovation and environmental changes

Service Strategy
o Ensures that you test the services within the specified cost and time, using limited resources
Service Validation and Testing Interfaces with Other Processes
The relationships from Service Validation and Testing with other processes
within the RCV context are:

Service Package

Service Design Package

Service Level Package

Release and Deployment Plans

RFCs - Change Management

Configuration Baselines - Service Asset and Configuration Management (SACM)

Updated knowledge in the SKMS

Test Incidents and Problems - Incident and Problem Management

Improvement suggestions - CSI

5.7 Test Data and Test Environments


Testing Good Practices

IT Service Management tasks are repetitive and benefit from the reuse of
predefined models, processes and formats.
Some testing good practices are:

Tests library
o
A good, practical approach is to create and maintain a library of
relevant tests and update it whenever you make any Changes. The
test management group can take responsibility for organizing and
maintaining test scripts, test cases and test data in an organization.

Automated testing tools


o
Another timesaving method is to use automated testing tools,
such as Computer-Aided Software Testing (CAST). These tools enable
you to test any type of Service effectively in complex software
environments. Similar tools are available for hardware testing
Test Data
Test data is very important for running tests. If the test data is inaccurate, it will
not give the required results, even if you maintain a well-designed test
environment. The test data is also important in hardware and documentation
tests.
Test Environment
Apart from test data, the test environment also plays a crucial role. This is
because test results might be inaccurate if the testing environment is not
constant.
You must maintain and protect the test environment appropriately. If you are
making a Change to a Service, you must also check if you need to make any
Changes to the test data and environment. You must update the relevant
records for the test data and test environment in the CMS.
If necessitated by the Change:

Update the test data

Create a new set of data or a new test environment

Check for the redundancy of the test data or environment. This helps
you test the Service within another, existing test environment and using
another set of test data

Consider the fact that the existing test data and test environment might provide only low-level testing of the new or changed Service
When maintaining the test data, you should:

Separate the test data from any live data to ensure that the test data is not mistaken for live data and vice versa

Follow data protection regulations. For example, when you use live data
to create a test database, you should protect the data and ensure that it is
not transferable

Create a back-up copy of the test data. You must restore the database
for future testing. You can also do this for hardware tests

Use an established test database to ensure that you have a safe and
realistic training environment for a Service
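Preparing test data from live data while keeping the two separated and protected might look like the following sketch. The masked field names and the tagging convention are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch of preparing test data from live data under
# data-protection rules: copy the records, mask personal fields, and tag
# the copy so it can never be mistaken for live data.

import copy

MASKED_FIELDS = ("name", "email")  # assumption: which fields count as personal

def make_test_copy(live_records: list[dict]) -> list[dict]:
    """Return anonymised, clearly-tagged test records; live data untouched."""
    test_records = []
    for i, rec in enumerate(copy.deepcopy(live_records)):
        for f in MASKED_FIELDS:
            if f in rec:
                rec[f] = f"TEST-{f.upper()}-{i}"   # anonymise personal data
        rec["is_test_data"] = True                 # keep test data marked
        test_records.append(rec)
    return test_records

live = [{"name": "A. User", "email": "a@example.com", "balance": 10}]
print(make_test_copy(live)[0]["name"])
```

A back-up of the generated set can then be kept so the test database is restorable for future testing runs.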

Test Report
A test report includes:

The base configuration structure of the testing environment

Details of the test along with requirements and constraints

Test results

The test analysis, for example, a comparison of the actual performance with the expected performance and a report of the Risks identified during the testing phase

The updated data and other information that is to be added to the SKMS, for example, errors and Workarounds, testing techniques and analysis methods

Test Incidents, Problems and errors records

Data or suggestions to improve the testing process or the process of documenting the outputs of Service Design

Information on customers, suppliers, partners and stakeholders

5.8 Process Measurement


KPIs
Let us understand how KPIs help you analyze validation and testing reports to
identify trends.
There are primary and secondary KPIs applicable to Service Validation and Testing:

The primary KPIs include the indicators that you can use to judge the effectiveness of testing in delivering Services that affect the business of the organization

The secondary KPIs include KPIs to measure the effectiveness and efficiency of the testing process
Primary KPIs

Validate the Service early to correct it in case of any discrepancies

Reduce Incidents and errors after deployment, which are characteristic of a newly transitioned service

Interact more efficiently with customers and Business Units

Reduce delays in testing to avoid affecting the business adversely

Understand the new or changed service

Allocate roles and responsibilities to customers, users and Service Providers when implementing a Service Change

Sign off on cost, effort and user or customer requirements, for example, at the time of user acceptance testing

In addition to the above primary KPIs, there are other primary KPIs to measure the economic effectiveness of the testing process. These KPIs include:

Test planning, preparation and execution rates

Incident, Problem and Event rates

Issue and risk rate

Problem resolution rate


Resolution effectiveness rate
Stage containment - analysis by Service Lifecycle stage
Repair effort percentage
Problems and changes by Service asset or Configuration Item type
Late Changes by Service Lifecycle stage
Inspection effectiveness percentage
Residual risk percentage
Inspection and testing return on investment (ROI)
Cost of unplanned and unbudgeted overtime to the business
Cost of fixing errors in live operation compared to fixing errors early in
the Lifecycle
Operational cost improvements associated with reducing errors in new or
Changed Services
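Two of the economic KPIs above, repair effort percentage and inspection and testing ROI, are simple ratios. The sketch below shows how they might be computed; all figures, function names and the ROI formula are illustrative assumptions, not prescribed by ITIL:

```python
# Illustrative calculation of two economic testing KPIs.
# All figures are hypothetical; real values come from test and cost records.

def repair_effort_percentage(repair_hours, total_test_hours):
    """Share of total testing effort spent repairing defects."""
    return 100.0 * repair_hours / total_test_hours

def inspection_roi(cost_avoided_in_live, cost_of_testing):
    """Assumed ROI formula: net savings from catching errors early,
    relative to what inspection and testing cost."""
    return (cost_avoided_in_live - cost_of_testing) / cost_of_testing

# Example: 120 of 800 testing hours went into fixing defects, and testing
# that cost 50,000 avoided an estimated 180,000 in live-fix costs.
print(repair_effort_percentage(120, 800))   # 15.0
print(inspection_roi(180_000, 50_000))      # 2.6
```

An ROI above zero indicates that the estimated avoided live-operation costs exceeded the cost of testing itself.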

Secondary KPIs

Control effort and cost to set up a testing environment

Minimize the effort required to find defects, that is, the effort
expended per defect found

Reduction of repeat errors. Feedback from testing ensures that corrective
action within design and transition (through CSI) prevents the repetition
of mistakes in subsequent Releases or Services

Reduce error/defect rate in later testing stages or production

Re-use of testing data

Percentage of Incidents linked to errors detected during testing and
release into live

Percentage of errors at each Lifecycle stage

Number and percentage of errors that could have been discovered in
testing

Testing Incidents found as a percentage of Incidents occurring in live
operations

Percentage of faults found in earlier assessment stages, since remedial
costs always increase in the later stages of transition

Number of Known Errors documented in earlier testing phases
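Several of the secondary KPIs above compare defects found before go-live with those that escape into live operation. A minimal sketch of that comparison, using entirely hypothetical defect counts per Lifecycle stage:

```python
# Hypothetical defect counts per Lifecycle stage, illustrating the
# "percentage of faults found in earlier assessment stages" KPI.
defects_by_stage = {
    "design_review": 14,
    "build_test": 32,
    "acceptance_test": 9,
    "live_operation": 5,
}

total = sum(defects_by_stage.values())
caught_before_live = total - defects_by_stage["live_operation"]
containment = 100.0 * caught_before_live / total
print(f"Defects contained before live: {containment:.1f}%")  # 91.7%
```

A rising containment percentage over successive Releases would suggest the testing process is catching faults earlier, where they are cheaper to fix.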


Testing measures the ability of a Service to perform as required in a
simulated or actual environment; its focus is on measurement. You must ensure
that you separate the measures related to the testing process from the errors
introduced into Services and systems. The main aim of testing is to deliver
Services that are beneficial and to reduce the chances of failure. You must be
thorough in identifying errors during the testing phase, because rectifying
errors after deployment involves more effort and cost.
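The cost argument in this paragraph can be made concrete with a small comparison. The per-defect cost and the live-fix multiplier below are assumptions chosen only for illustration:

```python
# Illustrative comparison of fixing the same defects at different stages.
# Both figures are hypothetical; the point is that the cost of a fix
# rises steeply the later a defect is found.
cost_to_fix_in_test = 200    # assumed cost per defect found in testing
live_fix_multiplier = 10     # assumed factor for a defect found in live
defects = 25

early_cost = defects * cost_to_fix_in_test
late_cost = defects * cost_to_fix_in_test * live_fix_multiplier
print(f"Fixed in testing: {early_cost}, fixed in live: {late_cost}")
```

Under these assumed figures, letting the same 25 defects reach live operation would cost ten times as much to remedy.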

5.9 Roles and Responsibilities


There are various roles involved in executing Service Validation and Testing.
However, you must always create separate roles for all the responsibilities to
ensure efficient and independent testing and validation.
Some of the roles involved in the process are:

Service Test Manager

o Is responsible for handling the test support and the test team(s)
functions for a specific Service Transition. The role reports to the
Service Transition Manager. The roles of the Service Test Manager and
the Release and Deployment Manager should be undertaken by separate
people, and never combined, to ensure independent testing and test
verification

Test support

o Provides independent testing

Build and test environment management

o Ensures that relevant people have the required environment, test
data and versioned software, when required. This role coordinates the
use of testing resources to ensure they are used maximally

Other Responsibilities of the Service Test Manager

Defines the test strategy

Designs and plans testing conditions, test scripts and test data sets to
ensure appropriate and adequate coverage and control

Allocates and oversees test resources, ensuring that they adhere to the
test policies

Provides management reporting on test progress, test outcomes,
success rates, issues and risks

Conducts tests as defined in the test plans and design

Records, analyses, diagnoses, reports and manages test events,
Incidents, Problems and retests, dependent on agreed criteria

Manages test environment requirements

Verifies tests conducted by Release and deployment teams

Administers test assets and components


Other Factors of Test Support

The Change Manager is responsible for ensuring that tests are
developed appropriately for approved Changes and that an agreed testing
strategy and policy is applied to all Changes

Test analysts carry out the tests as set out in the testing plans and/or
Service Package

The developer/supplier is responsible for establishing the root cause of
test failures - the fault in the Service component that made the test fail.
For complex situations, this may require collaboration between testing
staff and development/build/supplier personnel. It should always be
accepted as a possibility that faults can lie within the testing design as
well as within design/development

Service Design will design the test, as an element of the overall Service
Design. For many services, standard tests will exist, perhaps contained
within the transition model chosen as already accepted as appropriate for
the type of new or changed service under consideration

Customers and users perform customer and user acceptance testing.
Such user resources should be able to cover the full range of user profiles
and requirements, and adequately sign off the conformance of a new or
changed service. Users will already have played a major role in helping to
design the acceptance testing approaches during the design phase

Other Responsibilities of Build and Test Environment Management

Ensures that the Service infrastructure and application are built as per
the design specification

Plans acquisition, builds, implements and maintains the Information and
Communications Technology (ICT) infrastructure

Ensures that the build delivery components are from controlled sources

Develops an integrated application software and infrastructure build

Delivers appropriate build, operations and support documentation for the
build and test environments prior to handover to Service Operations

Builds, delivers and maintains required testing environments
