
INFRASTRUCTURE EXECUTIVE COUNCIL

ITIL V3 SERVICE TRANSITION: SERVICE VALIDATION AND TESTING

DEFINITION
A service provider's method for ensuring that a new or changed service meets customer requirements and verifying that it is ready for production support by IT operations.

VALUE TO BUSINESS:
An effective Service Validation and Testing process ensures:
1. Reduction in business disruption from incidents introduced via releases
2. Improved resource allocation through alignment of testing with areas of risk and business value

HIGH-LEVEL PROCESS FLOW:

Plan → Test → Evaluate

Plan
Stakeholders: Test Manager, Business Stakeholders, Demand Manager, Service Level Manager, Infrastructure Manager, Application Manager
Focus On: Elicit customer expectations; allocate testing resources; schedule delivery and acceptance stages; create a testing model that measures service efficiency and effectiveness against customer-relevant KPIs

Test
Stakeholders: Test Manager
Focus On: Prepare and baseline the test environment; perform all automated or manual tests; document all test results

Evaluate
Stakeholders: Test Manager, Service Level Manager, Demand Manager, Release Manager
Focus On: Compare test results to projections; reset the test environment; identify areas of improvement (if any)

KEY CONCEPTS:

Test: A validation that all release components, tools, and processes required for deployment, migration, and back-out are acceptable, and that only components meeting stringent quality criteria are deployed into the live production environment.
Test Environment: A controlled environment used to test release components.
Test Strategy: A master plan for quality assurance outlining all testing processes and resources within a broader project plan.
Test Model: A more detailed test plan and script indicating how release components will be tested.
Service Design Package: Specifications for how customer requirements for a business service will be fulfilled through a combination of supporting services.
Validation: Confirmation that a new or changed service is complete, accurate, reliable, and matches its design specifications.

KEY PERFORMANCE INDICATORS (KPI):

Acceptance Test Failure Rate: Percentage of all release components that fail the acceptance test.
Total Identified Errors Per Release: Total number of errors identified during testing.
New Incidents Per Release: Number of incidents caused by the new release of a service.
Error Resolution Time: Time taken to fix errors identified during testing and resubmit the release.
(A worked sketch of these calculations follows the Common Pitfalls list below.)

COMMON PITFALLS:

The test process does not use the results of a business risk assessment (covering performance, lost business, security, etc.), which results in the development of services that do not meet customer expectations.
The change management process does not have a synergistic relationship with the service validation and testing process, leading to changes in services that do not meet business requirements. This results in redundant and/or less effective services.
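As a worked sketch of the KPI calculations above (the release figures and field names below are hypothetical illustrations, not prescribed by ITIL):

```python
from datetime import datetime

# Hypothetical figures for one release; real data would come from
# test management and incident management records.
release = {
    "components_tested": 120,    # release components put through acceptance testing
    "components_failed": 6,      # components that failed the acceptance test
    "errors_identified": 14,     # total errors identified during testing
    "new_incidents": 3,          # incidents caused by the new release
    "errors_found_at": datetime(2009, 3, 2, 9, 0),   # errors logged during testing
    "resubmitted_at": datetime(2009, 3, 4, 17, 0),   # fixed release resubmitted
}

# Acceptance Test Failure Rate: percentage of release components failing the test.
failure_rate = 100.0 * release["components_failed"] / release["components_tested"]

# Error Resolution Time: time from identifying the errors to resubmitting the release.
resolution_time = release["resubmitted_at"] - release["errors_found_at"]

print(f"Acceptance test failure rate: {failure_rate:.1f}%")    # 5.0%
print(f"Total identified errors per release: {release['errors_identified']}")
print(f"New incidents per release: {release['new_incidents']}")
print(f"Error resolution time: {resolution_time}")             # 2 days, 8:00:00
```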


IEC RESEARCH:
The Infrastructure Executive Council has identified the following core elements of effective validation and testing:
1. Engage the infrastructure team at the pre-coding stage
2. Measure projects for alignment with target architecture and performance across all key phases
3. Offer prescriptive and actionable infrastructure feedback to applications as a service during design and coding
4. Winnow the number of stakeholders who need to be deeply involved in production acceptance by asking project stakeholders to complete a checklist identifying the areas of greatest risk

CASE IN POINT: THE PROGRESSIVE GROUP - PULL FORWARD LIFECYCLE PERFORMANCE

Although SLAs are captured and documented early in a development cycle, they are generally set aside until the beginning of the testing phase. Moreover, project timelines often threaten to compress the testing cycle, resulting in user-facing defects and disruptions. To resolve these issues, Progressive created a strategy to test alignment with service level requirements across earlier stages of the development cycle. This strategy comprises two distinct components:

1. An Early-Action Checklist: Progressive captures service level requirements early and has built out an entire process devoted to optimizing performance, provided to developers as a set of guidelines for making decisions about the nonfunctional requirements needed to satisfy performance targets. Developers can then follow the checklist and perform, during design, activities that typically happen much later, such as adding logging, hooks to infrastructure instrumentation, or capacity scripts.

Performance Optimization Guidelines for Developers


Developer Activities Checklist (organized around Availability, Reliability as proactive measures, and Supportability as improving reactive measures):

Error/Exception Handling
- All exception-handling code conforms to the Exception Handling standards described in the PolicyPro Development Standards Document.
- Sufficient failure test cases have been created and executed to exercise every catch block.

Fault Tolerance
- Application failover logic, retry timers, retry counters, and retry logic have been agreed upon and implemented for external and internal dependencies (a sketch of this retry pattern follows the second component below).

Service Level Agreement requirement types referenced by the checklist:

Performance Requirements
- Response Time Requirements: normally expressed as a range (example: 2-3 seconds)
- Throughput Requirements: normally expressed as transactions per second (example: 25 transactions per second)
- Completion Times

Availability Requirements
- Service Hours: a description of the hours the service is required to be available (example: 08:00-18:00, Monday to Friday)
- Availability: target availability levels within the agreed service hours, normally expressed as a percentage (e.g., 99.5%; at 99.5% over 08:00-18:00 weekday service hours, roughly three minutes of downtime per day are permitted)

Reliability Requirements
- Maximum number of service breaks that can be tolerated per period (e.g., four per month), or expressed as Mean-Time-Between-Failures (MTBF)

2. Performance Test Foundation: Using guidance from service level agreements, the company identifies specific activities developers should perform. For example, if a developer seeks to meet an appropriate throughput for a business transaction, as defined by its duration, frequency, and execution time, Progressive's recommendation is to execute representative transactions and then enter the throughput information into its transaction monitoring system.
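To make the throughput measurement step concrete, here is a minimal sketch of executing representative transactions and computing transactions per second; the harness below is an assumption for illustration, and the interface to Progressive's transaction monitoring system is not described in the source, so the result is simply printed:

```python
import time

def measure_throughput(transaction, repetitions=100):
    """Execute a representative business transaction repeatedly and
    return the observed throughput in transactions per second."""
    start = time.perf_counter()
    for _ in range(repetitions):
        transaction()
    elapsed = time.perf_counter() - start
    return repetitions / elapsed

# Stand-in transaction (sleeps 10 ms); a real run would invoke the
# actual business transaction and compare the measured rate against
# the SLA target, e.g., 25 transactions per second.
tps = measure_throughput(lambda: time.sleep(0.01))
print(f"Measured throughput: {tps:.1f} transactions per second")
```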
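The Fault Tolerance checklist item above bundles retry timers, retry counters, and retry logic for dependency calls; the following is a minimal sketch of that pattern (function names, limits, and the back-off policy are illustrative assumptions, not Progressive's documented implementation):

```python
import time

def call_with_retry(operation, max_attempts=3, retry_timer_seconds=2.0):
    """Invoke a call to an external or internal dependency with a retry
    counter and a retry timer, raising once attempts are exhausted so
    failover logic can take over."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retries exhausted; surface the failure to failover logic
            # Retry timer: wait longer after each failed attempt.
            time.sleep(retry_timer_seconds * attempt)
```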

Harmonizing application and infrastructure designs greatly improves Progressive's test acceptance rate while also reducing the cost of supporting a new application across its early lifecycle, when conventional applications may go through intensive performance tuning.
Access the full story online: The Progressive Group - Pull Forward Lifecycle Performance

OTHER IEC RESOURCES


Capital One: Collaborative Performance Optimization
Schlumberger's Network Application Qualification Process
Texas Instruments: Process Control-Focused ERP Performance Management
