
1. Data Center Efficiency Measurements
http://www.google.cz/corporate/datacenters/measuring.html

2. Virtualization and Automation Drive Dynamic Data Centers
http://www.datacenterdynamics.com/Media/MediaManager/Virtualisation%20and%20Automation%20Drive%20Dynamics%20Data%20Centers.pdf

3. Data Center Measurement, Metrics & Capacity Planning
http://www.byteandswitch.com/document.asp?doc_id=174585

Good decision making requires timely and insightful information. Key to making informed decisions and implementing strategies is insight into IT resource usage and the services being delivered. Knowing what IT resources exist, how they are being used, and the level of service being delivered is essential for identifying areas of improvement that boost productivity, raise efficiency, and reduce costs. It is therefore important to identify metrics and techniques that enable timely, informed decisions about boosting efficiency in the manner best suited to IT service requirements.

A common focus, particularly for environments looking to use virtualization for server consolidation, is server utilization. Server utilization provides only a partial picture, however; it is also important to look at performance and availability for additional insight into how a server is running. For example, a server may be operated at a deliberately low utilization rate to meet application service-level response-time or performance requirements. For networks, including switches, routers, bridges, gateways, and other specialized appliances, several metrics may be considered, including usage or utilization; performance in terms of frames, packets, IOPS, or bandwidth per second; and latency, errors, or queue depths indicating network congestion or bottlenecks.
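The network metrics above are typically derived from interface counters sampled over an interval. As a minimal sketch (all figures are hypothetical sample values, not measurements from any particular device):

```python
# Sketch: deriving link utilization and packet-rate metrics from
# interface counters sampled over an interval. Numbers are hypothetical.

def link_utilization_pct(bytes_transferred: int, interval_s: float,
                         link_speed_bps: float) -> float:
    """Percent of link bandwidth consumed over a sampling interval."""
    bits = bytes_transferred * 8
    return 100.0 * bits / (link_speed_bps * interval_s)

def packets_per_second(packets: int, interval_s: float) -> float:
    """Packet rate over the same interval."""
    return packets / interval_s

# Example: 450 MB and 3.6 million packets moved in 60 s over a 1 Gb/s link.
util = link_utilization_pct(450_000_000, 60, 1_000_000_000)
pps = packets_per_second(3_600_000, 60)
print(f"utilization: {util:.1f}%")  # 6.0%
print(f"packets/s: {pps:,.0f}")     # 60,000
```

Sustained high utilization together with rising queue depths or error counts is the usual signal of congestion; either number alone can mislead.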

From a storage standpoint, metrics should reflect performance in terms of IOPS, bandwidth, and latency for various types of workloads. Availability metrics reflect how much time, or what percentage of time, the storage is available or ready for use. Capacity metrics reflect how much, or what percentage, of a storage system is being used. Energy metrics can be combined with performance, availability, and capacity metrics to determine energy efficiency. Storage capacity metrics should also reflect the system's various native capacities in terms of raw, unconfigured, and configured capacity. Storage granularity can be assessed at the level of the total usable storage system (block, file, object-based, or content-addressable storage, CAS) on disk or tape, at the media enclosure level (for example, a disk shelf), or on an individual device (spindle) basis. Another dimension is the footprint of the storage solution, such as floor space and rack space, which may include height, weight, width, depth, or number of floor tiles.

Measuring IT resources across different types of resources, including multiple tiers, categories, types, functions, and costs (price bands) of servers, storage, and networking technologies, is not a trivial task. However, IT resource metrics can be built up over time to cover performance, availability, capacity, and energy for a given level of work or service delivered under different conditions.

It is important to avoid trying to do too much with a single or limited metric that
compares too many different facets of resource usage. For example, simply comparing all
IT equipment from an inactive, idle perspective does not reflect productivity and energy
efficiency for doing useful work. Likewise, not considering low-power modes ignores
energy-saving opportunities during low-activity periods. Focusing only on storage or
server utilization or capacity per given footprint does not tell how much useful work can
be done in that footprint per unit of energy at a given cost and service delivery level.
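The single-metric pitfall can be made concrete with two hypothetical systems: one looks better on capacity per watt, the other on useful work per watt, so which is "more efficient" depends on the workload and service level. A sketch under those assumed figures:

```python
# Sketch: why a single metric misleads. Two hypothetical storage systems
# with assumed (not measured) capacity, power, and performance figures.
systems = {
    "A": {"tb": 50,  "watts": 2_000, "iops": 40_000},  # performance-oriented
    "B": {"tb": 120, "watts": 3_000, "iops": 24_000},  # capacity-oriented
}

for name, s in systems.items():
    tb_per_watt = s["tb"] / s["watts"]
    iops_per_watt = s["iops"] / s["watts"]
    print(f"{name}: TB/W={tb_per_watt:.3f}  IOPS/W={iops_per_watt:.1f}")
# A: TB/W=0.025  IOPS/W=20.0 -> better for active workloads
# B: TB/W=0.040  IOPS/W=8.0  -> better for idle or inactive capacity
```

Neither ratio alone answers the question in the text: how much useful work the footprint delivers per unit of energy at a given cost and service level.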

Virtual data centers require physical resources to function efficiently and in a green or
environmentally friendly manner. Thus it is vital to understand the value of resource
performance, availability, capacity, and energy usage to deliver various IT services.
Understanding the relationship between different resources and how they are used is
important to gauge improvement and productivity as well as data center efficiency. For
example, while the cost per raw terabyte may seem relatively inexpensive, the cost for
I/O response time performance needs to be considered for active data.
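The cost trade-off in the example above can be expressed as two complementary ratios, cost per terabyte and cost per unit of performance. A sketch with hypothetical prices and performance numbers:

```python
# Sketch: cost per raw terabyte vs. cost per IOP.
# Prices and performance figures are hypothetical.

def cost_per_tb(price: float, raw_tb: float) -> float:
    return price / raw_tb

def cost_per_iops(price: float, iops: float) -> float:
    return price / iops

# High-capacity system: cheap per TB, expensive per IOP.
print(cost_per_tb(100_000, 200), cost_per_iops(100_000, 10_000))   # 500.0 10.0
# High-performance system: the reverse.
print(cost_per_tb(100_000, 20), cost_per_iops(100_000, 100_000))   # 5000.0 1.0
```

For active data, the second ratio (and the response time behind it) often matters more than the headline cost per terabyte.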

Having enough resources to support business and application needs is essential to a resilient storage network. Without adequate storage and storage networking resources, availability and performance can be negatively impacted. Poor metrics and information can lead to poor decisions and management. Establish availability, performance, response time, and other objectives to gauge and measure performance of the end-to-end storage and storage networking infrastructure. Be practical, as it can be easy to get caught up in the details and lose sight of the bigger picture and objectives.

4. HP Capacity Advisor Version 4.0 User's Guide
http://www.docs.hp.com/en/T8670-90001/T8670-90001.pdf

Introduction
HP Capacity Advisor is a utility that allows you to monitor and evaluate system and
workload utilization of CPU cores, memory, network and disk I/O, and power. With this
information, you can load your systems to make best use of the available resources.
You can monitor and evaluate one or more systems that are connected in a cluster
configuration or to a network. A single system can include multi-core or hyper-threaded
processors. Capacity Advisor helps you evaluate system consolidations, load balancing,
changing system attributes, and varying workloads to decide how to move workloads to
improve utilization. The quantitative results from Capacity Advisor can aid the planner in
estimating future system workloads and in planning for changes to system configurations.
With Capacity Advisor, you can perform the following tasks within an easy-to-navigate,
clearly notated graphical user interface:
• Collect utilization data on CPU cores, memory, network and disk I/O, and power
• View historical resource utilization for whole-OS and sub-OS workloads on HP-UX and whole-OS workload resource utilization on Microsoft® Windows systems
• View historical workload resource utilization and aggregate utilization across the partitioning continuum (nPars, HP-UX vPars, HP-UX Virtual Machines)
• Generate resource utilization reports
• Plan workload or system changes, and assess impact on resource utilization
• Assess resource utilization impact for proposed changes in workload location or size
• Evaluate trends for forecasting future resource needs
Capacity Advisor can be used to simulate changes in system configuration, such as the
following:
• Consolidating several systems into one system
• Re-sizing a system for an upgrade
• Re-sizing the demands on a system to reflect a forecast
• Replacing older, small to mid-sized systems with virtual machines
Capacity Advisor can use data collected over time to show the results of these
configuration changes in many ways. Graphical views are available so you can see what
the effects of the changes are over time. Tables are available that give the percentage of
time and the degree to which the system is busy; this information is valuable in
comparing resource utilization and quality of service before and after a change. Other
tables show how many minutes per month the system is unacceptably busy, a measure
valuable both for quality of service and for estimating TiCAP (Temporary Instant Capacity) bills. Because Capacity
Advisor works from data traces collected over time, it is much more accurate than using
only peak data or average data in understanding your systems and the workloads they
support.
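The closing point, that traces beat single peak or average numbers, is easy to demonstrate. The sketch below is in the spirit of the description above, not Capacity Advisor's actual algorithm, and the utilization trace is invented:

```python
# Sketch: why a trace is more informative than peak or average alone.
# A hypothetical day of 15-minute CPU utilization samples (96 samples):
# two hours of saturation inside an otherwise quiet day.
trace = [20] * 40 + [95] * 8 + [20] * 48

avg = sum(trace) / len(trace)                      # ~26%
peak = max(trace)                                  # 95%
busy_minutes = sum(15 for u in trace if u > 90)    # minutes above 90% busy

print(f"average={avg:.1f}%  peak={peak}%  minutes>90%={busy_minutes}")
# The average (~26%) hides that the system spent 120 minutes saturated;
# the peak (95%) alone does not say how often or for how long.
```

Sizing to the average would under-provision the busy window; sizing to the peak would over-provision the other 22 hours. The trace shows both.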

5. Understanding Relative Server Capacity
http://www.ibmsystemsmag.com/mainframe/marchapril03/technicalcorner/10041p2.aspx

6. http://www.mathematik.uni-trier.de/tf/98_16.ps