
Storage I/O Control Technical Overview and Considerations for Deployment
VMware® vSphere™ 4.1
TECHNICAL WHITE PAPER

Executive Summary
Storage I/O Control (SIOC) provides storage I/O performance isolation for virtual machines, thus enabling VMware® vSphere™
(“vSphere”) administrators to comfortably run important workloads in a highly consolidated virtualized storage environment. It
protects all virtual machines from undue negative performance impact due to misbehaving I/O-heavy virtual machines, often known
as the “noisy neighbor” problem. Furthermore, the service level of critical virtual machines can be protected by SIOC by giving them
preferential I/O resource allocation during periods of congestion. SIOC achieves these benefits by extending the constructs of shares
and limits, used extensively for CPU and memory, to manage the allocation of storage I/O resources. SIOC improves upon the previous
host-level I/O scheduler by detecting and responding to congestion occurring at the array, and enforcing share-based allocation of I/O
resources across all virtual machines and hosts accessing a datastore.
With SIOC, vSphere administrators can mitigate the performance loss of critical workloads due to high congestion and storage latency
during peak load periods. The use of SIOC will produce better and more predictable performance behavior for workloads during
periods of congestion. Benefits of leveraging SIOC:
• Provides performance protection by enforcing proportional fairness of access to shared storage
• Detects and manages bottlenecks at the array
• Maximizes your storage investments by enabling higher levels of virtual-machine consolidation across your shared datastores
The purpose of this paper is to explain the basic mechanics of how SIOC, a new feature in vSphere 4.1, works and to discuss
considerations for deploying it in your VMware virtualized environments.

The Challenge of Shared Resources


Controlling the dynamic allocation of resources in distributed systems has been a long-standing challenge. Virtualized environments
introduce further challenges because of the inherent sharing of physical resources by many virtual machines. VMware has provided
ways to manage shared physical resources, such as CPU and memory, and to prioritize their use among all the virtual machines in the
environment. CPU and memory controls have worked well because those resources are shared only at the local-host level, among the
virtual machines residing within a single ESX® server.
The task of regulating shared resources that span multiple ESX hosts, such as shared datastores, presents new challenges, because
these resources are accessed in a distributed manner by multiple ESX hosts. The disk shares available in previous releases did not address
this challenge: shares and limits were enforced only at the level of a single ESX host, and only in response to host-side HBA bottlenecks,
which occur rarely. This approach could allow lower-priority virtual machines greater access to storage resources purely because of their
placement across different ESX hosts, and it provided no benefit when the datastore was congested but the host-side queue was not.
An ideal I/O resource-management solution should provide the allocation of I/O resources
independent of the placement of virtual machines and with consideration of the priorities of all virtual machines accessing the shared
datastore. It should also be able to detect and control all instances of congestion happening at the shared resource.

The Storage I/O Control Solution


SIOC solves the problem of managing shared storage resources across ESX hosts. It provides a fine-grained storage-control
mechanism by dynamically managing the size of, and access to, ESX host I/O queues based on assigned shares. SIOC enhances the
disk-shares capabilities of previous releases of VMware ESX Server by enforcing these disk shares not only at the local-host level but
also at the per-datastore level. Additionally, for the first time, vSphere with SIOC provides storage-device latency monitoring and
control, with which SIOC can throttle back storage workloads according to their priority in order to maintain total storage-device
latency below a certain threshold.


How Storage I/O Control Works


SIOC monitors the latency of I/Os to a datastore at each ESX host sharing that datastore. When the average normalized datastore latency
exceeds a set threshold (30ms by default), the datastore is considered congested, and SIOC engages to distribute the available
storage resources to virtual machines in proportion to their shares. This ensures that low-priority workloads do not monopolize
or reduce I/O bandwidth for high-priority workloads. SIOC accomplishes this by throttling back the storage access of the low-priority
virtual machines by reducing the number of I/O queue slots available to them. Depending on the mix of virtual machines running on
each ESX server and the relative I/O shares they have, SIOC may need to reduce the number of device queue slots that are available
on a given ESX server.
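The proportional throttling just described can be illustrated with a short sketch. This is not VMware's scheduler code; the share values, slot counts, and latency samples are hypothetical and chosen only to show the idea.

```python
# Illustrative sketch only (not VMware's implementation): once the average
# normalized datastore latency exceeds the congestion threshold, the
# datastore-wide scheduler divides the sustainable number of queue slots
# among virtual machines in proportion to their disk shares.

DEFAULT_THRESHOLD_MS = 30  # vSphere 4.1 default congestion threshold


def allocate_slots(normalized_latency_ms, vm_shares, sustainable_slots):
    """Return per-VM slot allocations, or None when SIOC stays idle."""
    if normalized_latency_ms <= DEFAULT_THRESHOLD_MS:
        return None  # no congestion detected; SIOC does not throttle
    total_shares = sum(vm_shares.values())
    return {vm: round(sustainable_slots * s / total_shares)
            for vm, s in vm_shares.items()}


# Hypothetical example: three VMs, 2:1:1 share ratio, 64 sustainable slots.
print(allocate_slots(42.0, {"VM A": 2000, "VM B": 1000, "VM C": 1000}, 64))
# {'VM A': 32, 'VM B': 16, 'VM C': 16}
print(allocate_slots(12.0, {"VM A": 2000, "VM B": 1000, "VM C": 1000}, 64))
# None (datastore not congested)
```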

Host-Level Versus Datastore-Level Disk Schedulers


To understand clearly how SIOC functions, it is important to understand how queuing works in the VMware virtualized storage stack.
SIOC leverages the existing host device queue to control I/O prioritization. Prior to vSphere 4.1, the ESX server device
queues were static, and virtual-machine storage access was controlled within the context of the storage traffic on a single ESX server
host. With vSphere 4.1, SIOC provides datastore-wide disk scheduling that responds to congestion at the array, not just at the host-
side HBA. This provides the ability to monitor and dynamically modify the size of the device queues of each ESX server based on
storage traffic and the priorities of all the virtual machines accessing the shared datastore.
Figure 1 shows an example of the local host-level disk scheduler: two virtual machines are running on the same ESX server, each with a
single virtual disk, and the local scheduler governs host-level prioritization between them.

Figure 1. I/O Shares for Two Virtual Machines on a Single ESX Server (Host-Level Disk Scheduler)

When the I/O shares for the virtual disks (VMDKs) of those virtual machines are set to different values, it is the local scheduler that
prioritizes the I/O traffic, and it does so only when the local HBA becomes congested.


This host-level capability existed in ESX Server for several years prior to vSphere 4.1. It is this local-host-level disk scheduler that also
enforces the limits set for a given virtual-machine disk. If a limit is set for a given VMDK, the local disk scheduler controls its I/O so
that it does not exceed the defined number of I/O operations per second.
vSphere 4.1 has added two key capabilities: (1) the enforcement of I/O prioritization across all ESX servers that share a common
datastore, and (2) detection of array-side bottlenecks. These are accomplished by way of a datastore-wide distributed disk scheduler
that uses I/O shares per virtual machine to determine whether device queues need to be throttled back on a given ESX server to allow
a higher-priority workload to get better performance. The datastore-wide disk scheduler totals up the disk shares for all the VMDKs
that a virtual machine has on the given datastore. The scheduler then calculates what percentage of the shares the virtual machine has
compared to the total number of shares of all the virtual machines running on the datastore. This percentage of shares is displayed in
the Virtual Machines tab for each datastore, as seen in Figure 2.

Figure 2. Datastore View of Disk Share Allocation Among Virtual Machines
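The share percentage shown in that view can be reproduced with simple arithmetic. The sketch below uses hypothetical VMs and VMDK share values; it only illustrates how the datastore-wide scheduler totals shares per virtual machine and converts them to a percentage of the datastore total.

```python
# Hypothetical per-VMDK share values; a VM may have several VMDKs on the
# same datastore, and the datastore-wide scheduler sums them per VM.
vmdk_shares = {
    "VM A": [1000, 500],  # two VMDKs on this datastore
    "VM B": [1000],
    "VM C": [500],
}

per_vm = {vm: sum(s) for vm, s in vmdk_shares.items()}
datastore_total = sum(per_vm.values())

for vm, shares in per_vm.items():
    print(f"{vm}: {shares} shares, {100 * shares / datastore_total:.0f}% of datastore")
# VM A: 1500 shares, 50% of datastore
# VM B: 1000 shares, 33% of datastore
# VM C: 500 shares, 17% of datastore
```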

As described before, SIOC engages only after a certain device-level latency is detected on the datastore. Once engaged, it begins to
assign fewer I/O queue slots to virtual machines with lower shares and more I/O queue slots to virtual machines with higher shares.
It throttles back the I/O for the lower-priority virtual machines, those with fewer shares, in exchange for the higher-priority virtual
machines getting more access to issue I/O traffic. However, it is important to understand that the maximum number of I/O queue
slots that can be used by the virtual machines on a given host cannot exceed the maximum device-queue depth for the device queue
of that ESX host. The ESX maximum queue depth varies by HBA model and is typically in the range of 32
to 128. The lowest that SIOC can reduce the device queue depth to is 4. Figure 3a shows that, without SIOC, a virtual machine with
a lower number of shares, “VM C,” may get a larger percentage of the available storage-array device-queue slots and thus greater
storage array performance, while a virtual machine with higher I/O shares, “VM A,” gets fewer than its fair share and reduced storage
array performance. However, with SIOC engaged on that datastore, as in Figure 3b, the result will be that the lower-priority virtual
machine that is by itself on a separate host will be assigned a reduced number of I/O queue slots. That will result in fewer storage array
queue slots being used and a reduction in average device latency. The reduction in average device latency provides VM A and VM B
higher storage performance, as now the same number of I/Os that they previously were issuing complete faster due to the reduced
latency for each of those I/Os.
For instance, assume that VM A was using 18 I/O slots, as shown in Figure 3a. Without SIOC, the storage array latency could be
unbounded and the I/O workloads being performed by the lower priority VM C could cause a high storage device latency of, say,
40ms. In this example, VM A would have 18 I/Os @ 40ms worth of storage performance. Once enabled, SIOC controls the latency at
the configured congestion threshold, say 30ms. SIOC determines the number of storage array queue slots that can be used while
still maintaining an average device latency below the SIOC congestion threshold. Although SIOC does not directly manage the
storage array queue, it is able to indirectly control the storage array device queue by managing the ESX device queues that feed into
it. As shown in Figure 3b, SIOC has determined that 30 host-side storage queue slots can be used while still maintaining the desired
average device latency. SIOC then distributes those storage array queue slots to the various virtual machine workloads according to
their priorities. The net effect in this example is that VM C is throttled back to use only its correct relative share of the storage array.


VM A, entitled to 60 percent of the queue slots (1500/2500 = 60 percent), is still able to issue the same 18 I/Os, but at a reduced
30ms latency. SIOC provides VM A greater storage performance by controlling VM C and ensuring that it uses only its appropriate
share of the total storage resources. By throttling the ESX device-queue depths in proportion to the priorities of
the virtual machines that are using them, SIOC is able to control congestion at the storage array and distribute storage array
performance appropriately.

Figure 3. SIOC Device-Queue Management with Prioritized Disk Shares
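The arithmetic in this example can be checked directly. Only VM A's 1,500 of 2,500 total shares is stated above; the split of the remaining 1,000 shares between VM B and VM C in the sketch below is an assumption made for illustration.

```python
# Check the Figure 3b arithmetic. VM B's and VM C's individual share values
# are assumed (500 each); only VM A's 1500 of 2500 total is given in the text.
shares = {"VM A": 1500, "VM B": 500, "VM C": 500}
total_shares = sum(shares.values())   # 2500
sustainable_slots = 30                # slots SIOC found sustainable at ~30ms

for vm, s in shares.items():
    entitlement = s / total_shares
    print(f"{vm}: {entitlement:.0%} of shares -> "
          f"{round(sustainable_slots * entitlement)} slots")
# VM A: 60% of shares -> 18 slots (the same 18 I/Os, now at ~30ms rather than ~40ms)
# VM B: 20% of shares -> 6 slots
# VM C: 20% of shares -> 6 slots
```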

SIOC provides isolation and prioritized distribution of storage resources even when vSphere administrators have not manually
set individual disk-share priorities on each VMDK per virtual machine. SIOC protects virtual machines that are running on more highly
consolidated ESX servers. In Figures 4a and 4b, all virtual machine disks have the default (1000 shares), or equal, disk shares. Without
SIOC, VM A and VM B are penalized and denied equal access to storage resources simply because they are running together
on the same ESX server and sharing the same ESX device queue, whereas VM C, running on a less consolidated ESX host, is given
unfair preference to storage resources. Even administrators who do not wish to set VMDK disk shares individually can benefit from
this feature. SIOC gives these vSphere administrators the ability to enable storage isolation for all virtual machines accessing a
datastore simply by checking a single check box at the datastore level. This new storage-management capability allows vSphere
administrators to run more highly consolidated virtual environments by preventing imbalances in storage resource allocation during
times of storage contention.


Figure 4. SIOC Device-Queue Management with Equal Disk Shares

In these examples, SIOC is able to fully manage the storage array queue by throttling the ESX host device queues. This is possible
because all the workloads impacting the storage array queue are coming from the ESX hosts and are under SIOC’s control. However,
SIOC is able to provide storage workload isolation/prioritization even in scenarios in which external workloads, not under SIOC’s
control, are competing with those that it controls. In this scenario, SIOC will first automatically detect this situation, and then will
increase the number of device-queue slots it makes available to the virtual machine workloads so that they can compete more fairly
for total storage resources against external workloads. Using this approach, SIOC is able to maintain a balance between workload
isolation/prioritization and storage I/O throughput even when it cannot directly control or influence the external workload. This behavior
continues as long as the external workload persists; SIOC resumes normal operation once it no longer detects the external workload.
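VMware does not document the detection heuristic itself, so the sketch below is only a rough illustration of the behavior described above: when latency stays high even though the SIOC-managed hosts have already been throttled, an external workload is suspected and the queue-depth caps are relaxed rather than tightened further. All numbers and conditions are invented for the example.

```python
# Rough illustration only; the real external-workload detection is internal
# to ESX. If the datastore stays congested even though SIOC has already
# throttled its hosts close to the minimum, we assume an external workload
# is consuming array queue slots and relax the caps so managed VMs can
# compete for throughput instead of being throttled further.
MIN_QUEUE_DEPTH = 4      # lowest device-queue depth SIOC will set
MAX_QUEUE_DEPTH = 64     # hypothetical HBA maximum


def next_queue_cap(latency_ms, threshold_ms, current_cap):
    congested = latency_ms > threshold_ms
    if congested and current_cap <= MIN_QUEUE_DEPTH + 2:
        # Throttling has not helped: suspect an external workload and back off.
        return min(current_cap * 2, MAX_QUEUE_DEPTH)
    if congested:
        return max(current_cap - 4, MIN_QUEUE_DEPTH)   # normal SIOC throttling
    return MAX_QUEUE_DEPTH                             # no congestion: no cap


print(next_queue_cap(45.0, 30, current_cap=32))  # 28 -> still throttling
print(next_queue_cap(45.0, 30, current_cap=4))   # 8  -> external workload suspected
```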


Enabling Storage I/O Control


Since SIOC is an attribute of a datastore, it is set under the properties of a specific datastore. By default, SIOC is not enabled on the
datastore. The default congestion threshold at which SIOC engages is 30ms, but this value can be modified by selecting the "Advanced"
option in the same vCenter dialog where SIOC is enabled, as shown in Figure 5.

Figure 5. Datastore Properties — SIOC Enablement and Congestion Threshold Setting

SIOC can be used on any FC, iSCSI, or locally attached block storage device that is supported with vSphere 4.1. Review the vSphere
4.1 Hardware Compatibility List (http://www.vmware.com/go/hcl) for the entire list of supported storage devices. SIOC is supported
with FC and iSCSI storage devices that have automated tiered storage capabilities. However, when using SIOC with automated tiered
storage, the SIOC Congestion Threshold must be set appropriately to make sure the storage device’s automated tiered storage
capabilities are not impacted by SIOC.
At this time, SIOC is not supported with NFS storage devices or with Raw Device Mapping (RDM) virtual disks. SIOC is also not
supported with datastores that have multiple extents or are being managed by multiple vCenter Management Servers.
For complete step-by-step instructions on enabling SIOC, changing the default latency threshold for a datastore, and a full list of
limitations, consult the documentation or see "Managing Storage I/O Resources" (Chapter 4) in the vSphere 4.1 Resource Management
Guide (http://www.vmware.com/pdf/vsphere4/r41/vsp_41_resource_mgmt.pdf).
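For administrators who prefer to script this step, a hedged sketch follows. It assumes the vSphere API's StorageResourceManager and its ConfigureDatastoreIORM_Task method as exposed by pyVmomi; the vCenter address, credentials, and datastore name are placeholders, and the spec type and field names should be verified against your API version before use.

```python
# Hedged sketch: enable SIOC on one datastore and set a 30ms congestion
# threshold through the vSphere API via pyVmomi. Hostname, credentials, and
# the datastore name are placeholders; verify the spec type and field names
# (assumed here: IORMConfigSpec with enabled and congestionThreshold).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="password")   # placeholders
try:
    content = si.RetrieveContent()

    # Locate the target datastore by name (placeholder name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    datastore = next(d for d in view.view if d.name == "SharedDatastore01")

    # Enable SIOC with the vSphere 4.1 default threshold of 30ms.
    spec = vim.StorageResourceManager.IORMConfigSpec(
        enabled=True, congestionThreshold=30)
    task = content.storageResourceManager.ConfigureDatastoreIORM_Task(
        datastore, spec)
finally:
    Disconnect(si)
```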


Considerations for Deploying Storage I/O Control


Configuring Disk Shares
Disk shares specify the relative priority a virtual machine has on a given storage resource. When you assign disk shares to a virtual
disk/virtual machine, you specify the priority for that virtual machine’s access to storage resources relative to other powered-on
virtual machines. Disk shares in vSphere 4.1 can be leveraged at both a local, per–ESX host level, and now at a datastore level when
SIOC is enabled and actively prioritizing storage resources. Disk shares are set by selecting “Edit Settings” for a virtual machine and
are set on each VMDK, as seen in Figure 6. When SIOC is not enabled, disk shares and the relative priority they specify are enforced
only at a local ESX host level, and then only when the local HBA is saturated. Virtual machines running on the same ESX host will be
prioritized relative to one another, but not relative to virtual machines running on other ESX hosts. When
SIOC is enabled and actively throttling the ESX hosts to control storage latencies, disk shares and relative priorities are enforced
across all the ESX servers that access the SIOC controlled datastore. So a virtual machine running on one ESX host will have access to
storage resources based on the number of disk shares the virtual machine has compared to the total number of disk shares in use on
the datastore by all virtual machines across all ESX hosts in the shared storage environment. If a virtual machine does not fully use its
allocation of I/O access, the extra I/O slots are redistributed proportionally to the other virtual machines that are actively issuing I/O
requests on the datastore.

Figure 6. Virtual Machine Properties — Disk Shares and IOP Limits
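The redistribution behavior described above, where unused entitlement flows to the virtual machines that are actively issuing I/O, can be illustrated with a small sketch; the share values, slot count, and the set of active VMs are hypothetical.

```python
# Illustrative sketch (not VMware code): slots entitled to idle VMs are
# redistributed among the VMs actively issuing I/O, still in proportion
# to their configured disk shares.
def distribute(shares, active_vms, total_slots):
    active = {vm: s for vm, s in shares.items() if vm in active_vms}
    total = sum(active.values())
    return {vm: round(total_slots * s / total) for vm, s in active.items()}


shares = {"VM A": 2000, "VM B": 1000, "VM C": 1000}

# All three VMs busy: slots follow the 2:1:1 share ratio.
print(distribute(shares, {"VM A", "VM B", "VM C"}, total_slots=32))
# {'VM A': 16, 'VM B': 8, 'VM C': 8}

# VM C idle: its unused portion is shared out between VM A and VM B.
print(distribute(shares, {"VM A", "VM B"}, total_slots=32))
# {'VM A': 21, 'VM B': 11}
```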

As part of vSphere 4.1, limits on I/O operations per second (IOPS) can be set at the per-VMDK level to further manage and prioritize
virtual machine workloads. Limits (expressed in IOPS) are implemented at the local-disk-scheduler level and are always enforced,
regardless of whether SIOC is enabled.

Configuring the Storage I/O Control Congestion Latency Value


SIOC is designed to engage and enforce storage I/O shares only when the storage resource becomes contended. This is very similar
to CPU scheduling, in which shares are likewise enforced only under contention. To determine when a storage device is contended, SIOC
uses a congestion-threshold latency value that vSphere administrators can specify. The default congestion threshold in vSphere 4.1,
30ms, is a conservative value that should work well for most users. The SIOC congestion-threshold value is configurable, so
vSphere administrators have the opportunity to maximize the benefits of SIOC suited to their own virtual environment and storage-
management preferences. This section discusses the considerations and recommendations for changing this key parameter.
The SIOC threshold represents a balance between (1) isolation and prioritized access to the storage resource at lower latencies, and
(2) higher throughput. When the SIOC congestion threshold is set low, SIOC can begin prioritizing storage access earlier and throttle
storage workloads more aggressively in order to maintain a datastore-wide latency below the congestion latency threshold. The more


aggressive throttling needed to maintain a lower latency might reduce the overall storage throughput. When the congestion threshold
is set higher, SIOC will not engage and begin prioritizing resources among virtual machines until the higher latency is reached. When
using a higher SIOC congestion latency, SIOC does not need to throttle storage workloads as much in order to maintain the storage
latency below the higher congestion threshold. This may allow for higher overall storage throughput.
The default congestion threshold has been set to minimize the impact of throttling on storage throughput while still providing
reasonably low storage latency and isolation for high-priority virtual machines. In most cases it is not necessary to modify the storage
congestion threshold from its default value. However, a user may decide to modify the value depending on the type and speed of their
storage device, the characteristics of the workloads in their virtual environment, and their storage-management preference between
workload isolation/prioritization and workload throughput. Because various storage devices have different latency characteristics,
users may need to modify the congestion threshold depending on their storage type. See Table 1 to determine the recommended
range of values for your storage-device type.

Type of storage media backing the datastore | Recommended threshold (use isolation vs. throughput preference to determine exact value within range)
Fibre Channel | 20–30ms
SAS | 20–30ms
SSD | 10–15ms
SATA | 30–50ms
Auto-tiered storage, full LUN auto-tiering | Use the vendor-recommended value or, if the storage vendor does not provide one, use the threshold value recommended above for the slowest tier of storage in the array.
Auto-tiered storage, block-level/sub-LUN auto-tiering | Use the vendor-recommended value or, if the storage vendor does not provide one, combine the ranges of the fastest and slowest media types in the array.

Table 1. SIOC Congestion Threshold Recommendations

The congestion threshold may also need to be adjusted when using automated tiered storage devices. These are systems that contain
two or more types of storage media and automatically and transparently migrate data between the storage types in order to optimize
I/O performance. These systems typically try to keep the most frequently accessed or “hot” data on faster storage such as SSD, and
less frequently accessed or “cold” data on slower media such as SAS or FC disks. This means that the type of storage media backing a
particular LUN can change over time.
For full LUN auto-tiering storage devices, in which the entire LUN is migrated between different storage tiers, use the recommended
value or range for the slowest tier of storage in the device. For example, in a full LUN auto-tiering storage device that contains SSD and
Fibre Channel disks, use the congestion threshold value that is recommended for Fibre Channel.
With sub-LUN or block-level auto-tiering storage, in which individual storage blocks inside a LUN are migrated between storage tiers,
combine the recommended congestion threshold values/ranges for each storage type in the auto-tiering storage devices. For example,
in a sub-LUN / block-level auto-tiering storage device that contains an SSD storage tier and a Fibre Channel storage tier, use an SIOC
congestion threshold value in the range of 10–30ms. The exact SIOC congestion-threshold value to use is based on your individual
storage-device characteristics and your preference of isolation (using a smaller SIOC congestion-threshold value) or throughput
(using a larger SIOC congestion-threshold value). For example, in the SSD-FC scenario, the more SSD storage you have in the array,
the more your storage device characteristics will match that of the SSD storage type and thus the closer your threshold should be
to the SSD recommended value of 10ms, the low end of the combined SSD-FC range. Customers can use the midpoint of the range
as a conservative congestion threshold value that provides a balance between the preference for isolation and the preference for
throughput. In the SSD-FC example in which there was a range of 10–30ms, the conservative congestion threshold value would be 20ms.


When modifying the SIOC congestion threshold, keep in mind that the SIOC latency is a normalized metric, adjusted for I/O size and the
aggregate number of IOPS across all the storage workloads accessing the datastore. SIOC uses a normalized
latency to take into consideration that not all storage workloads are the same. Some storage workloads may issue larger I/O operations
that would naturally result in longer device latencies to service these larger I/O requests. Normalizing the storage-workload latencies
allows SIOC to compare and prioritize workloads more accurately by bringing them all into a common measurement. Because the
SIOC value is normalized, the actual observed latency as seen from the guest OS inside the virtual machine or from an individual ESX
host may be different than the calculated SIOC-normalized latency per datastore.
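VMware does not publish the normalization formula itself. The sketch below only illustrates one piece of the idea: weighting each host's observed latency by its IOPS so that busier hosts count for more in the datastore-wide figure. The adjustment for I/O size is omitted, and all numbers are hypothetical.

```python
# Illustration only: the real normalization also accounts for I/O size and is
# internal to ESX. Here we compute an IOPS-weighted average of the latencies
# observed by each host sharing the datastore (hypothetical samples).
observations = [
    {"host": "esx01", "avg_latency_ms": 35.0, "iops": 1200},
    {"host": "esx02", "avg_latency_ms": 18.0, "iops": 300},
]

total_iops = sum(o["iops"] for o in observations)
weighted_ms = sum(o["avg_latency_ms"] * o["iops"] for o in observations) / total_iops
print(f"datastore-wide IOPS-weighted latency: {weighted_ms:.1f} ms")  # 31.6 ms
```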

Monitoring Storage I/O Control Effects


SIOC includes new metrics inside vCenter that allow users to observe SIOC's actions and latency measurements. There are two new SIOC
metrics in vCenter: SIOC normalized latency and SIOC aggregated IOPS. The SIOC normalized latency is the value that SIOC calculates
per datastore and compares with the SIOC congestion latency threshold to determine what actions to take, if any. SIOC
calculates these metrics every four seconds, and they are refreshed in the vCenter display every 20 seconds. These metrics can be
viewed on the datastore performance screen inside vCenter, as seen in Figure 7. Additionally, vCenter reports the device-queue depths
for each ESX host. The ESX hosts' device-queue depth metrics can be reviewed to determine what actions SIOC is taking on individual
ESX hosts and their device queues in order to keep the datastore-wide SIOC latency under the set congestion threshold.

Figure 7. vCenter Datastore Performance and SIOC Metrics

SIOC detects the moment when external workloads, not under SIOC’s control, may be impacting the virtual environment’s storage
resources. When SIOC detects an external workload, it will trigger a “Non-VI workload detected” informational alert in vCenter. In
most cases, this alert is purely informational and requires no action on the part of the vSphere administrator. However, the alert may
be an indicator of an incorrectly configured SIOC environment. vSphere administrators should verify that they are running a supported
SIOC configuration and that all datastores that utilize the same disk spindles have SIOC enabled with identical SIOC congestion-
threshold values. The alert might also be triggered by some backup products and other administrative workloads that bypass the ESX
host and directly access the datastore in order to accomplish their tasks. SIOC is supported in these configurations and the alert can
be safely ignored for these products. Refer to VMware KB article 1020651 for more details on the “Non-VI workload detected” alert.


Benefits of Using Storage I/O Control


SIOC enables improved I/O resource management for a multitude of conditions and provides peace of mind when running business-
critical I/O intensive applications in a shared VMware virtualization environment.

Provides performance protection


A common concern in any shared resource environment is that one consumer may get far more than its fair share of that resource
and adversely impact the performance of the other users that share the resource. SIOC provides the ability, at the datastore level, to
support multiple-tenant environments that share a datastore, by enabling service-level protections during periods of congestion. SIOC
prevents a single virtual machine from monopolizing the I/O throughput of a datastore even when the virtual machines have default
(equal value) I/O shares set.

Detects and manages bottlenecks at the array only when congestion exists
SIOC detects a bottleneck at the datastore level, and manages I/O queue slot distribution across the ESX servers that share a datastore.
SIOC expands the I/O resource control beyond the bounds of a single ESX server to work across all ESX servers that share a datastore.
When SIOC is enabled on a datastore and no congestion exists at the device level, it will not be engaged in managing I/O resources
and will have no effect on I/O latency or throughput. In an optimized and well-configured environment, SIOC may only engage
at certain peak periods during the day. During these times of congestion and in the presence of external or non–SIOC controlled
workloads, SIOC strikes a balance between aggregate throughput and enforcement of virtual machine I/O shares.
SIOC also helps vSphere administrators understand when more I/O throughput (device capacity) is needed. If SIOC is engaged for
significant periods of time during the day, it raises the question of whether the storage configuration needs to change. In this case,
an administrator might consider either adding more I/O capacity or using VMware Storage vMotion to migrate I/O-intensive virtual
machines to an alternate datastore.

Enables higher levels of consolidation with less storage expense


SIOC enables vSphere administrators to maximize their storage investments by running more virtual machines on their existing
storage infrastructure, confident that periodic peaks of high I/O activity will be controlled. Without SIOC, administrators often
overprovision their storage to avoid latency issues that arise during peak periods of storage activity. With SIOC,
administrators can now comfortably run more virtual machines on a single datastore with confidence that the storage I/O will be
controlled and managed at the device level.
Leveraging SIOC can reduce storage costs: overprovisioning a storage environment to the point that no contention ever occurs can be
prohibitively expensive. Instead, storage costs may drop dramatically by leveraging SIOC to manage the I/O queue slot allocations,
ensuring proportional fairness and prioritization of virtual machines based on their I/O shares.


Conclusion
SIOC offers I/O prioritization to virtual machines accessing shared storage resources. It allows vSphere administrators to give high-
priority virtual machine traffic better performance and lower storage latency than lower-priority virtual machines receive. It monitors
datastore latency and engages when a preset congestion threshold has been exceeded. SIOC gives vSphere administrators a new means
to manage their VMware virtualized environments by allowing quality of service to be expressed for storage workloads. As such, SIOC
is a big step forward in the journey toward automated, policy-based management of shared storage resources.
SIOC provides the means to better control a consolidated shared-storage resource by providing datastore-wide I/O prioritization,
helping to manage traffic on a shared and congested datastore. With the introduction of SIOC in vSphere 4.1, vSphere administrators
now have a new tool to help them increase consolidation density with the peace of mind that, during periods of peak I/O activity,
prioritization and proportional fairness will be enforced across all the virtual machines accessing that shared resource.

About the Authors:


Paul Manning is a Storage Architect in the Technical Marketing group at VMware and is focused on virtual storage management.
Previously, he worked at EMC and Oracle, where he had more than 10 years of experience designing and developing storage
infrastructure and deployment best practices. He has also developed and delivered training courses on best practices for highly
available storage infrastructure to a variety of customers and partners in the United States and abroad. He has authored numerous
publications and presented many talks on the topic of best practices for storage deployments and performance optimization.
Joseph Dieckhans is a Performance Specialist in the Technical Marketing group at VMware. In this role, he works directly with the
Performance Engineering and R&D teams at VMware to provide customers with information and performance data on the latest
VMware features.

For More Information:


VMware Storage Technology Page:
http://www.vmware.com/go/storage
Performance Engineering paper on SIOC:
http://www.vmware.com/go/tp-managing-performance-variance

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc., in the United States and/or other jurisdictions. All other marks and names mentioned herein might be
trademarks of their respective companies. Item No: VMW_10Q3_WP_vSphere_4_1_SIOC_p12_A_R3
