
DEPLOYMENT BEST PRACTICE FOR
ORACLE DATABASE WITH VMAX3
SERVICE LEVEL OBJECTIVE MANAGEMENT

EMC VMAX Engineering White Paper

ABSTRACT
With the introduction of the third-generation VMAX disk arrays, Oracle database
administrators have a new way to deploy a wide range of applications in a single
high-performance, high-capacity, self-tuning storage environment that can
dynamically manage each application's performance requirements with minimal effort.
May 2015

EMC WHITE PAPER


To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H13844.1

TABLE OF CONTENTS

EXECUTIVE SUMMARY .............................................................................. 4


AUDIENCE ......................................................................................................... 4

VMAX3 PRODUCT OVERVIEW .................................................................... 4


VMAX3 Overview ................................................................................................ 4
VMAX3 and Service Level Objective (SLO) based provisioning .................................... 6

STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX3 ......................... 10


Storage connectivity considerations .................................................................... 10
Host connectivity considerations ........................................................................ 10
Number and size of host devices considerations ................................................... 11
Virtual Provisioning and thin devices considerations.............................................. 11
Partition Alignment Considerations for x86-based platforms .................................. 12
ASM and database striping considerations ........................................................... 13
Oracle data types and the choice of SLO ............................................................. 14
Host I/O Limits and multi-tenancy ...................................................................... 16
Using cascaded storage groups .......................................................................... 16

ORACLE DATABASE PROVISIONING ....................................................... 17


Storage provisioning tasks with VMAX3 ............................................................... 17
Provisioning Oracle database storage with Unisphere ............................................ 18
Provisioning Oracle database storage with Solutions Enabler CLI ............................ 23

ORACLE SLO MANAGEMENT TEST USE CASES .......................................... 23


Test Configuration ............................................................................................ 23
Test Overview.................................................................................................. 25
Test case 1 - Single database run with GRADUAL change of SLO ........................... 25
Test case 2 - All flash configuration with Oracle DATA and REDO ........................... 26

CONCLUSION .......................................................................................... 27
REFERENCES ........................................................................................... 27
APPENDIX .............................................................................................. 28
Solutions Enabler CLI commands for SLO management and monitoring .................. 28

EXECUTIVE SUMMARY

The VMAX3 family of storage arrays is the next major step in the evolution of VMAX hardware and software, targeted at meeting new industry challenges of scale, performance, and availability. At the same time, VMAX3 has taken a leap in making the complex operations of storage management, provisioning, and performance goal setting simple to execute and manage.
The VMAX3 family of storage arrays comes pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. While VMAX3 can ship as an all-flash array, combining Enterprise Flash Drives (EFD¹) with a large cache that accelerates both writes and reads even further, it also excels in providing Fully Automated Storage Tiering (FAST²) performance management based on service level goals across multiple tiers of storage. The new VMAX3 hardware architecture comes with more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric.
Many enhancements were introduced to VMAX3 replication software to support new capabilities such as TimeFinder SnapVX local
replication to allow for hundreds of snapshots that can be incrementally refreshed or restored, and can cascade any number of times.
Also SRDF remote replication software adds new features and capabilities that provide more robust remote replication support.
VMAX3 adds the ability to connect directly to a Data Domain system so database backups can be sent directly from the primary storage to the Data Domain system without having to go through the host first. VMAX3 also offers embedded file support, known as Embedded NAS (eNAS), via a hypervisor layer new to VMAX3, in addition to traditional block storage.
This white paper explains the basic VMAX3 design changes with regard to storage provisioning and performance management, how they simplify the management of storage, and how they affect Oracle database layout decisions. It explains the new FAST architecture for managing Oracle database performance using Service Level Objectives (SLOs) and provides guidelines and best practices for its use.

AUDIENCE
This white paper is intended for database and system administrators, storage administrators, and system architects who are
responsible for implementing, managing, and maintaining Oracle databases and VMAX3 storage systems. It is assumed that readers
have some familiarity with Oracle and the EMC VMAX3 family of storage arrays, and are interested in achieving higher database
availability, performance, and ease of storage management.

VMAX3 PRODUCT OVERVIEW


VMAX3 OVERVIEW
The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage, and incorporates a Dynamic
Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to seamlessly grow
from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability
featuring new hardware and software capabilities.
The newest additions to the EMC VMAX3 family, VMAX 100K, 200K and 400K, deliver the latest in Tier-1 scale-out multi-controller
architecture with consolidation and efficiency for the enterprise. With enhanced hardware and software, the new VMAX3 array
provides unprecedented performance and scale. It offers dramatic increases in floor tile density (GB/ft²) with engines and high-capacity disk enclosures, for both 2.5" and 3.5" drives, consolidated in the same system bay. Figure 1 shows possible VMAX3
components. Refer to EMC documentation and release notes to find the most up to date supported components.
In addition, VMAX3 arrays can be configured as either hybrid or all-flash arrays. All VMAX3 models come pre-configured from the
factory to significantly shorten the time from installation to first I/O.

¹ Enterprise Flash Drives (EFD) are SSD Flash drives designed for high performance and resiliency, suited for enterprise applications.
² Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand, even as new data is added and access patterns continue to change over time.

1-8 redundant VMAX3 engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E5-2697 v2
Up to 5,760 drives
SSD Flash drives: 200/400/800 GB, 2.5"/3.5"
300 GB - 1.2 TB 10K RPM SAS drives, 2.5"/3.5"
300 GB 15K RPM SAS drives, 2.5"/3.5"
2 TB/4 TB 7.2K RPM SAS drives, 3.5"

Figure 1 VMAX3 storage array³

VMAX3 engines provide the foundation to the storage array. Each fully redundant engine contains two VMAX3 directors and redundant
interfaces to the new Dynamic Virtual Matrix dual InfiniBand fabric interconnect. Each director consolidates front-end, global
memory, and back-end functions, enabling direct memory access to data for optimized I/O operations. Depending on the array
chosen, up to eight VMAX3 engines can be interconnected via a set of active fabrics that provide scalable performance and high
availability. New to VMAX3 design, host ports are no longer mapped directly to CPU resources. CPU resources are allocated as needed
using pools (front-end, back-end, or data services pools) of CPU cores which can service all activity in the VMAX3 array. This shared
multi-core architecture reduces I/O path latencies by facilitating system scaling of processing power without requiring additional
drives or front-end connectivity.
VMAX3 arrays introduce the industry's first open storage and hypervisor converged operating system, HYPERMAX OS. It combines
industry-leading high availability, I/O management, data integrity validation, quality of service, and storage tiering and data security
with an open application platform. HYPERMAX OS features a real-time, non-disruptive, storage hypervisor that manages and protects
embedded data services (running in virtual machines) by extending VMAX high availability to these data services that traditionally
have run external to the array (such as Unisphere). HYPERMAX OS runs on top of the Dynamic Virtual Matrix leveraging its scale out
flexibility of cores, cache, and host interfaces. The embedded storage hypervisor reduces external hardware and networking
requirements, delivers the highest levels of availability, and dramatically lowers latency.
All storage in the VMAX3 array is virtually provisioned. VMAX Virtual Provisioning enables users to simplify storage management and
increase capacity utilization by sharing storage among multiple applications and only allocating storage as needed from a shared pool
of physical disks known as Storage Resource Pool (SRP). The array uses the dynamic and intelligent capabilities of FAST to meet
specified Service Level Objectives (SLOs) throughout the lifecycle of each application. VMAX3 SLOs and SLO provisioning are new to
the VMAX3 family, and tightly integrated with EMC FAST software to optimize agility and array performance across all drive types in
the system. While VMAX3 can ship in an all-flash configuration, when purchased as a hybrid array with a combination of flash and hard drives, EMC FAST technology can improve application performance while reducing cost by intelligently combining high-performance flash drives with cost-effective, high-capacity hard disk drives.

³ Additional drive types and capacities may be available. Contact an EMC representative for more details.

For local replication, VMAX3 adds a new feature to TimeFinder software called SnapVX, which provides support for a greater number
of snapshots. Unlike previous VMAX snapshots, SnapVX snapshots do not require the use of dedicated target devices. SnapVX allows
for up to 256 snapshots per individual source. These snapshots can copy (referred to as link-copy) their data to new target devices
and re-link to update just the incremental data changes of previously linked devices. For remote replication, SRDF adds new
capabilities and features to provide protection for Oracle databases and applications. All user data entering VMAX3 is T10 DIF
protected, including replicated data and data on disks. T10 DIF protection can be expanded all the way to the host and application to
provide full end-to-end data protection for Oracle databases using either Oracle ASMlib with UEK, or ASM Filter Driver, on a variety of supported Linux operating systems⁴.

VMAX3 AND SERVICE LEVEL OBJECTIVE (SLO) BASED PROVISIONING


Introduction to FAST in VMAX3
With VMAX3, FAST is enhanced to include both intelligent storage provisioning and performance management, using Service Level
Objectives (SLOs). SLOs automate the allocation and distribution of application data to the correct data pool (and therefore storage
tier) without manual intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application
requirement. SLOs are tied to expected average I/O latency for both reads and writes and therefore, both the initial provisioning and
application on-going performance are automatically measured and managed based on compliance to storage tiers and performance
goals. Every 10 minutes FAST samples the storage activity and, when necessary, moves data at FAST's sub-LUN granularity, which is 5.25MB (42 extents of 128KB). SLOs can be dynamically changed at any time (promoted or demoted), and FAST continuously
monitors and adjusts data location at the sub-LUN granularity across the available storage tiers to match the performance goals
provided. All this is done automatically, within the VMAX3 storage array, without having to deploy a complex application ILM⁵ strategy or use host resources for migrating data to meet performance needs.
VMAX3 FAST Components
Figure 2 depicts the elements of FAST that form the basis for SLO based management, as described below.
Physical disk group provides grouping of physical storage (Flash, or hard disk drives) based on drive types. All drives in a disk
group have the same technology, capacity, form factor, and speed. The disk groups are pre-configured at the factory, based on the
specified configuration requirements at the time of purchase.
Data Pool is a collection of RAID protected internal devices (also known as TDATs, or thin data devices) that are carved out of a
single physical disk group. Each data pool can belong to a single SRP (see definition below), and provides a tier of storage based on
its drive technology and RAID protection. Data pools can allocate capacity for host devices or replications. Data pools are also pre-
configured at the factory to provide optimal RAID protection and performance.
Storage Resource Pool (SRP) is a collection of data pools that provides FAST a domain for capacity and performance
management. By default, a single default SRP is factory pre-configured. Additional SRPs can be created with an EMC service
engagement. The data movements performed by FAST are done within the boundaries of the SRP and are covered in detail later in
this paper.
Storage Group (SG) is a collection of host devices (LUNs) that consume storage capacity from the underlying SRP. Because both
FAST and storage provisioning operations are managed at a storage group level, storage groups can be cascaded (hierarchical) to
allow different levels of granularity required for each operation (see cascaded storage group section later).
Host devices (LUNs) are the components of a storage group. In VMAX3 all host devices are virtual and at the time of creation can
be fully allocated or thin. Virtual means that they are a set of pointers to data in the data pools, allowing FAST to manage the data
location across data pools seamlessly. Fully allocated means that the device's full capacity is reserved in the data pools even before the host has access to the device. Thin means that although the host sees the LUN with its full reported capacity, no capacity is actually allocated from the data pools until the host explicitly writes to it. All host devices are natively striped across the data pools where they are allocated, with a granularity of a single VMAX3 track (128KB).

⁴ Consult the EMC Simple Support Matrix VMAX Dif1 Director Bit Settings document for more information on T10 DIF supported HBAs and operating systems.
⁵ Information Lifecycle Management (ILM) refers to a strategy of managing application data based on policies. It usually involves complex data analysis, mapping, and tracking practices.

Service Level Objective (SLO) provides a pre-defined set of service levels (such as Platinum, Gold, or Silver) that can be supported by the underlying SRP. Each SLO has a specific performance goal, and some also have storage tier⁶ compliance goals within the underlying SRP that FAST will work to satisfy. For example, Bronze SLO will attempt to keep no data on EFDs, and Platinum SLO will attempt to keep no data on 7.2K RPM drives. An SLO defines an expected average response time target for a storage group. By default, all host devices and all storage groups are attached to the Optimized SLO (which assures I/Os are serviced from the most appropriate data pool for their workload), but in cases where more deterministic performance goals are needed, specific SLOs can be specified.

Figure 2 VMAX3 architecture and service level provisioning

⁶ Storage tiers refer to combinations of disk drive and RAID protection that create unique storage service levels, for example, Flash (SSD) drives with RAID5 protection, 10K or 15K RPM drives with RAID1 protection, etc.

Service Level Objectives (SLO) and Workload Types Overview
Each Storage Resource Pool (SRP) contains a set of known storage resources, as seen in Figure 2. Based on the available resources in the SRP, HYPERMAX OS will offer a list of available Service Level Objectives (SLOs) that can be met using this particular SRP, as shown in Table 1. This assures SLOs can be met, and that SRPs aren't provisioned beyond their ability to meet application requirements.

Note: Since SLOs are tied to the available drive types, it is important to plan the requirements for a new VMAX3 system carefully.
EMC works with customers using a new and easy to use Sizer tool to assist with this task.

Table 1 Service Level Objectives

SLO         Minimum required drive combinations to list SLO    Performance expectation

Diamond     EFD                                                 Emulating EFD performance
Platinum    EFD and (15K or 10K)                                Emulating performance between 15K drive and EFD
Gold        EFD and (15K or 10K or 7.2K)                        Emulating 15K drive performance
Silver      EFD and (15K or 10K or 7.2K)                        Emulating 10K drive performance
Bronze      7.2K and (15K or 10K)                               Emulating 7.2K drive performance
Optimized   Any                                                 System optimized performance

No specific SLO needs to be selected; by default, all data in the VMAX3 storage array receives the Optimized SLO. The System Optimized SLO meets performance and compliance requirements by dynamically placing the most active data in the highest-performing tier and less active data in lower-performance, high-capacity tiers.

Note: Optimized SLO offers an optimal balance of resources and performance across the whole SRP, based on I/O load, type of I/Os, data pool utilization, and available capacities in the pools. It will place the most active data on higher-performing storage and the least active data on the most cost-effective storage. If a data pool's capacity or utilization is stressed, it will attempt to alleviate the pressure by using other pools.

However, when specific storage groups (database LUNs) require a more deterministic SLO, one of the other available SLOs can be selected. For example, a storage group holding critical Oracle data files can receive a Diamond SLO while the Oracle log files can be put on Platinum. A less critical application can be fully contained in a Gold or Silver SLO. Refer to the Oracle Data Types and the Choice of SLO section later in this paper.
Once an SLO is selected (other than Optimized), it can be further qualified by a Workload type: OLTP or DSS, where OLTP workload
is focused on optimizing performance for small block I/O and DSS workload is focused on optimizing performance for large block I/O.
The Workload Type can also specify whether to account for any overhead associated with replication (local or remote). The workload
type qualifiers for replication overhead are OLTP_Rep and DSS_Rep, where Rep denotes replicated.

Understanding SLO Definitions and Workload Types
Each SLO is effectively a reference to an expected response-time range (minimum and maximum allowed latencies) for host I/Os, where a particular Expected Average Response Time is attached to each SLO and workload combination. The Solutions Enabler CLI or Unisphere for VMAX can list the available service level and workload combinations, as seen in Figure 3 (see the command-line syntax example for listing available SLOs in the Appendix); note that they list only the expected average latency, not the range of values.

Without a workload type, the latency range is the widest for its SLO type. When a workload type is added, the range is reduced, due
to the added information. When Optimized is selected (which is also the default SLO for all storage groups, unless the user assigns
another), the latency range is in fact the full latency spread that the SRP can satisfy, based on its known and available components.

Figure 3 Unisphere shows available SLOs

Important SLO considerations:


Because an SLO references a range of target host I/O latencies, the smaller the spread, the more predictable the result. It is therefore recommended to select both an SLO and a workload type, for example, Platinum SLO with an OLTP workload and no replications.
Because an SLO references an Expected Average Response Time, it is possible for two applications executing a similar workload and set with the same SLO to perform slightly differently. This can happen if the host I/O latency still falls within the allowed range. For that reason it is recommended to use a workload type together with an SLO when a smaller range of latencies is desirable.

Note: SLOs can be easily changed using Solutions Enabler or Unisphere for VMAX. Also, when additional layers of SLOs are necessary, the Storage Group (SG) can easily be changed into a cascaded SG so that each child, or the parent, can receive its appropriate SLO.

STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX3
VMAX3 storage provisioning has become much simpler than in previous releases. Since VMAX3 physical disk groups, data pools, and even the default SRP come pre-configured from the factory, based on inputs to the Sizer tool that helps size them correctly, the only remaining tasks are to configure connectivity between the hosts and the VMAX3 and then start provisioning host devices.
The following sections discuss the principles and considerations for storage connectivity and provisioning for Oracle.

STORAGE CONNECTIVITY CONSIDERATIONS


When planning storage connectivity for performance and availability, it is recommended to go wide before going deep, which means it is better to connect storage ports across different engines and directors⁷ than to use all the ports on a single director. In this way, even in the case of a component failure, the storage can continue to service host I/Os.

New to VMAX3 is dynamic core allocation. Each VMAX3 director provides services such as front-end connectivity, backend
connectivity, or data management. Each such service has its own set of cores on each director that are pooled together to provide
CPU resources which can be allocated as necessary. For example, even if host I/Os arrive via a single front-end port on the director,
the front-end pool with all its CPU cores will be available to service that port. Since I/Os arriving at other directors will have their own core pools, for best performance and availability it is again recommended to connect each host to ports on different directors before using additional ports on the same director.

HOST CONNECTIVITY CONSIDERATIONS


Host connectivity considerations include two aspects. The first is the number and speed of the HBA ports (initiators) and the second
is the number and size of host devices.
HBA ports considerations:
Each HBA port (initiator) creates a path for I/Os between the host and the SAN switch, which then continues to the VMAX3 storage. If a host were to use only a single HBA port, it would have a single I/O path serving all I/Os. Such a design is not advisable, as a single path doesn't provide high availability and risks becoming a bottleneck during high I/O activity, with no additional ports available for load balancing.
A better design provides each database server at least two HBA ports, preferably on two separate HBAs. The additional ports provide
more connectivity and also allow multipathing software like EMC PowerPath or Linux Device-Mapper, to load-balance and failover
across HBA paths.
Each path between host and storage device creates a SCSI device representation on the host. For example, two HBA ports going to
two VMAX front-end adapter ports with a 1:1 relationship create 3 presentations for each host device: one for each path and another
that the multipathing software creates as a pseudo device (such as /dev/emcpowerA or /dev/dm-1). If each HBA port were zoned and masked to both FA ports (a 1-to-many relationship), there would be 5 SCSI device representations for each host device (one for each path combination, plus the pseudo device).
While modern operating systems can manage hundreds of devices, it is not advisable or necessary, and it burdens the user with
complex tracking and storage provisioning management overhead. It is therefore recommended to have enough HBA ports to
support workload concurrency, availability, and throughput, but to use 1:1 relationships to storage front-end ports rather than zoning and masking each HBA port to all VMAX front-end ports. Such an approach provides enough connectivity, availability, and concurrency, yet reduces the complexity of the host registering many SCSI devices unnecessarily.
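As a quick check that this connectivity design is in place, the multipathing software can confirm that each pseudo device has the expected number of active paths. The following is a minimal sketch assuming Linux DM-Multipath; the device alias ora_data1 matches the illustrative names used later in this paper. PowerPath users can run powermt display dev=all instead.

# List every multipath pseudo device with its path states
multipath -ll

# Inspect a single pseudo device; each underlying path should show an active state
multipath -ll ora_data1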

⁷ Each VMAX3 engine has two redundant directors.

NUMBER AND SIZE OF HOST DEVICES CONSIDERATIONS
VMAX3 introduces the ability to create host devices with a capacity from a few megabytes to multiple terabytes. With the native
striping across data pools that VMAX3 provides, the user may be tempted to create only a few very large host devices. Consider the following example: a 10TB Oracle database can reside on 1 x 10TB host device, or on 10 x 1TB host devices. While either option satisfies the capacity requirement, it is recommended to use a reasonable number and size of host devices. In the example above, if the database capacity were to rise above 10TB, it is likely that the DBA would want to add another device of the same capacity (which is an Oracle ASM best practice), even if they didn't need 20TB in total. Therefore, large host devices create very large building blocks when additional storage is needed.
Secondly, each host device creates its own host I/O queue at the operating system. Each such queue can service a tunable, but
limited, number of I/Os simultaneously. If, for example, the host had only 4 HBA ports, and only a single 10TB LUN (using the
previous example again), with multipathing software it would have only 4 paths available to queue I/Os. A high level of database activity will generate more I/Os than the queues can service, resulting in artificially elongated latencies. In this example, additional host devices are advisable to alleviate such an artificial bottleneck. Host software such as EMC PowerPath or iostat can help in monitoring host I/O queues to make sure the number of devices and paths is adequate for the workload.
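For example, iostat can show per-device queue depth and latency at a glance; a sustained high avgqu-sz together with rising await on a device suggests that more host devices (and therefore more I/O queues) may be needed. A minimal monitoring sketch on Linux:

# Extended device statistics every 5 seconds; -N resolves device-mapper names
# Watch avgqu-sz (average queue length) and await (average I/O latency in ms)
iostat -xNm 5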

Another benefit of using multiple host devices is that internally the storage array can use more parallelism when operations such as
FAST data movement or local and remote replications take place. By performing more copy operations simultaneously, the overall
operation takes less time.
While there is no one magic number for the size and number of host devices, we recommend finding a reasonably low number that offers enough concurrency, provides an adequate building block for capacity when additional storage is needed, and doesn't become too large to manage.

VIRTUAL PROVISIONING AND THIN DEVICES CONSIDERATIONS


All VMAX3 host devices are virtually provisioned (also known as thin provisioning), meaning they are merely a set of pointers to capacity allocated at 128KB extent granularity in the storage data pools. However, to the host they look and respond just like normal LUNs. Using pointers allows FAST to move the application data between the VMAX3 data pools without affecting the host. It also allows better capacity efficiency for TimeFinder snapshots by sharing extents when data doesn't change between snapshots.
Virtual Provisioning offers a choice of whether to fully allocate the host device capacity or to allow allocation on demand. A fully allocated device consumes all its capacity in the data pool on creation, and therefore there is no risk that future writes may fail if the SRP has no capacity left⁸. On the other hand, allocation on demand allows over-provisioning, meaning that although the storage devices are created and presented to the host with their full capacity, actual capacity is only allocated in the data pools when host writes occur. This is a common cost-saving practice.
Allocation on demand is suitable in situations when:
The application's capacity growth rate is unknown, and

The user prefers not to commit large amounts of storage ahead of time, as it may never get used, and
The user prefers not to disrupt host operations at a later time by adding more devices.
Therefore, if allocation on demand is leveraged, capacity will only be physically assigned as it is needed to meet application
requirements.

Note: Allocation on-demand works very well with Oracle ASM in general, as ASM tends to write over deleted areas in the ASM disk
group and re-use the space efficiently. When ASM Filter Driver is used, deleted capacity can be easily reclaimed in the SRP. This is
done by adding a Thin attribute to the ASM disk group, and performing a manual ASM rebalance.
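A minimal sketch of that reclamation flow follows, assuming Oracle 12c with ASM Filter Driver; the disk group name DATA_DG and the rebalance power value are illustrative only.

SQL> ALTER DISKGROUP DATA_DG SET ATTRIBUTE 'thin_provisioned' = 'TRUE';
SQL> ALTER DISKGROUP DATA_DG REBALANCE POWER 8;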

⁸ FAST allocates capacity in the appropriate data pools based on the workload and SLO. However, when a data pool is full, FAST may use other pools in the SRP to prevent host I/O failure.

Since Oracle pre-allocates capacity in the storage when database files are created, when allocation on-demand is used, it is best to
deploy a strategy where database capacity is grown over time based on actual need. For example, if ASM was provisioned with a thin
device of 2TB, rather than immediately creating data files of 2TB and consuming all its space, the DBA should create data files that
consume only the capacity necessary for the next few months, adding more data files at a later time, or increasing their size, based
on need.
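One possible way to implement such gradual growth is with autoextensible data files; the tablespace name and sizes in the sketch below are illustrative only, under the assumption of a bigfile tablespace stored in the +DATA disk group.

SQL> CREATE BIGFILE TABLESPACE sales_data DATAFILE '+DATA' SIZE 200G
     AUTOEXTEND ON NEXT 32G MAXSIZE 1T;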

PARTITION ALIGNMENT CONSIDERATIONS FOR X86-BASED PLATFORMS


ASM requires at least one partition on each host LUN. Some operating systems (such as Solaris) also require at least one partition for user data. Due to a legacy BIOS architecture, by default, x86-based operating systems tend to create partitions with an offset of 63 blocks, or 63 x 512 bytes = 31.5KB. This offset is not aligned with the VMAX track boundary (128KB for VMAX3). As a result, I/Os crossing track boundaries may be split into two operations, causing unnecessary overhead and potential performance problems.

Note: It is strongly recommended to align host partitions of VMAX devices to an offset such as 1MB (2048 blocks). Use the Linux parted command, or the expert mode of the fdisk command, to move the partition offset.

The following example illustrates use of the parted Linux command with dm-multipath or PowerPath:
# DM-Multipath:
for i in {1..32}
do
parted -s /dev/mapper/ora_data$i mklabel msdos
parted -s /dev/mapper/ora_data$i mkpart primary 2048s 100%
done

# PowerPath
for i in ct cu cv cw cx cy cz da db dc dd de
do
parted -s /dev/emcpower$i mklabel msdos
parted -s /dev/emcpower$i mkpart primary 2048s 100%
done
The following example illustrates use of the fdisk command:
[root@dsib0063 scripts]# fdisk /dev/mapper/ora_data1
...
Command (m for help): n (create a new partition)
Command action
e extended
p primary partition (1-4)
p (this will be a primary partition)
Partition number (1-4): 1 (create the first partition)
First cylinder (1-13054, default 1):[ENTER] (use default)
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):[ENTER] (use full LUN capacity)
Using default value 13054

Command (m for help): x (change to expert command mode)

Expert command (m for help): p (print partition table)

Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, 13054 cylinders

Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID


1 00 1 1 0 254 63 1023 63 209712447 83
2 00 0 0 0 0 0 0 0 0 00
3 00 0 0 0 0 0 0 0 0 00
4 00 0 0 0 0 0 0 0 0 00

Expert command (m for help): b (move partition offset)


Partition number (1-4): 1 (move partition 1 offset)
New beginning of data (1-209712509, default 63): 2048 (offset of 1MB)

Expert command (m for help): p (print partition table again)

Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, 13054 cylinders

Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID

1 00 1 1 0 254 63 1023 2048 209710462 83
2 00 0 0 0 0 0 0 0 0 00
3 00 0 0 0 0 0 0 0 0 00
4 00 0 0 0 0 0 0 0 0 00

Expert command (m for help): w (write updated partition table)
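Either way, the resulting alignment can be verified before handing the device to ASM; a minimal check, using the illustrative dm-multipath alias from the examples above:

# Print the partition table in sectors; the partition should start at sector 2048
parted -s /dev/mapper/ora_data1 unit s print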

ASM AND DATABASE STRIPING CONSIDERATIONS


Host striping occurs when a host allocates capacity to a file and the storage allocations don't all take place as a contiguous allocation on a single host device. Instead, the file's storage allocation is spread (striped) across multiple host devices to provide more concurrency, even though to anyone reading or writing the file it appears contiguous.
When the Oracle database issues reads and writes randomly across the data files, striping is not of great importance, since the access pattern is random anyway. However, when a file is read or written sequentially, striping can be of great benefit, as it spreads the workload across multiple storage devices, creating more parallelism of execution and, often, higher performance. Without striping, the workload is directed to a single host device, with limited ability for parallelism.
Oracle ASM natively stripes its content across the ASM members (storage devices). ASM uses two types of striping: the first, which is
the default for most Oracle data types, is called Coarse striping; it allocates capacity across ASM disk group⁹ members in a round-robin fashion, with a 1MB default allocation unit (AU), or stripe depth. The ASM AU can be sized from 1MB (default) up to 64MB. The second type
of ASM striping is called Fine-Grain striping, and is used by default only for the control files. Fine-Grain striping divides the ASM
members into groups of 8, allocates an AU on each, and stripes the newly created data at 128KB across the 8 members, until the AU
on each of the members is full. Then it selects another 8 members and repeats the process until all user data is written. This process usually takes place during Oracle file initialization, when the DBA creates data files, tablespaces, or a database. The type of striping
for each Oracle data type is kept in ASM templates which are associated with the ASM disk groups. Existing ASM extents are not
affected by template changes and therefore it is best to set the ASM templates correctly as soon as the ASM disk group is created.
To inspect the ASM templates execute the following command:

SQL> select name, stripe from V$ASM_TEMPLATE;

Typically, ASM default behavior is adequate for most workloads. However, when an Oracle database expects a high update rate, which generates a lot of redo, EMC recommends setting the redo log ASM template¹⁰ to fine-grain instead of coarse to create better concurrency.
concurrency.
To change the database redo logs template, execute the following command on the ASM disk group holding the logs:

SQL> ALTER DISKGROUP <REDO_DG> ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);

Oracle databases may also be created with a focus on sequential reads or writes, such as for analytics applications, decision support, and data warehouses. In such cases, EMC recommends setting the data file template to fine-grain and increasing the allocation unit from the 1MB default to 4MB or 8MB.
The following example shows how to change the AU size of a disk group during creation:

SQL> CREATE DISKGROUP <DSS_DG> EXTERNAL REDUNDANCY DISK


'AFD:ORA_DEV1' SIZE 10G ,

'AFD:ORA_DEV2' SIZE 10G


ATTRIBUTE 'compatible.asm'='12.1.0.2.0','au_size'='8M';

The following example shows how to change the stripe type of the DSS_DG disk group to fine-grain:

SQL> ALTER DISKGROUP <DSS_DG> ALTER TEMPLATE datafile ATTRIBUTES (FINE);

In a similar fashion, the tempfile template can also be modified to use fine-grain striping for applications that generate a lot of temp activity.
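After a disk group is created or its templates are altered, the allocation unit size and the per-template striping can be confirmed from the ASM instance; a short query sketch (the template names listed are the defaults discussed above):

SQL> SELECT dg.name AS diskgroup,
            dg.allocation_unit_size/1048576 AS au_mb,
            t.name AS template,
            t.stripe
     FROM v$asm_diskgroup dg
     JOIN v$asm_template t ON t.group_number = dg.group_number
     WHERE t.name IN ('DATAFILE', 'ONLINELOG', 'TEMPFILE');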

⁹ EMC recommends no ASM mirroring (i.e., external redundancy), which creates a single ASM failure group. However, when ASM mirroring is used, or similarly, when multiple ASM failure groups are manually created, striping occurs within a failure group rather than at the disk group level.
¹⁰ Since each ASM disk group has its own template settings, modifications such as the redo log template change should only take place in the appropriate disk group where the logs reside.

ORACLE DATA TYPES AND THE CHOICE OF SLO
The following sections describe considerations for various Oracle data types and selection of SLOs to achieve desired performance.
Planning SLO for Oracle databases
VMAX3 storage arrays can support many enterprise applications, together with all their replication needs and auxiliary systems (such
as test, development, reporting, patch-testing, and others). With FAST and Service Level Objective (SLO) management, it is easy to provide the right amount of resources to each such environment and to modify the allocation as business priorities or performance needs change over time. This section discusses some of the considerations regarding different Oracle data types and SLO assignment for them.
When choosing SLO for the Oracle database, consider the following:
While FAST operates at a sub-LUN granularity to satisfy SLO and workload demands, the SLO is set at a storage group granularity (a group of devices). It is therefore important to match the storage group to sets of devices of equal application and business priority (for example, a storage group can contain one or more ASM disk groups, but a single ASM disk group should never be divided across multiple storage groups with more than a single SLO, since Oracle stripes the data across it).
Consider that with VMAX3 all writes go to the cache, which is persistent, and are destaged to the back-end storage later. Therefore, unless other factors are in play (such as synchronous remote replication, long I/O queues, or a system that is over-utilized), write latency should always be very low (cache hit), regardless of the SLO or disk technology storing the data. On a well-balanced system, the SLO's primary effect is on read latency.

In general, EMC recommends for mission critical databases to separate the following data types to distinct set of devices (using an
ASM example):
+GRID (when RAC is configured): when RAC is installed, it keeps the cluster configuration file and quorum devices inside the initial ASM disk group. When RAC is used, EMC recommends that only this disk group use Normal or High ASM redundancy (double or triple ASM mirroring). The reason is that it is very small in size, so mirroring hardly makes a difference; however, it tells Oracle to create more quorum devices. All other disk groups should normally use External redundancy, leveraging capacity savings and VMAX3 RAID protection.

Note: Don't mix database data with +GRID if storage replication is used, as cluster information is unique to its location. If a replica is to be mounted on another host, a different +GRID disk group can be pre-created there with the correct cluster information for that location.

+DATA: minimum of one disk group for data and control files. Large databases may use more disk groups for data files,
based on business needs, retention policy, etc. Each such disk group can have its own SLO, using a VMAX3 storage group or
a cascaded storage group to set it.
+REDO: online redo logs. A single ASM disk group, or sometimes two (when logs are multiplexed). It is recommended to separate data from logs for performance reasons, but also so that when TimeFinder is used for backup/recovery, a restore of the data file devices does not overwrite the redo logs.
+TEMP (optional): typically temp files can reside with data files; however, when TEMP is very active and very large, the DBA may decide to separate it into its own ASM disk group and thus allow a different SLO and performance management. The DBA may also decide to separate TEMP onto its own devices when storage replication is used, since temp files don't need to be replicated (they can easily be recreated if needed), saving bandwidth for remote replication.
+FRA: typically for archive and/or flashback logs. If flashback logs consume a lot of space, the DBA may decide to separate
archive from flashback logs.

The following section will address SLO considerations for these data types.
SLO considerations for Oracle data files
A key part of performance planning for the Oracle database is understanding the business priority of the application it serves, and
with large databases it can also be important to understand the structure of the schemas, tablespaces, partitions, and the associated
data files. A default SLO can be used for the whole database for simplicity, but when more control over database performance is necessary, a distinct SLO should be used, together with a workload type.
The choice of workload type is rather simple: for databases focused on sequential reads/writes, the DSS type should be used. For databases that either focus on transactional applications (OLTP) or run mixed workloads, such as transactional and reporting combined, the OLTP type should be used. If storage remote replication (SRDF) is used, select the replication variant of the workload type (OLTP_Rep or DSS_Rep).
When to use Diamond SLO: Diamond SLO is only available when EFDs are available in the SRP. It tells FAST to move all the
allocated storage extents in that storage group to EFDs, regardless of the I/O activity to them. Diamond provides the best read I/O
latency, as flash technology is best for random reads. Diamond is also popular for mission-critical databases servicing many users, where the system is always busy, or where groups of users start their workloads intermittently and expect high performance with low latency. With the whole storage group on EFDs, users receive the best performance regardless of when they become active.
When to use Bronze SLO: Bronze SLO doesn't allow the storage group to leverage EFDs, regardless of the I/O activity. It is a good choice for databases that don't require stringent performance and should let more critical applications utilize capacity on EFDs. For example, databases can use Bronze SLO when their focus is development, test, or reporting. Another use for Bronze SLO is for gold copies of the database.
When to use Optimized SLO: Optimized SLO is a good default when FAST should make the best decisions based on actual
workload and for the storage array as a whole. Because Optimized SLO uses the widest range of allowed I/O latencies, FAST will
attempt to give the active extents in the storage group the best performance (including EFDs if possible). However, if there are
competing workloads with explicit SLO, they may get priority for the faster storage tiers, based on the smaller latency range other
SLOs have.
When to use Silver, Gold, or Platinum SLO: as explained earlier, each SLO provides a range of allowed I/O latency that FAST will
work to maintain. Provide the SLO that best fits the application based on business and performance needs. Refer to Table 1 and
Figure 3 to determine the desirable SLO.
SLO considerations for log files
An active redo log file exhibits sequential write I/O by the log writer, and once the log is switched, typically an archiver process will initiate sequential read I/O from that file. Since all writes in VMAX3 go to cache, the SLO has limited effect on log performance. Archiver reads are not latency critical, so there is no need to dedicate high-performance storage to the archiver.
Considering this, Oracle logs can use any SLO, since they are write-latency critical and write latency depends only on the VMAX3 cache, not on the back-end storage technology. Therefore, Oracle log files can normally use the Optimized (default) SLO or the same SLO as is used for the data files. In special cases, where the DBA wants the logs on the best storage tiers, Platinum or Diamond can be used instead.
SLO considerations for TEMP and ARCHIVE Logs
Temp files have a sequential read and write I/O profile, and archive logs have a sequential write I/O profile. In both cases any SLO will suffice; low-latency SLOs (such as Diamond or Platinum) should likely be reserved for other Oracle file types that focus on smaller I/Os and are more random-read in nature. Unless there are specific performance needs for these file types, the Optimized SLO can be used for simplicity.
SLO considerations for INDEXES
Index access is often performed in memory, and indexes are often mixed with the data files and share their SLO. However, when indexes are large, they may incur a lot of storage I/Os. In that case it may be useful to separate them onto their own LUNs (or ASM disk group) and use a low-latency SLO (such as Gold, Platinum, or even Diamond), as index access is typically random and latency critical.

SLO considerations for All-Flash workloads
When a workload requires predictable low-latency, high-IOPS performance, or when many users with intermittent workload peaks share a consolidated environment and each requires high performance during their respective activity time, an all-flash configuration is suitable. All-flash deployment is also suitable when data center power and floor space are limited and a high-performance consolidated environment is desirable.

Note: Unlike All-Flash appliances, VMAX3 offers a choice of a single EFD tier, or multiple tiers. Since most databases require
additional capacity for replicas, test/dev environments and other copies of the production data, consider a hybrid array for these
replicas, and simply assign the production data to the Diamond SLO.

SLO considerations for noisy neighbor and competing workloads


In highly consolidated environments, many databases and applications compete for storage resources. FAST can provide each with
the appropriate performance when specific SLO and workload types are specified. By using different SLOs for each such application
(or group of applications), it is easy to manage such a consolidated environment, and modify the SLOs when business requirements
change. Refer to the next section for additional ways of controlling performance in a consolidated environment.

HOST I/O LIMITS AND MULTI-TENANCY


The Host I/O Limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays, and it continues to offer VMAX3 customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that group. Assigning a specific host I/O limit for IOPS, for example, to a storage group with low performance requirements can ensure that a spike in I/O demand does not saturate its storage, cause FAST to inadvertently migrate extents to higher tiers, or overload the storage, affecting the performance of more critical applications. Placing a specific IOPS limit on a storage group limits the total IOPS for the storage group, but it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with Gold SLO may have data in both EFD and HDD tiers to satisfy its I/O latency goals, yet be limited to the IOPS allowed by the host I/O limit.

USING CASCADED STORAGE GROUPS


VMAX3 offers cascaded Storage Groups (SGs) wherein multiple child storage groups can be associated with a single parent storage
group for ease of manageability and for storage provisioning. This provides flexibility by associating different SLOs to individual child
storage groups to manage service levels for various application objects, and using the cascaded storage groups for storage
provisioning.
Figure 4 shows an Oracle server using a cascaded storage group. The Oracle +DATA ASM disk group is set to use the Gold SLO, whereas the +REDO ASM disk group is set to use the Silver SLO. Both storage groups are children of a cascaded storage group, Oracle_DB_SG, which can be used to provision all the database devices to the host, or to multiple hosts in the case of a cluster.

Figure 4 Cascaded storage group
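A compact Solutions Enabler sketch of the layout in Figure 4 follows; the device ranges and group names are illustrative, and the complete provisioning flow is shown in the next section.

<Create one child storage group per ASM disk group>
# symaccess -sid 115 create -name DATA_SG -type storage devs 06F:072
# symaccess -sid 115 create -name REDO_SG -type storage devs 073:075

<Combine the children under the parent storage group used for provisioning>
# symaccess -sid 115 create -name Oracle_DB_SG -type storage sg DATA_SG,REDO_SG

<Assign a different SLO and workload type to each child>
# symsg -sid 115 -sg DATA_SG set -slo Gold -wl OLTP
# symsg -sid 115 -sg REDO_SG set -slo Silver -wl OLTP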

ORACLE DATABASE PROVISIONING
STORAGE PROVISIONING TASKS WITH VMAX3
Since VMAX3 comes pre-configured with data pools and a Storage Resource Pool (SRP), what is left to do is to create the host
devices, and make them visible to the hosts by an operation called device masking.

Note: Remember that zoning at the switch establishes the physical connectivity, which device masking then defines more narrowly. Zoning needs to be set up ahead of time between the host initiators and the storage ports that will be used for the device masking tasks.

Device creation is an easy task and can be performed in a number of ways:


1) Using Unisphere for VMAX3 UI
2) Using Solutions Enabler CLI

3) Using the Oracle Enterprise Manager 12c Cloud Control DBaaS plugin for VMAX3¹¹.
Device masking is also an easy task and includes the following steps:
1) Creation of an Initiator Group (IG). Initiator group is the list of host HBA port WWNs to which the devices will be visible.
2) Creation of a Storage Group (SG). Since storage groups are used for both FAST SLO management and storage provisioning,
review the discussion on cascaded storage groups earlier.
3) Creation of a Port Group (PG). Port group is the group of VMAX3 front-end ports where the host devices will be mapped and
visible.
4) Creation of a Masking View (MV). Masking view brings together a combination of SG, PG, and IG.
Device masking helps in controlling access to storage. For example, storage ports can be shared across many servers, but only the masking view determines which of the servers will have access to the appropriate devices and storage ports.

¹¹ At the time this paper was written, the Oracle EM plugins for VMAX3 provisioning and cloning were planned but not yet available. Check the Oracle webpage for available EM 12c Cloud Control plugins or contact EMC.

PROVISIONING ORACLE DATABASE STORAGE WITH UNISPHERE
This section covers storage provisioning for Oracle databases using Unisphere for VMAX.
Creation of a host Initiator Group (IG)
Provisioning storage requires creation of host initiator groups by specifying the host HBA WWN ports. To create a host IG, select the appropriate VMAX storage array, then select the Hosts tab and choose from the list of initiator WWNs, as shown in Figure 5.

Figure 5 Create Initiator Group

Creation of Storage Group (SG)
A storage group defines a group of one or more host devices. Using the SG creation screen, a storage group name is specified, and new storage devices can be created and placed into the storage group together with their initial SLO. If more than one group of devices is requested, each group creates a child SG and can take its own unique SLO. The storage group creation screen is shown in Figure 6.

Figure 6 Create Storage Group

Select host(s)
In this step the hosts to which the new storage will be provisioned are selected. This is done by selecting an IG (host HBA ports), as
shown in Figure 7.

Figure 7 Create Initiator Group

Creation of Port Group (PG)
A Port Group defines which of the VMAX front-end ports will map and mask the new devices. A new port group can be created, or an
existing one selected, as seen in Figure 8.

Figure 8 Create Port Group

Creation of Masking View (MV)
At this point Unisphere has all that is needed to create a masking view. As outlined in Figure 9 the Storage Group, Initiator Group,
and Port Group are presented, and a masking view name is entered. VMAX automatically maps and masks the devices in the Storage
Group to the Oracle servers.

Figure 9 Create Masking View

PROVISIONING ORACLE DATABASE STORAGE WITH SOLUTIONS ENABLER CLI
The following is a provisioning example using VMAX3 Solutions Enabler CLI to create storage devices and mask them to the host.
Create devices for ASM disk group
Create 4 x 1TB thin devices for the Oracle ASM DATA disk group. The output of the command includes the new device IDs. The full capacity of the devices can be pre-allocated as shown below.

Note: If preallocate size=ALL is omitted, capacity for the new devices won't be pre-allocated in the data pools and the devices will be thin. See also the Virtual Provisioning and Thin Devices Considerations section earlier.

# symconfigure -sid 115 -cmd "create dev count=4, size=1024 GB, preallocate size=ALL, emulation=FBA, config=tdev;" commit
...
New symdevs: 06F:072

Mapping and masking devices to host

<Create child storage groups for the ASM disk groups>
# symaccess -sid 115 create -name DATA_SG -type storage devs 06F:072
# symaccess -sid 115 create -name REDO_SG -type storage devs 073:075

<Create the parent storage group and add the child SGs>
# symaccess -sid 115 create -name ORA1_SG -type storage sg DATA_SG,REDO_SG

<Create the host initiator group using a text file containing the WWNs of the HBA ports>
# symaccess -sid 115 create -name ORA1_IG -type initiator -file wwn.txt

<Create the port group specifying the VMAX3 FA ports>
# symaccess -sid 115 create -name ORA1_PG -type port -dirport 1E:4,2E:4,1E:8,2E:8

<Create the masking view to complete the mapping and masking>
# symaccess -sid 115 create view -name ORA1_MV -sg ORA1_SG -pg ORA1_PG -ig ORA1_IG
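Once the masking view exists, it can be reviewed to confirm which devices, initiators, and ports it ties together. A short verification sketch follows; the output is abbreviated and version dependent.

<Display the new masking view>
# symaccess -sid 115 show view ORA1_MV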

ORACLE SLO MANAGEMENT TEST USE CASES


TEST CONFIGURATION
This section covers examples of using Oracle databases with SLO management.
Test overview
The test cases covered are:
Single database performance using different SLOs for the Oracle data files.

All Flash Array (AFA) configuration with both Oracle +DATA and +LOG on EFDs

Databases configuration details
The following tables describe the test environment for the use cases. Table 2 shows the VMAX3 storage environment, Table 3 shows the host environment, and Table 4 shows the database storage configuration.

Table 2 Test storage environment

Configuration aspect           Description

Storage array                  VMAX 400K with 2 engines
HYPERMAX OS                    5977.496
Drive mix (including spares)   64 x EFD, RAID5 (3+1)
                               246 x 10K RPM HDD, RAID1
                               102 x 1TB 7K RPM HDD, RAID6 (6+2)

Table 3 Test host environment

Configuration aspect   Description

Oracle                 Oracle Grid and Database release 12.1
Linux                  Oracle Enterprise Linux 6
Multipathing           Linux DM-Multipath
Hosts                  2 x Cisco C240, 96 GB memory
Volume Manager         Oracle ASM

Table 4 Test database configuration

Database        Thin devices (LUNs) / ASM DG assignment   SRP       Starting SLO

Name: FINDB     +DATA: 4 x 1TB thin LUNs                  Default   Bronze
Size: 1.5TB     +REDO: 4 x 100GB thin LUNs                Default   Bronze

TEST OVERVIEW
General test notes:
The FINDB database was configured to run an industry-standard OLTP workload with a 70/30 read/write ratio and 8KB block size, using Oracle Database 12c and ASM. No special database tuning was done, as the focus of the test was not on achieving maximum performance but rather on the comparative differences of a standard database workload.
DATA and REDO storage groups (and ASM disk groups) were cascaded into a parent storage group for ease of provisioning
and performance management.
Data collection included storage performance metrics using Solutions Enabler and Unisphere, host performance metrics
using iostat, and database performance metrics using Oracle AWR.

High level test cases:


1. In the first use case of a single database workload, both DATA and REDO devices were set to Bronze SLO prior to the run so that no data extents remained on EFDs in the storage array, creating a baseline. During the test, only the DATA SLO was changed, from Bronze up to Platinum. The REDO storage group was left on Bronze since, as explained earlier, the log workload is focused on writes, which are always handled by the VMAX3 cache, and is therefore affected to a lesser degree by the SLO latency.
2. In the second use case, both DATA and REDO devices were set to the Diamond SLO. The outcome is that, regardless of their workload, all their extents migrated to EFD, demonstrating all-flash performance. Using Diamond can easily be done in a hybrid array configuration such as this, though VMAX3 can also be purchased in an all-flash configuration, which eliminates the need to use SLOs and provides full EFD performance across all data.

TEST CASE 1 - SINGLE DATABASE RUN WITH GRADUAL CHANGE OF SLO


Test Scenario:
Testing was started with all storage groups using a Bronze SLO in order to create a baseline. The DATA storage group was then configured with progressively faster SLOs (Silver, then Gold, then Platinum) and performance statistics were gathered to analyze the effect of the SLO changes on database transaction rates. During all tests the REDO storage group remained on the Bronze SLO.
Objectives:
The purpose of this test case is to understand how database performance can be controlled by changing the DATA SLO.
Test execution steps:
1. Run an OLTP workload on FINDB with DATA and REDO storage groups on Bronze SLO.
2. Gradually apply Silver, Gold and Platinum SLOs to DATA storage group and gather performance statistics.
Test Results:
Table 5 shows the test results of use case 1, including the database average transaction rate (AVG TPM), the Oracle AWR average random read response time (db file sequential read), and the storage front-end response time.

Table 5 Use case 1 results

Database   DATA SLO   REDO SLO   AVG TPM   AWR db file seq read (ms)   FA avg response time (ms)
FINDB      Bronze     Bronze     26,377    10                          7.39
           Silver     Bronze     32,482    8                           6.09
           Gold       Bronze     72,300    4                           2.99
           Platinum   Bronze     146,162   2                           1.49

Figure 10 shows how the database average transaction rate (TPM) changed as a direct effect of the changes in DATA SLO. The overall change between the Bronze and Platinum SLOs was a 5x improvement in transaction rate.

VMAX3 promoted active data extents to increase performance as the SLO changed from Bronze to Silver, Gold, and Platinum. Not only did the transaction rate increase, I/O latencies were also reduced as more extents were allocated on EFD. With the Bronze SLO, the Oracle database experienced a latency of 10 ms, which improved to 2 ms with the Platinum SLO. The corresponding transaction rate jumped from 26,377 TPM on Bronze to almost 146,000 TPM on Platinum, a 5x improvement in transaction rate.

[Chart: SLO Controlled TPM, OLTP Workload - 5x increase in TPM from Bronze to Platinum]
Figure 10 Use case 1 TPM changes

TEST CASE 2 - ALL-FLASH CONFIGURATION WITH ORACLE DATA AND REDO
Test Scenario:
This test used an all-flash configuration by placing both the DATA and REDO storage groups on the Diamond SLO (EFD only). A similar configuration can be achieved without SLO management by purchasing an all-flash VMAX3.
Objectives:
Provide an all-flash configuration for low latency and high performance. Customers can accomplish this by either purchasing an all-flash VMAX3 or using the Diamond SLO in a hybrid VMAX3 array. This test case used the Diamond SLO for the DATA and REDO storage groups.
Test execution steps:
1. Set the DATA and REDO storage groups SLO to Diamond.
2. Run the OLTP workload and gather performance statistics.
Results
Table 6 shows the test results of use case 2, including the database average transaction rate (AVG TPM), the Oracle AWR average random read response time (db file sequential read), and the storage front-end response time.

Table 6 Test case 2 results

Database   DATA SLO   REDO SLO   AVG TPM   AWR db file seq read (ms)   Average FA latency (ms)
FINDB      Diamond    Diamond    183,451   1                           0.8

Figure 11 shows the database average transaction rate (TPM) while using Diamond SLO for both DATA and REDO (equivalent to an
All Flash configuration).

The Diamond SLO provided predictable high-performance and high transaction rate at a low latency for the OLTP workload.

[Chart: Diamond SLO Controlled TPM, Oracle DATA and REDO Logs, OLTP Workload]
Figure 11 All Flash Configuration Transaction rate

CONCLUSION
VMAX3 provides a platform for Oracle databases that is easy to provision, manage, and operate with the application performance
needs in mind. The purpose of this paper was to describe some of the changes in the platform, how they relate to Oracle database
deployments, and provide a few examples as to how SLO management helps with performance management.

REFERENCES
EMC VMAX3 Family with HYPERMAX OS Product Guide
Unisphere for VMAX Documentation set
EMC Unisphere for VMAX Database Storage Analyzer

EMC Storage Plug-in for Oracle Enterprise Manager 12c Product Guide

APPENDIX
SOLUTIONS ENABLER CLI COMMANDS FOR SLO MANAGEMENT AND MONITORING
List available SLOs in the array
The availability of SLOs depends on the available drive types in the VMAX3 SRP. The following command can be used to list the available SLOs in the array and their expected average latencies.

# symcfg -sid 115 list -slo


The output contains the list of SLOs available to choose from, along with approximate response times:

SERVICE LEVEL OBJECTIVES

  Symmetrix ID : 000197200115

              Approx
               Resp
               Time
  Name         (ms)
  ---------   ------
  Optimized     N/A
  Diamond       0.8
  Platinum      3.0
  Gold          5.0
  Silver        8.0
  Bronze       14.0
Change SLO for a storage group

Set the desired SLO and workload type for a child SG:
# symsg -sid 115 -sg DATA_SG set -slo Bronze -wl OLTP

Review thin device allocations in the data pools


The thin device allocations can be checked for a given storage group:
# symsg -sid 115 list -tdev -sg DATA_SG -detail
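To confirm the SLO and workload type currently assigned to a storage group, the storage group details can be displayed. A sketch follows; the exact subcommand and output fields can vary by Solutions Enabler version, so consult the Solutions Enabler documentation.

# symsg -sid 115 show DATA_SG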

