
OpenStack HPE 3PAR StoreServ Block Storage Driver
Configuration Best Practices
OpenStack Liberty update

Technical white paper


Contents
Revision history
Executive summary
Introduction
HPE 3PAR StoreServ Storage
Configuration
Volume types creation
Setting extra_specs or capabilities
    Extra_specs restrictions
Creating and setting qos_specs
    qos_specs restrictions
Multiple storage backend support and Block Storage configuration
iSCSI target port selection
iSCSI Multipath Support
Fibre Channel target port selection
Block Storage scheduler configuration with multi-backend
Block Storage scheduler configuration with driver filter and weigher
Volume types assignment
    Multiple backend requirements
Volume migration
Volume manage and unmanage
Consistency Groups
Volume retype
Security improvements
    CHAP support
    Configurable SSH Host Key Policy and Known Hosts File
Support for Containerized Stateful Services
Summary
Appendix
    Creating a goodness function
For more information


Revision history
REV  DATE         DESCRIPTION

1.0  15-Apr-2014  Update for OpenStack Icehouse release
                  Added QoS and Fibre Channel zoning.

2.0  16-Oct-2014  Update for OpenStack Juno release
                  Requires hp3parclient version 3.1.1 from the Python Package Index (PyPI).
                  The HPE 3PAR FC OpenStack driver supports Match Set VLUNs (requires the Fibre Channel Zone Manager) instead of Host Sets.
                  The Admin Horizon UI now supports adding extra-specs and qos-specs settings. The HPE 3PAR iSCSI OpenStack driver now supports CHAP authentication.
                  Configurable SSH Host Key Policy and known hosts file.
                  The default HPE 3PAR host persona was 1-Generic; it now defaults to 2-Generic-ALUA.
                  Support added for manage/unmanage volumes.
                  The <pool> is required for any <host>-based options on the command line; for the HPE 3PAR drivers this is just a repeat of the driver backend name.

2.1  02-Feb-2015  Updated host personas to enums that match the HPE 3PAR WSAPI values.

3.0  30-Apr-2015  Updated for the OpenStack Kilo release
                  The hp3par_cpg setting in cinder.conf can now contain multiple CPGs (pools).
                  The hp3par:cpg extra-spec is now ignored; if it is used, a warning is posted to the log.
                  Support added for Flash Cache; requires HPE 3PAR OS 3.2.1 MU2, Web Services API version 1.4.2, and hp3parclient version 3.2.0 from PyPI.
                  Support added for Thin Deduplication provisioned volumes; requires HPE 3PAR OS 3.2.1 MU1 and Web Services API version 1.4.1.
                  Block Storage scheduler configuration with driver filter and weigher.
                  Both the Cisco and Brocade Fibre Channel Zone Manager drivers have configuration changes.

4.0  16-Oct-2015  Updated for the OpenStack Liberty release
                  The driver's username credentials can have an edit role with the domain set to all; the drivers no longer require a super role.
                  Fixed the optional hp3par_snapshot_expiration and hp3par_snapshot_retention settings in cinder.conf.
                  Support added for configuring the over subscription ratio and reserved percentage for thin provisioned volumes.
                  Enabled support for Consistency Groups.
                  Configuration of iSCSI multipath.
                  Examples of goodness function equations were added to the Appendix.
                  Added support for the ClusterHQ Flocker open source technology.

Note
This document should be used for all OpenStack releases up to and including the Liberty release. A new document will be created for the
OpenStack Mitaka release, which will be the first release to contain the HPE re-branded Cinder driver.


Executive summary
HPE's commitment to the OpenStack community brings the power of OpenStack to the enterprise with new and enhanced offerings that enable
enterprises to increase agility, speed innovation, and lower costs.
Since the Grizzly release, HPE has been a top contributor to the advancement of the OpenStack project.1 HPE's contributions have focused on
continuous integration and quality assurance, which has supported the development of a reliable and scalable cloud platform that is equipped to
handle production workloads.
To support the need that many larger organizations and service providers have for enterprise-class storage, HPE has developed the HPE 3PAR
StoreServ Block Storage Drivers, which support the OpenStack technology across both iSCSI and Fibre Channel (FC) protocols. This provides
the flexibility and cost-effectiveness of a cloud-based open source platform to customers with mission-critical environments and high resiliency
requirements.
Figure 1 shows the high-level components of a basic cloud architecture.

Figure 1. OpenStack cloud architecture

Introduction
This document provides information about the new best-practices features in the OpenStack Liberty release. These include configuring and using
volume types, extra specs, quality of service (QoS) specs, consistency groups, over subscription, and multiple backend support with the
HPE 3PAR StoreServ Block Storage Drivers.
The HP3PARFCDriver and HP3PARISCSIDriver are based on the Block Storage (Cinder) plug-in architecture, shown in figure 2. The drivers
execute the volume operations by communicating with the HPE 3PAR Storage system over HTTP or HTTPS and secure shell (SSH) connections.
The connections communicate using the hp3parclient, which is available from PyPI.

1. Stackalytics.com, OpenStack Liberty Analysis, September 2015. stackalytics.com/?release=liberty&metric=commits&project_type=openstack


Figure 2. HPE 3PAR iSCSI and FC drivers for OpenStack Cinder

HPE 3PAR StoreServ Storage


HPE 3PAR StoreServ uses a single architecture, shown in figure 3, to deliver primary storage platforms for midrange, enterprise, and all-flash arrays.2
HPE 3PAR StoreServ Block Storage Drivers can work with all arrays in the entire HPE 3PAR StoreServ product family. HPE 3PAR StoreServ
Storage delivers key advantages for the OpenStack community:
High performance to meet peak demands
Non-disruptive scalability to easily support storage growth
Bulletproof storage to reduce downtime
Increased efficiency to help ensure no wasted storage
Effortless storage administration to lower operational costs and reduce time to value
HPE 3PAR has added two new features in the latest release: Adaptive Flash Cache (AFC) and Thin Deduplication provisioning. The HPE 3PAR
implementation of AFC uses flash (SSD) storage as a level-2 read cache on the HPE 3PAR StoreServ array.
The HPE 3PAR Thin Deduplication software delivers inline, block-level deduplication without trading off performance or capacity efficiency.
A built-in zero-detection mechanism drives efficient inline zero-block deduplication at the hardware layer.

2. HPE 3PAR StoreServ Storage: hp.com/go/3PAR


Figure 3. HPE 3PAR StoreServ Storage3

Configuration
The HPE 3PAR StoreServ Block Storage Drivers for iSCSI and Fibre Channel were introduced in the OpenStack Grizzly release. Since that release,
several configuration improvements have been made, including the following:
Icehouse
CPGs used by the HPE 3PAR StoreServ Block Storage Drivers are no longer required to belong to a domain. The hp3par_domain
configuration setting in the cinder.conf file has been removed.
Added support to the HPE 3PAR iSCSI OpenStack driver, which allows the selection of the best-fit target iSCSI port from a list of candidate ports.
Enhanced quality of service features now use qos_specs instead of extra_specs.
The Icehouse release requires hp3parclient version 3.0.0 from PyPI.
The HPE 3PAR FC OpenStack driver can now take advantage of the Fibre Channel Zone Manager feature in OpenStack that allows FC SAN
zone or access control management. See the OpenStack Configuration Reference Guide for details.
Juno
The Juno release requires hp3parclient version 3.1.1 from PyPI.
Added support to the HPE 3PAR Fibre Channel OpenStack driver for Match Set VLUNs (requires the Fibre Channel Zone Manager) instead
of Host Sets.
The Admin Horizon UI now supports adding extra-specs and qos-specs settings.
The HPE 3PAR iSCSI OpenStack driver now supports CHAP authentication.
Configurable SSH Host Key Policy and known hosts file.

3. HPE 3PAR StoreServ offering: hp.com/us/en/products/disk-storage/index.html?facet=3par-storage


Kilo
The Kilo release introduces support for pools. With Kilo or later, the hp3par_cpg setting in the cinder.conf file is used to define CPGs/pools.
The pool name is the CPG name. The hp3par_cpg setting can now contain a comma-separated list of CPGs. This allows the scheduler to
select a backend and a pool in its set of pools.
The extra-spec setting hp3par:cpg is ignored in Kilo. Instead, use the hp3par_cpg setting in the cinder.conf file to list the valid CPGs for a
backend. If volume types referred to different CPGs with different attributes, those should be converted to multiple backends with the CPGs
specified in the cinder.conf file.
Added support for Flash Cache, which can be enabled for a volume with the hp3par:flash_cache extra-spec setting.
Added support for Thin Deduplication volume provisioning, which can be used for provisioning a volume with the hp3par:provisioning
extra-spec setting.
Both the Cisco and Brocade Fibre Channel Zone Manager drivers, which allow FC SAN zoning or access control management, have
configuration changes. See the OpenStack Configuration Reference Guide for the latest configuration details.
The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may
apply to the volume migrate, retype, and manage commands.
Liberty
The cinder.conf hp3par_username and san_login credentials now require only the edit role with the Domain set to all. This also works
with prior OpenStack releases; a super role was previously required.
The cinder.conf setting max_over_subscription_ratio is used for over subscription of thin provisioned volumes, and
reserved_percentage prevents over provisioning on the HPE 3PAR.
Fixed the optional hp3par_snapshot_retention and hp3par_snapshot_expiration parameters when set in the cinder.conf file. They
were being sent to the backend as strings instead of integers.
Added HPE 3PAR multipath iSCSI support. In cinder.conf, add the hp3par_iscsi_ips property to the HPE 3PAR iSCSI backends that will be
utilizing multipath.

Volume types creation


Block Storage volume types are labels that can be selected at volume create time in OpenStack. These types can be created either in the
Admin Horizon UI or on the command line, as shown.
$cinder --os-username admin --os-tenant-name admin type-create <name>
The <name> is the name of the new volume type. This example illustrates how to create three new volume type names with the names gold,
silver, and bronze:
$cinder --os-username admin --os-tenant-name admin type-create gold
$cinder --os-username admin --os-tenant-name admin type-create silver
$cinder --os-username admin --os-tenant-name admin type-create bronze
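To verify the result, the newly created types can be listed in the same CLI style (output omitted here):
$cinder --os-username admin --os-tenant-name admin type-list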


Setting extra_specs or capabilities


After the volume type names have been created, you can assign extra_specs, qos_specs, or capabilities to these types. The filter scheduler
uses the extra_specs data to determine capabilities and the backend, and it enforces strict checking. Starting in the Icehouse release, any
QoS-related settings, with the exception of the virtual volume set (VVS), must be set in the qos_specs, described in the next section, Creating
and setting qos_specs.
The extra_specs and capabilities are set or unset per volume type, either in the Admin Horizon UI (new in Juno) or on the command line, as shown:
$cinder --os-username admin --os-tenant-name admin type-key <vtype> <action> [<key=value> [<key=value> ...]]
The argument <vtype> is the name or ID of the previously created volume type (e.g., gold, silver, and bronze). The argument <action> must
be one of the actions: set or unset. The optional argument <key=value> is the extra_specs to set. Only the key is necessary for unset.
Any or all of the following capabilities can be set on a volume type. They override the default values specified in the cinder.conf file or are
additional capabilities that the HPE 3PAR StoreServ Storage array offers. See the Extra_specs restrictions section, which provides
constraints on when the VVS and QoS settings are set for a single volume type.
volume_backend_name: Assigns a volume type to a particular Block Storage Driver. Set the volume_backend_name key to match the
value specified in the cinder.conf file for that Block Storage Driver.
The hp3par: scoping is required for all the HPE 3PAR specific keys. The current list of supported HPE 3PAR keys includes:
hp3par:flash_cache: Valid values are true and false. Added in the Kilo release.
hp3par:snap_cpg: Overrides the hp3par_cpg_snap setting. Defaults to the hp3par_cpg_snap setting in the cinder.conf file. If
hp3par_cpg_snap is not set, it defaults to the hp3par_cpg setting.
hp3par:provisioning: Defaults to thin provisioning. Valid values are thin, dedup, and full. In Kilo and later, dedup was added as a
provisioning type for thin deduplication provisioned volumes.
hp3par:persona: Defaults to the 2-Generic-ALUA persona. The valid values are: 1-Generic, 2-Generic-ALUA, 3-Generic-legacy,
4-HPEUX-legacy, 5-AIX-legacy, 6-EGENERA, 7-ONTAP-legacy, 8-VMware, 9-OpenVMS, 10-HPEUX, and 11-WindowsServer.
Before the Juno release the default was 1-Generic; it now defaults to 2-Generic-ALUA.

Note
The HPE 3PAR WSAPI requires these personas. The numerical values are different from what is displayed in the HPE 3PAR Management console
and the HPE 3PAR CLI.

Prior to Kilo, the CPG could be set using hp3par:cpg, as described in the following bullet. In Kilo and later, CPGs should be controlled by
configuring separate backends with pools.
(Obsolete) hp3par:cpg: Overrides the hp3par_cpg setting. Defaults to the hp3par_cpg setting in the cinder.conf file.
To use VVS settings, the HPE 3PAR StoreServ Storage array must have an HPE 3PAR Priority Optimization license installed.
hp3par:vvs: The virtual volume set name that has been set up by the administrator and that has predefined QoS rules associated with
it. If you specify the extra_spec hp3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.


Set examples:
$cinder type-key gold set hp3par:snap_cpg=SNAPCPG volume_backend_name=3par_FC
$cinder type-key silver set hp3par:provisioning=full volume_backend_name=3par_ISCSI
$cinder type-key bronze set hp3par:vvs=myvvs volume_backend_name=iscsi
Unset examples:
$cinder type-key gold unset hp3par:snap_cpg
Use the following command to list all the volume types and extra_specs currently configured:
$cinder --os-username admin --os-tenant-name admin extra-specs-list

Extra_specs restrictions
Certain constraints apply when using one or more of the extra_specs documented above.
If hp3par:snap_cpg is set per volume type, it must be in the same virtual domain as the backend's CPGs on the HPE 3PAR StoreServ
Storage array.
The hp3par:persona is set on a per-volume basis, but is not actually used until that volume is attached to an instance and an HPE 3PAR
host is created. In this case, the persona of the first volume attached to the host is used. Additional volumes that have a different persona will
still be attached, but their persona is ignored; they use the persona of the first attached volume.
Errors occur if you attempt to use vvs or the qos setting without the Priority Optimization license installed on the HPE 3PAR StoreServ
Storage array.
If you specify the hp3par:vvs virtual volume set as an extra_spec along with one or more of the qos settings (via qos_specs), the qos
settings will be ignored and the volume will be created in the specified VVS.
Volumes that have been cloned support only the extra-spec keys hp3par:snap_cpg, hp3par:provisioning, and hp3par:vvs; the
others are ignored. In addition, the comments section of the cloned volume in the HPE 3PAR StoreServ Storage array will not be populated.
If you specify hp3par:flash_cache, the HPE 3PAR StoreServ Storage array must meet the following requirements (an example type-key
setting follows this list):
Firmware version HPE 3PAR OS 3.2.1 MU2 and Web Services API version 1.4.2
Adaptive Flash Cache license installed
Available SSD disks
The assigned CPG for a Flash Cache volume must be set to a device type of SSD
Flash Cache must be enabled on the HPE 3PAR StoreServ Storage array. This is done with the CLI command createflashcache <size>
(size must be in 16 GB increments). For example, createflashcache 128g will create 128 GB of Flash Cache for each node pair in the array.
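For example (a sketch reusing the gold volume type from the earlier examples), Flash Cache can be requested for a volume type with:
$cinder type-key gold set hp3par:flash_cache=true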
If you specify dedup as the hp3par:provisioning value, the HPE 3PAR StoreServ Storage array must meet the following requirements:
Firmware version HPE 3PAR OS 3.2.1 MU1 and Web Services API version 1.4.1
Thin Deduplication license installed
Available SSD disks
The assigned CPG for a Thin Deduplication volume must be set to a device type of SSD


Creating and setting qos_specs


The qos_specs need to be created and associated with a volume type. To use these QoS settings, the HPE 3PAR StoreServ Storage array must
have a Priority Optimization license installed. The HPE 3PAR qos_specs that can be specified as of the Icehouse release do not require scoping.
minIOPS: Sets the QoS I/O issue count minimum goal. If not specified, there is no minimum goal.
maxIOPS: Sets the QoS I/O issue count rate limit. If not specified, there is no limit on I/O issue count.
minBWS: Sets the QoS I/O issue bandwidth minimum goal. If not specified, there is no minimum goal.
maxBWS: Sets the QoS I/O issue bandwidth rate limit. If not specified, there is no limit on I/O issue bandwidth rate.
latency: Sets the latency goal in milliseconds.
priority: Sets the priority of the QoS rule over other rules. Defaults to normal; the valid values are low, normal, and high.
Any or all of the above capabilities can be set on a volume type. They override the default values specified in the cinder.conf file or are
additional capabilities that the HPE 3PAR StoreServ Storage array offers. See the Extra_specs restrictions section, which provides
constraints on when the VVS and QoS settings are set for a single volume type.
Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used
together. If only one is set, the other will be set to the same value. For example, if a qos-create was called with only minIOPS=10000 being set,
then maxIOPS would also be set to 10000.
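For example (the spec name iops_only here is hypothetical), creating a spec with only one value of the pair:
$cinder qos-create iops_only minIOPS=10000
The resulting qos_specs will also carry maxIOPS=10000, per the pairing behavior described above.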
All qos_specs can be set in the Admin Horizon UI or on the command line. Use the following command to list all the qos_specs currently
configured:
$ cinder --os-username admin --os-tenant-name admin qos-list
The qos_specs can be created by using the qos-create command, following this format:
$cinder --os-username admin --os-tenant-name admin qos-create <name> <key=value> [<key=value> [<key=value> ...]]
The argument <name> is the name of the new QoS spec. Each <key=value> argument is a qos_specs key and value that you would like to
create for this qos_specs. You must have at least one key=value pair.
You can also set or unset keys and values on the command line after the qos_specs are created, following this format:
$cinder --os-username admin --os-tenant-name admin qos-key <qos_specs> <action> [<key=value> [<key=value> ...]]
The argument <qos_specs> is the ID of the qos_specs. You can retrieve the ID of the qos_specs by running cinder qos-list. The argument
<action> must be one of the actions: set or unset. The argument <key=value> is the qos_specs key and value to set or unset. Only the
key is necessary for unset.
Next, connect the qos_specs to a volume type by making an association. You can associate the qos specs ID to the volume type ID that is
connected to a particular Block Storage Driver by issuing the following command:
$cinder --os-username admin --os-tenant-name admin qos-associate <qos_specs_id> <volume_type_id>
You can undo an association using the qos-disassociate command.
$cinder --os-username admin --os-tenant-name admin qos-disassociate <qos_specs_id> <volume_type_id>
To find the <qos_specs_id>, run the cinder qos-list command. To find the <volume_type_id>, run the cinder extra-specs-list
command. The volume type used must also have a volume_backend_name assigned to it.
volume_backend_name=<volume backend name>


Create examples:
$cinder qos-create high_iops minIOPS=1000 maxIOPS=100000
$cinder qos-create high_bws maxBWS=5000
Set examples:
$cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 set priority=high minIOPS=100000
$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 set maxIOPS=2000 maxBWS=100
Unset examples:
$cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 unset priority
$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 unset maxIOPS maxBWS
When you want to unset a particular key value pair from a volume type, only the key is required.
Associate example:
$cinder qos-associate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4
Where 563055a9-f17f-4553-8595-4a948b5bf010 is the ID of the qos_specs and 71ca8337-5cbf-43f5-b634-c0b35808d9c4 is the ID of the volume
type. These IDs can be found by running the cinder qos-list and cinder extra-specs-list commands.
Disassociate example:
$cinder qos-disassociate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4

qos_specs restrictions
Certain constraints apply when using one or more of the qos_specs documented in the Creating and setting qos_specs section.
Errors occur if you attempt to use vvs or the qos setting without the Priority Optimization license installed on the HPE 3PAR StoreServ
Storage array.
If you specify the hp3par:vvs virtual volume set as an extra_spec along with one or more of the qos settings, the qos settings are ignored
and the volume is created in the specified VVS.

Multiple storage backend support and Block Storage configuration


Multiple backend support was added to OpenStack in the Grizzly release. Detailed instructions on setting up multiple backends can be found in
the OpenStack Configuration Reference Guide.
The multi-backend configuration is done in the cinder.conf file. The enabled_backends flag must be set. This flag defines the names
(separated by commas) of the config groups for the different backends; one name is associated with one config group for a backend
(e.g., [3parfc-1]). Each group must have a full set of the driver-required configuration options. Figure 4 shows a sample cinder.conf file for three
different HPE 3PAR StoreServ Storage array backends, configuring two Fibre Channel drivers and one iSCSI Cinder driver.

Note
Currently, the HPE 3PAR drivers communicate with the HPE 3PAR StoreServ Storage array over HTTP or HTTPS and SSH. This means that both
the hp3par_username/password and san_login/password entries must be configured in the cinder.conf file.


# List of backends that will be served by this node
enabled_backends=3parfc-1,3parfc-2,3pariscsi-1
#
[3parfc-1]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
volume_backend_name=3par_FC
hp3par_api_url=https://10.10.22.241:8080/api/v1
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID5_NL,cpggold1
san_ip=10.10.22.241
san_login=<san_username>
san_password=<san_password>
max_over_subscription_ratio=10.0
reserved_percentage=15
#
[3parfc-2]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
volume_backend_name=3par_FC
hp3par_api_url=https://10.10.22.242:8080/api/v1
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID6_NL,cpggold2
san_ip=10.10.22.242
san_login=<san_username>
san_password=<san_password>
hp3par_snapshot_retention=48
hp3par_snapshot_expiration=72
#
[3pariscsi-1]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_iscsi.HP3PARISCSIDriver
hp3par_iscsi_ips=10.10.220.253,10.10.220.254
hp3par_api_url=https://10.10.22.243:8080/api/v1
volume_backend_name=3par_ISCSI
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID6_ISCSI
san_ip=10.10.22.243
san_login=<username>
san_password=<password>
Figure 4. Sample cinder.conf file

In this configuration, both the 3parfc-1 and 3parfc-2 backends have the same volume_backend_name. When a volume request comes in with the
3par_FC backend name, the scheduler must choose which one is most suitable. This is done with the capacity filter scheduler; see details in the
Block Storage scheduler configuration with multi-backend section. This example also includes a single iSCSI-based HPE 3PAR Cinder driver with a
different volume_backend_name.
In this configuration, both 3parfc-1 and 3parfc-2 also show multiple CPGs in their hp3par_cpg option. These CPGs are used as pools in Kilo and later.
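As a usage sketch (assuming a gold volume type with volume_backend_name=3par_FC, as in the earlier examples), a volume request then lets
the scheduler choose between 3parfc-1 and 3parfc-2:
$cinder create --volume-type gold 10
This creates a 10 GB volume on whichever 3par_FC backend (and pool) the capacity filter scheduler selects.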


iSCSI target port selection


The HPE 3PAR iSCSI OpenStack driver provides the ability to select the best-fit target iSCSI port from a list of candidate ports. The first time a
volume is attached to a host, all iSCSI ports configured for driver selection are examined for best fit. The port with the least active volumes
attached is selected as the communication path to the HPE 3PAR StoreServ Storage array. Any subsequent volumes attached to the same host
will use the established target port.
To configure the candidate iSCSI ports used for best-fit selection, set the cinder.conf option, hp3par_iscsi_ips with a comma-separated list
of IP addresses. Do not use quotes around the list. For example, the section for the backend config group name [3pariscsi-1] in the
cinder.conf file in figure 4 is as follows:
hp3par_iscsi_ips=10.10.220.253,10.10.220.254
If the single iSCSI cinder.conf option iscsi_ip_address is set, it will be included as a possible candidate for port selection at volume attach
time.
At driver startup, target iSCSI ports are verified with the HPE 3PAR StoreServ Storage array to ensure each is a valid iSCSI port. If an invalid iSCSI
port is identified, the following message is logged to the cinder-volume log file:
2013-07-02 08:50:50.934 WARNING cinder.volume.drivers.san.hp.hp_3par_iscsi [req-6c6e6807-5543-46dd-ba66-30149f24758d None None] Found invalid IP address(s) in configuration option(s) hp3par_iscsi_ips or iscsi_ip_address '10.10.22.230, 10.10.220.25'
If no valid iSCSI port is found, the following exception is logged and the driver fails:
2013-07-02 08:53:57.559 TRACE cinder.service InvalidInput: Invalid input received: At least one valid iSCSI IP address must be set.

iSCSI Multipath Support


Support for multipath was added to the 3PAR iSCSI driver in the Liberty release. The steps to set up multipath with 3PAR iSCSI are described
in the sections below.
The first step is to set up the multipath options in cinder.conf, as documented in the OpenStack Configuration Reference Guide for general
multipath settings (e.g., enforce_multipath_for_image_xfer, iscsi_use_multipath, etc.).
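As a sketch (option names as documented in the OpenStack Configuration Reference Guide; values and placement are illustrative), the general
multipath settings might look like the following:
# cinder.conf, [DEFAULT] section
use_multipath_for_image_xfer=True
enforce_multipath_for_image_xfer=True
# nova.conf, [libvirt] section, on the compute nodes
iscsi_use_multipath=True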
Once OpenStack is set up to use multipath, the next step is to determine which 3PAR iSCSI IPs will be used as potential paths. Once this is
decided, edit the cinder.conf file: add the hp3par_iscsi_ips property to the 3PAR iSCSI backends that will be utilizing multipath. The
property should look similar to the example given in the iSCSI target port selection section. The following is an example of how a cinder.conf
file should look:
[3pariscsi-1]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_iscsi.HP3PARISCSIDriver
hp3par_iscsi_ips=10.10.120.227,10.10.220.228,10.10.220.229
iscsi_ip_address = 10.10.120.227
hp3par_api_url=https://10.10.22.243:8080/api/v1
volume_backend_name=3par_ISCSI
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID6_ISCSI
san_ip=10.10.22.243
san_login=<username>
san_password=<password>


Note
An attempt will be made to use all of the IPs that are listed in the hp3par_iscsi_ips property.

With multipath enabled and the iSCSI backend entry updated in cinder.conf, the backend is now ready to support multipath. When performing
attaches to volumes, you should notice that LUNs are created for each of the IPs defined in cinder.conf. If an IP is unreachable, or the port itself
is not in a ready state, it is skipped and unused.

Fibre Channel target port selection


Before the Juno release, the HPE 3PAR FC OpenStack driver would always use all available FC ports on the HPE 3PAR host when an instance was
attached to a volume, even if only one FC path was available to that host. Now the HPE 3PAR FC OpenStack driver can detect if only a single FC
path is available. When a single FC path is detected, only a single VLUN will be created, instead of one for every available NSP (node:slot:port) on
the HPE 3PAR host. This prevents an HPE 3PAR host from using extra FC ports that are not needed. If multiple FC paths are available, all the
ports are used.
To configure HPE 3PAR OpenStack FC driver target port selection (added in Juno), the Fibre Channel Zone Manager needs to be configured, and
zoning_mode=fabric must be set in cinder.conf to enable the target port selection. If zoning_mode is not set in the cinder.conf file,
then all available FC ports are used. See the OpenStack Configuration Reference Guide for details.
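A minimal cinder.conf sketch (assuming the zone driver itself is configured per the reference guide) to enable fabric zoning:
# cinder.conf, [DEFAULT] section
zoning_mode=fabric
The Brocade or Cisco zone driver settings described in the OpenStack Configuration Reference Guide must also be present.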

Block Storage scheduler configuration with multi-backend


Multi-backend must be used with the filter scheduler enabled. The filter scheduler acts in two steps:
1. The filter scheduler filters the available backends. By default, AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter are enabled.
2. The filter scheduler weighs the previously filtered backends. By default, the CapacityWeigher is enabled. The CapacityWeigher assigns
high scores to backends with the most available space.


According to the filtering and weighing, the scheduler is able to pick the best backend to handle the request. In that way, the filter scheduler
achieves the goal of explicitly creating volumes on specific backends using volume types.
From the Grizzly release forward, the default scheduler is the FilterScheduler
(scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler), so the line does not need to be added to the cinder.conf file.
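For reference, a sketch of the equivalent explicit settings (these are the defaults, so they only need to be listed if you change them):
# cinder.conf, [DEFAULT] section
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers=CapacityWeigher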

Block Storage scheduler configuration with driver filter and weigher


The driver filter and weigher for the Block Storage scheduler is a feature (new in Kilo) that, when enabled, allows a filter function and a goodness
function to be defined in your cinder.conf file. The two functions are used at volume creation time by the Block Storage scheduler to
determine which backend is ideal for the volume. The filter function is used to filter out backend choices that should not be considered at all.
The goodness function is used to rank the filtered backends from 0 to 100. This feature should be used when the default Block Storage
scheduling does not provide enough control over where volumes are being created.
Enable the usage of the driver filter for the scheduler by adding DriverFilter to the scheduler_default_filters property in your
cinder.conf file. Enabling the driver weigher is similar: add GoodnessWeigher to the scheduler_default_weighers property in your
cinder.conf file. If you wish to include other OpenStack filters and weighers in your setup, make sure to add those to the
scheduler_default_filters and scheduler_default_weighers properties as well.


Note
You can choose to have only the DriverFilter or GoodnessWeigher enabled in your cinder.conf file depending on how much customization
you want.

OpenStack supports various math operations that can be used in the filter and goodness functions. The currently supported list of math
operations can be seen in table 1.
Table 1. Supported math operations for filter and goodness functions
OPERATIONS                            TYPE
+, -, *, /, ^                         standard math
not, and, or, &, |, !                 logic
>, >=, <, <=, ==, <>, !=              equality
+, -                                  sign
x ? a : b                             ternary
abs(x), max(x, y), min(x, y)          math helper functions

Several driver-specific properties are available for use in the filter and goodness functions for an HPE 3PAR backend. The currently supported list
of HPE 3PAR specific properties includes:
capacity_utilization: The percent of total space used on the HPE 3PAR CPG.
total_volumes: The total number of volumes on the HPE 3PAR CPG.
Additional generic volume properties are available from OpenStack for use in the filter and goodness functions. These properties can be seen in
the OpenStack Cloud Administrator Guide.

Note
Access the HPE 3PAR specific properties by using the following format in your filter or goodness functions: capabilities.<property>


The sample cinder.conf file in figure 5 shows an example of how several HPE 3PAR backends could be configured to use the driver filter and
weigher from the Block storage scheduler.
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = 3parfc-1, 3parfc-2, 3parfc-3
[3parfc-1]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-1
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = capabilities.total_volumes < 10
goodness_function = (capabilities.capacity_utilization < 75)? 90 : 50
[3parfc-2]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-2
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = capabilities.total_volumes < 10
goodness_function = (capabilities.capacity_utilization < 50)? 95 : 45
[3parfc-3]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-3
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = capabilities.total_volumes < 20
goodness_function = (capabilities.capacity_utilization < 90)? 75 : 40
Figure 5. Sample cinder.conf file showing driver filter and weigher usage

In figure 5 there are three HPE 3PAR backends enabled in the cinder.conf file. The sample shows how you can use HPE 3PAR specific properties
to distribute volumes with more control than the default Block storage scheduler.


Note
Remember that you can combine the HPE 3PAR specific properties with the generic volume properties provided by OpenStack. Also, the values
used in the above sample are examples only; in your own environment you have full control over the filter and goodness functions that you
create. Refer to the OpenStack Cloud Administrator Guide for more details and examples.

For more information on creating a useful goodness_function, see the Creating a goodness function section in the Appendix.

Volume types assignment


Use the following command or the Admin Horizon UI (new in Juno) to specify a volume_backend_name for each volume type you create. This
links the volume type to a backend name.
$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1
The second volume type could be for an iSCSI driver volume type named silver.
$ cinder --os-username admin --os-tenant-name admin type-key silver set volume_backend_name=3pariscsi-1
Multiple key=value pairs can be specified when running the above command. For example, you could run the following command to create a
volume type named gold with a VMware host persona and full provisioning.
$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1 hp3par:persona='8 - VMware' hp3par:provisioning=full

Multiple backend requirements


In the Grizzly release, hard coding the volume_backend_name was required, using either HP3PARFCDriver or HP3PARISCSIDriver. From
the Havana release forward, you can name the volume_backend_name whatever you like.
The hp3par_domain setting in the cinder.conf file was deprecated in the Havana release and removed in the Icehouse release. The driver now
looks up the domain based on the CPG specified in the cinder.conf file (or, prior to Kilo only, the hp3par:cpg extra-spec volume type setting).
Errors will occur if you try to attach volumes from different domains to the same HPE 3PAR host.

Volume migration
Starting in the Icehouse release, unattached volumes can be migrated between different CPGs in the same HPE 3PAR backend, directly within
the backend. Volume migration requires that you have the Dynamic Optimization license installed on your HPE 3PAR Storage array. First,
configure Cinder to use multiple backends, as explained in the Block Storage scheduler configuration with multi-backend section. Using the
command line, you can see the available driver instances, represented as hosts within the cinder.conf file, to migrate volumes using the
following command:
$cinder-manage host list
mystack
mystack@3parfc-1
mystack@3parfc-2
mystack@3pariscsi-1


To see which HPE 3PAR driver instance is managing a particular volume:
$cinder show <volume_id>
Where <volume_id> represents the volume ID; the host is in the attribute os-vol-host-attr:host.
os-vol-host-attr:host mystack@3parfc-1#cpggold1
To migrate a volume to a different driver instance, and therefore to a different CPG, use the command:
$cinder migrate <volume_id> <host>#<pool>
Where <volume_id> represents the volume ID and <host> represents the driver instance. The <pool> is required. In the Juno release, for the
HPE 3PAR drivers, the pool is just a repeat of the driver backend name. In Kilo and later, the HPE 3PAR drivers use the CPG as the pool.
$cinder migrate 3e57599e-7327-4596-a45f-d29939c836cf mystack@3parfc-2#cpggold2

Note
Cinder migrate requires the hosts or drivers to have the same volume_backend_name in the cinder.conf file. Changing the example above so that
all three drivers have the same volume_backend_name=3par would enable volume migration between all of them.

Volume manage and unmanage


Starting in the Juno release, HPE 3PAR volumes can be managed and unmanaged. This allows importing non-OpenStack volumes that already
exist on an HPE 3PAR Storage array into OpenStack/Cinder, or exporting them, which removes them from the OpenStack/Cinder perspective;
the volume on the HPE 3PAR Storage array is left intact. Using the command line, you can see the available driver instances represented
as hosts within the cinder.conf file. This host is where the HPE 3PAR volume that you would like to manage resides. Use the following command:
$cinder-manage host list
mystack
mystack@3parfc-1
mystack@3parfc-2
mystack@3pariscsi-1
To manage a volume that exists on the HPE 3PAR but is not already managed by OpenStack/Cinder, use the command:
$cinder manage --name <cinder name> <host>#<pool> <source-name>
Where <source-name> represents the name of the volume to manage, <cinder name> is optional but represents the OpenStack name,
and <host> represents the driver instance. The <pool> is required as of the Juno release. In Juno, for the HPE 3PAR drivers, the pool is just
a repeat of the driver backend name. In Kilo and later, the HPE 3PAR drivers use one of the CPGs configured for the backend as the pool. The
manage volume command will also accept an optional <--volume-type> parameter that will perform a retype of the virtual volume after it
is managed.
$cinder manage --name volgold mystack@3parfc-2#cpggold2 volume123


Note
Cinder manage will rename the volume on the HPE 3PAR Storage array to a name that starts with osv- followed by a UUID, as this is required
for OpenStack/Cinder to locate the volume under its management.

To unmanage a volume from OpenStack/Cinder and leave the volume intact on the HPE 3PAR Storage array, use the command:
$ cinder unmanage <volume_id>
Where <volume_id> is the ID of the OpenStack/Cinder volume to unmanage:
$cinder unmanage 16ab6873-eb09-4522-8d0f-91aab83be34d

Note
Cinder unmanage will remove the OpenStack/Cinder volume from OpenStack, but the volume will remain intact on the HPE 3PAR Storage array.
The volume name will have unm- prefixed to it, followed by an encoded UUID. This is required because the HPE 3PAR has name length and
character limitations.

Consistency Groups
Prior to consistency groups, every operation in Cinder happened at the volume level. Grouping like volumes allows for improved data protection,
paves the way to maintaining consistency of data across multiple volumes, and allows operations to be performed on groups of volumes.
The fundamental supported operations include creating a consistency group, deleting a consistency group (and all volumes inside of it), adding
volumes to a consistency group, removing volumes from a consistency group, snapshotting a consistency group, and creating a consistency
group from a source cgsnapshot.

Note
The 3PAR Cinder drivers do not currently support creating a consistency group from a source consistency group.


Consistency group CLI support is off by default in the Liberty release. In order to access the consistency group related CLI commands,
/etc/cinder/policy.json needs to be modified by removing group:nobody from the following lines, as such:

"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",

Creating a Consistency Group


Once the policy.json file is correctly modified, we can create a consistency group:
$cinder consisgroup-create [--name <name>] [--description <description>] <volume-type>
Where <volume-type> is the OpenStack/Cinder volume type name, and --name and --description are optional:
$cinder consisgroup-create --name MyCG --description '3parfc cg' 3parfc
To view the newly created consistency group:
$cinder consisgroup-list
| 831b2099-d5ba-4b92-a097-8c08f9a8404f | available | MyCG |
Deleting a Consistency Group
An empty consistency group can be deleted by issuing the following command:
$cinder consisgroup-delete <consisgroup-id>
Where <consisgroup-id> represents the consistency group ID:
$cinder consisgroup-delete 831b2099-d5ba-4b92-a097-8c08f9a8404f
If the group has volumes, we can add the --force flag (Note: this will fully delete all volumes in the group):
$cinder consisgroup-delete <consisgroup-id> --force
Where <consisgroup-id> represents the consistency group ID:
$cinder consisgroup-delete 831b2099-d5ba-4b92-a097-8c08f9a8404f --force


Adding Volumes
To add volumes to the group:
$cinder consisgroup-update <consisgroup-id> --add-volumes <uuid1,uuid2,...>
Where <consisgroup-id> represents the consistency group ID and <uuid1,uuid2,...> is a comma-separated list of OpenStack/Cinder volume IDs:
$cinder consisgroup-update 831b2099-d5ba-4b92-a097-8c08f9a8404f --add-volumes 87ac88d4-e360-4bdb-b888-2208fbe282dd,9ce09c09-0a20-4bcd-bd1a-0eaa95dbe0cd,a4563466-61d4-4018-b586-f07f84c4010c
Removing Volumes
To remove volumes from a consistency group:
$cinder consisgroup-update <consisgroup-id> --remove-volumes <uuid1,uuid2,...>
Where <consisgroup-id> represents the consistency group ID and <uuid1,uuid2,...> is a comma-separated list of OpenStack/Cinder volume IDs:
$cinder consisgroup-update 831b2099-d5ba-4b92-a097-8c08f9a8404f --remove-volumes 87ac88d4-e360-4bdb-b888-2208fbe282dd

Note
This does not delete the volume; it only removes it from the consistency group.
Creating a cgsnapshot
Snapshotting a consistency group can be accomplished with:
$cinder cgsnapshot-create [--name <name>] [--description <description>] <consisgroup-id>
Where <consisgroup-id> represents the consistency group ID and --name and --description are optional:
$cinder cgsnapshot-create --name MyCGSnap --description 'Snapshot of MyCG' 831b2099-d5ba-4b92-a097-8c08f9a8404f
To view the newly created cgsnapshot:
$cinder cgsnapshot-list
| 70c266bb-8255-4f2b-83cf-87f79d54dfb4 | creating | MyCGSnap |
Deleting a cgsnapshot
To delete a cgsnapshot:
$cinder cgsnapshot-delete <cgsnapshot-id>
Where <cgsnapshot-id> is the consistency group snapshot ID:
$cinder cgsnapshot-delete 70c266bb-8255-4f2b-83cf-87f79d54dfb4


Creating a Consistency Group from a cgsnapshot


In order to create a consistency group from a cgsnapshot, we can issue:
$cinder consisgroup-create-from-src --cgsnapshot <cgsnapshot-id>
Where <cgsnapshot-id> is the consistency group snapshot ID:
$cinder consisgroup-create-from-src --cgsnapshot 70c266bb-8255-4f2b-83cf-87f79d54dfb4
The newly created consistency group can be treated as a new, completely separate group with no ties to its parent group or cgsnapshot.

Volume retype
Volume retype is now available (since the Juno release). The retype only works if the volume is on the same HPE 3PAR Storage array. This
allows retyping a volume, for example, from a silver volume type to a gold volume type. The HPE 3PAR OpenStack drivers modify the volume's
Snap CPG, provisioning type, persona, and QoS settings, as needed, to make the volume behave appropriately for the new volume type. The
ability to change a volume's CPG existed prior to Kilo. In Kilo and later, separately configured backends with CPGs (as pools) should be used to
allow the scheduler to select the appropriate CPG. Volume retype also requires that you have the Dynamic Optimization license enabled on your
HPE 3PAR Storage array.
Use caution when using the optional --migration-policy on-demand, because this falls back to copying the entire volume (using dd over the
network) to the Cinder node and then to the destination HPE 3PAR Storage array. The Cinder node also has to have enough space available to
store the entire volume during the migration. We recommend that you use the default --migration-policy never when retype is used.

Note
Volume retype will not be allowed if the volume has snapshots and the retype would require a change to the Snap CPG or User CPG. The
volume_backend_name in cinder.conf must be the same between the source and destination volume types when --migration-policy is set to
never. This is the default and recommended retype method.

Security improvements
CHAP support
Challenge-Handshake Authentication Protocol (CHAP) support was added in the Juno release to the HPE 3PAR iSCSI driver and is one-way
authentication (it sets the CHAP initiator on the HPE 3PAR Storage array). The hp3par_iscsi_chap_enabled option in cinder.conf must
be set to True to enable the iSCSI CHAP support. The current HPE 3PAR host will have the CHAP setting automatically added the next time an
iSCSI volume is attached.
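A sketch of the relevant cinder.conf setting for an iSCSI backend (the section name follows the earlier examples):
[3pariscsi-1]
hp3par_iscsi_chap_enabled=True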

Configurable SSH Host Key Policy and Known Hosts File


Both OpenStack Cinder and the HPE 3PAR client were enhanced in the Juno release to allow configuring the SSH host key policy and known hosts file. This adds the configuration options ssh_hosts_key_file and strict_ssh_host_key_policy in cinder.conf.
The strict_ssh_host_key_policy option defaults to False. When False, Cinder and the HPE 3PAR client will use the auto-add policy, as in previous versions. Auto-add allows new hosts to be added, but will raise an exception if a host that was already known starts sending a different host key. When strict_ssh_host_key_policy=True, Cinder and the HPE 3PAR client will use the reject policy. With the reject policy, the host must already be recorded in your known hosts file and must match the recorded host key.
The ssh_hosts_key_file option defaults to $state_path/ssh_known_hosts (state_path is a config option that defaults to /var/lib/cinder). This setting allows you to specify the known hosts file to use for both Cinder and HPE 3PAR client SSH connections. The previous default was to use the system host keys. The client will try to create the configured file if it does not exist. If strict_ssh_host_key_policy=True, then this file needs to be pre-populated with trusted known host keys. When using strict_ssh_host_key_policy=False (the default), new hosts will be appended to the file automatically.
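A minimal cinder.conf sketch using the strict policy follows; the file path is simply the documented default spelled out, and placing the options in [DEFAULT] reflects that they are Cinder-wide settings:

[DEFAULT]
strict_ssh_host_key_policy = True
ssh_hosts_key_file = /var/lib/cinder/ssh_known_hosts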

Support for Containerized Stateful Services


The HPE 3PAR drivers are supported by the ClusterHQ Flocker open source technology. Details on the setup and usage of Flocker can be found
at docs.clusterhq.com/en/latest/

Summary
HPE is a Platinum member of The OpenStack Foundation. HPE has integrated OpenStack open source cloud platform technology into its enterprise solutions to enable customers and partners to build enterprise-grade private, public, and hybrid clouds.
The Liberty release continues HPE's contributions to the Cinder project, enhancing core Cinder capabilities as well as extending the HPE 3PAR StoreServ Block Storage Driver. The focus continues to be on adding enterprise functionality such as iSCSI multipath support, consistency groups, oversubscription on thin-provisioned volumes, and enhanced Block Storage scheduling based on filtering and goodness functions in the drivers. The HPE 3PAR StoreServ Block Storage Drivers support the OpenStack technology across both iSCSI and Fibre Channel protocols.

Appendix
Creating a goodness function
There are several equations that can be used as a starting point for a useful goodness_function. Each equation will cause goodness values to
be distributed differently. Choosing the correct equation to start with depends on the needs of the administrator. Also, since goodness values can only be between 0 and 100, the equations below utilize min and max functions to create limits. See the Block Storage scheduler configuration with driver filter and weigher section for a more general description.
Determining a good value for maxIOPS
The following equations will assume that you have a good reference value for maximum IOPS. The maximum IOPS a backend supports will
depend on which HPE 3PAR backend is being used. Some sample values for maximum IOPS are listed below.
From the HPE 3PAR 8000 data sheet:
"Remove bottlenecks with a flash-optimized, scale-out architecture delivering over 1 million IOPS and over 20 GB/s" for the 8450 AFA.
From the HPE 3PAR 20000 data sheet:
"Reduce performance bottlenecks with flash-optimized hardware and software for greater than 3 million IOPS at sub-millisecond latencies" for the 20450/20850 AFA.
The above values of 1 million and 3 million IOPS can be good starting points for setups that utilize those backends. For backends not listed above, their data sheet values could be used as a starting point. Another approach for determining the best maximum IOPS value is to test the performance of your setup directly.
Another consideration when deciding on a maximum IOPS value is the number of CPGs that will be in use. The maximum IOPS should be set at the CPG level and not the backend level. For example, if you know a backend has a maximum of 250,000 IOPS, then you would make sure the sum of all of your assigned CPG maximum IOPS values does not exceed 250,000 IOPS.
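For example, the per-CPG maximum could be carried in qos specs so that qos.maxIOPS is available to the goodness function; the spec names and the 100,000/150,000 split between two CPG-backed volume types are illustrative only:
$cinder qos-create cpg1-qos maxIOPS=100000
$cinder qos-associate <qos-id> <cpg1-volume-type-id>
$cinder qos-create cpg2-qos maxIOPS=150000
$cinder qos-associate <qos-id> <cpg2-volume-type-id>
Where <qos-id> is the ID returned by qos-create and the volume type IDs identify the types bound to each CPG.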
Polynomial
This equation produces goodness values for a backend which decrease in a polynomial fashion as the current IOPS on that backend increase.
The equation requires values for maxIOPS and a smoothing factor (smooth). The maxIOPS value can be specified by an administrator in qos
specs. It can also be hard-coded, if desired. The smoothing factor is a hard-coded value that can be tweaked by an administrator to adjust the
steepness of the polynomial decline.

This is the general equation:

goodness = -(smooth / maxIOPS) * IOPS^2 + 100

This is how it would look in cinder.conf:

goodness_function = max(min(-(smooth / qos.maxIOPS) * capabilities.throughput ^ 2 + 100, 100), 0)
A slight modification can be made to the polynomial equation in order to change the point where the goodness value of a backend begins to decrease. This version is recommended, as it gives more control over the inflection point.

goodness = -(smooth / maxIOPS) * IOPS^2 + 100 + vertical

The new vertical value shifts the point at which the goodness value begins to drop off.

goodness_function = max(min(-(smooth / qos.maxIOPS) * capabilities.throughput ^ 2 + 100 + vertical, 100), 0)

Note
The cinder.conf examples above cap the goodness values at 0 and 100 using the min and max functions. The smoothing and vertical values should be hard-coded values decided upon beforehand by an administrator.

Figure 6. Goodness value in relation to IOPS using the recommended polynomial equation

Figure 6 is a graphical representation of the goodness values the recommended polynomial equation produces. A maxIOPS of 25000 and a
smoothing value of 0.001 were used. A vertical shift of 250 was also used. The dashed red line represents the point at which IOPS is 80% of the
maximum possible IOPS.
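Substituting the hard-coded values from Figure 6 (a smoothing value of 0.001 and a vertical shift of 250) into the recommended equation gives a concrete cinder.conf entry; maxIOPS is still supplied through qos specs:
goodness_function = max(min(-(0.001 / qos.maxIOPS) * capabilities.throughput ^ 2 + 100 + 250, 100), 0)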

Figure 7. Goodness value in relation to IOPS using the polynomial equation

Figure 7 is a graphical representation of the goodness values the polynomial equation produces. A maxIOPS of 25000 and a smoothing value of
0.004 were used.
Linear
This equation produces goodness values for a backend which decrease in a linear fashion as the current IOPS on that backend increase. This
equation requires values for maxIOPS and minIOPS, both of which can be specified by an administrator in qos specs. The values can also be
hard-coded, if desired.
This is the general equation:

goodness = 100 * ((maxIOPS - IOPS) / (maxIOPS - minIOPS))

This is how it would look in cinder.conf:

goodness_function = max(min(100 * ((qos.maxIOPS - capabilities.throughput) / (qos.maxIOPS - qos.minIOPS)), 100), 0)

Note
The cinder.conf example caps the goodness values at 0 and 100 using the min and max functions.

Figure 8. Goodness value in relation to IOPS when using linear equation

Figure 8 is a graphical representation of the goodness values the linear equation produces. A maxIOPS of 25000 and a minIOPS of 0 were used
for this example.

Figure 9. Goodness value in relation to IOPS when using a linear equation with minIOPS set to 20000

Figure 9 is a graphical representation of the goodness values the linear equation produces. A maxIOPS of 25000 and a minIOPS of 20000 were used for this example. It can be seen that altering minIOPS determines the inflection point at which goodness values begin decreasing from 100. The dashed red line represents the point at which IOPS is 80% of the maximum possible IOPS.
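Since the values can also be hard-coded, the Figure 9 setup (maxIOPS of 25000 and minIOPS of 20000) could be written directly into cinder.conf; this is an illustrative sketch rather than a recommended configuration:
goodness_function = max(min(100 * ((25000 - capabilities.throughput) / (25000 - 20000)), 100), 0)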

Exponential
This equation produces goodness values for a backend which decrease exponentially as the current IOPS on that backend increase. The equation requires values for maxIOPS and a smoothing factor (smooth). The maxIOPS value can be specified by an administrator in qos specs. It can also be hard-coded, if desired. The smoothing factor is a hard-coded value that can be tweaked by an administrator to adjust the steepness of the exponential decline.

This is the general equation:

goodness = 100 * (1 + smooth / maxIOPS) ^ -IOPS

This is how it would look in cinder.conf:

goodness_function = max(min(100 * (1 + smooth / qos.maxIOPS) ^ -capabilities.throughput, 100), 0)

Note
The cinder.conf example caps the goodness values at 0 and 100 using the min and max functions. The smoothing value should be a hard-coded value decided upon beforehand by an administrator.

Figure 10. Goodness value in relation to IOPS using the exponential equation

Figure 10 is a graphical representation of the goodness values the exponential equation produces. A maxIOPS of 25000 and a smoothing value
of 1.4 were used.
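Substituting the hard-coded smoothing value from Figure 10 (1.4) gives a concrete cinder.conf entry; maxIOPS is still supplied through qos specs:
goodness_function = max(min(100 * (1 + 1.4 / qos.maxIOPS) ^ -capabilities.throughput, 100), 0)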


For more information


HPE Cloud
HPE Helion
HPE Helion OpenStack Community
HPE Helion Hybrid Cloud
HPE Helion Cloud News
OpenStack
OpenStack website
OpenStack documentation
OpenStack Cloud Administrator Guide
OpenStack Block Storage
HPE 3PAR
HPE 3PAR Storage array
HPE 3PAR StoreServ Storage family
HPE 3PAR Fibre Channel and iSCSI drivers
HPE 3PAR StoreServ 8000 Storage Data sheet
HPE 3PAR StoreServ 20000 Storage Data sheet
To help us improve our documents, provide feedback at hp.com/solutions/feedback

Learn more at
hp.com/go/OpenStack



Copyright 2014–2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard
Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
The OpenStack Word Mark is either registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the
United States and other countries and is used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community. VMware is a registered trademark or trademark of VMware, Inc.
in the United States and/or other jurisdictions. Windows Server is either registered trademark or trademark of Microsoft Corporation in the
United States and/or other countries.
4AA5-1930ENW, January 2016, Rev. 6