North America ATS presents
2012 IBM Corporation

Topic: What's New in and Migrating to PowerHA for AIX v7.1.3
Speaker: Shawn Bodily

Release Overview
- The offering has a new product ID
  - Standard Edition (5765-H39): base function plus Smart Assists
  - Enterprise Edition 7.1.3 (5765-H37)
- Migration options: static (offline), snapshot, rolling migration

AIX Requisites
- PowerHA 7.1.2: AIX 6.1 TL8 SP1 or AIX 7.1 TL2 SP1 (HyperSwap also requires IV27586), RSCT 3.1.4
- PowerHA 7.1.3: AIX 6.1 TL9 SP1 or AIX 7.1 TL3 SP1, RSCT 3.1.5
- http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD101347

New features - Standard Edition
(Color legend by release: blue = 7.1.0, red = 7.1.1, green = 7.1.2, brown = 7.1.3; * = topic covered in detail in the backup charts)
- Usability enhancements
  - SMIT menu changes*
  - User-defined resource types*
  - Additional service IP distribution policies, including source IP*
  - New resource group management enhancement*
  - Adaptive failover capability*
  - Systems Director plug-in
  - Private networks*
  - Network tunables*
  - New application startup option*
  - IPv6 support
  - Site support
- Supports unicast for heartbeat (CAA)
- Support for dynamic hostname change (CAA)
- RAS improvements
  - Utilizes CAA
  - Repository resiliency
  - Backup repository disks
- rootvg system event*
- Federated security support*
- JFS2 Mount Guard*

New features - Standard Edition, continued
- Administrative additions
  - New cluster command line, clmgr*
  - New distributed cluster clcmd command*
  - Physical volume rename*
  - Display disk UUID*
  - DARE progress indicator*
  - Graphical Cluster Simulator
  - clmgr snapshot copy
  - clmgr HTML reports
- Additional Smart Assists*
  - SAP
  - SAP liveCache Hot Standby (fast fallover with SVC and DS8K)
  - FileNet
  - Tivoli Storage Manager
  - Lotus Domino Server
  - MaxDB
  - MQSeries

- Simpler to deploy and easier to manage multi-site configurations with IBM Systems Director,
intuitive interfaces, and a multi-site install wizard

New features - Enterprise Edition (7.1.2)
- Stretched Cluster: cluster-wide AIX commands, kernel-based event management, a single repository, multicast communications
- Linked Clusters: cluster-wide AIX commands, kernel-based event management, linked clusters with unicast communications and dual repositories
- HyperSwap for continuously available storage in two-site topologies
- Active/Active and single-node cluster support
- Cluster split/merge: technology for managing split-site policy scenarios
  - Manual user-prompt verification

Cluster Aware AIX - Requirements
- Repository disk
  - A required disk shared between all nodes in the cluster
- SAN/HBA heartbeat (optional)
  - Requires 4 Gb or 8 Gb FC adapters; the key is that the adapter must have the tme attribute
  - Additional details: http://www-01.ibm.com/support/docview.wss?uid=isg1IV03643
  - LPM: "LPM is not supported for SAN communication. You must unconfigure SAN communication when LPM is used to migrate partitions."
- Hostname
  - CAA requires that the hostname and the CAA node name be the same. This means the local cluster node communication path IP must be the hostname IP.
  - CAA does not allow dynamic changing of the hostname until 7.1.3
- Requires specific VG and LV naming to be available:
  # lsvg -l caavg_private
  caavg_private:
  LV NAME          TYPE  LPs  PPs  PVs  LV STATE      MOUNT POINT
  caalv_private1   boot  1    1    1    closed/syncd  N/A
  caalv_private2   boot  1    1    1    closed/syncd  N/A
  caalv_private3   boot  4    4    1    open/syncd    N/A
  powerha_crlv     boot  1    1    1    closed/syncd  N/A

Cluster Aware AIX - differences
- New communications daemon (clcomd)
  - Utilizes /etc/cluster/rhosts
  - Uses port 16191 (/etc/services)
- 7.1.0-7.1.2 use multicast for heartbeating
  - Requires infrastructure that allows forwarding of multicast packets (port 4096)
- 7.1.3 re-introduces unicast and makes multicast optional
- The initial release did
not have a deadman switch
- The CAA update used by 7.1.1 re-introduced the deadman switch
  - Use the clctrl command to alter the setting
  - The default setting is "a" (assert); the optional setting "e" logs the event only

Repository Resiliency & Backup
- The pick list offers shared disks that are currently not part of a volume group per the ODM
- You must make sure the disk has no VGDA information on it, or it will fail with a cryptic error message
- Resiliency was introduced in PowerHA 7.1.1; requires AIX 6.1.7.3 or 7.1.1.3
- Backup was introduced in PowerHA 7.1.2; allows up to 124 backup repository disks (does not autoswap)

JFS2 Mount Guard - Preventing Double Mounts
- Mounting a file system on two nodes at once corrupts it
- LVM active/passive mode and CAA storage framework fencing help, but can be defeated
- AIX Mount Guard
  - A second mount without an intervening unmount will be rejected
  - Mount state is maintained on disk and does not require node interaction
  - Set by a chfs option; resettable by logredo or chfs
- PowerHA always sets it if the AIX level is right, and runs logredo:
  +testsvcip:cl_activate_fs(.770):/test2fs[fs_mount+106] : Tell JFS2 to try to protect against double mounts
  +testsvcip:cl_activate_fs(.770):/test2fs[fs_mount+108] chfs -a mountguard=yes /test2fs
- Available in all AIX levels required for PowerHA SystemMirror 7.1.1 (bos.rte.filesystems 7.1.1 or 6.1.7)
- PowerHA support back-ported to PowerHA 6.1 (IV06544)

AIX Duplicate Varyonvg
- Prevents varyon of a VG in non-concurrent mode when it is already active in concurrent mode somewhere else
- The man page states this was added in AIX 6.1.8; it seems to really work in 6.1.9

PowerHA Graphical Cluster Simulator
- Cluster Simulator: a PowerHA cluster configuration can be deployed and experimented with in simulation mode
- Based on the PowerHA Director plug-in; can run on a single notebook computer
- Components: PowerHA Simulator plug-in, IBM Director Server, browser
- The Director
Server needs to be deployed on the notebook
- A saved XML configuration can be used to deploy a real cluster

clmgr: Cluster Copy
- Cluster copy is suited for multi-cluster environments
  - Helps deploy a new cluster with minimal inputs; no need for complex scripts to deploy clusters
- 7.1.3 cluster copy support: allows a snapshot taken on a fully configured and tested system to be restored on new hardware
- Copies as much of the SystemMirror configuration as possible onto the new nodes
  - The only inputs required are the new node names and the repository disk(s)
  - Underlying dependencies must already be in place on the new hardware (examples: applications, method scripts, volume groups)
- Intended as a fast start, to improve time-to-value
- Not perfect; it is called copy, not clone
- Example:
  clmgr restore <FOREIGN_SNAPSHOT> NODES=newnd1,newnd2 REPOSITORY=hdisk#

Native HTML Report - clmgr (1 of 2)
- A native HTML cluster report is now available via clmgr
  - An alternative to the IBM Systems Director reporting feature
  - No external requirements; available in the base product
- Benefits include:
  - Contains more cluster configuration information than any other report
  - Can be scheduled to run automatically via core AIX abilities (e.g. cron)
  - Portable: can be emailed without loss of information
  - Fully translated
  - Allows inclusion of a company name or logo in the report header
- Limitations:
  - Per-node operation; no centralized management
  - A relatively modern browser is required for the tab effect
  - Only officially supported on Internet Explorer and Firefox

Native HTML Report - clmgr (2 of 2)
- A sample clmgr HTML cluster report invocation:
  clmgr view report cluster file=/tmp/hacluster.html type=html company_name="Marconi Company" company_logo=powerhalogo.jpg

Smart Assists
(Chart shows the complete list of Smart Assists in 7.1.3)

PowerHA Two-Site Solutions (Stretched Cluster vs. Linked Clusters)
- Inter-site communication: multicast (stretched) / unicast (linked)
- Repository disk: shared (stretched) / separate (linked)
- Cluster communication: networks, SAN, disk (stretched) / networks, SAN* (linked)
- Storage options: cross-site LVM mirroring and HyperSwap (stretched); multi-site concurrent RG with HyperSwap (active-active)* is stretched only (NA for linked)

PowerHA V7.1 feature comparison (Standard vs. Enterprise)
- Both editions: multi-site definition, site service IP, site policies
- Both editions: stretched clusters (two-site stretched cluster) and linked clusters (two-site linked cluster)
- Enterprise Edition only (NA in Standard Edition): HADR with storage replication management, HyperSwap
- * = future capability

PowerHA Enterprise Edition HyperSwap
- Multi-site PowerHA cluster with continuous storage availability
- Non-disruptive: applications keep running in the event of a storage outage
- Storage maintenance without downtime; storage migration without downtime
- Uses Metro Mirror between the primary and secondary DS8K
- 7.1.3 adds support for:
  - Active/Active (including Oracle RAC)
  - Single-node cluster
  - Auto resync (the administrator can turn it off/on as desired)
- http://www.redbooks.ibm.com/redpieces/abstracts/redp4954.html

PowerHA/EE Linked Cluster Split/Merge Management
- Split-site management
  - Heartbeat loss indicates a failure; one of the sites becomes production, and site partitioning is avoided
- Merge management
  - Site partitioning occurred; one of the sites must become the sync source for the cluster

PowerHA/EE Merge/Split Policy Options
- Majority rule: the side with more than N/2 nodes wins (N = total nodes in the cluster); in case of a tie, the side with the smallest node ID wins
- Tie breaker: the side holding the tie breaker wins
- Manual (future): operator intervention enabled for split/merge processing
- Tie breaker policy: a means of determining the winner when a split-site condition occurs; the losing side is quiesced
  - Options: majority rules (the site with the largest number of nodes wins), SCSI-2 or SCSI-3 reservation disk (first one wins), operator intervention (the operator decides)

Split/Merge Policies: Manual (operator-controlled failover)
- Administrator prompts: the cluster will wait for admin input
- Optional policy: after N prompts, allow auto-recovery
- Custom action scripts can be invoked at the time of split or merge as well
- Defaults
  - Number of prompts
    (N) = infinite
  - Interval between notifications: once every 30 seconds, then increasing in frequency
  - Auto-recovery after N prompts

Migrating to PowerHA v7.1.3

Definition of Terms
- Mixed cluster: nodes in a cluster running two different versions of PowerHA. A cluster in this state may be operational for a long period, but the configuration cannot be changed until all nodes have been upgraded.
- Offline migration: a type of migration where PowerHA is brought offline on all nodes prior to performing the migration. During this time, resources are not available.
- Rolling migration: a type of migration from one PowerHA version to another during which cluster services are stopped on one node at a time. That node is upgraded and reintegrated into the cluster before the next node is upgraded.
- Snapshot migration: a type of migration from one PowerHA version to another during which you take a snapshot of the current cluster configuration, stop cluster services on all nodes, install the 7.1 version of PowerHA SystemMirror, and then convert the snapshot by running the clconvert_snapshot utility.
- Non-Disruptive Upgrade: a node can be unmanaged, allowing all resources on that node to remain operational when cluster services are stopped. THIS IS NOT SUPPORTED FOR v7.1!
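Whichever migration path is chosen, a node cannot join a 7.1.3 cluster below the AIX minimums (AIX 6.1 TL9 SP1 or AIX 7.1 TL3 SP1). A minimal, hedged sketch of pre-checking the `oslevel -s` string on each node; the VVVV-TL-SP-YYWW parsing and the function name are illustrative assumptions, not PowerHA tooling:

```shell
#!/bin/sh
# Hedged sketch: check whether an "oslevel -s" string (e.g. 7100-03-01-1341)
# meets the PowerHA 7.1.3 minimum of AIX 6.1 TL9 SP1 or AIX 7.1 TL3 SP1.
meets_713_minimum() {
    level=$1                   # e.g. "6100-09-01-1341"
    base=${level%%-*}          # base level, "6100" or "7100"
    rest=${level#*-}
    tl=${rest%%-*}             # technology level, e.g. "09"
    rest=${rest#*-}
    sp=${rest%%-*}             # service pack, e.g. "01"
    case "$base" in
        6100) [ "$tl" -gt 9 ] || { [ "$tl" -eq 9 ] && [ "$sp" -ge 1 ]; } ;;
        7100) [ "$tl" -gt 3 ] || { [ "$tl" -eq 3 ] && [ "$sp" -ge 1 ]; } ;;
        *)    return 1 ;;      # unrecognized base level
    esac
}

# On a real node you would test: meets_713_minimum "$(oslevel -s)"
# Here, sample strings exercise the check (one OK/too-old line per level):
for lvl in 6100-09-01-1341 7100-03-01-1341 6100-08-02-1316 7100-02-01-1245; do
    if meets_713_minimum "$lvl"; then
        echo "$lvl OK"
    else
        echo "$lvl too old"
    fi
done
```

The comparison is per-field rather than a plain string compare so that, for example, a future TL10 is not rejected by lexicographic ordering.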
Migration Planning
- Software requirements
  - PowerHA 7.1.3 requires AIX 6.1 TL9 SP1 or AIX 7.1 TL3 SP1
  - Install the new CAA additional requisite filesets:
    - bos.cluster (.rte and .soliddb)
    - bos.ahafs
    - devices.common.IBM.storfwork (for SAN heartbeat)
    - cas.agent (for the Systems Director plug-in)
    - clic.rte (for the secured encryption communication options of clcomd)
- PowerHA for AIX Version Compatibility Matrix:
  http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD101347

Limitations
- Not all configurations can be migrated
  - Configurations with FDDI, ATM, X.25, or Token Ring cannot be migrated; these must be removed from the configuration
  - Configurations with IPAT via Replacement or Hardware Address Takeover cannot be migrated; these must be removed from the configuration
  - Configurations with Heartbeat via Aliasing cannot be migrated; it must be removed from the configuration
- Non-IP networking is accomplished differently
  - RS232, TMSCSI, TMSSA, and disk heartbeat are no longer supported, but the migration will automatically remove those network types
- Each node's communication path must be set to the IP of the hostname
  - This is a CAA requirement
  - If the current node name and hostname match, that should work fine
  - The hostname IP should not be an alias

Migration Planning - Hardware Requirements
- Repository disk
  - The repository is stored on a disk that must be SAN-attached and zoned to be shared by every node in the cluster, and only the nodes in the cluster
- Multicast-enabled network/switches (7.1.0-7.1.2)
  - A multicast IP address is needed for the new monitoring technology
  - The product will assign multicast addresses, but you can explicitly specify multicast
    addresses.
  - Optional in 7.1.3, as unicast is available
- Optional 4 Gb or 8 Gb adapters for SAN heartbeat
  - The key is that the adapter must have the tme attribute
  - Additional details: http://www-01.ibm.com/support/docview.wss?uid=isg1IV03643
  - "LPM is not supported for SAN communication. You must unconfigure SAN communication when LPM is used to migrate partitions."

Rolling Migration Overview
Steps:
1. Stop cluster services on one node (move resource groups as needed)
2. Upgrade AIX (if needed) and reboot
   - Also install the additional CAA filesets, bos.cluster and bos.ahafs
3. Update /etc/cluster/rhosts
   - Enter the cluster node hostname IP addresses, only one IP address per line
4. refresh -s clcomd
5. Execute clmigcheck (option 1, then option 3)
6. Upgrade PowerHA
   - Install the base-level install images and complete the upgrade procedures
   - Then come back and apply the latest SPs on top; this can be done non-disruptively
   - Review the /tmp/clconvert.log file
7. Restart cluster services (move resource groups back if needed)
8. Repeat the steps above for each node (minus the additional options on clmigcheck)

Rolling Migration - clmigcheck
  ------------[ PowerHA SystemMirror Migration Check ]-------------
  Please select one of the following options:
  1 = Check ODM configuration.
  2 = Check snapshot configuration.
  3 = Enter repository disk and multicast IP addresses.
  Select one of the above, "x" to exit or "h" for help:
- Select option 1. If the cluster cannot be migrated, this command will indicate that in error messages. The offending items must be removed from the back-level configuration. Changing the back-level configuration requires a verify/sync operation.
- If there are only warnings, you may proceed to option 3.
- If the cluster can be migrated, select option 3 to choose the IP option and repository disk (this data entry is only done on the first node).

Clmigcheck - Option 1
  ------------[ PowerHA SystemMirror Migration Check ]-------------
  CONFIG-WARNING: The configuration contains unsupported hardware: Disk
  Heartbeat network. The PowerHA network name is net_diskhb_01.
  This will be removed from the configuration during the migration to
  PowerHA SystemMirror 7.1.

  Hit <Enter> to continue

  ------------[ PowerHA SystemMirror Migration Check ]-------------
  The ODM has no unsupported elements.

  Hit <Enter> to continue

Clmigcheck - Option 3 (7.1.3)
  ------------[ PowerHA SystemMirror Migration Check ]-------------
  Your cluster can use multicast or unicast messaging for heartbeat.
  Multicast addresses can be user specified or default (i.e. generated by AIX).

  Select the message protocol for cluster communications:
  1 = DEFAULT_MULTICAST
  2 = USER_MULTICAST
  3 = UNICAST
  Select one of the above or "h" for help or "x" to exit: 3

  ------------[ PowerHA SystemMirror Migration Check ]-------------
  Select the disk to use for the repository
  1 = 00cee73ee4dfad5c(hdisk3)
  2 = 00cee73ee4dfc4e2(hdisk4)
  3 = 00cee73ee4dfe0d0(hdisk5)
  Select one of the above or "h" for help or "x" to exit: 3

Rolling Migration continued
- Install the new PowerHA software on the node
- Review the /tmp/clconvert.log file to ensure that a conversion of the PowerHA ODMs has occurred
- Restart PowerHA and move resource groups back to this node as necessary
- Repeat these steps on all remaining nodes in the cluster

Additional information about clmigcheck
- The first time it is run (on the first node)
  - Option 1 and option 3 must be run; option 3 gives an error if option 1 or 2 has not passed the check
  - Option 3 gets the input for the repository disk and multicast address, and creates a file (/var/clmigcheck/clmigcheck.txt) on all nodes in the cluster
- When run on other nodes
  - If the clmigcheck.txt file exists, a check is made to determine whether all other nodes in the cluster have PowerHA 7.1 installed.
  - If not, a message is displayed that no action is necessary on this node; continue with the install of PowerHA 7.1
  - If so, mkcluster will be executed to create and start the CAA cluster, and then the customer can continue to install PowerHA 7.1
- Errors reported by option 1 are listed in /tmp/clmigcheck/clmigcheck.log

Clmigcheck.txt contents
- Once option 3 has been executed, /var/clmigcheck/clmigcheck.txt is populated with the following information and distributed across all nodes in the cluster:
  CLUSTER_TYPE:STANDARD
  CLUSTER_REPOSITORY_DISK:00cee73ee4dfe0d0
  CLUSTER_MULTICAST:UNI
- CLUSTER_MULTICAST values: NULL = default multicast, an IP address = user multicast, UNI = unicast
- Once clmigcheck.txt is created, subsequent executions of clmigcheck will not produce a menu. If this is not the last node to be upgraded, it simply states that you can now proceed to upgrade to PowerHA v7.1. If it is the last node in the cluster, the CAA cluster will be created at that time.

Clmigcheck continued
- When clmigcheck is executed on another node, you will get the following:
  ------------[ PowerHA SystemMirror Migration Check ]-------------
  clmigcheck: This is not the first node or last node clmigcheck was run on.
  No further checking is required on this node.
  You can install the new version of PowerHA SystemMirror.

  Hit <Enter> to continue
- When /usr/sbin/clmigcheck is run on the last node to migrate, the CAA cluster will be created. Verify this by using /usr/sbin/lscluster -m.
  ------------[ PowerHA SystemMirror Migration Check ]-------------
  About to configure a 2 node CAA cluster, this can take up to two minutes.

  Hit <Enter> to continue
- After it is created, the following message will be displayed:
  clmigcheck: You can install the new version of PowerHA SystemMirror.
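Since the clmigcheck.txt layout is simple key:value text, its fields can be pulled out in a script, for example to record which disk became the repository. A hedged sketch; the helper name is illustrative, and a sample file stands in for the real /var/clmigcheck/clmigcheck.txt:

```shell
#!/bin/sh
# Hedged sketch: extract fields from a clmigcheck.txt in the key:value
# format shown above. A local sample file stands in for the real
# /var/clmigcheck/clmigcheck.txt so the logic can be exercised anywhere.
cat > /tmp/clmigcheck.sample <<'EOF'
CLUSTER_TYPE:STANDARD
CLUSTER_REPOSITORY_DISK:00cee73ee4dfe0d0
CLUSTER_MULTICAST:UNI
EOF

get_field() {
    # Print the value after the first ':' for the given key.
    awk -F: -v key="$1" '$1 == key { print $2 }' /tmp/clmigcheck.sample
}

repo=$(get_field CLUSTER_REPOSITORY_DISK)
mode=$(get_field CLUSTER_MULTICAST)
echo "repository PVID: $repo"
case "$mode" in
    UNI) echo "heartbeat: unicast" ;;
    "")  echo "heartbeat: default multicast" ;;   # NULL/absent = default multicast
    *)   echo "heartbeat: user multicast $mode" ;;
esac
```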
Migrating an Offline Cluster
To bring a cluster offline and upgrade the PowerHA software on the cluster nodes, complete the following procedure:
1. Stop cluster services (all nodes); ensure that cluster services have been stopped on all nodes
2. Migration-install AIX along with the CAA filesets; requires a reboot (all nodes)
3. Update the /etc/cluster/rhosts file with the cluster node hostname IP addresses, only one IP address per line
4. Refresh clcomd (refresh -s clcomd)
5. Run /usr/sbin/clmigcheck (select option 1) on one node
   - If the cluster cannot be migrated, this command will indicate that in error messages. The offending items must be removed from the back-level configuration. Changing the back-level configuration requires a verify/sync operation.
   - If it can be migrated, select option 3 and input the shared disk and IP address
6. Run /usr/sbin/clmigcheck on the next/last node to migrate (before installing version 7.1 on it) and verify that the CAA cluster has been created (use /usr/sbin/lscluster -m)
7. Install the new PowerHA software on each node
   - Update to the base-level install images, then apply the latest SPs on top
8. Review the /tmp/clconvert.log file to ensure that a conversion of the PowerHA ODMs has occurred
9. Start cluster services, one node at a time, and ensure that each node successfully joins the cluster

Snapshot Migration
To migrate a cluster using a cluster snapshot, complete the following procedure:
1. Stop cluster services (all nodes)
2. Migration-install the AIX version, including the CAA filesets; requires a reboot (all nodes)
3. Update the /etc/cluster/rhosts file with one public IP address per node (all nodes)
4. Refresh clcomd via refresh -s clcomd (all nodes)
5. Run /usr/sbin/clmigcheck option 2 on one node
   - If the cluster cannot be migrated, this command will indicate that in error messages. If there are errors, the snapshot cannot be migrated.
6. If it can be migrated, use option 3 to input the shared disk and, if required, the multicast IP address
7. Deinstall the current version of PowerHA
8. Install the PowerHA 7.1 software and the latest fixes (all nodes)
9. Convert and apply the back-level snapshot (on one node)
10. Verify that the CAA cluster has been created (use /usr/sbin/lscluster -m)
11. Start cluster services, one node at a time, and ensure that each node successfully joins the cluster

Additional resources
- Performing a rolling migration to PowerHA v7.1.3:
  https://www.youtube.com/watch?v=MaPxuK4poUw
- IBM PowerHA SystemMirror Standard Edition 7.1.1 for AIX Update Redbook:
  http://www.redbooks.ibm.com/redpieces/abstracts/sg248030.html
- IBM PowerHA SystemMirror 7.1.2 Enterprise Edition for AIX Redbook:
  http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg248106.html
- Follow me on Twitter: http://twitter.com/#!/POWERHAguy
- Subscribe to my YouTube channel: http://www.youtube.com/powerhaguy

BACKUP CHARTS

New features - Enterprise Edition 6.1 SP7
- XIV replication support: synchronous and asynchronous replication
- PowerHA, AIX & RSCT requirements
  - cluster.es.genxd 6.1.0.7
  - AIX 5.3 TL9 or higher with RSCT 2.4.12.0 or higher
  - AIX 6.1 TL2 SP1 or higher with RSCT 2.5.4.0 or higher
- IBM XIV Storage
  - Firmware software bundle 10.2.4 or later
  - XCLI version 2.4.4 or later, installed on all PowerHA nodes, with access to all XIV storage systems
  - All XIV storage systems reachable from all PowerHA nodes (via IPv4)
  - A valid XCLI user account
- Note: the Enterprise Edition install media will not be refreshed to include SP7. You must download it from fixdist and install the appropriate filesets as needed.

SMIT Menus
New fastpath: smit sysmirror (the hacmp fastpath is still valid)
  PowerHA SystemMirror
  Move cursor to desired item and press Enter.
  Cluster Nodes and Networks
  Cluster Applications and Resources
  System Management (C-SPOC)
  Problem Determination Tools
  Custom Cluster Configuration

For comparison, smit hacmp (v6.1):
  HACMP for AIX
  Move cursor to desired item and press Enter.
  Initialization and Standard Configuration
  Extended Configuration
  System Management (C-SPOC)
  Problem Determination Tools

Can't find what you are looking for? Not sure where to start?

SMIT Menus - Continued
smit sysmirror (v7.1):
  Cluster Nodes and Networks
  Move cursor to desired item and press Enter.
  Initial Cluster Setup (Typical)
  Manage the Cluster
  Manage Nodes
  Manage Networks and Network Interfaces
  Discover Network Interfaces and Disks
  Verify and Synchronize Cluster Configuration

  Cluster Applications and Resources
  Move cursor to desired item and press Enter.
  Make Applications Highly Available (Use Smart Assists)
  Resources
  Resource Groups
  Verify and Synchronize Cluster Configuration

smit hacmp (v6.1):
  Initialization and Standard Configuration
  Move cursor to desired item and press Enter.
  Configuration Assistants
  Configure an HACMP Cluster and Nodes
  Configure Resources to Make Highly Available
  Configure HACMP Resource Groups
  Verify and Synchronize HACMP Cluster Configuration
  Manage Cluster Services and Resource Groups
  Display HACMP configuration
  HACMP Cluster Test Tool

User Defined Resources
- Presently, a user can introduce a new resource type by creating an application server
  - PowerHA requires start/stop/monitor methods for that resource
- However, PowerHA follows a strict, pre-known order when handling resources
  - Volume groups are handled first
  - Application servers are handled last
- This approach is not flexible for end users
- What's new in this release: User Defined Resource Types
  - The user is allowed to develop a bundle which includes attributes, verifications, ordering for PowerHA, and so on
  - The framework accepts methods to verify/start/stop/monitor/cleanup/restart the user-defined resource
  - An XML file can be supplied as input containing the definition of the user-defined resource type
  - PowerHA allows you to create instances of the user-defined resource type, and these instances can be added into resource groups as resources
  (Diagram: acquisition order is DISK, VOLUME GROUP, FILE SYSTEM, SERVICE IP, APPLICATION, User Defined Resource; release order is the reverse)

Creating User Defined Resources
- smitty sysmirror -> Custom Cluster Configuration -> Resources -> Configure User Defined Resources and Types -> Configure User Defined Resource Types -> Add a User Defined Resource Type
- Then add the type as a resource
- Then add the resource into a resource group

Setting the Service IP Distribution Policy
- smitty cm_service_ip -> Configure Service IP Labels/Address Distribution Preference -> (choose network)

New Dependencies
- Previously, PowerHA supported only the parent-child dependency among resource groups
  - Child RGs start after all of their parents start
  - Child RGs stop before their parents stop
  - This is not sufficient to support complex applications like SAP or FileNet
  - The FileNet App Server needs to be started only after the database, but it need not be stopped if the DB is down for some time
- Two new dependencies are introduced in PowerHA 7.1:
  - STARTAFTER
  - STOPAFTER
  - These affect how an RG is started or stopped
- Simplified UI for managing these relations in both SMIT and Director

Dynamic Node Priority - new Adaptive Fallover
- Presently, PowerHA supports static and dynamic failover policies
  - Static: the new failover node is the next node in the node participation list
  - Dynamic node priority: allows choosing the failover node with a certain degree of free CPU, memory, or I/O activity
- This is not sufficient in some specific cases, such as SAP Enqueue replication, where the RG for the Enqueue server needs to fail over to where Enqueue Replication is running
- What's new in this release
  - A new dynamic failover policy where the user supplies a script that dictates the failover behavior
  - The supplied script is executed on all nodes by PowerHA to learn whether a given node can be used as the host for the failing-over RG
  - Since it is a script, any number of checks can be performed by the user to let PowerHA know, on a dynamic basis, whether the node can be used as a host
- The DNP feature is enhanced to support two more policies.
  - The return code of a user-defined script is used in determining the destination node:
    - cl_lowest_nonzero_udscript_rc
    - cl_highest_udscript_rc

Director Plug-in
- Replacement for WebSMIT
- A single, centralized view into all PowerHA SystemMirror clusters
  - Centralized, secure access point with single sign-on capability
- Two highly accessible interfaces:
  - Graphical interface
    - Maximum, interactive assistance with many tasks; instant, or nearly instant, help is available for just about everything
    - Maximum error checking
    - SystemMirror enterprise health summary
  - Textual interface
    - Maximum speed
    - Centralized, cross-cluster scripting
- A common, unified IBM STG interface: learn once, manage many

Additional Service IP Distribution Policies
- Three new service IP distribution policies:
  - Anti-Collocation with Source
  - Collocation with Source
  - Anti-Collocation with Persistent Label and Source
- These new choices are intended to address problems caused by AIX route-striping behavior
  - Some clients expect or require that packets coming from the application are marked with the application's IP address (sometimes this is required for firewall configuration)
- Specify the desired source address in the "Source IP Label for outgoing packets" field

Using Source Service IP Distribution Policies
(Diagram: en1 holds 9.16.128.201 (boot_ip1) and 9.16.128.250 (serv_ip1); the firewall rule allows srv_ip1 as the source address)

Clmgr cluster command line
- The Director plug-ins needed a consistent interface for SystemMirror.
  - Simplify management of clusters from Director
  - Reduce maintenance overhead
- Replacement for CLVT
  - The current Smart Assists utilize CLVT
  - Overcomes previous CLVT limitations: limited trace output and logging, globalization, ease of use
- Supported actions: add, delete, manage, modify, move, offline, online, query, recover, sync, view
- Supported object classes: cluster, site, node, interface, network, resource_group, service_ip, persistent_ip, application_controller, application_monitor, dependency, file_collection, fallback_timer, report, snapshot, tape; with incomplete coverage: volume_group, logical_volume, file_system, physical_volume, method

Clcmd cluster distributed commands
- /usr/sbin/clcmd, provided by CAA
- Distributes a command to all cluster nodes
- Ease of use; reminiscent of dsh from the SP2 days
- Example:
  # clcmd cat /etc/cluster/rhosts
  -------------------------------
  NODE jessica.dfw.ibm.com
  -------------------------------
  jessica
  jordan
  -------------------------------
  NODE jordan.dfw.ibm.com
  -------------------------------
  jessica
  jordan

Private Networks
- Oracle requires that a network can be reserved to it: no heartbeating or protocol traffic
- PowerHA 6.1 and prior supported this; PowerHA 7.1.0 did not
- PowerHA 7.1.1 restores the ability to declare a network as private
  - Interfaces are restricted from use by CAA
  - PowerHA lists local private network interfaces in /etc/cluster/ifrestrict
  - Do not restrict the interface that has the hostname IP
- Improved configuration
  - The network attribute can be changed without redefinition, provided the cluster is down
  - This restriction is to be removed in the service stream

HeartBeat Tuning
- New heartbeat tuning parameters
  - Grace period: the amount of time (in seconds) the node will wait before marking a node as DOWN.
    Accepted values are between 5 and 30 seconds.
  - Failure cycle: the frequency of the heartbeat. Accepted values are between 1 and 20 seconds.
- Settings apply to all networks across the cluster
- To change these settings from SMIT:
  smitty sysmirror -> Custom Cluster Configuration -> Cluster Nodes and Networks -> Manage the Cluster -> Cluster Heartbeat Settings
- These settings can also be modified from the command line using clmgr:
  clmgr modify cluster HEARTBEAT_FREQUENCY=10000 GRACE_PERIOD=5000
- The settings take effect only after the next sync

Systems Director Plug-in - Getting Started

Wizards - Creating a Cluster
- A yellow background means the value is required
- Only unused SystemMirror nodes are displayed
- Compatible levels and edition are checked as selections are made
- All labels are checked against SystemMirror naming rules
- Addresses are validated to be well-formed
- Common storage can be checked only after multiple selections
- Optionally select, or type, a persistent IP
- Optionally change the default controlling node
- Select a repository disk for the cluster
  - The node name, in parentheses, indicates the disk name is for that node
- Verification is optional, but a good idea; it completes the initial configuration of the cluster
- Cluster security settings can be adjusted if desired

Systems Director Plug-in - Management View
(Chart shows the content and navigation areas)

Systems Director Plug-in - Summary Panel

Rootvg System Event
- New kernel-level monitoring detects the loss of rootvg
- The default response is to log the event and reboot, causing a fallover to occur
- smitty sysmirror -> Custom Cluster Configuration -> Events -> System Events

  Change/Show Event Response
  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.
[Entry Fields] * Event Name ROOTVG + * Response Log event and reboot + * Active Yes + Federated Security Federated Security for single-point-of-control, cluster-wide security management. The Federated Security feature integrates support for 3 different components: Lightweight Directory Access Protocol (LDAP) Role Based Access Control (RBAC) Encrypted Files System (EFS) System requirements 2012 IBM Corporation 62 System requirements PowerHA SystemMirror Version 7.1.1, or later IBM LDAP 6.2 (Correct version of Gskit and DB2 packaged with LDAP), or later (Applicable for server and client both. ) Microsoft Windows Server: Microsoft Windows Server 2003/R2 Active Directory Microsoft Windows Server 2008/R2 Active Directory Services for UNIX (SFU) 3.5, or later, or the Subsystem for UNIX-based Applications (SUA) (Needed to configure rsh between AIX client and Windows server) expect.base Note: Either a cluster will be configured for Active Directory server or IBM TDS server. In either case the LDAP client will be IBM LDAP. rsh service needs to be configured in case we want the schema copy and attributes modification for AD to be done by SystemMirror configuration. Refer to SystemMirror Admin guide for more information. Configuring application startup mode Application controllers started in the background by default Add/Change controller menu has a new option for foreground startup Add Application Controller Scripts Type or select values in entry fields. Press Enter AFTER making all desired changes. 
                                              [Entry Fields]
* Application Controller Name                 []
* Start Script                                []
* Stop Script                                 []
  Application Monitor Name(s)                                        +
  Application startup mode                    [foreground]           +

- Foreground start causes cluster event processing to wait for completion of the application controller start script
- Simplifies the design of start scripts
- Allows sequencing of resource groups with dependencies
- Poorly designed scripts may cause hangs (config_too_long)
- Return codes are usually not checked; SP1 will cause an event error if RC=1

Physical Volume Rename
- A given physical volume may have different names on different nodes; this is almost guaranteed if the nodes access a different number of disks
- Only volumes not already part of a volume group can be renamed; the pick list offers only disks that are not in a volume group
- Can change all instances by PVID

Rename a Physical Volume
Type or select values in entry fields. Press Enter AFTER making all desired changes.

                                              [Entry Fields]
  Physical Volume Name                        hdisk2
  Physical Volume Identifier                  00f638bc49c50b8a
  Physical Volume is Known on these Nodes     r3r6m21,r3r6m22
  New Physical Volume Name                    []
  Change all Physical Volumes with this PVID?
  no

Display Disk UUID
- A convenient way to display Storage Framework information for a physical volume
- UUID is a Universal Unique ID, used to identify objects within CAA
- Intended to assist low-level trace and debug; users should not normally need this information
- C-SPOC -> Physical Volumes -> Show UUID for a Physical Volume

  Disk name:                        hdisk8
  Disk UUID:                        4c13e89df353574f 154e795c0bc89f5e
  Fence Group UUID:                 4cc81c16b64bf65d 51f638bc49c5094c - Fence Group vg00
  Disk device major/minor number:   20, 2
  Fence height:                     0 (Read/Write)
  Reserve mode:                     1 (Single Path)
  Disk Type:                        0x01 (Local access only)
  Disk State:                       0

DARE Progress Indicators
- Completion of a configuration change is not apparent, so users could start overlapping or contradictory changes
- With PowerHA 7.1.1, the user's terminal remains locked during DARE processing
- Progress indicators are displayed, showing the Cluster Manager state and on-going events
- Plan to fine-tune the display: remove confusing and redundant information
- Plan to back-port to PowerHA 6.1 and 5.5
- Example:
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_BARRIER
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_CBARRIER
  Cluster Manager Current state: ST_UNSTABLE
  Cluster Manager Current state: ST_UNSTABLE
  Cluster Manager Current state: ST_BARRIER
  Cluster Manager Current state: ST_BARRIER
  Cluster Manager Current state: ST_RP_RUNNING
  Cluster Manager Current state: ST_UNSTABLE
  Cluster Manager Current state: ST_UNSTABLE
  Cluster Manager Current state: ST_STABLE
  Cluster Manager Current state: ST_STABLE
  Cluster Manager Current state: ST_STABLE
  ...Completed

Smart Assists Enhancements
- Redefining component usage
- Currently, Smart Assist for WAS and Smart Assist for Oracle support different products as
components.
- Smart Assist for WAS supports the following components:
  IBM HTTP Server
  IBM Tivoli Directory Server
  IBM WebSphere Application Server - classic
  IBM WebSphere Application Server ND Deployment Manager
- Smart Assist for Oracle supports the following components:
  Oracle Database 10g/11g
  Oracle Application Server 10g R1
- Disadvantage: grouping support for different products (as components) affects consumability
- What's new in the next release: the existing Smart Assist for WAS and Smart Assist for Oracle are split into
  Smart Assist for WebSphere
  Smart Assist for IHS
  Smart Assist for TDS
  Smart Assist for Oracle Database
  Smart Assist for Oracle Application Server

PowerHA 7.1.2 Director Plug-in Enhancements
- Wizards:
  Cluster Create Wizard (single-site and multi-site deployment)
  Resource Group Creation Wizard (custom and Smart Assist based RG deployment)
  SAP liveCache HotStandby solution Wizard
  Federated Security Setup Wizard
  Volume Group Create Wizard (support for LVM mirror pools)
  Replication (Mirror) Group Wizard (HyperSwap setup)
- Management enhancements:
  Repository disk(s) management
  Resource group management
  Snapshots, networks, log files, etc.
  Reports management
  Notifications management
  Event-driven callouts
  Capacity-upgrade-based fallovers
  HyperSwap management
  File collections

SA Enhancements
- Discovery mechanism and manual configuration
- Until PowerHA 6.1, to get the list of Smart Assists, the framework ran all the discovery scripts and returned the list the user could work with. This has been changed: the installed Smart Assists are listed first, and only the discovery script of the chosen Smart Assist is run.
- Manual configuration: until PowerHA 6.1, a Smart Assist would automatically discover an application and its resources and allow you to configure them under PowerHA.
- However, if the discovery script fails, the user had no way to proceed further. Starting with PowerHA 7.1.0, the user has another option: supply the application details in an XML file (a template XML is supplied).
  smitty clsa -> <smart assist name> -> manual configuration

Smart Assists Enhancements
- List of all Smart Assists: SystemMirror 7.1.0 and SystemMirror 7.1.1 (table shown on the slide)

Summary
What has changed:
- Cluster monitoring: CAA instead of topsvcs
- Multicast vs. heartbeat rings; no dependence on boot addresses for heartbeat
- Graphical user interface: ISD Plug-in instead of WebSMIT
- Introduction of a robust CLI for PowerHA configuration, control, and status: clmgr is an updated version of clvt, leveraged by the ISD Plug-in

What has not changed:
- Cluster manager event processing: the same event scripts and event flows for resource control; pre-/post-event configuration and processing
- Resource and resource group configuration: the same constructs configured in the HACMP ODM (resource groups, service addresses, application start/stop, application monitoring)
- Synchronization of the HACMP ODM to all nodes (the PowerHA cluster configuration is NOT central)
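Because the configuration is replicated into each node's HACMP ODM rather than held in a central store, one way to sanity-check that the copies agree is to checksum an ODM class on every node with the clcmd distributed command, e.g. clcmd "odmget HACMPcluster | cksum". That command choice is an assumption for illustration; the comparison logic itself is trivial, sketched here against stand-in checksum values:

```shell
# Compare per-node checksums of a replicated configuration object. The two
# values below are stand-ins for the cksum output collected from each node.
node_a_sum="1914728593 412"
node_b_sum="1914728593 412"

if [ "$node_a_sum" = "$node_b_sum" ]; then
    echo "configuration copies match"
else
    echo "configuration copies differ - run a verify/sync"
fi
```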