Contents
Active/Active Configuration Terminology
Advantages of active/active clustering
Characteristics of nodes in an active/active configuration
Standard active/active configurations
Setup requirements and restrictions for standard active/active configurations
Configuring an active/active configuration
Understanding mirrored active/active configurations
Advantages of mirrored active/active configurations
Understanding stretch MetroClusters
Advantages of stretch MetroCluster configurations
Stretch MetroCluster configuration
Setup requirements and restrictions for stretch MetroCluster configurations
Fabric-attached MetroClusters use Brocade Fibre Channel switches
Best practices for deploying active/active configurations
When one node fails or becomes impaired, a takeover occurs and the partner node continues to serve the failed node's data.

Nondisruptive software upgrades
When you halt one node and allow takeover, the partner node continues to serve data for the halted node while you upgrade it.
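As a sketch of that upgrade sequence, assuming Data ONTAP 7-Mode cf commands and illustrative hostnames:

```
ntap-1> cf takeover              (ntap-1 takes over; ntap-2 halts and can be upgraded)
...install the new Data ONTAP release on ntap-2 and reboot it...
ntap-1(takeover)> cf giveback    (ntap-2 resumes serving its own data)
```

The same takeover/giveback pair is then repeated in the other direction to upgrade the first node.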
The nodes use the interconnect to do the following tasks:
- Continually check whether the other node is functioning
- Mirror log data for each other's NVRAM
- Synchronize each other's time
They use two or more disk shelf loops, in which the following conditions apply:
- Each node manages its own disks
- Each node in takeover mode manages its partner's disks
The mailbox disks are used to do the following tasks:
- Maintain consistency between the pair
- Continually check whether the other node is running or whether it has performed a takeover
- Store configuration information that is not specific to any particular node

The nodes can reside on the same Windows domain or on different domains.
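One way to observe the result of these checks, assuming the 7-Mode cf status command and illustrative hostnames (output paraphrased):

```
ntap-1> cf status
Cluster enabled, ntap-2 is up.
```

If the interconnect or partner were down, cf status would report that instead, which makes it a quick first check before attempting a takeover or giveback.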
How Data ONTAP works with standard active/active configurations

In a standard active/active configuration, Data ONTAP functions so that each node monitors the functioning of its partner through a heartbeat signal sent between the nodes. Data from the NVRAM of one node is mirrored by its partner, and each node can take over the partner's disks if the partner fails. The nodes also synchronize each other's time.
STEP 6: Access the partner in takeover mode

Use the partner command to enter partner emulation mode:

ntap-1(takeover)> partner
ntap-2/ntap-1>

The partner command places you in the context of the partner controller. To confirm the takeover, enter the hostname command on the active controller; the output displays the hostname of the local controller:

ntap-1(takeover)> hostname
ntap-1

Use the ifconfig command with the partner option to configure the partner address:

ntap-1> ifconfig e0b x.x.x.x1 netmask 255.255.255.0 partner x.x.x.x2
STEP 9: Make the entries persistent across reboots by updating the /etc/rc file

Use wrfile -a to append the entries to the /etc/rc file:

ntap-1> wrfile -a /etc/rc ifconfig e0b x.x.x.x1 netmask 255.255.255.0 partner x.x.x.x2
ntap-1>

Use the rdfile command to confirm the entries:

ntap-1> rdfile /etc/rc
STEP 10: Configure the interface on controller ntap-2 with the partner IP address so that it can provide services to the partner node after a failover

Use the ifconfig command with the partner option to configure the partner address:

ntap-2> ifconfig e0b x.x.x.x2 netmask 255.255.255.0 partner x.x.x.x1
ntap-2> ifconfig -a
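To verify that the partner setting took effect, you can inspect the single interface; a sketch assuming the same illustrative hostnames and addresses, with output paraphrased from 7-Mode ifconfig:

```
ntap-2> ifconfig e0b
e0b: flags=<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet x.x.x.x2 netmask 255.255.255.0
        partner inet x.x.x.x1 (not in use)
```

The "partner ... (not in use)" line indicates the address the interface will assume only while a takeover is in effect.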
Note: Mirrored active/active configurations do not provide the capability to fail over to the partner node if one node is completely lost. For example, if power is lost to one entire node, including its storage, you cannot fail over to the partner node. For this capability, use a MetroCluster.
In addition, the failure of an FC-AL adapter, loop, or disk shelf module does not require a failover in a mirrored active/active configuration.
Similar to standard active/active configurations, if either node in a mirrored active/active configuration becomes impaired or cannot access its data, the other node can automatically serve the impaired node's data until the problem is corrected.
You must enable the following licenses on both nodes:
- cluster
- syncmirror_local
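These can be added with the 7-Mode license command on each node; a minimal sketch, with the bracketed placeholders standing in for the license keys supplied by NetApp:

```
ntap-1> license add <cluster-license-code>
ntap-1> license add <syncmirror_local-license-code>
ntap-1> license                  (verify that both licenses are listed)
```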
Like mirrored active/active configurations, stretch MetroClusters contain two complete copies of the data volumes or file systems that you designated as mirrored in your active/active configuration. These copies are called plexes and are continually and synchronously updated every time Data ONTAP writes data to the disks. Plexes are physically separated from each other across different groupings of disks.
Unlike mirrored active/active configurations, MetroClusters provide the capability to force a failover when an entire node (including the controller and storage) is destroyed or unavailable.
In addition, a MetroCluster enables you to use a single command to initiate a failover if an entire site becomes lost or unavailable. If a disaster occurs at one of the node locations and destroys your data there, your data not only survives on the other node, but can be served by that node while you address the issue or rebuild the configuration.
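In Data ONTAP 7-Mode, that single command is cf forcetakeover with the disaster (-d) option, issued on the surviving node; the hostname here is illustrative:

```
ntap-1> cf forcetakeover -d
```

Because -d forces the takeover even though the partner's disks are unreachable, it should be issued only after confirming that the disaster site is genuinely down.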
Continued data service after loss of one node with MetroCluster The MetroCluster configuration employs SyncMirror to build a system that can continue to serve data even after complete loss of one of the nodes and the storage at that site. Data consistency is retained, even when the data is contained in more than one aggregate.
The following licenses must be enabled on both nodes:
- cluster
- syncmirror_local
- cluster_remote