Cluster Installation
Single-Node Configuration
March 2011
Legal Notices
Copyright ©2005-2011 VMware, Inc. All rights reserved. This product is protected by U.S. and
international copyright and intellectual property laws. VMware products are covered by one or more
patents listed at http://www.vmware.com/go/patents.
VMware and Zimbra are registered trademarks or trademarks of VMware, Inc. in the United States and/
or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.
VMware, Inc.
3401 Hillview Avenue
Palo Alto, California 94304 USA
www.zimbra.com
ZCS 7.1
March 2011
ZCS Cluster Installation - Single-Node Configuration Guide
This guide describes configuring one active node and one standby node in a
single-node cluster environment.
Topics in this chapter include:
Pre-configuration Requirements
Cluster Installation Overview
Preparing the SAN
Downloading the ZCS Software
Installing and Configuring Active Node Cluster Services
Installing the ZCS Cluster on the Standby Node
Preparing Red Hat Cluster Suite for ZCS
Configuring Red Hat Cluster for ZCS
Start the Red Hat Cluster Suite Daemons
Testing the Cluster Setup
Configuring ZCS in VCS
View Zimbra Cluster Status
Before you install your cluster environment, also read the following guides:
Red Hat Installation Modifications for ZCS, which describes how to prepare the Red
Hat Enterprise Linux operating system for ZCS
Network Edition of ZCS 7.1 Quick Start Installation Guide, which describes ZCS single
server installation and the system requirements
To get the latest copy of the documentation, go to http://www.zimbra.com/
support/documentation.html.
For cluster integration to provide high availability, VMware Zimbra
Collaboration Server (ZCS) 7.1 can integrate with the following:
Red Hat® Cluster Suite running on Red Hat Enterprise Linux® AS or ES
• Release 4, Update 5
• Release 5, Update 3
In the single-node cluster implementation, all Zimbra servers are part of a
cluster under the control of the Red Hat Cluster Manager.
Note: Red Hat Cluster Suite consists of Red Hat Cluster Manager and
Linux Virtual Server Cluster. For ZCS, only Red Hat Cluster
Manager is used. In this guide, Red Hat Cluster Suite refers only to
Cluster Manager.
Note: This guide does not explain how to use the cluster management
software. Before setting up the ZCS cluster environment, you should
know the concepts and terminology of the software you are using to
manage high availability.
Pre-configuration Requirements
Both active and standby servers must meet the requirements described in the
ZCS Quick Start Installation Guide, in addition to the requirements described
here.
If you are using Veritas Cluster Server, go to the Symantec website for
specific system requirements for cluster configurations. If you are not
familiar with Veritas Cluster Server, read the Veritas Cluster Server User’s
Guide.
Note: You can place all service data on a single volume or spread the
service data across multiple volumes.
Configure the SAN device and create the partitions for the volumes. Refer to
the cluster software documentation for configuration requirements.
If you choose to configure the SAN as one volume with subdirectories, all
service data goes under a single SAN volume.
If you choose to partition the SAN into multiple volumes, the SAN device is
partitioned to provide multiple volumes for each Zimbra mailbox server
in the cluster. An example of the types of volumes that can be created follows.
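As an illustration only (the device names and labels below are hypothetical, not
values produced by the installer), a multi-volume layout might dedicate separate
SAN partitions to different classes of service data:
# Hypothetical multi-volume layout; device names and labels are examples only.
/dev/sdb1  LABEL=cluster01-db      # message store database
/dev/sdb2  LABEL=cluster01-index   # search index
/dev/sdb3  LABEL=cluster01-store   # message store
/dev/sdb4  LABEL=cluster01-log     # log and redo log files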
3. Change to the directory where the file was unpacked and type the following
command to begin the cluster install.
./install.sh --cluster active
4. The node requires Zimbra and Postfix users and groups. Type the Zimbra
group ID and Zimbra user ID to be used. The same user and group IDs
must be used on both active and standby nodes.
a. Type the Zimbra group ID (GID) to be used. The default is 500.
b. Type the Postfix group ID. The default is 501.
c. Type the Postdrop group ID. The default is 502.
d. Type the Zimbra user ID (UID) to be used. The default is 500.
e. Type the Postfix user ID. The default is 501.
The root directory for the mount points is created.
Each Zimbra cluster node needs zimbra and postfix users and groups.
The same user and group IDs must be used on all nodes. If they are not,
some nodes will not be able to access files on the SAN owned by these
users and groups.
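A quick way to confirm that the IDs match across nodes is the standard id
command; zimbra and postfix are the accounts created during the install:
# Run on every node and compare; the uid and gid values must be identical.
id zimbra
id postfix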
5. On the active node, create mount points for the cluster services. Enter one
service name per prompt. If you are installing on one volume as in this
example, you create only one mount point.
• Type the cluster service hostname and press Enter.
• Type done when all mount points are created.
On every mailbox server node you need to create mount points for all
cluster services. Enter one service name per prompt.
cluster.example.com
6. Enter the cluster service hostname that will be active on this node. This is
the same as the public hostname; it is not the same as the node hostname.
7. Mount the SAN volume(s). You can mount one volume for all services, or
you can mount separate volumes. The following command mounts one
volume for all services. To mount by label, as root, type a command of the
following form (the label and mount point in the example below are illustrative):
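# Example values only: the label cluster01 and this mount point match the
# configuration summary shown later in this guide; substitute your own.
mount -L cluster01 /opt/zimbra-cluster/mountpoints/cluster.example.com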
Now rerun ./install.sh --cluster active to install the ZCS software.
Installing ZCS
Refer to the Installing Zimbra Software section of the Zimbra Collaboration
Server 7.1 Single Server Installation Guide, Network Edition, for complete
ZCS installation instructions. Make the following changes to those
installation instructions:
2. For step 5, select the Zimbra packages to install, including the Zimbra-
Cluster package, which is marked Y.
3. When the DNS error about resolving the MX record displays, enter Yes to
change the domain name. Modify the domain name to the cluster service
hostname (not the active node name).
4. Review the Common Configuration menu to verify that the host name and
LDAP master host name have been changed to the cluster service
hostname. When the ZCS installation is complete, there should be no
reference to the active node name.
When Configuration Complete - press return to exit displays, the cluster install
on the active node is complete.
Now you install the standby node.
3. Change to the directory where the file was unpacked and type the following
command to begin the cluster install.
./install.sh --cluster standby
4. Type the Zimbra group ID (GID) and Zimbra user ID (UID) to be used. The
same user and group IDs must be used on both nodes.
a. Type the Zimbra group ID (GID) to be used. The default is 500.
5. To create mount points for the cluster services on the standby node, type
the active node cluster service hostname. Press Enter. Mount point(s) are
created for the cluster. These are the same service names as on the active
host.
6. Type done to finish the mount point configuration.
7. The ZCS installer automatically starts installing the ZCS software packages.
Select the same Zimbra packages that were installed on the active host. No
modifications are necessary when installing the packages.
The Zimbra processes are stopped, various cluster-specific adjustments
are made to the ZCS installation, and unnecessary data files are deleted.
After the software installation is complete, you are asked to enter the active
cluster service name for this standby node. This creates the symlink /opt/
zimbra.
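To verify the symlink after the installer finishes, you can use ls; the link
should point at the mount point for the cluster service name you entered:
# The symlink target should be the service mount point entered above.
ls -l /opt/zimbra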
Fence Device. This is the network power switch. The active node is plugged
into the fence device. The cluster uses the fence device for I/O fencing
during failover.
Cluster Nodes. The active node is added as a member to the cluster and
the fence device setting is configured for the active node.
Managed Resources. The preferred node for each service and the list of
volumes to be mounted from the SAN are configured.
1. When ZCS installation is complete on both the active and standby nodes,
on the active node, change to the directory where the ZCS .tgz file was
unpacked and start the configure-cluster.pl script. Type:
./bin/configure-cluster.pl
Press Enter to continue.
The configurator checks to verify that the server installation is correct.
2. Each Zimbra cluster on the network must have a unique name to avoid
interfering with other Red Hat Cluster Suite clusters. Enter a name to
identify this cluster; the name can be a maximum of 16 characters. Press
Enter.
3. Select the network power switch type that is used as the fence device. For
ZCS configuration, you must select either APC or WTI as the network
power switch device, even if this is not the device you are using. After the
cluster configuration is complete, you can change the generated
configuration file from the Red Hat Cluster Manager Console system-
config-cluster GUI tool.
a. Enter the number that corresponds to the fence device vendor:
• 1 for APC
• 2 for WTI
b. Enter the fence device hostname/IP address, login, and password.
4. Enter the active node’s fully-qualified hostname and the plug number of the
fence device associated with the node’s power cord. When the node is
identified, type Done.
5. For each service, you need to choose a preferred node for it to run on and
enter the list of volumes to be mounted from the SAN. In a single-server
cluster configuration, only one service is available.
Select the cluster service. Select 1.
Type Done when complete.
Choose a service:
1) cluster.example.com
2) Done
Choose from above (1-2): 1
7. Enter the cluster mount label defined for the active node.
Volume cluster.example.com-vol:
mount point = /opt/zimbra-cluster/mountpoints/cluster.example.com
Enter device name (e.g. /dev/sda5, LABEL=mylabel):
A value must be entered!
Choose a service:
1) cluster.example.com
2) Done
Choose from above (1-2): 2
Configuration Summary
--------------------
Fence Device:
name: fence-device
agent: fence_apc
ipaddr: <fence device hostname or IP address>
login: apc
passwd: apc
Nodes:
node1.example.com - fence port 1
node2.example.com - fence port 2
Services:
cluster.example.com
ipaddr: 10.10.141.200
preferred node: node1.example.com
volumes:
cluster.example.com-vol
mountpoint: /opt/zimbra-cluster/mountpoints/cluster.example.com
device: LABEL=cluster01
-----------------------------------------------------------
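These summary values correspond to entries in the generated Red Hat Cluster
Suite configuration file, /etc/cluster/cluster.conf. As a rough sketch (element
placement and attributes vary by Red Hat Cluster Suite version, and the ipaddr
value is a placeholder), the fence device above appears as a stanza such as:
<fencedevices>
  <fencedevice agent="fence_apc" name="fence-device" ipaddr="..." login="apc" passwd="apc"/>
</fencedevices>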
10. The configuration file must be copied to the standby node. The ZCS
configurator script can copy the file, or you can do it manually. If you want
the script to copy the file to the standby node, enter Y. Enter the root
password if prompted.
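If you copy the file manually instead, the generated file is /etc/cluster/
cluster.conf; a representative copy command, using the example standby node
name from the summary above:
# Copy the cluster configuration to the standby node as root.
scp /etc/cluster/cluster.conf root@node2.example.com:/etc/cluster/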
Important:
Log on to each node as root.
Run tail -f /var/log/messages on each node to watch for any errors.
Open another session for each node.
To start the Red Hat Cluster Service on a member, type the following
commands in this order. Remember to enter the command on each node
before proceeding to the next command.
1. service ccsd start. This is the cluster configuration system daemon that
synchronizes configuration between cluster nodes.
2. service cman start. This is the cluster heartbeat daemon. It returns when
both nodes have established heartbeat with one another.
3. service fenced start. This is the cluster I/O fencing system that allows
cluster nodes to reboot a failed node during failover.
4. service rgmanager start. This is the resource group manager that starts
and manages the cluster services.
The service rgmanager start command returns immediately, but initializing the
cluster and bringing up the ZCS application for the cluster services on the
active node may take some time.
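Taken together, the startup sequence looks like the following. Run each
command as root on every node, and wait for it to complete on all nodes before
issuing the next one:
# Start the Red Hat Cluster Suite daemons in dependency order.
service ccsd start
service cman start
service fenced start
service rgmanager start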
After all commands have been issued on both nodes, run the clustat command
on the active node to verify that all cluster services have been started.
Continue to enter the clustat command until it reports that all nodes have
joined the cluster and all services have been started.
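Rather than rerunning the command by hand, clustat can refresh its display on
an interval, assuming your version supports the -i flag:
# Refresh the cluster status display every 5 seconds; press Ctrl-C to stop.
clustat -i 5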
Because nodes may not join the cluster in sequence, some services may start
on nodes other than their configured preferred nodes. This is expected; the
services will eventually be restarted on their configured preferred nodes.
When clustat shows all services running on the active node, the cluster
configuration is complete.
If the services do not relocate to the active node after several minutes, you
can issue Red Hat Cluster Suite utility commands to manually correct the
situation.
Note: Services failing to start on the preferred node is usually an issue that
happens only the first time the cluster is started.
For the cluster service that is not running on the active node, run clusvcadm -d
<cluster service name> as root on the active node to disable the service.
[root@node1.example.com]# clusvcadm -d mail1.example.com
Then run clusvcadm -e <cluster service name> -m <active node hostname>
to mount the SAN volumes of the service, bring up the service IP address, and
start the Zimbra processes.
[root@node1.example.com]# clusvcadm -e mail1.example.com -m
node1.example.com
1. Log in to the remote power switch and turn off the active node.
2. Run tail -f /var/log/messages on the standby node. You will observe the
cluster become aware of the failed node, I/O fence it, and bring up the
failed service on the standby node.
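After the powered-off node is turned back on and has rejoined the cluster, you
can move the service back to its preferred node; clusvcadm supports relocation
(the service and node names below are the examples used earlier):
# Relocate the service back to the preferred node after the failover test.
clusvcadm -r mail1.example.com -m node1.example.com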
Define dependencies in the service group. Make sure that all file systems for
the ZCS instance are mounted in the correct order and that the public IP
address is brought up before the application is started.
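As a rough sketch only (the group and resource names, devices, and paths below
are hypothetical, and bundled agent attributes vary by Veritas Cluster Server
version), the dependencies might be expressed in main.cf as follows, using the
example mount point and service IP address from the Red Hat configuration
above:
// Hypothetical VCS service group for ZCS; adjust all names and values.
group zcs_sg (
    SystemList = { node1 = 0, node2 = 1 }
    )

    Mount zcs_mount (
        MountPoint = "/opt/zimbra-cluster/mountpoints/cluster.example.com"
        BlockDevice = "/dev/sdb1"
        FSType = ext3
        )

    IP zcs_ip (
        Device = eth0
        Address = "10.10.141.200"
        )

    Application zcs_app (
        StartProgram = "/opt/zimbra/bin/zmcontrol start"
        StopProgram = "/opt/zimbra/bin/zmcontrol stop"
        )

    // The application starts only after its file system and IP are online.
    zcs_app requires zcs_mount
    zcs_app requires zcs_ip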