This article describes the active-active technique for high availability, which uses both
vertical and horizontal clustering and, compared to the active-passive technique, improves
continuous availability, performance, throughput, and scalability in WebSphere MQ and
WebSphere Message Broker.
Introduction
Part 1 in this article series covered the basics of multi-instance queue managers and multi-
instance brokers, and described the active-passive technique of high availability and horizontal
clustering. This article describes the new multi-instance broker feature in IBM WebSphere
Message Broker, and shows you how to use it to configure an active-active load-balanced
environment. To implement this environment, you need to cluster the WebSphere Message Broker
and WebSphere MQ components both horizontally and vertically, as shown in Figure 1.
Vertical clustering
Vertical clustering is achieved by clustering the queue managers using WebSphere MQ clustering,
which optimizes processing and provides the following advantages:
Increased availability of queues, since multiple instances are exposed as cluster queues
Faster throughput of messages, since messages can be delivered on multiple queues
Better distribution of workload based on non-functional requirements
Horizontal clustering
Horizontal clustering is achieved by clustering the queue managers and brokers using the multi-
instance feature, which provides automatic failover: if the active instance fails, the standby
instance on another server takes over without operator intervention.
Combining vertical and horizontal clustering makes full use of the individual physical servers for
availability, scalability, throughput, and performance. WebSphere MQ and WebSphere Message
Broker enable you to use both clustering techniques individually or together.
System information
Examples in this article were run on a system using WebSphere MQ V7.0.1.4 and WebSphere
Message Broker V7.0.0.3, with four servers running on SUSE Linux 10.0. Here is the topology of
active-active configurations on these four servers:
Active-active HA topology
wmbmi1.in.ibm.com and wmbmi2.in.ibm.com host the multi-instance queue managers
IBMESBQM1 and IBMESBQM2 and the multi-instance brokers IBMESBBRK1 and IBMESBBRK2.
wmbmi3.in.ibm.com hosts the queue manager IBMESBQM3, which acts as the gateway queue
manager for the WebSphere MQ cluster IBMESBCLUSTER. Clients use this queue manager for
sending messages.
wmbmi4.in.ibm.com hosts the NFS V4 mount points, which are used by the multi-instance queue
managers and multi-instance brokers to store their runtime data.
Using two multi-instance queue managers and their multi-instance brokers overlapped with a
WebSphere MQ cluster provides a continuously available solution with no downtime. When the
active instance of one queue manager goes down, the other queue manager takes over the
complete load until the standby instance starts up, which meets the goal of enhanced system
availability.
Set the uid of the mqm user and the gid of the mqm group to be identical on all systems. Create
log and data directories in a common shared folder named /mqha. Make sure that the mqha
directory is owned by the user and group mqm, and that the access permissions are set to rwx for
both user and group. The commands below are executed as the root user on wmbmi4.in.ibm.com:
Creating and setting ownership for directories under the shared folder /mqha
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK1
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK2
[root@wmbmi4.in.ibm.com]$ chown -R mqm:mqm /mqha
[root@wmbmi4.in.ibm.com]$ chmod -R ug+rwx /mqha
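The creation listing for the first queue manager is not reproduced here. A minimal sketch,
assuming IBMESBQM1 is created on wmbmi1.in.ibm.com as the user mqm against the shared data
and log directories above:
[mqm@wmbmi1.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM1/data -ld /mqha/WMQ/IBMESBQM1/logs IBMESBQM1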
After the queue manager is created, display the properties of this queue manager using the
command below:
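The standard form of this command is dspmqinf with the -o command option, which prints an
addmqinf command suitable for pasting on the second server:
[mqm@wmbmi1.in.ibm.com]$ dspmqinf -o command IBMESBQM1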
Copy the output from the dspmqinf command and paste it on the command line on
wmbmi2.in.ibm.com from the console of user mqm:
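The pasted command has the standard addmqinf form; a sketch, assuming the data path created
above:
[mqm@wmbmi2.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1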
The multi-instance queue manager IBMESBQM1 is now created. Next, create the second multi-
instance queue manager, IBMESBQM2, on wmbmi2.in.ibm.com. Log on as the user mqm and issue
the command:
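A sketch of the creation command, mirroring the first queue manager:
[mqm@wmbmi2.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM2/data -ld /mqha/WMQ/IBMESBQM2/logs IBMESBQM2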
After the queue manager is created, display the properties of the queue manager using the
command below:
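As before, the likely form is:
[mqm@wmbmi2.in.ibm.com]$ dspmqinf -o command IBMESBQM2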
Copy the output from the dspmqinf command and paste it on the command line on
wmbmi1.in.ibm.com from the console of user mqm:
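Again, a sketch of the pasted addmqinf command:
[mqm@wmbmi1.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2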
Next, display the queue managers on both servers using the dspmq command on each. The
results should look like this:
[mqm@wmbmi2.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1) STATUS(Ended immediately)
QMNAME(IBMESBQM2) STATUS(Ended immediately)
The multi-instance queue managers IBMESBQM1 and IBMESBQM2 have been created on the
servers wmbmi1.in.ibm.com and wmbmi2.in.ibm.com. Start the multi-instance queue managers in
the following sequence:
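The start sequence listing is not shown here. A sketch, assuming IBMESBQM1 is to run active on
wmbmi1.in.ibm.com and IBMESBQM2 active on wmbmi2.in.ibm.com; the -x flag permits standby
instances, so the first strmqm -x for each queue manager starts the active instance and the
second starts the standby:
[mqm@wmbmi1.in.ibm.com]$ strmqm -x IBMESBQM1
[mqm@wmbmi2.in.ibm.com]$ strmqm -x IBMESBQM1
[mqm@wmbmi2.in.ibm.com]$ strmqm -x IBMESBQM2
[mqm@wmbmi1.in.ibm.com]$ strmqm -x IBMESBQM2
The listeners mentioned in the next step can be defined with CONTROL(QMGR) so that they follow
the active instance; the ports (1414 and 1415) are assumptions for this sketch:
[mqm@wmbmi1.in.ibm.com]$ runmqsc IBMESBQM1
DEFINE LISTENER(IBMESBQM1.LSTR) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(IBMESBQM1.LSTR)
END
Define a matching listener for IBMESBQM2 on port 1415.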
After the listeners are created and started, add the queue managers to the cluster and then create
channels between the full repository queue managers. Issue this command on both multi-instance
queue managers IBMESBQM1 and IBMESBQM2:
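The command itself is not reproduced here; making each of these queue managers a full
repository for the cluster is a single ALTER QMGR command in runmqsc:
[mqm@wmbmi1.in.ibm.com]$ runmqsc IBMESBQM1
ALTER QMGR REPOS(IBMESBCLUSTER)
END
Run the same command against IBMESBQM2 on wmbmi2.in.ibm.com.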
After completion, create cluster-sender and cluster-receiver channels between the full repository
queue managers by issuing the commands below:
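A sketch of these definitions, using the assumed listener ports above. Because each queue
manager is multi-instance, each CONNAME lists both servers on which that queue manager can
run:
[mqm@wmbmi1.in.ibm.com]$ runmqsc IBMESBQM1
DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)
DEFINE CHANNEL(TO.IBMESBQM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('wmbmi2.in.ibm.com(1415),wmbmi1.in.ibm.com(1415)') CLUSTER(IBMESBCLUSTER)
END
Define the mirror-image pair of channels on IBMESBQM2.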
Channels are now set up between the two multi-instance queue managers for sharing MQ cluster
repository information. Next, set up the channels between the partial repository gateway queue
manager (IBMESBQM3) and one of the full repository queue managers, such as IBMESBQM1.
Execute the commands below on queue manager IBMESBQM3:
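A sketch, assuming port 1416 for the gateway (the port later passed to the JmsProducer utility):
[mqm@wmbmi3.in.ibm.com]$ runmqsc IBMESBQM3
DEFINE CHANNEL(TO.IBMESBQM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('wmbmi3.in.ibm.com(1416)') CLUSTER(IBMESBCLUSTER)
DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)
END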
After the three queue managers are added to the cluster, the WebSphere MQ cluster topology is
complete: IBMESBQM1 and IBMESBQM2 are full repositories, and IBMESBQM3 is the partial
repository gateway.
All of the queue managers have now been added to the cluster. Next, define local queues on the
full repository queue managers and expose them on the cluster for workload balancing. Execute
the following command on the full repository queue managers IBMESBQM1 and IBMESBQM2:
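The listing is not reproduced here; a sketch in runmqsc, with DEFPSIST(YES) matching the
persistence requirement described under Message flow below:
[mqm@wmbmi1.in.ibm.com]$ runmqsc IBMESBQM1
DEFINE QLOCAL(IBM.ESB.IN) CLUSTER(IBMESBCLUSTER) DEFPSIST(YES)
DEFINE QLOCAL(IBM.ESB.OUT) DEFPSIST(YES)
END
Repeat the same definitions on IBMESBQM2.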
Queue IBM.ESB.IN will be used by WebSphere Message Broker flows for processing messages.
Execute the command below on IBMESBQM3. The alias queue INPUT is exposed so the
application can put messages on the cluster queue:
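A sketch of the alias definition; DEFBIND(NOTFIXED) is an assumption here, so that successive
messages can be balanced across the clustered instances of IBM.ESB.IN:
[mqm@wmbmi3.in.ibm.com]$ runmqsc IBMESBQM3
DEFINE QALIAS(INPUT) TARGET(IBM.ESB.IN) DEFBIND(NOTFIXED)
END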
Start the multi-instance brokers before the next step of creating the DataFlowEngine on these
brokers. Execute the commands below to start the brokers. The active instance of each multi-
instance broker is instantiated on the server where the corresponding multi-instance queue
manager is running in the active state.
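A sketch, assuming the brokers were created as multi-instance brokers against the shared
directories set up earlier (for example, mqsicreatebroker IBMESBBRK1 -q IBMESBQM1
-e /mqha/WMB/IBMESBBRK1). The first mqsistart for each broker starts the active instance; the
matching mqsistart on the other server starts the standby:
[mqm@wmbmi1.in.ibm.com]$ mqsistart IBMESBBRK1
[mqm@wmbmi2.in.ibm.com]$ mqsistart IBMESBBRK1
[mqm@wmbmi2.in.ibm.com]$ mqsistart IBMESBBRK2
[mqm@wmbmi1.in.ibm.com]$ mqsistart IBMESBBRK2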
Execute the commands below on the servers where the active instances of the multi-instance
brokers are running. The execution group IBMESBEG will be created:
Create DataFlowEngine
mqsicreateexecutiongroup IBMESBBRK1 -e IBMESBEG
mqsicreateexecutiongroup IBMESBBRK2 -e IBMESBEG
The input queues are marked as persistent, so if a failure occurs while messages are already on
the INPUT queue (IBM.ESB.IN) and not yet picked up by broker flow processing, the messages
are not lost. The transaction mode on the JMS nodes is set to Local, which means that messages
are received under the local sync point of the node; any messages later sent by an output node
in the flow are not put under the local sync point, unless an individual output node specifies that
the message must be put under the local sync point.
Message flow
1. The input and output queues (IBM.ESB.IN and IBM.ESB.OUT) have their persistence
properties set to Persistent, which means that all messages arriving on these queues are
made persistent, to prevent any message loss during failover.
2. Input messages are sent through the JmsProducer utility (available in the WebSphere MQ JMS
samples). This standalone JMS client is modified to generate messages with a sequence
number in the payload. The input message is a simple XML message.
3. JmsProducer.java appends the sequence number to the input message:
<TestMsg><Message>Hello World #Seq_Num#</Message></TestMsg>.
4. The broker message flow reads the message and adds two more values to the message:
the name of the broker processing the message, and the timestamp when the message was
processed. These two values are added to the message to help in the testing process.
The .bindings file is now available for use in /home/mqm/qcf/QCF1/. You can also perform the
JMS configuration using MQ Explorer. For more information, see Using the WebSphere MQ JMS
administration tool in the WebSphere MQ V7.5 information center.
8. Repeat the above steps to generate the bindings for queue manager IBMESBQM2, and place
the file in the directory /home/mqm/qcf/QCF2.
9. Deploy the flow on the brokers IBMESBBRK1 and IBMESBBRK2 in the cluster.
10. Use the JmsProducer utility as shown below to send the messages to the gateway queue
manager, which in turn will send the messages to the input queues of the message flows:
Run JmsProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "<TestMsg>
<Message>Hello World</Message></TestMsg>"
You can use the JmsConsumer utility (available in the WebSphere MQ JMS samples) to consume
the messages generated by the message flow. The output queue of the flow (IBM.ESB.OUT) is
configured to trigger the JmsConsumer utility when the first message arrives on the queue. When
triggered, JmsConsumer consumes messages from the IBM.ESB.OUT queue and writes them to a
common flat file called ConsumerLog.txt in the directory /mqha/logs. One instance of this
JmsConsumer utility is triggered for each of the queue managers in the cluster. Add runmqtrm as
a WebSphere MQ service so that when the queue manager starts, it also starts the trigger
monitor.
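The trigger and service definitions are not shown in this article; a sketch in runmqsc, assuming
the default initiation queue and a hypothetical wrapper script /mqha/scripts/runJmsConsumer.sh
that launches the JmsConsumer utility:
[mqm@wmbmi1.in.ibm.com]$ runmqsc IBMESBQM1
* Hypothetical process object that launches the JmsConsumer wrapper script
DEFINE PROCESS(JMSCONSUMER.PROC) APPLTYPE(UNIX) APPLICID('/mqha/scripts/runJmsConsumer.sh')
* Trigger on the first message arriving on the output queue
ALTER QLOCAL(IBM.ESB.OUT) TRIGGER TRIGTYPE(FIRST) INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) PROCESS(JMSCONSUMER.PROC)
* Start the trigger monitor automatically with the queue manager
DEFINE SERVICE(TRIGGER.MONITOR) SERVTYPE(SERVER) CONTROL(QMGR) STARTCMD('/opt/mqm/bin/runmqtrm') STARTARG('-m +QMNAME+ -q SYSTEM.DEFAULT.INITIATION.QUEUE')
END
Repeat the same definitions on IBMESBQM2.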
JmsConsumer.java is customized to read messages and then log information to a file on the
shared file system. The entries added to the log file are shown below; you can use them in test
scenarios to evaluate failover results. For every message read by JmsConsumer.java, the
following entry is added to ConsumerLog.txt:
<Queue manager name> -
<Queue name> -
<Server name on which the multi-instance queue manager is running> -
<Message payload with #Seq_Num#>
After the standby instance of IBMESBQM1 comes up, both queue managers in the cluster
continue processing the messages.
Conclusion
This article described how to integrate two powerful technologies -- WebSphere MQ clusters, and
the WebSphere MQ and WebSphere Message Broker multi-instance features -- in order to improve
the availability of queue managers and brokers in critical production systems. In this configuration,
the active instances of both WebSphere MQ and WebSphere Message Broker run on the same
physical server after failover, which makes that server a single point of failure. It is therefore
important to have hardware redundancy, with the WebSphere MQ and WebSphere Message
Broker components on separate servers.
Acknowledgement
The authors would like to thank WebSphere Senior Technical Architect Regina L. Manuel and
WebSphere Message Broker Software Engineer Martin R. Naish for their valuable suggestions and
feedback.
Related topics
WebSphere Message Broker resources
Configuring and administering multi-instance brokers for high availability in IBM
WebSphere Message Broker - Part 1
WebSphere Message Broker V8 documentation in IBM Knowledge Center
WebSphere Message Broker documentation library
WebSphere Message Broker forum
Patterns: SOA design using WebSphere Message Broker and WebSphere ESB,
SG24-7369
WebSphere MQ resources
WebSphere MQ V7.5 documentation in IBM Knowledge Center
WebSphere MQ product page
WebSphere MQ documentation library
IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements, SG24-8087