
Configuring and administering multi-instance brokers for high availability in IBM WebSphere Message Broker - Part 2

Rahul Gupta and Devipriya Selvarajan
August 10, 2011

This article describes the active-active technique for high availability using both vertical and
horizontal clustering, which, compared to the active-passive technique, improves continuous
availability, performance, throughput, and scalability in WebSphere MQ and WebSphere
Message Broker.

Introduction
Part 1 in this article series covered the basics of multi-instance queue managers and multi-
instance brokers, and described the active-passive technique of high availability and horizontal
clustering. This article describes the new multi-instance broker feature in IBM WebSphere
Message Broker, and shows you how to use it to configure an active-active load-balanced
environment. To implement this environment, you need to cluster the WebSphere Message Broker
and WebSphere MQ components both horizontally and vertically, as shown in Figure 1:


Overview of horizontal and vertical clustering

Vertical clustering
Vertical clustering is achieved by clustering the queue managers using WebSphere MQ clustering,
which optimizes processing and provides the following advantages:

Increased availability of queues, since multiple instances are exposed as cluster queues
Faster throughput of messages, since messages can be delivered on multiple queues
Better distribution of workload based on non-functional requirements

Horizontal clustering
Horizontal clustering is achieved by clustering the queue managers and brokers using the multi-
instance feature, which provides the following advantages:

Provides software redundancy similar to vertical clustering
Provides the additional benefit of hardware redundancy
Lets you configure multiple instances of the queue manager and broker on separate physical servers, providing a high-availability (HA) solution
Saves the administrative overhead of a commercial HA solution, such as PowerHA

Combining vertical and horizontal clustering leverages the use of individual physical servers for
availability, scalability, throughput, and performance. WebSphere MQ and WebSphere Message
Broker enable you to use both clustering techniques individually or together.


System information
Examples in this article were run on a system using WebSphere MQ V7.0.1.4 and WebSphere
Message Broker V7.0.0.3, with four servers running on SUSE Linux 10.0. Here is the topology of
active-active configurations on these four servers:

Active-active HA topology

wmbmi1.in.ibm.com hosts the:

Active instance of the multi-instance queue manager IBMESBQM1
Passive instance of the multi-instance queue manager IBMESBQM2
Active instance of the multi-instance broker IBMESBBRK1
Passive instance of the multi-instance broker IBMESBBRK2

wmbmi2.in.ibm.com hosts the:

Active instance of the multi-instance queue manager IBMESBQM2
Passive instance of the multi-instance queue manager IBMESBQM1
Active instance of the multi-instance broker IBMESBBRK2
Passive instance of the multi-instance broker IBMESBBRK1


wmbmi3.in.ibm.com hosts the queue manager IBMESBQM3, which acts as a gateway queue
manager for the WebSphere MQ cluster IBMESBCLUSTER. This queue manager is used by
clients for sending messages.

wmbmi4.in.ibm.com hosts the NFS v4 mount points, which are used by the multi-instance queue
managers and multi-instance brokers to store their runtime data.

WebSphere MQ cluster IBMESBCLUSTER has three participating queue managers:

Queue manager IBMESBQM1 acts as a full repository queue manager
Queue manager IBMESBQM2 acts as a full repository queue manager
Queue manager IBMESBQM3 acts as a partial repository queue manager

Using two multi-instance queue managers and two multi-instance brokers overlapped with a
WebSphere MQ cluster provides a continuously available solution with no downtime. When
the active instance of one queue manager goes down, the other queue manager in the cluster
takes over the complete load until the passive instance starts up, meeting the goal of enhanced
system availability.

Configuring a shared file system using NFS


For more information on configuring and exporting the /mqha file system hosted on
wmbmi4.in.ibm.com, see Configuring and administering multi-instance brokers for high availability
in IBM WebSphere Message Broker - Part 1.

Set the uid and gid of the mqm user and group to be identical on all systems. Create log and data
directories in a common shared folder named /mqha. Make sure that the /mqha directory is owned
by the user and group mqm, and that the access permissions are set to rwx for both user and
group. The commands below are executed as the root user on wmbmi4.in.ibm.com:

Creating and setting ownership for directories under the shared folder /mqha
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK1
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK2
[root@wmbmi4.in.ibm.com]$ chown -R mqm:mqm /mqha
[root@wmbmi4.in.ibm.com]$ chmod -R ug+rwx /mqha
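
On wmbmi1.in.ibm.com and wmbmi2.in.ibm.com, the exported file system must be mounted at the same path before the queue managers and brokers are created. The commands below are a minimal sketch, assuming the NFS v4 export of /mqha from wmbmi4.in.ibm.com described in Part 1; the mount options shown (hard,intr) are illustrative and should be adjusted to your environment.

Mounting the shared folder /mqha on the broker servers
[root@wmbmi1.in.ibm.com]$ mkdir -p /mqha
[root@wmbmi1.in.ibm.com]$ mount -t nfs4 -o hard,intr wmbmi4.in.ibm.com:/mqha /mqha
[root@wmbmi2.in.ibm.com]$ mkdir -p /mqha
[root@wmbmi2.in.ibm.com]$ mount -t nfs4 -o hard,intr wmbmi4.in.ibm.com:/mqha /mqha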

Creating a queue manager


Start by creating the multi-instance queue manager IBMESBQM1 on the first server,
wmbmi1.in.ibm.com. Log on as the user mqm and issue the command:

Creating queue manager IBMESBQM1 on wmbmi1.in.ibm.com


[mqm@wmbmi1.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM1/data -ld /mqha/WMQ/IBMESBQM1/logs IBMESBQM1


After the queue manager is created, display the properties of this queue manager using the
command below:

Displaying the properties of queue manager IBMESBQM1


[mqm@wmbmi1.in.ibm.com]$ dspmqinf -o command IBMESBQM1
addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1

Copy the output from the dspmqinf command and paste it on the command line on
wmbmi2.in.ibm.com from the console of user mqm:

Creating a reference of IBMESBQM1 on wmbmi2.in.ibm.com


[mqm@wmbmi2.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1
WebSphere MQ configuration information added.

The multi-instance queue manager IBMESBQM1 is created. Next create the second multi-instance
queue manager IBMESBQM2 on wmbmi2.in.ibm.com. Log on as the user mqm and issue the
command:

Creating queue manager IBMESBQM2 on wmbmi2.in.ibm.com


[mqm@wmbmi2.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM2/data -ld /mqha/WMQ/IBMESBQM2/logs IBMESBQM2

After the queue manager is created, display the properties of the queue manager using the
command below:

Displaying the properties of queue manager IBMESBQM2


[mqm@wmbmi2.in.ibm.com]$ dspmqinf -o command IBMESBQM2
addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2

Copy the output from the dspmqinf command and paste it on the command line on
wmbmi1.in.ibm.com from the console of user mqm:

Creating a reference of IBMESBQM2 on wmbmi1.in.ibm.com


[mqm@wmbmi1.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2
WebSphere MQ configuration information added.

Next, display the queue managers on both servers using the dspmq command on each. The
results should look like this:

Displaying the queue managers on both servers


[mqm@wmbmi1.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1) STATUS(Ended immediately)
QMNAME(IBMESBQM2) STATUS(Ended immediately)

[mqm@wmbmi2.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1) STATUS(Ended immediately)
QMNAME(IBMESBQM2) STATUS(Ended immediately)


The multi-instance queue managers IBMESBQM1 and IBMESBQM2 have been created on the
servers wmbmi1.in.ibm.com and wmbmi2.in.ibm.com. Start the multi-instance queue managers in
the following sequence:

Start multi-instance queue managers


start IBMESBQM1 on wmbmi1.in.ibm.com using command 'strmqm -x IBMESBQM1'
start IBMESBQM2 on wmbmi2.in.ibm.com using command 'strmqm -x IBMESBQM2'
start IBMESBQM2 on wmbmi1.in.ibm.com using command 'strmqm -x IBMESBQM2'
start IBMESBQM1 on wmbmi2.in.ibm.com using command 'strmqm -x IBMESBQM1'
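
As an optional verification step (not part of the original sequence), you can run dspmq with the -x option on either server to confirm that each queue manager now has one active and one standby instance:

Verify active and standby instances
[mqm@wmbmi1.in.ibm.com]$ dspmq -x -m IBMESBQM1
[mqm@wmbmi1.in.ibm.com]$ dspmq -x -m IBMESBQM2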

Create a gateway queue manager IBMESBQM3 on wmbmi3.in.ibm.com using the crtmqm command, and then start it using the strmqm command. This queue manager is not a multi-instance queue manager. After creating and starting the two multi-instance queue managers and the gateway queue manager, add them to a WebSphere MQ cluster, as described below.
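
For reference, a minimal sketch of creating and starting the gateway queue manager follows; because IBMESBQM3 is not a multi-instance queue manager, it uses the default data and log paths rather than the shared /mqha file system:

Creating and starting gateway queue manager IBMESBQM3 on wmbmi3.in.ibm.com
[mqm@wmbmi3.in.ibm.com]$ crtmqm IBMESBQM3
[mqm@wmbmi3.in.ibm.com]$ strmqm IBMESBQM3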

Creating a WebSphere MQ Cluster


After the queue managers IBMESBQM1, IBMESBQM2, and IBMESBQM3 are created and started,
create listeners on each of these queue managers and then add them to a WebSphere MQ cluster:

Define Listeners from runmqsc console


'define listener(IBMESBLISTENER1) trptype(tcp) port(1414) control(qmgr)' on IBMESBQM1
'define listener(IBMESBLISTENER2) trptype(tcp) port(1415) control(qmgr)' on IBMESBQM2
'define listener(IBMESBLISTENER3) trptype(tcp) port(1416) control(qmgr)' on IBMESBQM3

After the listeners are created, start them:

Start listeners from runmqsc console


'START LISTENER(IBMESBLISTENER1)' on IBMESBQM1
'START LISTENER(IBMESBLISTENER2)' on IBMESBQM2
'START LISTENER(IBMESBLISTENER3)' on IBMESBQM3

After the listeners are created and started, add the queue managers to the cluster and then create
channels between the full repository queue managers. Issue this command on the multi-instance
queue managers IBMESBQM1 and IBMESBQM2:

Add multi-instance queue manager in cluster as full repository


ALTER QMGR REPOS (IBMESBCLUSTER)

After completion, create cluster-sender and cluster-receiver channels between the full
repository queue managers by issuing the commands below:


Create channels between the full repository queue managers
Command to be issued on IBMESBQM1

DEFINE CHANNEL (TO.IBMESBQM1) CHLTYPE (CLUSRCVR) TRPTYPE (TCP) CONNAME ('wmbmi1.in.ibm.com (1414), wmbmi2.in.ibm.com (1414)') CLUSTER (IBMESBCLUSTER)

DEFINE CHANNEL (TO.IBMESBQM2) CHLTYPE (CLUSSDR) TRPTYPE (TCP) CONNAME ('wmbmi1.in.ibm.com (1415), wmbmi2.in.ibm.com (1415)') CLUSTER (IBMESBCLUSTER)

Command to be issued on IBMESBQM2

DEFINE CHANNEL (TO.IBMESBQM2) CHLTYPE (CLUSRCVR) TRPTYPE (TCP) CONNAME ('wmbmi1.in.ibm.com (1415), wmbmi2.in.ibm.com (1415)') CLUSTER (IBMESBCLUSTER)

DEFINE CHANNEL (TO.IBMESBQM1) CHLTYPE (CLUSSDR) TRPTYPE (TCP) CONNAME ('wmbmi1.in.ibm.com (1414), wmbmi2.in.ibm.com (1414)') CLUSTER (IBMESBCLUSTER)

Channels are set up between the two multi-instance queue managers for sharing MQ cluster
repository related information. Next, set up the channels between the partial repository gateway
QMGR (IBMESBQM3) and one of the full repository queue managers, such as IBMESBQM1.
Execute the commands below on queue manager IBMESBQM3:

Channels between partial repository and full repository queue managers


DEFINE CHANNEL (TO.IBMESBQM3) CHLTYPE (CLUSRCVR) TRPTYPE (TCP) CONNAME ('wmbmi3.in.ibm.com (1416)') CLUSTER (IBMESBCLUSTER)

DEFINE CHANNEL (TO.IBMESBQM1) CHLTYPE (CLUSSDR) TRPTYPE (TCP) CONNAME ('wmbmi1.in.ibm.com (1414), wmbmi2.in.ibm.com (1414)') CLUSTER (IBMESBCLUSTER)

After the three queue managers are added to the cluster, the WebSphere MQ cluster topology should
look like this:

IBMESBCLUSTER with three queue managers


All of the queue managers have now been added to the cluster. Next, define local queues on
the full repository queue managers and expose them on the cluster for workload balancing. Execute
the following command on the full repository queue managers IBMESBQM1 and IBMESBQM2:

Defining cluster queues


DEFINE QLOCAL (IBM.ESB.IN) DEFBIND (NOTFIXED) CLWLUSEQ (ANY) CLUSTER (IBMESBCLUSTER)

Queue IBM.ESB.IN will be used by WebSphere Message Broker flows for processing messages.
Execute the command below on IBMESBQM3. The alias queue INPUT is exposed so the
application can put messages on the cluster queue:

Queue alias for cluster queue


DEFINE QALIAS (INPUT) TARGQ (IBM.ESB.IN)
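
Before moving on, you can optionally verify that the cluster queue is visible from the gateway queue manager. The runmqsc commands below are a verification sketch, not part of the original configuration; allow a few moments for the cluster repository information to propagate:

Verify cluster objects from the runmqsc console of IBMESBQM3
DISPLAY CLUSQMGR(*)
DISPLAY QCLUSTER(IBM.ESB.IN)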

Configuring WebSphere Message Broker


At this point the multi-instance queue managers have been created and added to the WebSphere
MQ cluster. Next, create the multi-instance brokers IBMESBBRK1 and IBMESBBRK2, and then add
execution groups (DataFlowEngines) to these brokers. Execute the commands below
as the mqm user:

Create multi-instance broker IBMESBBRK1 on wmbmi1.in.ibm.com


mqsicreatebroker IBMESBBRK1 -q IBMESBQM1 -e /mqha/WMB/IBMESBBRK1

Create multi-instance broker IBMESBBRK2 on wmbmi2.in.ibm.com


mqsicreatebroker IBMESBBRK2 -q IBMESBQM2 -e /mqha/WMB/IBMESBBRK2

Create additional instance of IBMESBBRK1 on wmbmi2.in.ibm.com


mqsiaddbrokerinstance IBMESBBRK1 -e /mqha/WMB/IBMESBBRK1

Create additional instance of IBMESBBRK2 on wmbmi1.in.ibm.com


mqsiaddbrokerinstance IBMESBBRK2 -e /mqha/WMB/IBMESBBRK2

Start the multi-instance brokers before the next step of creating execution groups (DataFlowEngines)
on them. Execute the commands below to start the brokers. The active instance of each multi-instance
broker starts on the server where the corresponding multi-instance queue manager is running as the
active instance.

Start multi-instance brokers


mqsistart IBMESBBRK1
mqsistart IBMESBBRK2
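
As an optional check (not part of the original steps), run mqsilist on each server to list the brokers defined there and confirm which instances are active, and dspmq -x to show the corresponding queue manager instances:

Verify broker and queue manager instances
[mqm@wmbmi1.in.ibm.com]$ mqsilist
[mqm@wmbmi1.in.ibm.com]$ dspmq -x
[mqm@wmbmi2.in.ibm.com]$ mqsilist
[mqm@wmbmi2.in.ibm.com]$ dspmq -x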


Execute the commands below on the servers where the corresponding active instances of the
multi-instance brokers are running, to create the execution group IBMESBEG:

Create DataFlowEngine
mqsicreateexecutiongroup IBMESBBRK1 -e IBMESBEG
mqsicreateexecutiongroup IBMESBBRK2 -e IBMESBEG

Creating a message flow application for WebSphere Message Broker


Below is a simple message flow that you will need to create. The flow reads messages from the
WebSphere MQ cluster input queue and processes them. The flow consists of a JMSInput
node followed by a Compute node and a JMSOutput node:

1. Input node (JMS.IBM.ESB.IN) reads from queue IBM.ESB.IN.
2. Output node (JMS.IBM.ESB.OUT) writes to queue IBM.ESB.OUT.
3. Compute node (AddBrokerName) reads and copies the message tree.

Input queues are marked as persistent queues, so in case of failure, messages that are already on
the input queue (IBM.ESB.IN) and not yet picked up by broker flow processing are not lost. The
transaction mode on the JMS nodes is set to Local, which means that messages are received under
the local sync point of the node; any messages later sent by an output node in the flow are not put
under the local sync point, unless an individual output node specifies that the message must be put
under the local sync point.

Message flow

1. The input and output queues (IBM.ESB.IN and IBM.ESB.OUT) have their persistence
properties set to Persistent, which means that all messages arriving on these queues are
made persistent, to prevent any message loss during failover (see the MQSC sketch after this list).
2. Input messages are sent through the JmsProducer utility (available in the WebSphere MQ JMS
samples). This standalone JMS client is modified to generate messages having a sequence
number in the payload. The input message is a simple XML message.
3. JmsProducer.java appends the sequence number to the input message:
<TestMsg><Message>Hello World #Seq_Num#</Message></TestMsg>.
4. The broker message flow reads the message and adds two more values to the message:
the name of the broker processing the message, and the timestamp when the message was
processed. These two values are added to the message to help in the testing process.
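
The queue persistence described in item 1 can be set through MQSC. The commands below are only a sketch: they assume that IBM.ESB.OUT is a local queue on each of the full repository queue managers and that the applications rely on the queue's default persistence. Run them from the runmqsc console of IBMESBQM1 and IBMESBQM2:

Setting default persistence on the flow queues
ALTER QLOCAL (IBM.ESB.IN) DEFPSIST (YES)
DEFINE QLOCAL (IBM.ESB.OUT) DEFPSIST (YES)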


Setting up the message flow


1. Create a message flow as shown above in the Message flow graphic. Add ESQL to the
Compute node that copies the input message tree to the output and appends the broker name
and a processing timestamp, as described above.

2. Configure the JMSInput node to have the following properties:
Source Queue = IBM.ESB.IN
Local JNDI bindings = file:///home/mqm/qcf/QCF1
Connection factory name = QCF
3. Configure the JMSOutput node to have the following properties:
Destination queue = IBM.ESB.OUT
Local JNDI bindings = file:///home/mqm/qcf/QCF1
Connection factory name = QCF
4. Change Local JNDI bindings to file:///home/mqm/qcf/QCF2 in the flow and deploy it to the
second broker IBMESBBRK2. Both brokers will have their own copies of the connection
factories.
5. Create the bindings for the JMS queues using the JMSAdmin tool for queue manager
IBMESBQM1. The JMSInput node and JMSOutput node in the flow use the binding file under
the directory /home/mqm/qcf/QCF1/ of the Linux machine used for testing. To generate the
binding file, define the JMS objects first in a file called JMSobjectsdef:
JMS objects definition
DEF QCF(QCF1) +
    TRANSPORT(CLIENT) +
    QMANAGER(IBMESBQM1) +
    HOSTNAME(127.0.0.1) +
    PORT(1414)
DEF Q(IBM.ESB.IN) +
    QUEUE(IBM.ESB.IN) +
    QMANAGER(IBMESBQM1)
DEF Q(IBM.ESB.OUT) +
    QUEUE(IBM.ESB.OUT) +
    QMANAGER(IBMESBQM1)
6. Edit the JMSAdmin.config file in the /opt/mqm/java/bin directory to have the following entry,
which is the location where the bindings file will be generated (a fuller sketch of the relevant
entries follows this list):
Provider URL
PROVIDER_URL=file:/home/mqm/qcf/QCF1


7. Run the JMSAdmin command to create the above JMS Objects:


Run JMSAdmin
mqm@wmbmi1:/opt/mqm/java/bin>./JMSAdmin < /home/mqm/JMSobjectsdef

The .bindings file is now available for use in /home/mqm/qcf/QCF1/. You can also do JMS
configurations using MQ Explorer. For more information, see Using the WebSphere MQ JMS
administration tool in the WebSphere MQ V7.5 information center.
8. Repeat the above steps for generating the bindings for queue manager IBMESBQM2 and
place it in the directory /home/mqm/qcf/QCF2.
9. Deploy the flow on the brokers IBMESBBRK1 and IBMESBBRK2 in the cluster.
10. Use the JmsProducer utility as shown below to send the messages to the gateway queue
manager, which in turn will send the messages to the input queues of the message flows:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "<TestMsg>
<Message>Hello World</Message></TestMsg>"
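
For reference on step 6, the JMSAdmin.config entries for a file-system JNDI context typically also include the initial context factory. The lines below are a sketch of the relevant entries, assuming the default file-system context factory shipped with WebSphere MQ:

JMSAdmin.config entries for a file-system JNDI context
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:/home/mqm/qcf/QCF1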

You can use JmsConsumer (available in the WebSphere MQ JMS Samples) to consume the
messages generated out of the given message flow. The output queue of the flow (IBM.ESB.OUT)
is configured to trigger the JmsConsumer utility whenever the first message is received on the
queue. When triggered, this JmsConsumer consumes messages from the IBM.ESB.OUT queue,
and writes them to a common flat file called ConsumerLog.txt in the directory /mqha/logs. One
instance of this JmsConsumer utility is triggered for each of the queue managers in the cluster.
Add runmqtrm as a WebSphere MQ service so that when the queue manager starts, it will start the
trigger monitor.
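
The article does not list the trigger definitions themselves, so the MQSC commands below are only a sketch of one way to set this up, run against each queue manager that hosts IBM.ESB.OUT. The process name IBM.ESB.CONSUMER.PROC and the wrapper script /home/mqm/runconsumer.sh (which would launch the JmsConsumer class) are hypothetical names used for illustration:

Sketch of trigger and trigger monitor definitions (runmqsc)
DEFINE PROCESS (IBM.ESB.CONSUMER.PROC) APPLTYPE (UNIX) APPLICID ('/home/mqm/runconsumer.sh')
ALTER QLOCAL (IBM.ESB.OUT) TRIGGER TRIGTYPE (FIRST) INITQ (SYSTEM.DEFAULT.INITIATION.QUEUE) PROCESS (IBM.ESB.CONSUMER.PROC)
DEFINE SERVICE (TRIGGER.MONITOR) SERVTYPE (SERVER) CONTROL (QMGR) STARTCMD ('/opt/mqm/bin/runmqtrm') STARTARG ('-m +QMNAME+ -q SYSTEM.DEFAULT.INITIATION.QUEUE')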

JmsConsumer.java is customized to read messages and then log information to a file on the shared
file system. Entries added in the log file are shown below. You can use this in test scenarios to
evaluate failover results. For every read by JmsConsumer.java, the following entry is added to
ConsumerLog.txt:
<Queue Manager Name> -
<Queue Name> -
< Server name on which multi-instance QM is running > -
<Message Payload with #Seq Number>

Testing the failover scenarios in the MQ cluster


Scenario 1. Controlled failover of WebSphere MQ
In Scenario 1, a large number of messages are sent to the flow and processed by both queue
managers in the cluster. Then one of the multi-instance queue managers (IBMESBQM1) is shut
down using the endmqm command. When the active instance of queue manager IBMESBQM1
goes down and before the passive instance comes up on the other machine, the messages are
processed by the other queue manager IBMESBQM2 in the cluster. You can verify this processing
by checking the timestamp and broker names in the messages in the output queue IBM.ESB.OUT.


After the passive queue manager of IBMESBQM1 comes up, both queue managers in the cluster
continue processing the messages.

1. Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to
start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "<TestMsg>
<Message>Hello World</Message></TestMsg>"
3. As the messages are processed, you can observe the data being written to ConsumerLog.txt
in the directory /mqha/logs by the triggered JmsConsumer JMS Client:
ConsumerLog.txt sample output
IBMESBQM1 - IBM.ESB.OUT - wmbmi1 -
<TestMsg><Message>Hello World1</Message><Timestamp>
2011-06-02T19:40:49.576381</Timestamp> <BrokerName>IBMESBBRK1</BrokerName></TestMsg>
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 -
<TestMsg><Message>Hello World2</Message><Timestamp>
2011-06-02T19:39:51.703341</Timestamp> <BrokerName>IBMESBBRK2</BrokerName></TestMsg>
4. Stop the IBMESBQM1 queue manager using the following command in the
wmbmi1.in.ibm.com machine:
Stop IBMESBQM1
endmqm -s IBMESBQM1
5. As the active instance of IBMESBQM1 goes down, the passive instance in the
wmbmi2.in.ibm.com machine comes up. But meanwhile, the incoming messages are
processed by multi-instance broker IBMESBBRK2 on the queue manager IBMESBQM2
shared in the cluster (the messages highlighted in red in the output below). After the passive
instance comes up, the messages are processed by both members of the cluster once again,
so that there is absolutely no downtime for the systems. Results from Scenario 1 are shown
below:


Output of controlled failover test

Scenario 2. Immediate failover of WebSphere MQ


Scenario 2 is the same as Scenario 1, but instead of shutting down the queue manager using the
endmqm command, the MQ process is killed using the kill command.

1. Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to
start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "<TestMsg>
<Message>Hello World</Message></TestMsg>"
3. As the messages are processed, you can observe the data being written to ConsumerLog.txt
in the directory /mqha/logs by the triggered JmsConsumer JMS Client:


ConsumerLog.txt sample output


IBMESBQM1 - IBM.ESB.OUT - wmbmi1 -
<TestMsg><Message>Hello World 1</Message><Timestamp>
2011-06-02T19:40:49.576381</Timestamp><BrokerName>IBMESBBRK1</BrokerName></TestMsg>
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 -
<TestMsg><Message>Hello World 2</Message><Timestamp>
2011-06-02T19:39:51.703341</Timestamp><BrokerName>IBMESBBRK2</BrokerName></TestMsg>
4. Stop the IBMESBQM2 queue manager by killing the execution controller process amqzxma0
of the queue manager IBMESBQM2:
Immediate stop of IBMESBQM2
mqm@wmbmi2:~> ps -ef | grep amqzx
mqm 24632 1 0 18:10 ? 00:00:00 amqzxma0 -m IBMESBQM2 -x
mqm 13112 1 0 19:31 ? 00:00:00 amqzxma0 -m IBMESBQM1 -x
mqm@wmbmi2:~> kill -9 24632
5. As the active instance of IBMESBQM2 goes down, the passive instance in the
wmbmi1.in.ibm.com machine comes up. But meanwhile, the incoming messages are
processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1
shared in cluster (the messages highlighted in red in the output below). After the passive
instance comes up, the messages are processed by both members of the cluster once again,
so that there is absolutely no downtime for the systems. Results from Scenario 2 are shown
below:
Output of Immediate failover test


Scenario 3. Shutting down server wmbmi2.in.ibm.com


In Scenario 3, server wmbmi2.in.ibm.com is rebooted. The passive instance of IBMESBQM2
running on wmbmi1.in.ibm.com is notified that the active instance has gone down, and it
comes up. Meanwhile, incoming messages are processed by the cluster queue
manager IBMESBQM1 on wmbmi1.in.ibm.com.

1. Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to
start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "<TestMsg>
<Message>Hello World</Message></TestMsg>"
3. As the messages are processed, you can observe the data being written to ConsumerLog.txt
in the directory /mqha/logs by the triggered JmsConsumer JMS Client:
ConsumerLog.txt sample output
IBMESBQM1 - IBM.ESB.OUT - wmbmi1 -
<TestMsg><Message>Hello World 2</Message><Timestamp>
2011-06-06T17:03:51.838884</Timestamp><BrokerName>IBMESBBRK1</BrokerName></TestMsg>
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 -
<TestMsg><Message>Hello World 1</Message><Timestamp>
2011-06-06T17:29:04.264681</Timestamp><BrokerName>IBMESBBRK2</BrokerName></TestMsg>
4. Reboot the server wmbmi2.in.ibm.com by issuing the following command as root user:
Reboot wmbmi2.in.ibm.com
wmbmi2:/home/mqm # reboot
Broadcast message from root (pts/2) (Mon Jun 6 17:30:22 2011):
The system is going down for reboot NOW!
wmbmi2:/home/mqm # date
Mon Jun 6 17:30:40 IST 2011
5. As the active instance of IBMESBQM2 goes down, the passive instance in the
wmbmi1.in.ibm.com machine comes up. But meanwhile, the incoming messages are
processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1
shared in the cluster (the messages highlighted in red in the output below). After the passive
instance comes up, the messages are processed by both members of the cluster once again,
so that there is absolutely no downtime for the systems. Results from Scenario 3 are shown
below:


Output of system shutdown test

Conclusion
This article described how to integrate two powerful technologies, WebSphere MQ clusters and the
multi-instance features of WebSphere MQ and WebSphere Message Broker, to improve
the availability of queue managers and brokers in critical production systems. In this configuration,
the active instances of both WebSphere MQ and WebSphere Message Broker run on the same
physical server after failover, which makes that server a single point of failure. Therefore it is
important to have hardware redundancy, with the WebSphere MQ and WebSphere Message Broker
components on separate servers.

Acknowledgement
The authors would like to thank WebSphere Senior Technical Architect Regina L. Manuel and
WebSphere Message Broker Software Engineer Martin R. Naish for their valuable suggestions and
feedback.


Related topics
WebSphere Message Broker resources
Configuring and administering multi-instance brokers for high availability in IBM
WebSphere Message Broker - Part 1
WebSphere Message Broker V8 documentation in IBM Knowledge Center
WebSphere Message Broker documentation library
WebSphere Message Broker forum
Patterns: SOA design using WebSphere Message Broker and WebSphere ESB,
SG24-7369
WebSphere MQ resources
WebSphere MQ V7.5 documentation in IBM Knowledge Center
WebSphere MQ product page
WebSphere MQ documentation library
IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements, SG24-8087

Copyright IBM Corporation 2011 (www.ibm.com/legal/copytrade.shtml)
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)
