Traditional MQ clusters provide workload balancing of messages within MQ, but they do not balance the application connections that attach through a CCDT: an application connects to only one queue manager at any point in time. For high availability we have had to use a multi-instance setup on an HA cluster service. This is one of the disadvantages of traditional MQ clusters, and it is overcome by the MQ uniform cluster.
Uniform cluster is a feature introduced in the IBM MQ 9.1.2 Continuous Delivery (CD) release (it is not available in the MQ 9.1 Long Term Support release). Starting with MQ 9.1.4 CD, IBM introduced changes that make it simpler to set up a uniform cluster and keep its configuration uniform. You can simplify both the initial creation of a uniform cluster and the ongoing task of keeping the configuration of its members identical, by using the automatic configuration and automatic clustering support. When using this capability, one configuration file describes the cluster and another contains the MQSC configuration to apply to all queue managers in the uniform cluster. On each queue manager restart, the configuration is reapplied and the cluster forms automatically. See Creating a uniform cluster for more details on using this feature.
The objective of a uniform cluster deployment is that applications can be designed for scale and availability, and can connect to any of the queue managers within the uniform cluster. This removes any dependency on a specific queue manager, resulting in better availability and workload balancing of messaging traffic. Uniform clusters are not available on IBM® MQ for z/OS®; queue sharing groups provide many of the capabilities of a uniform cluster.
Uniform clusters are a specific pattern of an IBM MQ cluster that provides a highly available and horizontally scaled small collection of queue managers. These queue managers are configured almost identically, so that an application can interact with them as a single group. This makes it easier to ensure each queue manager in the cluster is being used, by automatically ensuring application instances are spread evenly across the queue managers.
Uniform clusters remove some of the manual steps an administrator has to go through to create and administer a group of independent, interconnected queue managers. They move some client connection logic from the client to the queue manager, where information about levels of application activity can inform decisions about which queue managers the clients should connect to.
To take full advantage of a uniform cluster, each application should also be scaled into multiple matching instances, preferably with at least as many instances as there are queue managers, if not many more.
An IBM MQ cluster, of whatever size, provides multiple capabilities:
- A directory of all clustering resources, discoverable by any member in a cluster
- Automatic channel creation and connectivity
- Horizontal scaling across multiple matching queues, using message workload balancing
- Dynamic message routing, based on availability
Uniform clusters use IBM MQ clustering for communication between queue managers, and balancing of workload between queues. However they differ from typical IBM MQ clusters in the following ways:
- Uniform clusters typically have a smaller number of queue managers in the cluster. You should not create a uniform cluster with greater than 10 queue managers.
- Every member of the cluster has near-identical configuration.
- The cluster is typically used by a single application, or group of related applications.
- The number of application instances connecting to the cluster should be greater than, or equal to, the number of queue managers.
In a uniform cluster pattern, all queue managers in the cluster offer the same messaging services. For example, you might configure all cluster members to have the same local queues defined, and allow client applications to connect to any member of the cluster. You might also have the same server connection channels defined, and possibly the same authority records, channel authentication rules, and so on. However, members of the cluster can still have some differences in objects and configuration. For example, some applications might create temporary dynamic queues while they are connected to a queue manager.
Further details about the MQ uniform cluster can be found in the IBM MQ documentation, or follow the steps below to continue with the uniform cluster setup.
Uniform clusters were introduced in the 9.1.x CD releases and in the 9.2 LTS release. They support application connection balancing to form an active/active queue manager setup.
An MQ uniform cluster allows a set of queue managers to work together intelligently as a single unit and distribute the load evenly across its members.
This provides a horizontally scaled, small collection of queue managers. These queue managers are almost identically configured, with the same queues, channels, and authentication records.
The uniform cluster makes sure that traffic is evenly distributed across the available queue managers, for better balancing of the connection load.
Let us now create a uniform cluster with three queue managers.
Create a file UniCluster.ini that describes how you want the cluster itself to look in terms of full repositories. As for any cluster, two full repositories act as central stores of information about the cluster. Specifically, you need to describe the names and connection names of the two full repositories in this cluster. Here we use QMGR1 and QMGR2 as the full repositories.
UniCluster.ini
AutoCluster:
Type=Uniform
Repository1Name=QMGR1
Repository1Conname=127.0.0.1(1414)
Repository2Name=QMGR2
Repository2Conname=127.0.0.1(1415)
ClusterName=UNICLUSTER
The RepositoryNConname fields are used as the CONNAME attribute when other cluster members define cluster sender (CLUSSDR) channels to the full repositories.
Create a sample configuration file, UniCluster.mqsc, which contains the MQSC definitions you want applied to all cluster members. There is one mandatory line needed in this file: a definition of a cluster receiver channel (CLUSRCVR), with a CLUSTER attribute of the automatic cluster name (usually through the +AUTOCL+ insert) and a channel name that includes the +QMNAME+ insert. This describes how other members of the uniform cluster connect to each queue manager, and is used as a template for connecting to the other queue managers as well.
The following inserts can be used in the file:
- +AUTOCL+: the automatic cluster name
- +QMNAME+: the name of the queue manager being created
- +VARIABLE+: any <variable name> defined during queue manager creation or in the Variables stanza of qm.ini, for example +CONNAME+
Make these two files available on each machine that will host a uniform cluster member. In our scenario we are using a single machine.
UniCluster.mqsc
* Simplify the demo system by disabling channel and connection authentication
ALTER QMGR CHLAUTH(DISABLED) CONNAUTH(' ')
REFRESH SECURITY TYPE(CONNAUTH)
* The only definition required to join a uniform cluster when using AutoCluster is a cluster receiver channel
* This uses the cluster name from the AutoCluster ini file setting and the connection name from the crtmqm command
DEFINE CHANNEL(UNICLUSTER.+QMNAME+) CHLTYPE(CLUSRCVR) CLUSTER(+AUTOCL+) CONNAME(+CONNAME+) SHORTRTY(120) SHORTTMR(5)
* Every queue manager needs to accept client connections
DEFINE CHANNEL(SVRCONN.CHANNEL) CHLTYPE(SVRCONN)
* Messaging resources like queues need to be defined on every member of the uniform cluster
DEFINE QLOCAL(Q1) CLUSTER(UNICLUSTER) DEFPSIST(YES) DEFBIND(NOTFIXED)
Note: lines beginning with * are comments and are not executed by runmqsc.
On the command line, supply:
- A request to start a listener, on the expected port
- A request for automatic INI configuration (-ii) pointing to the automatic cluster setup file (UniCluster.ini)
- A request for automatic MQSC configuration (-ic) pointing to the MQSC configuration file which includes a CLUSRCVR definition for the uniform cluster.
- A variable for the CONNAME for this queue manager
# Create QMGR1 and listen on port 1414
crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1414)" -p 1414 QMGR1
# Create QMGR2 and listen on port 1415
crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1415)" -p 1415 QMGR2
# Create QMGR3 and listen on port 1416
crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1416)" -p 1416 QMGR3
[mqm@ip-172-31-14-154 ~]$ crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1414)" -p 1414 QMGR1
IBM MQ queue manager created.
Directory '/var/mqm/qmgrs/QMGR1' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'QMGR1'.
Default objects statistics : 84 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
[mqm@ip-172-31-14-154 ~]$ crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1415)" -p 1415 QMGR2
IBM MQ queue manager created.
Directory '/var/mqm/qmgrs/QMGR2' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'QMGR2'.
Default objects statistics : 84 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
[mqm@ip-172-31-14-154 ~]$ crtmqm -ii UniCluster.ini -ic UniCluster.mqsc -iv CONNAME="127.0.0.1(1416)" -p 1416 QMGR3
IBM MQ queue manager created.
Directory '/var/mqm/qmgrs/QMGR3' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'QMGR3'.
Default objects statistics : 84 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
[mqm@ip-172-31-14-154 ~]$
Each queue manager in the uniform cluster is created with an almost identical command line – all the differences between full and partial repository are handled automatically for a uniform cluster.
Start the queue managers:
strmqm QMGR1
strmqm QMGR2
strmqm QMGR3
[mqm@ip-172-31-14-154 ~]$ strmqm QMGR1 ; strmqm QMGR2 ; strmqm QMGR3
The system resource RLIMIT_NOFILE is set at an unusually low level for IBM MQ.
The system resource RLIMIT_NPROC is set at an unusually low level for IBM MQ.
Successfully applied automatic configuration INI definitions.
IBM MQ queue manager 'QMGR1' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'QMGR1' during the log replay phase.
Log replay for queue manager 'QMGR1' complete.
Transaction manager state recovered for queue manager 'QMGR1'.
IBM MQ queue manager 'QMGR1' started using V9.2.0.1.
The system resource RLIMIT_NOFILE is set at an unusually low level for IBM MQ.
The system resource RLIMIT_NPROC is set at an unusually low level for IBM MQ.
Successfully applied automatic configuration INI definitions.
IBM MQ queue manager 'QMGR2' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'QMGR2' during the log replay phase.
Log replay for queue manager 'QMGR2' complete.
Transaction manager state recovered for queue manager 'QMGR2'.
IBM MQ queue manager 'QMGR2' started using V9.2.0.1.
The system resource RLIMIT_NOFILE is set at an unusually low level for IBM MQ.
The system resource RLIMIT_NPROC is set at an unusually low level for IBM MQ.
Successfully applied automatic configuration INI definitions.
IBM MQ queue manager 'QMGR3' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'QMGR3' during the log replay phase.
Log replay for queue manager 'QMGR3' complete.
Transaction manager state recovered for queue manager 'QMGR3'.
IBM MQ queue manager 'QMGR3' started using V9.2.0.1.
[mqm@ip-172-31-14-154 ~]$ dspmq
QMNAME(QMGR1)   STATUS(Running)
QMNAME(QMGR2)   STATUS(Running)
QMNAME(QMGR3)   STATUS(Running)
[mqm@ip-172-31-14-154 ~]$ ps -ef | grep lsr
mqm  18731 18686  0 Apr04 ?      00:00:04 /opt/mqm/bin/runmqlsr -r -m QMGR1 -t TCP -p 1414
mqm  18895 18839  0 Apr04 ?      00:00:03 /opt/mqm/bin/runmqlsr -r -m QMGR2 -t TCP -p 1415
mqm  19070 19012  0 Apr04 ?      00:00:03 /opt/mqm/bin/runmqlsr -r -m QMGR3 -t TCP -p 1416
mqm  21594 21557  0 12:39 pts/0  00:00:00 grep --color=auto lsr
[mqm@ip-172-31-14-154 ~]$
The queue managers were created with the auto-configuration INI file and the UniCluster.mqsc definitions. Let us look at the cluster information.
Verify the uniform cluster setup by displaying the cluster queue managers, for example with DISPLAY CLUSQMGR(*) in runmqsc:
AMQ8441I: Display Cluster Queue Manager details.
   CLUSQMGR(QMGR1)                 CHANNEL(UNICLUSTER.QMGR1)
   CLUSTER(UNICLUSTER)             QMID(QMGR1_2021-04-04_04.37.47)
   QMTYPE(REPOS)                   STATUS(RUNNING)
AMQ8441I: Display Cluster Queue Manager details.
   CLUSQMGR(QMGR2)                 CHANNEL(UNICLUSTER.QMGR2)
   CLUSTER(UNICLUSTER)             QMID(QMGR2_2021-04-04_04.37.58)
   QMTYPE(REPOS)                   STATUS(RUNNING)
AMQ8441I: Display Cluster Queue Manager details.
   CLUSQMGR(QMGR3)                 CHANNEL(UNICLUSTER.QMGR3)
   CLUSTER(UNICLUSTER)             QMID(QMGR3_2021-04-04_04.38.07)
   QMTYPE(NORMAL)                  STATUS(RUNNING)
The amqsghac, amqsphac, and amqsmhac sample programs are started from the command line, and can be used in combination to demonstrate reconnection after the failure of a queue manager. We will use amqsghac in our example.
MQ_INSTALLATION_PATH=/opt/mqm
export MQCHLLIB=/home/mqm
export MQCHLTAB=CCDT.JSON

# Check that the CCDT.JSON file exists before starting the applications
if [ -f "$MQCHLLIB/$MQCHLTAB" ]; then
    # Start multiple instances of the sample application
    for (( i=0; i<10; ++i )); do
        $MQ_INSTALLATION_PATH/samp/bin/amqsghac Q1 *ANY_QM &
    done
else
    echo "$MQCHLLIB/$MQCHLTAB not found"
fi
Save this script as RunClient.sh and use it with the CCDT file below. We start 10 instances; you can change that number to create any number of connections.
Create the CCDT file with the following content:
{
  "channel": [
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1414 } ],
        "queueManager": "ANY_QM"
      },
      "connectionManagement": { "clientWeight": 1, "affinity": "none" },
      "type": "clientConnection"
    },
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1415 } ],
        "queueManager": "ANY_QM"
      },
      "connectionManagement": { "clientWeight": 1, "affinity": "none" },
      "type": "clientConnection"
    },
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1416 } ],
        "queueManager": "ANY_QM"
      },
      "connectionManagement": { "clientWeight": 1, "affinity": "none" },
      "type": "clientConnection"
    },
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1414 } ],
        "queueManager": "QMGR1"
      },
      "type": "clientConnection"
    },
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1415 } ],
        "queueManager": "QMGR2"
      },
      "type": "clientConnection"
    },
    {
      "name": "SVRCONN.CHANNEL",
      "clientConnection": {
        "connection": [ { "host": "localhost", "port": 1416 } ],
        "queueManager": "QMGR3"
      },
      "type": "clientConnection"
    }
  ]
}
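IBM MQ only reads the CCDT when a client connects, so it is worth sanity-checking the file beforehand. A minimal check, assuming python3 is on the PATH and the file is named CCDT.JSON as above:

```shell
# python3 -m json.tool exits non-zero if CCDT.JSON is not well-formed JSON
if python3 -m json.tool CCDT.JSON > /dev/null 2>&1; then
    echo "CCDT.JSON is valid JSON"
else
    echo "CCDT.JSON is malformed"
fi
```

This only confirms the file parses as JSON; the channel attributes themselves are still validated by the client at connect time.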
There is no default JSON CCDT, and IBM MQ does not supply any tooling to create or edit CCDTs in JSON format. However, the JSON format has several advantages:
- You do not need to be using IBM MQ for Multiplatforms to create and edit a JSON CCDT file.
- Using the JSON format, you can define duplicate channel definitions of the same name. When you deploy IBM MQ on the cloud, you can use this to make your deployment scalable and highly available.
- The JSON file is human readable, which can simplify queue manager configuration.
- You need no specialist tooling to maintain the CCDT file.
- The file is smaller.
- This format provides backwards and forwards compatibility.
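Because there is no supplied tooling, one option is to generate the repetitive entries with a small script. The sketch below generates just the three ANY_QM client-connection entries; the channel name SVRCONN.CHANNEL, the ANY_QM group name, and the local ports are taken from the example above:

```shell
#!/bin/sh
# Generate the three ANY_QM client-connection entries and write CCDT.JSON
{
    printf '{ "channel": ['
    sep=""
    for port in 1414 1415 1416; do
        printf '%s{ "name": "SVRCONN.CHANNEL", "type": "clientConnection", ' "$sep"
        printf '"clientConnection": { "connection": [ { "host": "localhost", "port": %s } ], "queueManager": "ANY_QM" }, ' "$port"
        printf '"connectionManagement": { "clientWeight": 1, "affinity": "none" } }' 
        sep=", "
    done
    printf '] }\n'
} > CCDT.JSON
# Confirm the result parses as JSON (assumes python3 is available)
python3 -m json.tool CCDT.JSON > /dev/null && echo "generated valid CCDT.JSON"
```

Extending the loop to also emit the per-queue-manager entries follows the same pattern.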
To inspect the CCDT contents locally:
- Set the MQCHLLIB and MQCHLTAB environment variables.
- Run the command runmqsc -n.
- Run the DISPLAY CHANNEL command, for example DISPLAY CHANNEL(*), as shown below.
[mqm@ip-172-31-14-154 ~]$ export MQCHLLIB=/var/mqm
[mqm@ip-172-31-14-154 ~]$ export MQCHLTAB=CCDT.JSON
[mqm@ip-172-31-14-154 ~]$ runmqsc -n
5724-H72 (C) Copyright IBM Corp. 1994, 2020.
Starting local MQSC for 'CCDT.JSON'.

dis chl(*)
     1 : dis chl(*)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
dis chl(*) all
     2 : dis chl(*) all
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(NONE)                  ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(1)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1414))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(ANY_QM)                  RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(NONE)                  ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(1)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1415))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(ANY_QM)                  RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(NONE)                  ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(1)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1416))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(ANY_QM)                  RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(PREFERRED)             ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(0)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1414))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(QMGR1)                   RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(PREFERRED)             ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(0)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1415))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(QMGR2)                   RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
AMQ8414I: Display Channel details.
   CHANNEL(SVRCONN.CHANNEL)        CHLTYPE(CLNTCONN)
   AFFINITY(PREFERRED)             ALTDATE(2021-04-05)
   ALTTIME(13.24.31)               CERTLABL( )
   CLNTWGHT(0)                     COMPHDR(NONE)
   COMPMSG(NONE)                   CONNAME(localhost(1416))
   DEFRECON(NO)                    DESCR( )
   HBINT(300)                      KAINT(AUTO)
   LOCLADDR( )                     MAXMSGL(4194304)
   MODENAME( )                     PASSWORD( )
   QMNAME(QMGR3)                   RCVDATA( )
   RCVEXIT( )                      SCYDATA( )
   SCYEXIT( )                      SENDDATA( )
   SENDEXIT( )                     SHARECNV(10)
   SSLCIPH( )                      SSLPEER( )
   TPNAME( )                       TRPTYPE(TCP)
   USERID( )
Note: when a JSON CCDT is used, it is possible to have multiple channels with the same name. If multiple channels with the same name exist and they have CLNTWGHT(0), the channels are selected in the order that they are defined in the JSON CCDT.
Run sh RunClient.sh in the background (append &) to generate the connections.
[mqm@ip-172-31-14-154 ~]$ sh RunClient.sh &
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
Sample AMQSGHAC start
[1]+  Done                    sh RunClient.sh
[mqm@ip-172-31-14-154 ~]$
Count the AMQSGHAC connections on each queue manager:
echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR1 | grep " CONN" | wc -w
echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR2 | grep " CONN" | wc -w
echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR3 | grep " CONN" | wc -w
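To see why this pipeline yields a connection count, here is what it does to a captured fragment of runmqsc output. No queue manager is needed to try this; the CONN handles below are made up for illustration:

```shell
# runmqsc reports each connection as an AMQ8276I block containing a "CONN(...)" line.
# grep " CONN" keeps only those lines (the leading space means EXTCONN lines do not match),
# and wc -w counts one word per kept line, i.e. one per connection.
sample='AMQ8276I: Display Connection details.
   CONN(6F62C35F2082A622)
   EXTCONN(414D5143514D4752312020202020)
AMQ8276I: Display Connection details.
   CONN(6F62C35F2282A622)
   EXTCONN(414D5143514D4752312020202020)'
echo "$sample" | grep " CONN" | wc -w   # counts the 2 connections
```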
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR1 | grep " CONN" | wc -w
3
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR2 | grep " CONN" | wc -w
4
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR3 | grep " CONN" | wc -w
3
[mqm@ip-172-31-14-154 ~]$
Let us end QMGR3 and watch the connections:
[mqm@ip-172-31-14-154 ~]$
14:02:24 : EVENT : Connection Broken
MQGET ended with reason code 2009
MQCLOSE ended with reason code 2009
14:02:24 : EVENT : Connection Broken
MQGET ended with reason code 2009
MQCLOSE ended with reason code 2009
14:02:24 : EVENT : Connection Broken
MQGET ended with reason code 2009
MQCLOSE ended with reason code 2009
14:02:24 : EVENT : Connection Broken
MQGET ended with reason code 2009
MQCLOSE ended with reason code 2009
MQDISC ended with reason code 2009
Sample AMQSGHAC end
MQDISC ended with reason code 2009
Sample AMQSGHAC end
MQDISC ended with reason code 2009
Sample AMQSGHAC end
MQDISC ended with reason code 2009
Sample AMQSGHAC end
endmqm QMGR2
AMQ8146E: IBM MQ queue manager not available.
[mqm@ip-172-31-14-154 ~]$
[mqm@ip-172-31-14-154 ~]$
2009 (MQRC_CONNECTION_BROKEN) is the reason code for a broken connection.
The surviving connections were rebalanced within a couple of seconds.
Bring down QMGR3 again. Stopping and starting a queue manager shows connections being moved to alternative queue managers, and then rebalanced once the queue manager is available again.
Note: you will probably want to end the queue manager using endmqm -r QMGR; otherwise the connected applications will have their connections terminated (MQRC 2009) rather than moved.
Start QMGR3 again, then end it with the -r option: endmqm -r QMGR3
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR1 | grep " CONN" | wc -w
7
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR2 | grep " CONN" | wc -w
6
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR3 | grep " CONN" | wc -w
0
[mqm@ip-172-31-14-154 ~]$
Now start QMGR3 and observe that the connections are rebalanced back:
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR1 | grep " CONN" | wc -w
4
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR2 | grep " CONN" | wc -w
5
[mqm@ip-172-31-14-154 ~]$ echo "dis conn(*) where(appltag eq 'amqsghac')" | runmqsc QMGR3 | grep " CONN" | wc -w
4
[mqm@ip-172-31-14-154 ~]$
- When there are enough consuming application instances, there is always an instance of the application processing messages.
- When you stop a queue manager, any connected application instances are evenly distributed across the remaining queue managers in the cluster.
- When you start a queue manager, any application instances connected to other queue managers in the cluster are automatically rebalanced to include the newly started queue manager.
This means that the uniform cluster continually ensures that applications are optimally distributed, maximising message processing, even in the event of planned and unplanned outages.