WebSphere MQ High Connection Count Issue


Question


As per our configuration, we have WAS version 8.5.5.1 and IBM MQ version 7.5.0.3. We use 2 channels to connect to WMQ, one with MAXINST set to 250 and one with 500. SHARECNV is set to 10 for both. The queue manager has an upper limit of 1600 connections, but we end up crossing that limit after 3-4 days of continuous running of the WAS server.

I want to understand how parameters on the WAS side affect this count. We use a Queue Connection Factory and an Activation Spec for making the connections, and we have 23 of each. For 22 of them the Activation Spec and QCF settings are kept at their defaults: max server sessions=10, max connections in the connection pool=10, max sessions in the session pool=10. These services have quite a low TPS of around 15-20 requests per minute. All 22 of them use the same channel to connect to the queue manager, with MAXINST set to 250. The remaining one gets quite a high load, peaking at 80 requests per second (approximately 40 per server), and for it max server sessions=40, max connections in the connection pool=40, and max sessions in the session pool=10. Connection Timeout, Reap Time, Unused Timeout and Aged Timeout are kept at their defaults for all.

With these settings we end up making around 1200 connections on the channel used by the 22 services and around 500 on the other channel after 2-3 days of continuous running. These build up over a period of time. Now I want to tune these settings so that we don't cross the connection count limit but also don't run out of available connections. So I have a few questions:

  1. Which is the better option from a performance point of view: reducing max connections in the connection pool or max sessions in the session pool? What would be the ideal values for the load mentioned earlier?

  2. What would be the ideal value for the Unused Timeout of the connection pool and session pool, which is set to 30 minutes by default? If we reduce it to, say, 5 minutes, what implications could that have on performance or on failures to get connections?

  3. Is there some setting on the WMQ side that closes idle/unused connections, or can this only happen from the client side?

  4. The DISCINT parameter is set to zero and HBINT to 300. What would be the ideal values? (A sketch of these channel attributes follows this list.)
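
Relating to questions 3 and 4 above, here is a minimal MQSC sketch of the server-connection channel attributes involved. The channel name is taken from the output further down and the values are purely illustrative, not recommendations; whether a non-zero DISCINT is appropriate depends on how well the application tolerates its idle channel instances being ended by the queue manager.

* Illustrative only: with SHARECNV greater than 0, DISCINT(n) lets the queue manager
* end a SVRCONN channel instance after n seconds with no activity from the client
* (DISCINT(0) means never disconnect); HBINT controls heartbeats on an idle channel.
ALTER CHANNEL(MYCHANNEL) CHLTYPE(SVRCONN) DISCINT(1800) HBINT(300)

* Afterwards, check the running instances and when they last moved a message.
DIS CHSTATUS(MYCHANNEL) STATUS MSGS LSTMSGDA LSTMSGTI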

I ran the command below to view the connections:

echo "DIS CONN(*) TYPE(*) CONNAME CHANNEL OBJNAME OBJTYPE" | mqsc -e -m     QM-p width=1000 | grep -o '^\w\+:\|\w\+[(][^)]\+[)]' | awk -F '[()]' -v OFS="," 'function printValues() { if ("CHANNEL" in p) { print p["CHANNEL"], p["CURSHCNV"], p["CONNAME"],p["CHSTADA"],p["CHSTATI"],p["LSTMSGDA"],p["LSTMSGTI"],p["OBJNAME"],p["OBJTYPE"],p["ASTATE"] } } /^\w+:/ { printValues(); delete p; next } { p[$1] = $2 } END { printValues() }' | grep MYCHANNEL

MYCHANNEL,,10.215.161.65,,,,,,,NONE
MYCHANNEL,,10.215.161.65,,,,,,,SUSPENDED
MYCHANNEL,,10.215.161.65,,,,,MYQUEUE01,QUEUE,ACTIVE

I can see a lot of connections in NONE and SUSPENDED state which do not have any OBJNAME or OBJTYPE associated. I have tried simulating the issue in test and the same thing happens; these connections keep increasing as we keep sending requests. Can someone tell me why these connections are getting created? It also looks like these connections will never be used by the application.

This is how connections are made and closed in the application. We have an abstract bean class which is extended by all MDBs:

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class TrackBeanV2 extends AbstractServiceBean implements MessageListener {//code}

The abstract bean handles creation and closing of connections in the following manner:

public abstract class AbstractServiceBean {

@Resource(name = "myQCF", type = QueueConnectionFactory.class, shareable = true, description = "Reply Connection Factory")
private ConnectionFactory replyCF;

// Fields assumed from the usage below; their declarations were not shown in the original post.
private Connection replyConnection;
private Session replySession;
private MessageProducer replyProducer;

@PostConstruct
private void postConstruct() {
    try {
        replyConnection = replyCF.createConnection();
    } catch (JMSException e) {
        throw new RuntimeException("Failed to create JMS Connection", e);
    }
}
@PreDestroy
private void preDestroy() {
    try {
        replyConnection.close();
    } catch (JMSException e) {
        throw new RuntimeException("Failed to close JMS connection", e);
    }
}

private void sendResponseMessage(String outputMessageText, String jmsMessageID , Destination replyDestination) {
    TextMessage replyMessage = null;
    try {           
        createSession();    
        createProducer();
        replyMessage = createReplyMessage(outputMessageText , jmsMessageID);    
        sendReply(replyMessage, replyDestination);  
        closeProducer();
        closeSession();
    } catch (JMSException exp) {
        handleException(exp);
    }
}
private void createSession() throws JMSException{
    replySession = replyConnection.createSession(true, 0);
}
private void createProducer() throws JMSException{                              
    replyProducer = replySession.createProducer(null);      
}

private void closeSession() throws JMSException {
    if (replySession != null) {
        replySession.close();
    }
}

private void closeProducer() throws JMSException{
    if (replyProducer != null) {            
        replyProducer.close();          
    }
}   
private void sendReply(TextMessage replyMessage, Destination replyDestination) throws JMSException {    
    logMessages(replyMessage.getText(), "RESPONSE MESSAGE");
    replyProducer.send(replyDestination, replyMessage);
}

I have not included the other methods of the class, which do marshalling/unmarshalling and other work.


Answer 1:


There is an IBM developerWorks blog post, "Avoiding run-away numbers of channels" by @MoragHughson, that goes into detail about the various settings on a queue manager to limit the total maximum channels for the entire queue manager (MaxChannels in qm.ini), for a single channel (MAXINST), and for a single client machine connecting to a channel (MAXINSTC).
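
As a rough sketch of where each of those limits lives (the channel name and values here are illustrative, not recommendations):

# qm.ini, CHANNELS stanza: queue-manager-wide limits (takes effect after a queue manager restart)
CHANNELS:
   MaxChannels=1600
   MaxActiveChannels=1600

* MQSC: per-channel limits, set on the SVRCONN channel itself
ALTER CHANNEL(MYCHANNEL) CHLTYPE(SVRCONN) MAXINST(250) MAXINSTC(50)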

There is an MQGem Software blog post, "MaxChannels vs DIS QMSTATUS CONNS", also by @MoragHughson (thank you Morag for the helpful posts), that goes into detail on the differences between connections (DIS CONN) and channels (DIS CHS).

Below are a few commands that can help with reconciling things (note: I've tested these on Linux; if you are running on another OS and they don't work, let me know and I'll try to provide a working example for that OS):

The command below will show you the connection identifier, the channel name associated with the connection (if any), and the IP address (if any); the output is CONN,CHANNEL,CONNAME.

echo "DIS CONN(*) CHANNEL CONNAME"|runmqsc <QMGR> | grep -o '^\w\+:\|\w\+[(][^)]\+[)]' | awk -F '[()]' -v OFS="," 'function printValues() { if ("CONN" in p) { print p["CONN"], p["CHANNEL"], p["CONNAME"] } } /^\w+:/ { printValues(); delete p; next } { p[$1] = $2 } END { printValues() }'

The command below will show you each running channel instance, the number of shared conversations, and the IP address connecting to the channel; the output is CHANNEL,CURSHCNV,CONNAME.

echo "DIS CHS(*) ALL"|runmqsc <QMGR> | grep -o '^\w\+:\|\w\+[(][^)]\+[)]' | awk -F '[()]' -v OFS="," 'function printValues() { if ("CHANNEL" in p) { print p["CHANNEL"], p["CURSHCNV"], p["CONNAME"] } } /^\w+:/ { printValues(); delete p; next } { p[$1] = $2 } END { printValues() }'

Both of the above commands can be adapted to use the mqsc program that you showed you use in your comments.




Answer 2:


After doing a lot more analysis and trying out different WAS and MQ settings, we ruled out any issue with the configuration and code. While researching we found the following link: http://www-01.ibm.com/support/docview.wss?uid=swg21605479. The issue was with the Wily Introscope tool used to monitor the WAS server; it was making connections with MQ and not releasing them. We removed the monitoring from the server and it has been working fine since then. Thanks everyone here for your support.




Answer 3:


We had a similar problem where the connection count used to reach its limit once the application was kept active for hours.

Our fix was to call disconnect() on the queue manager after the enqueue or dequeue, rather than close(). So make sure your finally block looks something like this:

finally 
{
    queue.close();
    qMgr.disconnect();     //rather than qMgr.close();      
}
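
For completeness, here is a minimal sketch of that pattern using the MQ classes for Java. The queue manager name and connection setup are hypothetical (the queue name is borrowed from the question's output); the point from the answer is that disconnect() on the queue manager is what actually releases the connection, so it belongs in the finally block.

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PutAndDisconnect {
    public static void main(String[] args) throws Exception {
        MQQueueManager qMgr = null;
        MQQueue queue = null;
        try {
            qMgr = new MQQueueManager("QM1");                         // hypothetical queue manager name
            queue = qMgr.accessQueue("MYQUEUE01", CMQC.MQOO_OUTPUT);  // queue name from the question's output
            MQMessage msg = new MQMessage();
            msg.writeString("ping");
            queue.put(msg);
        } finally {
            if (queue != null) {
                queue.close();        // closes the queue object handle
            }
            if (qMgr != null) {
                qMgr.disconnect();    // releases the connection; close() alone would leave it open
            }
        }
    }
}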


Source: https://stackoverflow.com/questions/42156365/websphere-mq-high-connection-count-issue
