Apache ActiveMQ Artemis client reconnecting to the next available broker in a clustered HA replication/shared data store

Asked 2021-01-28 12:09

broker.xml is shown for host1; host2 differs only in the port number (61616) and is configured as the slave. This follows up on Apache Artemis client fail over discovery.



        
2 Answers
  • 2021-01-28 12:37

    The error stating "Unblocking a blocking call that will never get a response" is expected if failover happens when the client is in the middle of a blocking call (e.g. sending a durable message and waiting for an ack from the broker, committing a transaction, etc.). This is discussed further in the documentation.

    The fact that clients don't switch back to the master broker when it comes back is also expected given your configuration. In short, you haven't configured failback properly. Your master should have:

    <ha-policy>
       <replication>
          <master>
             <check-for-live-server>true</check-for-live-server>
          </master>
       </replication>
    </ha-policy>
    

    And your slave should have:

    <ha-policy>
       <replication>
          <slave>
             <allow-failback>true</allow-failback>
          </slave>
       </replication>
    </ha-policy>
    

    This is also discussed in the documentation.

    Lastly, you do not need to configure the broadcast and discovery groups when using a static connector.
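    For reference, here is a minimal master-side sketch of how a static connector fits together with the ha-policy above. The connector names, hostnames, and cluster-connection name are illustrative, not taken from your broker.xml:

    <connectors>
       <!-- this broker's own connector -->
       <connector name="netty-connector">tcp://host1:61616</connector>
       <!-- the other node in the master/slave pair -->
       <connector name="other-node">tcp://host2:61616</connector>
    </connectors>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <static-connectors>
             <connector-ref>other-node</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>

    <ha-policy>
       <replication>
          <master>
             <check-for-live-server>true</check-for-live-server>
          </master>
       </replication>
    </ha-policy>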

  • 2021-01-28 12:40

    After handling the JMSException in the producer, failover also completed successfully:

    <!-- added this handler -->
    <bean id="deadLetterErrorHandler"
          class="org.apache.camel.builder.DeadLetterChannelBuilder">
        <property name="deadLetterUri" value="log:dead" />
    </bean>

    <!-- referred to the handler -->
    <route errorHandlerRef="deadLetterErrorHandler">
        <from uri="direct:toMyQueue" />
        <transform>
            <simple>MSG FRM DIRECT TO MyExampleQueue : ${bodyAs(String)}</simple>
        </transform>
        <to uri="ref:myqueue" />
    </route>
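
    Not shown in the snippet above: for the failover in the logs below to be handled transparently, the connection factory behind ref:myqueue needs the HA/reconnection parameters on its broker URL. A minimal sketch, assuming the two hosts used later in this answer (the bean id and parameter values are illustrative):

    <!-- illustrative: both brokers listed; ha=true uses topology updates for failover,
         reconnectAttempts=-1 retries indefinitely -->
    <bean id="artemisConnectionFactory"
          class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory">
        <constructor-arg
            value="(tcp://172.28.128.28:61616,tcp://172.28.128.100:61616)?ha=true&amp;reconnectAttempts=-1&amp;retryInterval=1000" />
    </bean>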
    

    I spun up two VMs: 172.28.128.28 (master) and 172.28.128.100 (slave/backup). In the logs below, when the master went down the slave took over on the broker side, and the client failed over as follows.

    2020-06-06 07:26:11 DEBUG NettyConnector:1259 - NettyConnector [host=172.28.128.28, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] host 1: 172.28.128.100 ip address: 172.28.128.100 host 2: 172.28.128.28 ip address: 172.28.128.28
    2020-06-06 07:26:11 DEBUG ClientSessionFactoryImpl:272 - Setting up backup config = TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-28-128-100 for live = TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-28-128-28
    2020-06-06 07:26:11 DEBUG JmsConfiguration$CamelJmsTemplate:502 - Executing callback on JMS Session: JmsPoolSession { ActiveMQSession->ClientSessionImpl [name=ab4faacf-a801-11ea-9c64-0a002700000c, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl@7b18658a, metaData=(jms-session=,)]@1118d539 }
    2020-06-06 07:26:11 DEBUG JmsConfiguration:621 - Sending JMS message to: ActiveMQQueue[myExampleQueue] with message: ActiveMQMessage[null]:PERSISTENT/ClientMessageImpl[messageID=0, durable=true, address=null,userID=null,properties=TypedProperties[]]
    2020-06-06 07:26:11 DEBUG DefaultProducerCache:169 - >>>> direct://toMyQueue Exchange[]
    2020-06-06 07:26:11 DEBUG SendProcessor:167 - >>>> ref://myqueue Exchange[]
    ...
    2020-06-06 07:27:15 DEBUG ClientSessionFactoryImpl:800 - Trying reconnection attempt 0/1
    2020-06-06 07:27:15 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@2fd9fb34, connectorConfig=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-28-128-100
    2020-06-06 07:27:15 DEBUG NettyConnector:508 - Connector + NettyConnector [host=172.28.128.100, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using nio
    2020-06-06 07:27:15 DEBUG client:670 - AMQ211002: Started NIO Netty Connector version 4.1.48.Final to 172.28.128.100:61616
    2020-06-06 07:27:15 DEBUG NettyConnector:805 - Remote destination: /172.28.128.100:61616
    2020-06-06 07:27:15 DEBUG NettyConnector:661 - Added ActiveMQClientChannelHandler to Channel with id = 62e089d3 
    2020-06-06 07:27:15 DEBUG ClientSessionFactoryImpl:809 - Reconnection successful
    2020-06-06 07:27:15 DEBUG ClientSessionFactoryImpl:277 - ClientSessionFactoryImpl received backup update for live/backup pair = TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-28-128-100 / null but it didn't belong to TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-28-128-100
    2020-06-06 07:27:15 DEBUG JmsConfiguration$CamelJmsTemplate:502 - Executing callback on JMS Session: JmsPoolSession { ActiveMQSession->ClientSessionImpl [name=d200c1a0-a801-11ea-9c64-0a002700000c, username=null, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl@12abcd1e, metaData=(jms-session=,)]@33d28f0a }
    2020-06-06 07:27:16 DEBUG JmsConfiguration:621 - Sending JMS message to: ActiveMQQueue[myExampleQueue] with message: ActiveMQMessage[null]:PERSISTENT/ClientMessageImpl[messageID=0, durable=true, address=null,userID=null,properties=TypedProperties[]]
    2020-06-06 07:27:16 DEBUG DefaultProducerCache:169 - >>>> direct://toMyQueue Exchange[]
    2020-06-06 07:27:16 DEBUG SendProcessor:167 - >>>> ref://myqueue Exchange[]
    

    When the client (consumer) used Camel routes within the broker (queue-to-queue routes, or queue to Spring bean), the JMSException did not occur, but handling and redelivering those messages would still be helpful; one option is sketched below.
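    A sketch of one way to add redelivery before messages reach the dead letter URI, using Camel's Spring XML errorHandler element in place of the plain DeadLetterChannelBuilder bean above (the retry values are illustrative, and the element goes inside the camelContext):

    <errorHandler id="deadLetterErrorHandler" type="DeadLetterChannel"
                  deadLetterUri="log:dead">
        <!-- retry the exchange a few times before it is finally logged as dead -->
        <redeliveryPolicy maximumRedeliveries="3" redeliveryDelay="2000"
                          useExponentialBackOff="true" />
    </errorHandler>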
