Infinispan: How to combine embedded cache and standalone server in one cluster?


Question


As a proof of concept, I am trying to build an Infinispan cluster out of an existing application, which starts an embedded cache, plus one or more standalone Infinispan servers.

The reasoning behind this is that I want to show there is a zero-configuration way to grow a cluster automatically, simply by starting freshly downloaded Infinispan standalone servers. My application runs an embedded cache which new nodes then "join" automatically.

I'm using the default configuration for Infinispan and JGroups (see below).

The effect is that two or more instances of my application with the embedded cache "see" each other, and two or more standalone Infinispan servers see each other. But the embedded nodes never see the standalone nodes, and vice versa, so two separate clusters form instead of one.

I use Infinispan 6.0.2.

The standalone server is this: http://downloads.jboss.org/infinispan/6.0.2.Final/infinispan-server-6.0.2.Final-bin.zip

Please give hints on what to check, or links to resources I could study to make this work.

This is the code that starts the embedded cache:

  import org.infinispan.Cache;
  import org.infinispan.configuration.cache.CacheMode;
  import org.infinispan.configuration.cache.ConfigurationBuilder;
  import org.infinispan.configuration.global.GlobalConfigurationBuilder;
  import org.infinispan.manager.DefaultCacheManager;

  // "node" is the node name (a String); jgroups.xml must be on the classpath.
  DefaultCacheManager manager = new DefaultCacheManager(
      GlobalConfigurationBuilder.defaultClusteredBuilder().transport()
          .nodeName( node ).addProperty( "configurationFile", "jgroups.xml" )
          .build(),
      new ConfigurationBuilder().clustering().cacheMode( CacheMode.DIST_SYNC ).build() );

  Cache<String, String> cache = manager.getCache( "default" );

This is the JGroups config I'm using:

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.2.xsd">

    <UDP
            mcast_addr="${jgroups.udp.mcast_addr:228.6.7.8}"
            mcast_port="${jgroups.udp.mcast_port:46655}"
            tos="8"
            ucast_recv_buf_size="200k"
            ucast_send_buf_size="200k"
            mcast_recv_buf_size="200k"
            mcast_send_buf_size="200k"
            loopback="true"
            max_bundle_size="64000"
            max_bundle_timeout="30"
            ip_ttl="${jgroups.udp.ip_ttl:2}"
            enable_bundling="true"
            enable_diagnostics="false"
            bundler_type="old"

            thread_naming_pattern="pl"

            thread_pool.enabled="true"
            thread_pool.min_threads="2"
            thread_pool.max_threads="30"
            thread_pool.keep_alive_time="60000"
            thread_pool.queue_enabled="true"
            thread_pool.queue_max_size="100"
            thread_pool.rejection_policy="Discard"

            oob_thread_pool.enabled="true"
            oob_thread_pool.min_threads="2"
            oob_thread_pool.max_threads="30"
            oob_thread_pool.keep_alive_time="60000"
            oob_thread_pool.queue_enabled="false"
            oob_thread_pool.queue_max_size="100"
            oob_thread_pool.rejection_policy="Discard"
            />

    <PING timeout="3000" num_initial_members="3"/>
    <MERGE2 max_interval="30000" min_interval="10000"/>
    <FD_SOCK/>
    <FD_ALL timeout="15000"/>
    <VERIFY_SUSPECT timeout="5000"/>
    <!-- Commented when upgraded to 3.1.0.Alpha (remove eventually)
    <pbcast.NAKACK  exponential_backoff="0"
                    use_mcast_xmit="true"
                    retransmit_timeout="300,600,1200"
                    discard_delivered_msgs="true"/> -->
    <pbcast.NAKACK2
            xmit_interval="1000"
            xmit_table_num_rows="100"
            xmit_table_msgs_per_row="10000"
            xmit_table_max_compaction_time="10000"
            max_msg_batch_size="100"/>

    <!-- Commented when upgraded to 3.1.0.Alpha (remove eventually)
    <UNICAST timeout="300,600,1200"/>  -->
    <UNICAST2
            stable_interval="5000"
            xmit_interval="500"
            max_bytes="1m"
            xmit_table_num_rows="20"
            xmit_table_msgs_per_row="10000"
            xmit_table_max_compaction_time="10000"
            max_msg_batch_size="100"
            conn_expiry_timeout="0"/>
    <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
    <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
    <UFC max_credits="200k" min_threshold="0.20"/>
    <MFC max_credits="200k" min_threshold="0.20"/>
    <FRAG2 frag_size="8000"  />
    <RSVP timeout="60000" resend_interval="500" ack_on_delivery="true" />
</config>
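
For reference, the ${jgroups.udp.mcast_addr:228.6.7.8}-style placeholders in this file fall back to the default after the colon, but can be overridden per JVM, either with -D flags or programmatically. A minimal sketch of the programmatic route (the values are simply this file's defaults; it has to run before the cache manager is created):

  // Feeds the ${...} placeholders in jgroups.xml; must execute in this
  // JVM before any JGroups channel is started.
  System.setProperty( "jgroups.udp.mcast_addr", "228.6.7.8" );
  System.setProperty( "jgroups.udp.mcast_port", "46655" );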

Answer 1:


Your problem is probably in the multicast port/address settings in JGroups. Also, check that both sides use the same IPv4/IPv6 settings.
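
A quick way to check this is to print the effective settings in each JVM and compare; mismatched multicast values or IP stacks are the usual reason two separate clusters form. A minimal diagnostic sketch (not from the answer; the property names are the ones used in the question's jgroups.xml):

  import java.net.InetAddress;

  public class ClusterDiag {
      public static void main( String[] args ) throws Exception {
          // Compare this output between the embedded JVM and the server JVM.
          System.out.println( "preferIPv4Stack = " + System.getProperty( "java.net.preferIPv4Stack" ) );
          System.out.println( "mcast_addr      = " + System.getProperty( "jgroups.udp.mcast_addr", "228.6.7.8 (default)" ) );
          System.out.println( "mcast_port      = " + System.getProperty( "jgroups.udp.mcast_port", "46655 (default)" ) );
          System.out.println( "local address   = " + InetAddress.getLocalHost() );
      }
  }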




Answer 2:


The problem seems to be with the IPv4/IPv6 stack.

Use this JVM argument when starting the Infinispan server:

-Djava.net.preferIPv4Stack=true

This will make the application use IPv4 as the default stack.
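
The same flag has to be in effect on the embedded side too, since mixing an IPv4-only JVM with an IPv6-preferring one splits the cluster. A minimal sketch, assuming the property is set programmatically instead of on the command line (the class name is hypothetical):

  public class EmbeddedCacheStarter {
      public static void main( String[] args ) {
          // Must run before the DefaultCacheManager (and thus JGroups)
          // starts, so every socket is opened on the IPv4 stack.
          System.setProperty( "java.net.preferIPv4Stack", "true" );
          // ... then create the DefaultCacheManager as shown in the question.
      }
  }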



Source: https://stackoverflow.com/questions/24068586/infinispan-how-to-combine-embedded-cache-and-standalone-server-in-one-cluster
