How to set send-buffer-size and receive-buffer-size in the Infinispan Hot Rod client and server

Posted by 纵然是瞬间 on 2019-12-08 05:22:30

Question


I was planning to use an out-of-process distributed caching solution, and I am trying the Infinispan Hot Rod protocol for this purpose. It performs quite well compared to other caching solutions, but I feel it is spending more time in network communication than expected. We have a 1000 Mbps Ethernet network and the round-trip time between client and server is around 200 ms, but the Hot Rod protocol is taking around 7 seconds to transfer a 30 MB object from server to client. I suspect I need to do some TCP tuning to reduce this time — can someone please suggest how I can tune TCP to get the best performance? While googling I found that send-buffer-size and receive-buffer-size can help in this case, but I don't know how or where to set these properties. Any help in this regard is highly appreciated.

Thanks, Abhinav
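
As a rough sanity check (my arithmetic, not from the post — it treats 1 MB as 10^6 bytes), the wire time for the numbers given should be well under a second, which suggests most of the 7 seconds is protocol/stack overhead rather than raw bandwidth:

```java
public class TransferTimeCheck {
    public static void main(String[] args) {
        // Figures taken from the question above.
        double sizeBits = 30 * 8 * 1_000_000.0; // 30 MB in bits
        double linkBps = 1000 * 1_000_000.0;    // 1000 Mbps link
        double rttSeconds = 0.200;              // one 200 ms round trip

        double wireSeconds = sizeBits / linkBps;        // time on the wire
        double totalSeconds = wireSeconds + rttSeconds; // plus one RTT

        // Prints roughly 0.44 — an order of magnitude below the observed ~7 s.
        System.out.println(Math.round(totalSeconds * 100.0) / 100.0);
    }
}
```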


Answer 1:


By default, the Hot Rod client and server enable TCP no-delay, which is good for small objects. For bigger objects, as in your case, you might want to disable it so that the client/server can buffer writes before sending. On the client, when you construct the RemoteCacheManager, try passing infinispan.client.hotrod.tcp_no_delay=false; the server needs a similar configuration option too. How the server is configured depends on your Infinispan version. If you are using the latest Infinispan 6.0.0, go to the standalone.xml file and change the endpoint subsystem configuration so that the hotrod-connector has its tcp-nodelay attribute set to false. Send/receive buffer sizes only apply when TCP no-delay is disabled. They are also configurable via similar methods, but I'd only tune them if you're not happy with the result once TCP no-delay has been disabled.
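
On the client side, the property can be passed in a Properties object when constructing the RemoteCacheManager, as a minimal sketch (the tcp_no_delay key is the one named in the answer; the RemoteCacheManager call is left commented since it needs the Infinispan client jar on the classpath):

```java
import java.util.Properties;

public class HotRodClientConfig {
    // Builds client properties that disable TCP no-delay, so the client can
    // batch large writes into fewer, fuller packets.
    public static Properties clientProps() {
        Properties p = new Properties();
        p.setProperty("infinispan.client.hotrod.tcp_no_delay", "false");
        return p;
    }

    public static void main(String[] args) {
        Properties p = clientProps();
        System.out.println(p.getProperty("infinispan.client.hotrod.tcp_no_delay"));
        // With org.infinispan:infinispan-client-hotrod on the classpath:
        // RemoteCacheManager rcm = new RemoteCacheManager(p);
    }
}
```

On the server side, the equivalent for Infinispan 6.0.0 is the tcp-nodelay attribute on the hotrod-connector element inside the endpoint subsystem of standalone.xml, as described above.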



Source: https://stackoverflow.com/questions/20722620/how-to-set-send-buffer-size-and-receive-buffer-size-in-infinispan-hotrod-client
