Zookeeper connection error

佛祖请我去吃肉 2020-12-24 05:41

We have a standalone zookeeper setup on a dev machine. It works fine for every other dev machine except this one testdev machine.

We get this error over and over again.

23 Answers
  • 2020-12-24 06:18

    I just solved the same problem and posted a blog about it.

    In brief, if xx's zoo.cfg looks like:

    server.1=xx:2888:3888
    server.2=yy:2888:3888
    server.3=zz:2888:3888
    

    then the myid file on xx must contain 1 (and likewise 2 on yy, 3 on zz).
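As a minimal sketch of that mapping (using a scratch directory in place of the real dataDir, which is an assumption here):

```shell
# Each server's myid file (in dataDir) must contain the N from its own
# server.N line in zoo.cfg. Using a scratch dir here; a real deployment
# writes to the dataDir configured in zoo.cfg (e.g. /var/lib/zookeeper).
DATADIR=$(mktemp -d)
echo 1 > "$DATADIR/myid"   # matches server.1=xx:2888:3888 on host xx
cat "$DATADIR/myid"
```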

  • 2020-12-24 06:20

    I also ran into this problem last week and have managed to fix this now. I got the idea to resolve this one from the response shared by @gukoff.

    My requirement and situation was slightly different from the ones shared so far but the issue was fundamentally the same so I thought of sharing it on this thread.

    I was actually trying to query zookeeper quorum (after every 30 seconds) for some information from my application and was using the Curator Framework for this purpose (the methods available in LeaderLatch class). So, essentially I was starting up a CuratorFramework client and supplying this to LeaderLatch object.

    Only after I ran into the error mentioned in this thread - I realised that I did not close the zookeeper client connection(s) established in my applications. The maxClientCnxns property had the value of 60 and as soon as the number of connections (all of them were stale connections) touched 60, my application started complaining with this error.

    I found out about the number of open connections by:

    1. Checking the zookeeper logs, where there were warning messages stating "Too many connections from {IP address of the host}"

    2. Running the following netstat command from the same host mentioned in the above logs where my application was running:

    netstat -no | grep :2181 | wc -l

    Note: port 2181 is zookeeper's default client port; it is passed to grep to match only the zookeeper connections.

    To fix this, I cleared up all of those stale connections manually and then added the code for closing the zookeeper client connections gracefully in my application.
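The connection count can also be watched with a small sketch like this (it assumes `ss` is available; otherwise fall back to the `netstat` command above):

```shell
# Count established connections to the zookeeper client port 2181.
# If this approaches maxClientCnxns (60 in this case), zookeeper starts
# rejecting new clients with "Too many connections" warnings.
COUNT=$(ss -tn state established '( dport = :2181 )' 2>/dev/null | tail -n +2 | wc -l)
echo "open zookeeper connections: $COUNT"
```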

    I hope this helps!

  • 2020-12-24 06:22

    I faced the same issue and found it was because the zookeeper cluster nodes need these ports open to communicate with each other:

    server.1=xx.xx.xx.xx:2888:3888
    
    server.2=xx.xx.xx.xx:2888:3888
    
    server.3=xx.xx.xx.xx:2888:3888
    

    Once I allowed these ports through the AWS security group and restarted, everything worked fine for me.
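A quick reachability check before (and after) changing the security group can be sketched like this; `peer-host` is a placeholder for an actual address from zoo.cfg:

```shell
# Probe the client port (2181) and the quorum ports (2888 leader/follower
# traffic, 3888 leader election) on a peer, using bash's /dev/tcp.
check_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo "$1:$2 open" || echo "$1:$2 blocked"
}
for port in 2181 2888 3888; do
  check_port peer-host "$port"
done
```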

  • 2020-12-24 06:24

    I have just solved the problem. I am using CentOS 7, and the trouble-maker is the firewall. Running "systemctl stop firewalld" on each server to shut it down entirely solves the problem. Or you can open all three ports (2181, 2888, 3888) on each server:

    firewall-cmd --zone=public --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp --permanent
    firewall-cmd --reload
    

    Finally, use

    zkServer.sh restart
    

    to restart your servers, and the problem is solved.

  • 2020-12-24 06:25

    Make sure all required services are running

    Step 1 : Check if hbase-master is running

    sudo /etc/init.d/hbase-master status
    

    if not, then start it: sudo /etc/init.d/hbase-master start

    Step 2 : Check if hbase-regionserver is running

    sudo /etc/init.d/hbase-regionserver status
    

    if not, then start it: sudo /etc/init.d/hbase-regionserver start

    Step 3 : Check if zookeeper-server is running

    sudo /etc/init.d/zookeeper-server status
    

    if not, then start it: sudo /etc/init.d/zookeeper-server start


    or simply run these 3 commands in a row.

    sudo /etc/init.d/hbase-master restart
    sudo /etc/init.d/hbase-regionserver restart
    sudo /etc/init.d/zookeeper-server restart
    

    After that, don't forget to check the status:

    sudo /etc/init.d/hbase-master status
    sudo /etc/init.d/hbase-regionserver status
    sudo /etc/init.d/zookeeper-server status
    

    You might find that zookeeper is still not running; in that case, restart it directly:

    sudo /usr/lib/zookeeper/bin/zkServer.sh stop
    sudo /usr/lib/zookeeper/bin/zkServer.sh start
    

    After that, check the status again and make sure it's running:

    sudo /etc/init.d/zookeeper-server status
    

    This should work.
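The check-then-restart pattern above can be folded into a small helper (a sketch; the init-script paths are the ones used in this answer and may differ on your distribution):

```shell
# Restart a service only when its status check fails.
ensure_running() {
  # $1: status command, $2: restart command
  if $1 >/dev/null 2>&1; then
    echo "already running"
  else
    $2
  fi
}
# Usage (requires the init scripts to be installed):
# ensure_running "sudo /etc/init.d/zookeeper-server status" \
#                "sudo /etc/init.d/zookeeper-server restart"
```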

  • 2020-12-24 06:25

    I was getting the error:

    Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

    I just corrected the number of brokers listed in the zoo.cfg file and restarted the zookeeper and kafka services.
