HBase client ConnectionLoss for /hbase error

Backend · Unresolved · 5 answers · 1905 views
Asked by 忘掉有多难 on 2021-02-02 18:18

I'm going completely crazy:

Installed Hadoop/HBase; everything is running:

/opt/jdk1.6.0_24/bin/jps
23261 ThriftServer
22582 QuorumPeerMain
21969 NameNode
23         


5 Answers
  • 2021-02-02 18:36

    The problem was actually that (for some reason I don't fully understand) the firewall was blocking one of the ports required to talk to ZooKeeper; from the command line it worked, but from my app it didn't. When I disabled the firewall, everything suddenly worked.
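
    A quick way to check for this kind of firewall problem is to test whether the ZooKeeper client port is reachable from the machine running the app. This is a minimal sketch; "zkhost" is a placeholder for your quorum host, and 2181 is ZooKeeper's default client port:

```shell
# Sketch: test whether a ZooKeeper client port is reachable from this
# machine. "zkhost" is a hypothetical hostname; replace it with yours.
check_port() {
  # /dev/tcp is a bash feature; give the connection 2 seconds to succeed
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo open || echo blocked
}
check_port zkhost 2181
```

    If this prints "blocked" from the app host but "open" from wherever the command-line client works, a firewall rule between the two is the likely culprit.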

    Thank you for your help!

  • 2021-02-02 18:39

    This happens when the client has an incorrect value for "zookeeper.znode.parent" in the hbase-site.xml it sources (or, in a custom API, when that property was set to the wrong location). For example, if the cluster is set up with the default "/hbase-unsecure" but the client incorrectly specifies, say, "/hbase", you will hit this exception while trying to connect to the HBase cluster.
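
    A minimal sketch of the client-side setting in hbase-site.xml, assuming the cluster uses "/hbase-unsecure" as in the example above (the value must match whatever your cluster actually uses):

```xml
<!-- hbase-site.xml on the client: must match the server-side parent znode -->
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```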

  • 2021-02-02 18:57

    I had the same issue connecting to my HBase database.

    It turned out I had a wrong address for the DB machine in my /etc/hosts.
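
    A minimal sketch of that check: confirm the hostname your HBase client uses maps to the IP you expect. The hostname, IP, and sample file below are all hypothetical; in practice you would inspect /etc/hosts itself:

```shell
# Build a hypothetical hosts file to demonstrate the check
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
192.168.1.50 hbase-master
EOF

expected="192.168.1.50"
# Pull the IP mapped to the hostname (field 1 where field 2 matches)
actual=$(awk '$2 == "hbase-master" {print $1}' /tmp/hosts.sample)
if [ "$actual" = "$expected" ]; then
  echo "hosts entry OK"
else
  echo "hosts entry mismatch: got '$actual'"
fi
```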

  • 2021-02-02 18:58

    This is a ZooKeeper (ZK) error. The HBase client tries to get the /hbase node from ZooKeeper and fails.

    You can get a ZK dump from the HBase master web interface. You should see all the connections to ZK and figure out if something is exhausting them.

    Before diving into anything else you could try restarting your ZK cluster and see if it fixes your problem. (It's strange that you see that with a single client).

    HBase has a setting to increase the number of connections to ZK. It's

    hbase.zookeeper.property.maxClientCnxns

    There have been a few recent updates related to the default number of connections (see the issues below); the hbase-default.xml file holds all the default configuration. You can override this in your hbase-site.xml file (under the HBase conf dir) and raise it to 100 or more. But make sure you're not masking the real problem this way; you shouldn't see this problem with a single client.
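
    A sketch of that override in hbase-site.xml (100 is just the example figure from above; tune it for your cluster):

```xml
<!-- hbase-site.xml: raise the per-client connection limit ZK will accept -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>100</value>
</property>
```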

    We've had a similar situation, but it was happening during heavy operations from map-reduce jobs, after upgrading to HBase-0.90.

    Here are a couple of issues related to your problem:

    • https://issues.apache.org/jira/browse/HBASE-3773
    • https://issues.apache.org/jira/browse/HBASE-3777

    If you still can't figure it out send an email to the hbase-users list or join the #hbase channel on freenode and ask live questions.

  • 2021-02-02 18:59

    Step 1: Check whether the HBase Master is running using the "jps" command.

    Step 2: Stop all running services on the Hadoop cluster with "stop-all.sh".

    Step 3: Start the services again with "start-all.sh".

    Step 4: Run "jps" again. If the HBase Master is now shown, you are done; otherwise continue with the steps below.

    Step 5: Switch to the root user with "sudo su".

    Step 6: Start HBase from its bin directory, e.g. "/usr/lib/hbase-1.2.6-hadoop/bin/start-hbase.sh".

    Step 7: Open the HBase shell with the "hbase shell" command.

    Step 8: Run the "list" command to confirm the connection works.

    For more information about this issue:

    http://commandstech.com/hbase-error-keeperrorcode-connectionloss-for-hbase-in-cluster/
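
    The steps above as a single session sketch (not a runnable script; the commands assume a typical tarball install with the Hadoop and HBase scripts on the PATH, and the install path is hypothetical):

```shell
jps                                              # is HMaster listed?
stop-all.sh                                      # stop Hadoop services
start-all.sh                                     # start them again
jps                                              # check for HMaster again
sudo su                                          # if still missing, become root
/usr/lib/hbase-1.2.6-hadoop/bin/start-hbase.sh   # start HBase
hbase shell                                      # open the HBase shell
list                                             # inside the shell: confirm it connects
```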
