HBase-managed ZooKeeper suddenly trying to connect to localhost instead of the ZooKeeper quorum

[愿得一人] 2021-01-12 21:08

I was running some tests with table mappers and reducers on large-scale problems. After a certain point my reducers started failing when the job was 80% done. From what I

3 Answers
  • 2021-01-12 21:18

    Hard to say what is happening with the information given. I have found the Hadoop stack (HBase especially) to be quite hostile to even the slightest bit of misconfiguration in DNS or the hosts file.

    Since the quorum in your hbase-site.xml looks good, I'd start by checking the network/hostname-resolution configuration:

    • Has the node name slipped into the localhost entry in /etc/hosts on hdev03? (See the sketch after this list.)
    • Is there an entry for the host itself in hdev03's /etc/hosts (there should be)?
    • Has reverse DNS been configured correctly, in case you are using DNS for name resolution instead of the hosts file?

    These are just a few pointers in the direction I'd look with this kind of issue. Hope it helps!
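
    As an illustration of the first bullet, here is a sketch of the hosts-file pitfall (hdev03 comes from the question; the 10.0.0.3 address is a made-up example). If the node's own hostname is attached to the loopback entry, HBase can resolve itself to 127.0.0.1 and hand clients "localhost":

    # Problematic: the node's own hostname resolves to loopback,
    # so HBase/ZooKeeper advertise themselves as localhost
    127.0.0.1   localhost hdev03

    # Corrected: loopback stays localhost-only; the hostname maps
    # to the node's real address (10.0.0.3 is illustrative)
    127.0.0.1   localhost
    10.0.0.3    hdev03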

  • 2021-01-12 21:21

    I've had the same problem when running HBase through Spark on YARN. Everything was fine until it suddenly started trying to connect to localhost instead of the quorum. Setting the quorum and port programmatically before the HBase call fixed the issue:

    conf.set("hbase.zookeeper.quorum","my.server")
    conf.set("hbase.zookeeper.property.clientPort","5181")
    

    I'm using MapR, which uses an unusual ZooKeeper port (5181).
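
    In context, a minimal sketch of this fix (Scala, assuming the HBase 1.x client API; "my.server" is a placeholder quorum host, 5181 the MapR port from above):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.ConnectionFactory

    // Start from whatever hbase-site.xml is on the classpath (possibly empty),
    // then override the ZooKeeper settings explicitly before connecting
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "my.server")
    conf.set("hbase.zookeeper.property.clientPort", "5181")
    val connection = ConnectionFactory.createConnection(conf)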

  • 2021-01-12 21:41

    Add '--driver-class-path ~/hbase-1.1.2/conf' to the spark-submit command, so that the driver can find the configured ZooKeeper servers instead of falling back to 127.0.0.1.
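
    For example (only --driver-class-path comes from this answer; the master, class name, and jar are hypothetical):

    # class name and jar are placeholders for your own job
    spark-submit \
      --master yarn \
      --driver-class-path ~/hbase-1.1.2/conf \
      --class com.example.MyHBaseJob \
      my-hbase-job.jar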
