HttpFS error: Operation category READ is not supported in state standby

再見小時候 2020-12-29 13:32

I am working on Apache Hadoop 2.7.1 and I have a cluster that consists of 3 nodes:

nn1
nn2
dn1

nn1 is the dfs.default.name, so it is the master NameNode.

1 Answer
  • 2020-12-29 13:52

    It looks like HttpFS is not High Availability aware yet. This is likely due to missing configuration that clients need in order to connect to the currently active NameNode.

    Ensure the fs.defaultFS property in core-site.xml is configured with the correct nameservice ID.

    If you have the following in hdfs-site.xml

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    

    then in core-site.xml, it should be

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
    

    Also configure the Java class that the DFS client uses to determine which NameNode is currently active and serving client requests.

    Add this property to hdfs-site.xml

    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
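    For the failover proxy provider to have NameNodes to probe, the nameservice must also list its NameNodes and their RPC addresses in hdfs-site.xml. This part is not in the original answer; it is a sketch of the standard HDFS HA setup, and the host names below are placeholders:

    ```xml
    <!-- NameNode IDs belonging to the "mycluster" nameservice -->
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>

    <!-- RPC address of each NameNode; hosts are placeholders -->
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>nn1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>nn2.example.com:8020</value>
    </property>
    ```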

    Restart the NameNodes and HttpFS after adding these properties on all nodes.
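    After the restart, a quick sanity check is to hit the HttpFS WebHDFS-compatible REST endpoint. The helper below only builds that URL; the host name is a placeholder and 14000 is the default HttpFS port:

    ```python
    # Sketch: build the HttpFS (WebHDFS-compatible) REST URL for a given operation.
    # "httpfs-host" is a placeholder; 14000 is the default HttpFS port.
    def httpfs_url(host, path, op, user, port=14000):
        return f"http://{host}:{port}/webhdfs/v1{path}?op={op}&user.name={user}"

    print(httpfs_url("httpfs-host", "/", "LISTSTATUS", "hdfs"))
    ```

    Requesting that URL (e.g. with curl) should return a JSON `FileStatuses` listing; if it does, HttpFS is reaching the active NameNode instead of the standby.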
