dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured

温柔的废话 2020-12-09 12:08

I was trying to configure Hadoop with one name node and four data nodes. I was able to successfully configure the name node and job tracker on one machine and bring it up.

3 Answers
  • 2020-12-09 12:45

    These steps solved the problem for me:

    1. export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
    2. echo $HADOOP_CONF_DIR
    3. hdfs namenode -format
    4. hdfs getconf -namenodes
    5. start-dfs.sh

    Then Hadoop can start properly.
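
    For reference, here are the same steps as a commented shell session. It assumes HADOOP_HOME already points at your Hadoop installation, and note that the format step wipes any existing NameNode metadata:

    # Point Hadoop at the directory that holds core-site.xml and hdfs-site.xml
    export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
    echo $HADOOP_CONF_DIR        # confirm the variable resolves as expected

    # Initialize the NameNode metadata directory (destructive; only for a fresh cluster)
    hdfs namenode -format

    # Should print the NameNode host(s) read from the configuration;
    # an empty or wrong result means the RPC address is still not being picked up
    hdfs getconf -namenodes

    # Start the NameNode, SecondaryNameNode and DataNodes
    start-dfs.sh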

  • 2020-12-09 12:54

    The name of the masters file is misleading. It should contain the address of the SecondaryNameNode and is read by the NameNode itself; DataNodes have nothing to do with the masters file. You need to configure fs.default.name in the core-site.xml configuration file.

    The error you see is also misleading and points you to the wrong configuration parameter.
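
    As a rough sketch, the entry in core-site.xml could look like the following; the hostname and port are placeholders, and on Hadoop 2.x and later the key is usually spelled fs.defaultFS (fs.default.name is the older, deprecated name):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://namenode-host:8020</value>
      </property>
    </configuration>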

  • 2020-12-09 12:58

    Adding the RPC address for the name node in hdfs-site.xml will work, like this:

    <property>
      <name>dfs.namenode.rpc-address</name>
      <value>dnsname:port</value>
    </property>
    

    Also, in core-site.xml, add the property:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://dnsname:port</value>
    </property>
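
    To confirm that both files are actually being read, something like the following should echo the values you just set (the exact output depends on your configuration):

    hdfs getconf -confKey fs.defaultFS
    hdfs getconf -confKey dfs.namenode.rpc-address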
