I want to access HDFS with fully qualified names such as:
hadoop fs -ls hdfs://machine-name:8020/user
I could also simply access HDFS with
hadoop fs -ls /user
However, I am writing test cases that should work on different distributions (HDP, Cloudera, MapR, etc.), which involves accessing HDFS files with qualified names.
I understand that hdfs://machine-name:8020 is defined in core-site.xml as fs.default.name. But this seems to differ across distributions. For example, the scheme is maprfs on MapR, and IBM BigInsights doesn't even have core-site.xml in $HADOOP_HOME/conf.
There doesn't seem to be a way Hadoop tells me what's defined in fs.default.name via its command-line options. How can I get the value of fs.default.name reliably from the command line?
The test will always be running on the namenode, so the machine name is easy. But getting the port number (8020) is a bit difficult. I tried lsof and netstat, but still couldn't find a reliable way.
The command below is available from Apache Hadoop 2.7.0 onwards and can be used to get the values of Hadoop configuration properties. fs.default.name was deprecated in Hadoop 2.0; fs.defaultFS is the updated key. I'm not sure whether this works in the case of maprfs.
hdfs getconf -confKey fs.defaultFS # ( new property )
or
hdfs getconf -confKey fs.default.name # ( old property )
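If, beyond the key's value, you also need the host and port separately (as the question asks), the URI that getconf prints can be split with plain shell parameter expansion. A minimal sketch; the hard-coded uri is a stand-in for the output of hdfs getconf -confKey fs.defaultFS on a real cluster:

```shell
# Stand-in for: uri=$(hdfs getconf -confKey fs.defaultFS)
uri="hdfs://machine-name:8020"

# Strip the scheme (hdfs://, maprfs://, ...), then split on the colon.
hostport="${uri#*://}"   # -> machine-name:8020
host="${hostport%%:*}"   # -> machine-name
port="${hostport##*:}"   # -> 8020

echo "$host $port"
```

Note that with an HA nameservice the URI often carries no explicit port, in which case port would just repeat the host, so a real script should first check that hostport actually contains a colon.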
I'm not sure whether any command-line utility for retrieving configuration property values is available in MapR or Hadoop 0.20. In that situation, you're better off using the Java API to retrieve the value of a configuration property:
// Loads core-site.xml (and friends) from the classpath.
Configuration conf = new Configuration();
System.out.println(conf.get("fs.default.name"));
fs.default.name is deprecated.
Use: hdfs getconf -confKey fs.defaultFS
I encountered this answer when I was looking for the HDFS URI. Generally that's a URL pointing to the namenode. While hdfs getconf -confKey fs.defaultFS
gets me the name of the nameservice, it doesn't help me build the HDFS URI.
I tried the command below to get a list of the namenodes instead:
hdfs getconf -namenodes
This gave me a list of all the namenodes, primary first, followed by the secondary. After that, constructing the HDFS URI was simple:
hdfs://<primarynamenode>/
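The steps above can be sketched as a small script. The hard-coded namenode list here is a hypothetical stand-in for the output of hdfs getconf -namenodes:

```shell
# Stand-in for: namenodes=$(hdfs getconf -namenodes)
namenodes="nn1.example.com nn2.example.com"

# Take the first (primary) namenode and build the URI from it.
primary=$(echo "$namenodes" | awk '{print $1}')
uri="hdfs://${primary}/"
echo "$uri"
```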
You can use:
hdfs getconf -confKey fs.default.name
Yes, hdfs getconf -namenodes will show the list of namenodes.
Source: https://stackoverflow.com/questions/26216876/find-port-number-where-hdfs-is-listening