I've tried numerous ways of setting the logging level in Hadoop to WARN, but have failed each time. Firstly, I tried to configure the log4j.properties file by simply replacing INFO with WARN.
The Apache Hadoop documentation is a bit misleading here. If you are debugging an issue, you can change the log level on the fly using the steps below. Note that you specify the package name, not the file name.

For the NameNode:

    hadoop daemonlog -setlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode DEBUG

For the ResourceManager:

    yarn daemonlog -setlevel lxv-centos-01:8088 org.apache.hadoop.yarn.server.resourcemanager DEBUG

These settings are lost when you restart the processes, so this is only a temporary solution for debugging.
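You can also check the level currently in effect with -getlevel; this sketch assumes the NameNode address from the example above:

    hadoop daemonlog -getlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode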
The default log level can be adjusted by modifying the hadoop.root.logger property in your conf/log4j.properties configuration file. Note that you'll have to do that on every node in your cluster.

Example line in conf/log4j.properties:

    hadoop.root.logger=WARN,console
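If you only want to quiet one subsystem rather than the whole root logger, a standard log4j per-package entry in the same file works too; the package below is just an illustrative choice:

    # Sketch: leave the root level alone but raise HDFS loggers to WARN
    log4j.logger.org.apache.hadoop.hdfs=WARN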
I'd rather use

    HADOOP_ROOT_LOGGER=WARN,DRFA

in hadoop-env.sh, though you can also use hadoop.root.logger in log4j.properties. DRFA sends the logs to the Daily Rolling File Appender rather than to the console (System.err/out).
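As a minimal sketch, the hadoop-env.sh line is exported so the daemon start scripts pick it up:

    # hadoop-env.sh: send daemon logs to the Daily Rolling File Appender at WARN
    export HADOOP_ROOT_LOGGER=WARN,DRFA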
To change the log levels dynamically, so that a restart of the daemon is not required, use the hadoop daemonlog utility:

    hadoop daemonlog -setlevel hostname:port className logLevel

For example, to change the log level of the DataNode logs to WARN:

    hadoop daemonlog -setlevel hostname:50075 org.apache.hadoop.hdfs.server.datanode.DataNode WARN
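Under the hood, daemonlog talks to the daemon's /logLevel HTTP servlet, so the same change can be made from a browser; assuming the DataNode web UI port from the example above:

    http://hostname:50075/logLevel?log=org.apache.hadoop.hdfs.server.datanode.DataNode&level=WARN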