Setting the logging level in Hadoop to WARN

一生所求 2021-01-05 08:38

I've tried numerous ways of setting the logging level in Hadoop to WARN, but have failed each time. First, I tried to configure the log4j.properties file by simply replacing INFO with WARN.

4 Answers
  • 2021-01-05 08:49

    The Apache Hadoop documentation is a bit misleading here. If you are debugging an issue, you can change the log level on the fly using the commands below. Note that you must pass the package (or class) name, not the log file name.

    Example, for the NameNode:

        hadoop daemonlog -setlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode DEBUG

    For the ResourceManager:

        yarn daemonlog -setlevel lxv-centos-01:8088 org.apache.hadoop.yarn.server.resourcemanager DEBUG

    These settings are lost when the processes restart; this is a temporary measure for debugging.
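
    You can verify the change with the utility's -getlevel flag, reusing the host and HTTP port from the example above:

        # Print the current log level for the NameNode package
        hadoop daemonlog -getlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode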

  • 2021-01-05 08:49

    The default log level can be adjusted by modifying the hadoop.root.logger property in your conf/log4j.properties configuration file. Note that you'll have to do that for every node in your cluster.

    Example line in conf/log4j.properties:

    hadoop.root.logger=WARN,console
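
    Since the file must match on every node, one way is to push it out with scp. A minimal sketch; the node names and the /etc/hadoop/conf destination are assumptions, so adjust them for your cluster:

        # Hypothetical hostnames and config path; substitute your own
        for node in node01 node02 node03; do
            scp conf/log4j.properties "$node:/etc/hadoop/conf/log4j.properties"
        done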
    
  • 2021-01-05 08:53

    I'd rather use

    HADOOP_ROOT_LOGGER=WARN,DRFA

    in hadoop-env.sh

    or you can use hadoop.root.logger in log4j.properties

    DRFA sends the logs to the Daily Rolling File Appender (a file on disk) rather than to the console (System.err/out).
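
    The same variable also works for a single invocation, which is handy for one-off debugging; the console appender is used here so the output is visible immediately:

        # Override the log level for one command only
        HADOOP_ROOT_LOGGER=WARN,console hadoop fs -ls /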

  • 2021-01-05 09:06

    To change log levels dynamically, so that a daemon restart is not required, use the hadoop daemonlog utility:

        hadoop daemonlog -setlevel hostname:port className logLevel
    

    For example, to change the DataNode's log level to WARN:

        hadoop daemonlog -setlevel hostname:50075 org.apache.hadoop.hdfs.server.datanode.DataNode WARN
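
    Under the hood, daemonlog talks to the daemon's /logLevel HTTP servlet, so the same change can be made with a plain HTTP request. A sketch, assuming the DataNode web UI is on port 50075:

        # Hit the /logLevel servlet directly (same effect as daemonlog -setlevel)
        curl "http://hostname:50075/logLevel?log=org.apache.hadoop.hdfs.server.datanode.DataNode&level=WARN"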
    