“Too many fetch-failures” while using Hive

Backend · Unresolved · 398 views
孤城傲影 · 2021-01-15 13:05

I'm running a Hive query against a Hadoop cluster of 3 nodes, and I am getting an error that says "Too many fetch failures". My Hive query is:

  insert o
1 Answer
  •  孤街浪徒
    2021-01-15 13:48

    This can be caused by various Hadoop configuration issues. Here are a couple to look for in particular:

    • DNS issues: examine your /etc/hosts
    • Not enough http threads on the mapper side for the reducer

    Some suggested fixes (from Cloudera troubleshooting):

    • set mapred.reduce.slowstart.completed.maps = 0.80
    • tasktracker.http.threads = 80
    • mapred.reduce.parallel.copies = sqrt(node count), but in any case >= 10
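    As a sketch, the three fixes above could go into mapred-site.xml like this. These are the classic MRv1 property names used in the bullets; on a modern YARN cluster the equivalents live under the mapreduce.* namespace. Note that tasktracker.http.threads is a daemon-level setting, so it takes effect only after a tasktracker restart, not per job.

```xml
<!-- mapred-site.xml fragment; MRv1 property names as in the bullets above -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>0.80</value> <!-- start reducers only after 80% of maps complete -->
</property>
<property>
  <name>tasktracker.http.threads</name>
  <value>80</value> <!-- daemon-level: requires a tasktracker restart -->
</property>
<property>
  <name>mapred.reduce.parallel.copies</name>
  <value>10</value> <!-- sqrt(3 nodes) is about 1.7, so the >= 10 floor applies -->
</property>
```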

    Here is a link to the troubleshooting deck with more details:

    http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

    Update for 2020: Things have changed a lot, and AWS mostly rules the roost. Here is the corresponding troubleshooting guide, quoted below:

    https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-resource-1.html

    "Too many fetch-failures": The presence of "Too many fetch-failures" or "Error reading task output" error messages in step or task attempt logs indicates the running task is dependent on the output of another task. This often occurs when a reduce task is queued to execute and requires the output of one or more map tasks and the output is not yet available.

    There are several reasons the output may not be available:

    The prerequisite task is still processing. This is often a map task.

    The data may be unavailable due to poor network connectivity if the data is located on a different instance.

    If HDFS is used to retrieve the output, there may be an issue with HDFS.

    The most common cause of this error is that the previous task is still processing. This is especially likely if the errors are occurring when the reduce tasks are first trying to run. You can check whether this is the case by reviewing the syslog log for the cluster step that is returning the error. If the syslog shows both map and reduce tasks making progress, this indicates that the reduce phase has started while there are map tasks that have not yet completed.

    One thing to look for in the logs is a map progress percentage that goes to 100% and then drops back to a lower value. When the map percentage is at 100%, this does not mean that all map tasks are completed. It simply means that Hadoop is executing all the map tasks. If this value drops back below 100%, it means that a map task has failed and, depending on the configuration, Hadoop may try to reschedule the task. If the map percentage stays at 100% in the logs, look at the CloudWatch metrics, specifically RunningMapTasks, to check whether the map task is still processing. You can also find this information using the Hadoop web interface on the master node.

    If you are seeing this issue, there are several things you can try:

    Instruct the reduce phase to wait longer before starting. You can do this by raising the Hadoop configuration setting mapred.reduce.slowstart.completed.maps, which is the fraction of map tasks that must complete before reducers are scheduled (not a duration). For more information, see Create Bootstrap Actions to Install Additional Software.

    Match the reducer count to the total reducer capability of the cluster. You do this by adjusting the Hadoop configuration setting mapred.reduce.tasks for the job.
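    For a single job, the reducer count could be capped from the Hive session, for example; the value 6 here is purely illustrative (e.g. 3 nodes with a couple of reduce slots each):

```sql
-- Hedged example: match reducers to the cluster's reduce-slot capacity.
-- The value is illustrative, not a recommendation for any specific cluster.
SET mapred.reduce.tasks=6;
```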

    Use a combiner class to minimize the amount of map output that needs to be fetched.
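    To illustrate why a combiner shrinks fetch traffic, here is a minimal Python sketch (not Hadoop's own machinery; all names are illustrative) that pre-aggregates word-count map output on the map side, so reducers would fetch one record per distinct word instead of one per occurrence:

```python
from collections import Counter

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word occurrence.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def combine(pairs):
    # Combiner: locally sum counts before the shuffle, reducing the
    # number of records a reducer has to fetch over the network.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["a b a", "b b c"]
raw = list(map_phase(lines))   # 6 records would be shuffled without a combiner
combined = combine(raw)        # only 3 records (one per distinct word) remain
print(len(raw), len(combined)) # 6 3
```

    In real Hadoop MapReduce jobs the same effect comes from registering a combiner class on the job, typically the reducer itself when the reduce function is associative and commutative.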

    Check that there are no issues with the Amazon EC2 service that are affecting the network performance of the cluster. You can do this using the Service Health Dashboard.

    Review the CPU and memory resources of the instances in your cluster to make sure that your data processing is not overwhelming the resources of your nodes. For more information, see Configure Cluster Hardware and Networking.

    Check the version of the Amazon Machine Image (AMI) used in your Amazon EMR cluster. If the version is 2.3.0 through 2.4.4 inclusive, update to a later version. AMI versions in the specified range use a version of Jetty that may fail to deliver output from the map phase. The fetch error occurs when the reducers cannot obtain output from the map phase.

    Jetty is an open-source HTTP server that is used for machine-to-machine communication within a Hadoop cluster.
