Spark: How to speed up foreachRDD?

Submitted by 十年热恋 on 2019-12-24 10:44:36

Question


We have a Spark Streaming application which ingests data at 10,000 records/sec. We use the foreachRDD operation on our DStream (since Spark doesn't execute anything unless it finds an output operation on the DStream).

So we have to use the foreachRDD output operation like this, and it takes up to 3 hours to write a single batch of data (10,000 records), which is slow.

CodeSnippet 1:

requestsWithState.foreachRDD { rdd =>
  rdd.foreach {
    case (topicsTableName, hashKeyTemp, attributeValueUpdate) =>
      val client = new AmazonDynamoDBClient()
      val request = new UpdateItemRequest(topicsTableName, hashKeyTemp, attributeValueUpdate)
      try {
        client.updateItem(request)
      } catch {
        case se: Exception => println("Error executing updateItem!\nTable ", se)
      }
    case null =>
  }
}
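As an aside, creating a new AmazonDynamoDBClient for every record is itself expensive; a common pattern is to use rdd.foreachPartition so that one client is created per partition and writes are grouped into batches (DynamoDB's BatchWriteItem accepts up to 25 items per call). The sketch below shows only that batching logic, with the AWS client replaced by a hypothetical StubDynamoClient so it runs without Spark or the AWS SDK; in the real job the body of writePartition would sit inside rdd.foreachPartition with a real client.

```scala
// Sketch of the per-partition batching pattern (assumption: this is not the
// asker's actual code). StubDynamoClient is a hypothetical stand-in for
// AmazonDynamoDBClient so the sketch is self-contained and runnable.
object BatchWriteSketch {
  class StubDynamoClient {
    var updateCalls = 0
    // Stand-in for a DynamoDB write; just counts how many items were written.
    def updateItem(item: (String, String, String)): Unit = updateCalls += 1
  }

  // In the real job: rdd.foreachPartition { records => writePartition(records) }
  def writePartition(records: Iterator[(String, String, String)]): Int = {
    val client = new StubDynamoClient() // one client per partition, not per record
    // Group records into chunks of 25, mirroring the BatchWriteItem limit.
    records.grouped(25).foreach { batch =>
      batch.foreach(client.updateItem) // stand-in for one batched write call
    }
    client.updateCalls
  }

  def main(args: Array[String]): Unit = {
    val fakeRecords = (1 to 60).iterator.map(i => ("topicsTable", s"key$i", s"value$i"))
    println(writePartition(fakeRecords)) // prints 60: all records written in 3 batches
  }
}
```

This does not change how many records are written, only how much per-record overhead (client construction, one HTTP round trip per item) is paid.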

So I thought the code inside foreachRDD might be the problem and commented it out to see how much time it takes. To my surprise, even with no code inside the foreachRDD it still runs for 3 hours.

CodeSnippet 2:

requestsWithState.foreachRDD { rdd =>
  rdd.foreach { _ =>
    // No code here, still takes a lot of time (there used to be code but removed it to see if it's any faster without code)
  }
}

Please let us know if we are missing anything, or if there is an alternative way to do this, as I understand a Spark Streaming application will not run without an output operation on the DStream. At this time I can't use other output operations.

Note: To isolate the problem and make sure that the DynamoDB code is not the issue, I ran with an empty loop. It looks like foreachRDD is slow on its own when iterating over a huge record set coming in at 10,000/sec, and not the DynamoDB code, as the empty foreachRDD and the version with DynamoDB code took the same time.

Screenshot: all the stages that are executed and the time taken by foreachRDD even though it's just looping with no code inside

Screenshot: time taken by the empty foreachRDD loop

Screenshot: task distribution of the long-running task among the 9 worker nodes for the empty foreachRDD loop


Answer 1:


I know it is late, but if you'd like to hear it, I have some guesses that may give you some insight.

It is not the code inside rdd.foreach that takes a long time, but the code before rdd.foreach: the code which generates the RDD. Transformations are lazy; Spark does not compute them until you use the result. When the code in rdd.foreach runs, Spark does the computation and generates the data rows. The loop in rdd.foreach only manipulates the result. You can check this by commenting out rdd.foreach:

requestsWithState.foreachRDD { _ =>
  // rdd.foreach {
  //   No code here, still takes a lot of time (there used to be code but removed it to see if it's any faster without code)
  // }
}

I guess it will be extremely fast, because no computation happens. Or you can change the transformations to a very simple one; it will be fast too. This does not solve your problem, but if I'm right, it will help you locate it.
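The lazy-evaluation point can be demonstrated without Spark at all: Scala's iterators behave the same way a DStream's transformations do, in that nothing runs until a terminal operation consumes the result. A minimal sketch in plain Scala (no Spark; all names here are illustrative):

```scala
// Demonstrates lazy evaluation: the "transformation" (map) does no work until
// an "action" (toList) forces it, which is exactly why an apparently empty
// rdd.foreach can still take hours -- the upstream work is billed to it.
object LazyDemo {
  def main(args: Array[String]): Unit = {
    var computed = 0
    // Like an RDD transformation, Iterator.map is lazy: no work happens here.
    val mapped = (1 to 5).iterator.map { x => computed += 1; x * 2 }
    println(computed) // prints 0: the transformation has not executed yet

    // Like foreach (an action), toList forces the computation.
    val result = mapped.toList
    println(computed) // prints 5: all the work happened here, not at map time
    println(result)   // prints List(2, 4, 6, 8, 10)
  }
}
```

The same reasoning explains the measurements in the question: the Spark UI attributes the upstream computation of requestsWithState to the stage that contains the foreach, so the "empty" loop and the DynamoDB loop show roughly the same duration.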




Answer 2:


Have you tried with no loop, like below?

// requestsWithState.foreachRDD {
//   rdd => rdd.foreach {
//     // No code here
//   }
// }

It's the foreachRDD which is taking the time, not the code inside it. Please note that it is foreach, not for: it will run n times whether there is code inside it or not.

For performance testing, writing effective tests can help:

https://tech.ovoenergy.com/spark-streaming-in-production-testing/



Source: https://stackoverflow.com/questions/42582542/spark-how-to-speedup-foreachrdd
