Spark Streaming 2.0.0 - freezes after several days under load


Question


We are running Spark 2.0.0 on AWS EMR 5.0.0, consuming from a 125-shard Kinesis stream. Two message producers feed about 19k events/s, each message roughly 1 KB in size, and we consume with a cluster of 20 machines. The code does a flatMap(), groupByKey(), persist(StorageLevel.MEMORY_AND_DISK_SER_2()) and repartition(19), then stores to S3 using foreachRDD(). Backpressure and Kryo serialization are enabled:

sparkConf.set("spark.streaming.backpressure.enabled", "true");
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");

While running, Ganglia shows a consistent increase in used memory without a corresponding GC. At some point, when there is no more free memory left to allocate, Spark stops processing micro-batches and the incoming queue keeps growing. That is the freeze point: Spark Streaming is not able to recover. In our case, Spark froze after 3.5 days of running under pressure.

The problem: we need streaming to run for at least a week (preferably longer) without restarting.

Spark configuration:

spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:PermSize=256M -XX:MaxPermSize=256M -XX:OnOutOfMemoryError='kill -9 %p' 
spark.driver.extraJavaOptions -Dspark.driver.log.level=INFO -XX:+UseConcMarkSweepGC -XX:PermSize=256M -XX:MaxPermSize=256M -XX:OnOutOfMemoryError='kill -9 %p' 
spark.master yarn-cluster
spark.executor.instances 19
spark.executor.cores 7
spark.executor.memory 7500M
spark.driver.memory 7500M
spark.default.parallelism 133
spark.yarn.executor.memoryOverhead 2950
spark.yarn.driver.memoryOverhead 2950
spark.eventLog.enabled false
spark.eventLog.dir hdfs:///spark-logs/
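
For reference, assuming one YARN container per node (not stated explicitly above), these settings add up as follows:

executor container = spark.executor.memory + spark.yarn.executor.memoryOverhead
                   = 7500M + 2950M = 10450M (~10.2 GB)
driver container   = 7500M + 2950M = 10450M
containers total   = 19 executors + 1 driver = 20, one per machine in the 20-node cluster
executor cores     = 19 executors x 7 cores = 133, matching spark.default.parallelism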

Thanks in advance.

Source: https://stackoverflow.com/questions/39289345/spark-streaming-2-0-0-freezes-after-several-days-under-load
