How to restart Spark Streaming job from checkpoint on Dataproc?

Submitted on 2019-12-04 04:40:52

Question


This is a follow-up to Spark streaming on dataproc throws FileNotFoundException.

Over the past few weeks (I'm not sure exactly since when), restarting a Spark Streaming job, even with the "kill dataproc.agent" trick, has been throwing this exception:

17/05/16 17:39:02 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at stream-event-processor-m/10.138.0.3:8032
17/05/16 17:39:03 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1494955637459_0006
17/05/16 17:39:04 ERROR org.apache.spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:140)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:826)
    at com.thumbtack.common.model.SparkStream$class.main(SparkStream.scala:73)
    at com.thumbtack.skyfall.StreamEventProcessor$.main(StreamEventProcessor.scala:19)
    at com.thumbtack.skyfall.StreamEventProcessor.main(StreamEventProcessor.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/05/16 17:39:04 INFO org.spark_project.jetty.server.ServerConnector: Stopped ServerConnector@5555ffcf{HTTP/1.1}{0.0.0.0:4479}
17/05/16 17:39:04 WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
17/05/16 17:39:04 ERROR org.apache.spark.util.Utils: Uncaught exception in thread main
java.lang.NullPointerException
    at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
    at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1360)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:87)
    at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1797)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1290)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1796)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:140)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:826)
    at com.thumbtack.common.model.SparkStream$class.main(SparkStream.scala:73)
    at com.thumbtack.skyfall.StreamEventProcessor$.main(StreamEventProcessor.scala:19)
    at com.thumbtack.skyfall.StreamEventProcessor.main(StreamEventProcessor.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:140)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:826)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:826)
    at com.thumbtack.common.model.SparkStream$class.main(SparkStream.scala:73)
    at com.thumbtack.skyfall.StreamEventProcessor$.main(StreamEventProcessor.scala:19)
    at com.thumbtack.skyfall.StreamEventProcessor.main(StreamEventProcessor.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Job output is complete

How do you restart a Spark Streaming job from its checkpoint on a Dataproc cluster?


Answer 1:


We've recently added auto-restart capabilities to Dataproc jobs (available in the gcloud beta track and in the v1 API).

To take advantage of auto-restart, a job must be able to recover and clean up its state, so auto-restart will not work for most jobs without modification. However, it does work out of the box with Spark Streaming jobs that use checkpoint files.

The restart-dataproc-agent trick should no longer be necessary. Auto-restart is resilient against job crashes, Dataproc agent failures, and VM restart-on-migration events.

Example: gcloud beta dataproc jobs submit spark ... --max-failures-per-hour 1

See: https://cloud.google.com/dataproc/docs/concepts/restartable-jobs
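Putting that flag into a full submission might look like the sketch below. The cluster name and main class are taken from the log output above; the jar path and bucket are hypothetical placeholders, so substitute your own:

```shell
# Sketch: submit a restartable Spark Streaming job on Dataproc.
# Auto-restart is enabled by --max-failures-per-hour; the driver must
# recover via its checkpoint directory for restarts to be meaningful.
gcloud beta dataproc jobs submit spark \
  --cluster stream-event-processor \
  --class com.thumbtack.skyfall.StreamEventProcessor \
  --jars gs://my-bucket/stream-event-processor.jar \
  --max-failures-per-hour 1
```

With this in place, a driver crash causes Dataproc to resubmit the job, and on startup the application picks up from its checkpoint rather than starting fresh.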

If you want to test recovery, you can simulate a VM migration by restarting the master VM [1]. After that, you should be able to describe the job [2] and see an ATTEMPT_FAILURE entry in statusHistory.

[1] gcloud compute instances reset <cluster-name>-m

[2] gcloud dataproc jobs describe
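The two steps above can be sketched together as follows; `<cluster-name>` and `<job-id>` are placeholders for your own values:

```shell
# Step 1: simulate a VM migration by resetting the cluster's master VM.
gcloud compute instances reset <cluster-name>-m

# Step 2: after the job is resubmitted, inspect its status history and
# look for the ATTEMPT_FAILURE entry recorded for the failed attempt.
gcloud dataproc jobs describe <job-id> | grep ATTEMPT_FAILURE
```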



Source: https://stackoverflow.com/questions/44008418/how-to-restart-spark-streaming-job-from-checkpoint-on-dataproc
