Question
We have a Kafka stream that uses Avro. I need to connect it to Spark Streaming. I use the code below, as Lev G suggested.
kvs = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers}, valueDecoder=MessageSerializer.decode_message)
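For reference, the surrounding wiring looks roughly like this (a sketch, not my exact code — the broker address, topic name, and Schema Registry URL are placeholders, and I am assuming MessageSerializer is the one from confluent-kafka-python's Avro helpers, constructed with a schema-registry client and passed as a bound method):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from confluent_kafka.avro.cached_schema_registry_client import CachedSchemaRegistryClient
from confluent_kafka.avro.serializer.message_serializer import MessageSerializer

sc = SparkContext(appName="kafka_to_spark_stream")
ssc = StreamingContext(sc, 10)  # 10-second micro-batches

brokers = "broker-1:9092"    # placeholder
topic = "my_avro_topic"      # placeholder
registry = CachedSchemaRegistryClient("http://schema-registry:8081")  # placeholder URL
serializer = MessageSerializer(registry)

# decode_message is called once per record value to turn Avro bytes into a dict
kvs = KafkaUtils.createDirectStream(
    ssc, [topic], {"metadata.broker.list": brokers},
    valueDecoder=serializer.decode_message)

kvs.pprint()
ssc.start()
ssc.awaitTermination()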
I get the error below when I execute it through spark-submit.
2018-10-09 10:49:27 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Requesting driver to remove executor 12 for reason Container marked as failed: container_1537396420651_0008_01_000013 on host: server_name. Exit status: 1. Diagnostics: [2018-10-09 10:49:25.810]Exception from container-launch. Container id: container_1537396420651_0008_01_000013 Exit code: 1
[2018-10-09 10:49:25.810]
[2018-10-09 10:49:25.811]Container exited with a non-zero exit code 1. Error file: prelaunch.err. Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5580000, 702545920, 0) failed; error='Cannot allocate memory' (errno=12)
[2018-10-09 10:49:25.822]
[2018-10-09 10:49:25.822]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err : Last 4096 bytes of stderr :
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5580000, 702545920, 0) failed; error='Cannot allocate memory' (errno=12)
I used the command below.
spark-submit --master yarn --py-files ${BIG_DATA_LIBS}v3io-py.zip --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 --jars ${BIG_DATA_LIBS}v3io-hcfs_2.11.jar,${BIG_DATA_LIBS}v3io-spark2-object-dataframe_2.11.jar,${BIG_DATA_LIBS}v3io-spark2-streaming_2.11.jar ${APP_PATH}/${SCRIPT_PATH}/kafka_to_spark_stream.py
All variables are exported correctly. What is this error?
Answer 1:
Could it be that you don't allocate enough memory on the driver/executors to handle the stream?
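The os::commit_memory failure with errno=12 in your log means the executor JVM could not reserve the ~670 MB of heap it asked for on that worker host, so either size each executor's request to what the node can actually provide, run fewer executors, or free memory on the node. A hedged sketch of making the requests explicit before the SparkContext is created (the numbers are illustrative, not tuned for your cluster):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("kafka_to_spark_stream")
        .set("spark.executor.memory", "1g")      # heap each executor asks YARN for
        .set("spark.executor.instances", "2"))   # cap how many executors run at once

sc = SparkContext(conf=conf)

Driver memory is more reliably set on the spark-submit command line (--driver-memory), since in client mode the driver JVM is already running by the time this code executes.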
Source: https://stackoverflow.com/questions/52714436/kafka-stream-to-spark-stream-python