"Premature end of Content-Length delimited message body" SparkException while reading from S3 using PySpark


Question


I am using the code below to read a CSV file from S3 on my local machine.

from pyspark import SparkConf
from pyspark.sql import SparkSession
import configparser
import os

conf = SparkConf()
conf.set('spark.jars', '/usr/local/spark/jars/aws-java-sdk-1.7.4.jar,/usr/local/spark/jars/hadoop-aws-2.7.4.jar')

# Tried setting these, but it still failed
conf.set('spark.executor.memory', '8g') 
conf.set('spark.driver.memory', '8g') 

spark_session = SparkSession.builder \
        .config(conf=conf) \
        .appName('s3-write') \
        .getOrCreate()

# getting S3 credentials from file
aws_profile = "lijo"  # profile name in ~/.aws/credentials
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
access_key = config.get(aws_profile, "aws_access_key_id") 
secret_key = config.get(aws_profile, "aws_secret_access_key")

# hadoop configuration for S3
hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

# Tried setting these too, but it made no difference
hadoop_conf.set("fs.s3a.connection.maximum", "1000") 
hadoop_conf.set("fs.s3.maxConnections", "1000") 
hadoop_conf.set("fs.s3a.connection.establish.timeout", "50000") 
hadoop_conf.set("fs.s3a.socket.recv.buffer", "8192000") 
hadoop_conf.set("fs.s3a.readahead.range", "32M")

# 1) Read the CSV with the DataFrame reader
df = spark_session.read.csv("s3a://pyspark-lijo-test/auction.csv", header=True, mode="DROPMALFORMED")
df.show(2)

Below are the configuration details of my local Spark setup.

[('spark.driver.host', '192.168.0.49'),
 ('spark.executor.id', 'driver'),
 ('spark.app.name', 's3-write'),
 ('spark.repl.local.jars',
  'file:///usr/local/spark/jars/aws-java-sdk-1.7.4.jar,file:///usr/local/spark/jars/hadoop-aws-2.7.4.jar'),
 ('spark.jars',
  '/usr/local/spark/jars/aws-java-sdk-1.7.4.jar,/usr/local/spark/jars/hadoop-aws-2.7.4.jar'),
 ('spark.app.id', 'local-1594186616260'),
 ('spark.rdd.compress', 'True'),
 ('spark.driver.memory', '8g'),
 ('spark.driver.port', '35497'),
 ('spark.serializer.objectStreamReset', '100'),
 ('spark.master', 'local[*]'),
 ('spark.executor.memory', '8g'),
 ('spark.submit.pyFiles', ''),
 ('spark.submit.deployMode', 'client'),
 ('spark.ui.showConsoleProgress', 'true')]

But I get the following error while reading even a 1 MB file.

Py4JJavaError: An error occurred while calling o43.csv.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, 192.168.0.49, executor driver): org.apache.spark.util.TaskCompletionListenerException: Premature end of Content-Length delimited message body (expected: 888,879; received: 16,360)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:143)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

I tried changing the S3 read to the RDD-based code below (2) and it works, but then I need to convert the RDD to a DataFrame.

# 2) Read via the RDD API
data = spark_session.sparkContext.textFile("s3a://pyspark-lijo-test/auction.csv") \
        .map(lambda line: line.split(","))
print(data.take(2))  # RDDs have no show(); take() returns the first rows instead
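
For completeness, one common way to turn that RDD back into a DataFrame is toDF() with the header row as column names. This is only a minimal sketch: it assumes no fields contain embedded commas (a real CSV parser handles quoting), and every column comes out string-typed:

# Sketch: rebuild a DataFrame from the RDD read in (2)
rdd = spark_session.sparkContext.textFile("s3a://pyspark-lijo-test/auction.csv") \
        .map(lambda line: line.split(","))
header = rdd.first()                          # first row holds the column names
rows = rdd.filter(lambda row: row != header)  # drop the header row from the data
df = rows.toDF(header)                        # all columns are strings here
df.show(2)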

Why is the Spark SQL code in (1) unable to read even a small file? Is there any setting that needs to be changed?


Answer 1:


Found the issue. There appears to be a problem with this setup on Spark 3.0. After switching to the latest Spark 2.4.6 release, it works fine as expected.
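
For what it's worth, version pairing may be the underlying cause: hadoop-aws-2.7.4 and aws-java-sdk-1.7.4 belong to the Hadoop 2.7 line that Spark 2.4.x binaries are built against, whereas Spark 3.0 downloads typically bundle a newer Hadoop. As an alternative to hard-coding local jar paths, Spark can resolve a matching package itself (a sketch, assuming a Hadoop 2.7-based Spark build):

from pyspark import SparkConf

conf = SparkConf()
# Assumption: this Spark build bundles Hadoop 2.7, so hadoop-aws 2.7.4 matches;
# Spark fetches the jar and its AWS SDK dependency from Maven at startup.
conf.set('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:2.7.4')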



Source: https://stackoverflow.com/questions/62788833/premature-end-of-content-length-delimited-message-body-sparkexception-while-read
