Reading multiple files from S3 in parallel (Spark, Java)

天涯浪人 2021-02-04 10:52

I saw a few discussions on this but couldn't quite understand the right solution: I want to load a couple hundred files from S3 into an RDD. Here is how I'm doing it now:

3 Answers
  • 2021-02-04 11:32

    The underlying problem is that listing objects in S3 is really slow, and the way it is made to look like a directory tree kills performance whenever something does a treewalk (as wildcard pattern matching of paths does).

    The code in the post does the all-children listing, which delivers far better performance; it's essentially what ships with Hadoop 2.8 as s3a's listFiles(path, recursive). See HADOOP-13208.

    After getting that listing, you've got strings to object paths, which you can then map to s3a/s3n paths for Spark to handle as text file inputs, and which you can then apply work to:

    val files = keys.map(key => s"s3a://$bucket/$key").mkString(",")
    sc.textFile(files).map(...)
    

    And as requested, here's the Java code used:

    String prefix = "s3a://" + properties.get("s3.source.bucket") + "/";
    List<String> keys = new ArrayList<>();
    objectListing.getObjectSummaries().forEach(summary -> keys.add(prefix + summary.getKey()));
    // repeat with objectListing = s3.listNextBatchOfObjects(objectListing) while objectListing.isTruncated()
    JavaRDD<String> events = sc.textFile(String.join(",", keys));
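
    For comparison, the listFiles(path, recursive) call mentioned above can build the same list through the Hadoop filesystem API instead of the AWS SDK. This is a minimal sketch, assuming Hadoop 2.8+ with hadoop-aws on the classpath; the bucket and prefix names are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    // "my-bucket" and "logs/" are placeholder names
    Path root = new Path("s3a://my-bucket/logs/");
    FileSystem fs = root.getFileSystem(sc.hadoopConfiguration());
    List<String> paths = new ArrayList<>();
    RemoteIterator<LocatedFileStatus> files = fs.listFiles(root, true); // recursive listing
    while (files.hasNext()) {
        paths.add(files.next().getPath().toString());
    }
    JavaRDD<String> events = sc.textFile(String.join(",", paths));

    On s3a, listFiles(path, true) issues a flat object listing rather than a directory treewalk, which is the performance point made above.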
    

    Note that I switched s3n to s3a because, provided you have the hadoop-aws and amazon-sdk JARs on your classpath, the s3a connector is the one you should be using. It's better, and it's the one which gets maintained and tested against Spark workloads by people (me). See The history of Hadoop's S3 connectors.

  • 2021-02-04 11:35

    If you parallelize the list of keys and do the reads inside the resulting RDD, the AWS calls run on the executors, which should definitely improve performance:

    val bucketName = "xxx"
    val keys = new AmazonS3Client(new BasicAWSCredentials("awsAccessKeyId", "awsSecretKey"))
      .listObjects(bucketName).getObjectSummaries.asScala.map(_.getKey).toList // asScala: scala.collection.JavaConverters._
    val df = sc.parallelize(keys).flatMap { key =>
      // create the client inside the closure so it is not serialized from the driver
      val s3 = new AmazonS3Client(new BasicAWSCredentials("awsAccessKeyId", "awsSecretKey"))
      Source.fromInputStream(s3.getObject(bucketName, key).getObjectContent: InputStream).getLines
    }
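
    Since the question asks for Java, here is a roughly equivalent sketch (assumptions: Spark 2.x flatMap signature, a placeholder bucket name, credentials from the default provider chain, and an S3 client built inside the closure because the client is not serializable):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.stream.Collectors;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.S3ObjectSummary;
    import org.apache.spark.api.java.JavaRDD;

    // sc is the JavaSparkContext; "my-bucket" is a placeholder.
    // listObjects returns only the first page (up to 1000 keys);
    // loop with listNextBatchOfObjects if there are more.
    AmazonS3 driverClient = AmazonS3ClientBuilder.standard().build();
    List<String> keys = driverClient.listObjects("my-bucket").getObjectSummaries().stream()
            .map(S3ObjectSummary::getKey)
            .collect(Collectors.toList());

    JavaRDD<String> lines = sc.parallelize(keys).flatMap(key -> {
        // build the client inside the closure so nothing non-serializable is captured
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                s3.getObject("my-bucket", key).getObjectContent(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.toList()).iterator();
        }
    });

    For many small objects, reusing one client per partition via mapPartitions cuts connection overhead compared with building one per record.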
    
  • 2021-02-04 11:42

    You may use sc.textFile to read multiple files.

    You can pass multiple file URLs, comma-separated, as its argument.

    You can specify whole directories, use wildcards, and even give a comma-separated list of directories and wildcards.

    Ex:

    sc.textFile("/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file")
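
    Applied to the S3 case in the question, a sketch (the bucket name and prefixes are placeholders):

    // comma-separated paths and glob patterns both work with textFile
    JavaRDD<String> lines = sc.textFile(
            "s3a://my-bucket/2021/01/*,s3a://my-bucket/2021/02/*,s3a://my-bucket/one/specific/file.log");

    Note the caveat from the first answer, though: glob patterns over S3 trigger a lot of list calls, so for hundreds of objects an explicit comma-separated list of full object paths is usually faster.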
    

    Reference from this answer.
