Apache Spark's RDD splitting according to the particular size

眼角桃花 2020-12-21 05:43

I am trying to read strings from a text file, but I want to limit each line to a particular size. For example:

Here is my example representing the file.

2 Answers
  • 2020-12-21 06:08

    You will need to read all the data anyhow. There is not much you can do apart from mapping each line and trimming it:

    rdd.map(line => line.take(3)).collect()
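    Locally, the same per-line truncation can be sketched with a plain List standing in for the RDD (the sample lines and the size 3 are assumptions):

```scala
// A plain List stands in for the RDD of lines (sample data is hypothetical).
val lines = List("foobar", "ab", "hello")

// The same transformation: keep at most the first 3 characters of each line.
val trimmed = lines.map(line => line.take(3))
```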
    
  • 2020-12-21 06:20

    Not a particularly efficient solution (not terrible either), but you can do something like this:

    val pairs = rdd
      .flatMap(x => x)  // Flatten
      .zipWithIndex  // Add indices
      .keyBy(_._2 / 3)  // Key by index / n
    
    // We'll use a range partitioner to minimize the shuffle 
    val partitioner = new RangePartitioner(pairs.partitions.size, pairs)
    
    pairs
      .groupByKey(partitioner)  // group
      // Sort, drop index, concat
      .mapValues(_.toSeq.sortBy(_._2).map(_._1).mkString("")) 
      .sortByKey()
      .values
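    The regrouping logic above can be sketched on a local collection (a minimal sketch: the sample lines and n = 3 are assumptions, and a plain groupBy stands in for the keyBy/groupByKey shuffle):

```scala
// Local simulation of the shuffle-based approach on plain Scala collections.
// Sample input lines and group size n = 3 are assumptions for illustration.
val input = Seq("abcd", "efgh", "i")

val keyed = input
  .flatMap(x => x)    // flatten lines into a single sequence of characters
  .zipWithIndex       // attach a global index to each character
  .groupBy(_._2 / 3)  // key by index / n, as keyBy does in the RDD version

val regrouped = keyed.toSeq
  .sortBy(_._1) // order the groups by key (sortByKey in the RDD version)
  .map { case (_, group) => group.sortBy(_._2).map(_._1).mkString("") }
```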
    

    It is possible to avoid the shuffle by passing data required to fill the partitions explicitly but it takes some effort to code. See my answer to Partition RDD into tuples of length n.

    If you can accept some misaligned records at partition boundaries, then a simple mapPartitions with grouped should do the trick at a much lower cost:

    rdd.mapPartitions(_.flatMap(x => x).grouped(3).map(_.mkString("")))
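    The misalignment is easy to see locally: each partition's iterator is processed independently, so a group can be cut short at a partition boundary. A minimal sketch, assuming two hypothetical partitions whose contents do not divide evenly by n = 3:

```scala
// Two hypothetical partitions, each represented by its own iterator.
val partitions = Seq(Iterator("abcd"), Iterator("efgh", "i"))

// What mapPartitions does: apply grouped(3) to each partition in isolation.
val perPartition = partitions.map { it =>
  it.flatMap(x => x).grouped(3).map(_.mkString("")).toList
}
// Note the short groups "d" and "hi" produced at the partition boundaries.
```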
    

    It is also possible to use sliding RDD:

    rdd.flatMap(x => x).sliding(3, 3).map(_.mkString(""))
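    On plain Scala collections, sliding with a step equal to the window size behaves like grouped, which is the intuition behind this variant (a local sketch with an assumed sample string; the RDD sliding lives in spark-mllib's RDDFunctions and may treat a trailing partial window differently):

```scala
val chars = "abcdefgh".toList

// sliding(size, step) with step == size partitions the sequence...
val viaSliding = chars.sliding(3, 3).map(_.mkString("")).toList
// ...just like grouped(size); both keep the trailing partial group "gh".
val viaGrouped = chars.grouped(3).map(_.mkString("")).toList
```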
    