Spark: Difference between numPartitions in read.jdbc(..numPartitions..) and repartition(..numPartitions..)


Question


I'm confused about the behaviour of the numPartitions parameter in the following two methods:

  1. DataFrameReader.jdbc
  2. Dataset.repartition

The official docs of DataFrameReader.jdbc say the following regarding the numPartitions parameter:

numPartitions: the number of partitions. This, along with lowerBound (inclusive), upperBound (exclusive), form partition strides for generated WHERE clause expressions used to split the column columnName evenly.
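For concreteness, here is a rough, hypothetical illustration of how those three values are turned into per-partition queries (the table, column and bounds below are made up, and the exact predicates are generated internally by Spark, so treat the comments as approximate):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jdbc-stride-sketch").getOrCreate()

// Hypothetical table, column and bounds, chosen only for illustration.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://db-host:3306/mydb")  // placeholder URL
  .option("dbtable", "orders")                      // placeholder table
  .option("user", "user")
  .option("password", "password")
  .option("partitionColumn", "id")                  // the columnName being split
  .option("lowerBound", "0")
  .option("upperBound", "1000")
  .option("numPartitions", "4")
  .load()

// With these values the stride is roughly (1000 - 0) / 4 = 250, and Spark
// issues one query per partition with WHERE clauses along the lines of:
//   partition 0: id < 250 OR id IS NULL
//   partition 1: id >= 250 AND id < 500
//   partition 2: id >= 500 AND id < 750
//   partition 3: id >= 750
```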

And the official docs of Dataset.repartition say:

Returns a new Dataset that has exactly numPartitions partitions.


My current understanding:

  1. The numPartitions parameter in the DataFrameReader.jdbc method controls the degree of parallelism in reading the data from the database
  2. The numPartitions parameter in Dataset.repartition controls the number of output files that will be generated when the DataFrame is written to disk

My questions:

  1. If I read a DataFrame via DataFrameReader.jdbc and then write it to disk (without invoking the repartition method), would there still be as many files in the output as there would have been had I written the DataFrame to disk after invoking repartition on it?
  2. If the answer to the above question is:
    • Yes: Then is it redundant to invoke repartition method on a DataFrame that was read using DataFrameReader.jdbc method (with numPartitions parameter)?
    • No: Then please correct the lapses in my understanding. Also in that case shouldn't the numPartitions parameter of DataFrameReader.jdbc method be called something like 'parallelism'?

Answer 1:


Short answer: there is (almost) no difference in the behaviour of the numPartitions parameter between the two methods.


read.jdbc(..numPartitions..)

Here, the numPartitions parameter controls:

  1. The number of parallel connections that will be made to MySQL (or any other RDBMS) to read the data into the DataFrame.
  2. The degree of parallelism of all subsequent operations on the read DataFrame, including writing to disk, until the repartition method is invoked on it (see the sketch after this list).
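A minimal sketch of such a read, using the DataFrameReader.jdbc overload that accepts the partitioning arguments (the JDBC URL, table name, column, bounds and credentials are placeholders, not taken from the question):

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("read-jdbc-sketch").getOrCreate()

val props = new Properties()
props.setProperty("user", "user")          // placeholder credentials
props.setProperty("password", "password")

// One JDBC connection is opened per partition, i.e. 4 parallel reads here.
val df = spark.read.jdbc(
  "jdbc:mysql://db-host:3306/mydb",  // url (placeholder)
  "orders",                          // table (placeholder)
  "id",                              // columnName used to split the reads
  0L,                                // lowerBound
  1000L,                             // upperBound
  4,                                 // numPartitions
  props
)

println(df.rdd.getNumPartitions)     // 4

// The same parallelism carries over to later operations; this write should
// produce (up to) 4 part files without any explicit repartition call.
df.write.parquet("/tmp/orders_parquet")
```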

repartition(..numPartitions..)

Here, the numPartitions parameter controls the degree of parallelism that will be exhibited in performing any operation on the DataFrame, including writing to disk.


So, basically, the DataFrame obtained by reading a MySQL table using the spark.read.jdbc(..numPartitions..) method behaves the same (exhibits the same degree of parallelism in operations performed over it) as one that was read without parallelism and had the repartition(..numPartitions..) method invoked on it afterwards (obviously with the same value of numPartitions).
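One way to see this equivalence (a sketch, using the same placeholder connection details as above) is to read the table on a single connection, repartition afterwards, and compare the partition count with the parallel read:

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("equivalence-sketch").getOrCreate()

val props = new Properties()
props.setProperty("user", "user")
props.setProperty("password", "password")

// Single-connection read followed by repartition(4): the resulting DataFrame
// reports the same partition count as the parallel read from the earlier
// snippet, so downstream operations run with the same degree of parallelism.
val dfRepartitioned = spark.read
  .jdbc("jdbc:mysql://db-host:3306/mydb", "orders", props)  // 1 partition
  .repartition(4)

println(dfRepartitioned.rdd.getNumPartitions)  // 4
```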


To answer the exact questions:

If I read a DataFrame via DataFrameReader.jdbc and then write it to disk (without invoking the repartition method), would there still be as many files in the output as there would have been had I written the DataFrame to disk after invoking repartition on it?

Yes

Assuming that the read task has been parallelized by providing the appropriate parameters (columnName, lowerBound, upperBound and numPartitions), all operations on the resulting DataFrame, including write, will be performed in parallel. Quoting the official docs here:

numPartitions: The maximum number of partitions that can be used for parallelism in table reading and writing. This also determines the maximum number of concurrent JDBC connections. If the number of partitions to write exceeds this limit, we decrease it to this limit by calling coalesce(numPartitions) before writing.
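A rough sketch of that write-side behaviour (the connection details and target table name are made up): a DataFrame with more partitions than the writer's numPartitions is coalesced down before the JDBC write:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jdbc-write-sketch").getOrCreate()

// A DataFrame with 100 partitions, built locally just for illustration.
val wide = spark.range(0L, 1000000L).toDF("id").repartition(100)

// The writer is capped at 10 concurrent JDBC connections, so Spark
// effectively coalesces the 100 partitions down to 10 before writing.
wide.write
  .format("jdbc")
  .option("url", "jdbc:mysql://db-host:3306/mydb")  // placeholder URL
  .option("dbtable", "ids_copy")                    // assumed not to exist yet
  .option("user", "user")
  .option("password", "password")
  .option("numPartitions", "10")
  .save()
```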


Yes: Then is it redundant to invoke repartition method on a DataFrame that was read using DataFrameReader.jdbc method (with numPartitions parameter)?

Yes

Unless you invoke one of the other variations of the repartition method (the ones that take a columnExprs param), invoking repartition on such a DataFrame (with the same numPartitions parameter) is redundant. However, I'm not sure whether forcing the same degree of parallelism on an already-parallelized DataFrame also shuffles data among the executors unnecessarily. Will update the answer once I come across it.
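One way to check this yourself (a sketch, with the same placeholder connection details as before) is to compare the physical plans with and without the extra repartition call and look for an added Exchange node, which indicates a shuffle:

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("repartition-shuffle-check").getOrCreate()

val props = new Properties()
props.setProperty("user", "user")
props.setProperty("password", "password")

// Hypothetical connection details, as in the earlier snippets.
val jdbcDf = spark.read.jdbc(
  "jdbc:mysql://db-host:3306/mydb", "orders", "id", 0L, 1000L, 4, props)

// Compare the physical plans: if the second one contains an extra Exchange
// node on top of the JDBC scan, repartition(4) still shuffles the data even
// though the partition count is already 4.
jdbcDf.explain()
jdbcDf.repartition(4).explain()
```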



Source: https://stackoverflow.com/questions/48276241/spark-difference-between-numpartitions-in-read-jdbc-numpartitions-and-repa
