The Parquet files contain a per-block row count field. Spark seems to read it at some point (SpecificParquetRecordReaderBase.java#L151).
I tried this in spark-shell. To format the result for readability, we can also use:

```scala
java.text.NumberFormat.getIntegerInstance.format(sparkdf.count)
```
That is correct: Spark is already using the row counts field when you are running count().
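To see what that field contains, here is a minimal sketch that reads the per-row-group row counts straight out of a Parquet footer with the parquet-hadoop API (the file path is hypothetical); this is the same metadata the Spark reader consults:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.format.converter.ParquetMetadataConverter
import org.apache.parquet.hadoop.ParquetFileReader

import scala.collection.JavaConverters._

// Read only the footer (metadata) of a single Parquet part-file.
val footer = ParquetFileReader.readFooter(
  new Configuration(),
  new Path("/tmp/example.parquet/part-00000.parquet"), // hypothetical path
  ParquetMetadataConverter.NO_FILTER)

// Each row group (block) records its row count; summing them yields the
// file's total row count without touching any column data.
val rowCount = footer.getBlocks.asScala.map(_.getRowCount).sum
println(rowCount)
```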
Diving into the details a bit, SpecificParquetRecordReaderBase.java references the "Improve Parquet scan performance when using flat schemas" commit from [SPARK-11787] Speed up parquet reader for flat schemas. Note that this commit was included as part of the Spark 1.6 branch.
If the query is a row count, it works pretty much the way you described it (i.e., reading the metadata). If the predicates are fully satisfied by the min/max values, that should work as well, though that path is not as fully verified. Using those Parquet fields is not a bad idea, but as implied in the previous statement, the key issue is to ensure that the predicate filtering matches the metadata, so you are doing an accurate count.
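As a hedged sketch of that caveat, here is a filtered count (the column name id and the path are hypothetical). Row groups whose min/max statistics cannot match the predicate can be pruned from metadata alone, but the surviving row groups still have their rows evaluated, which is why the predicate and the metadata must line up for a metadata-only count to be accurate:

```scala
import org.apache.spark.sql.functions.col

// Row groups where max(id) <= 100 can be skipped using footer statistics;
// the remaining row groups must still be scanned to evaluate the filter.
val filteredCount = spark.read
  .parquet("/tmp/example.parquet") // hypothetical path
  .filter(col("id") > 100)         // hypothetical column
  .count()
println(filteredCount)
```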
To help understand why there are two stages, here's the DAG created when running the count() statement.
When digging into the two stages, notice that the first one (Stage 25) runs the file scan and computes a partial count per partition, while the second stage (Stage 26) shuffles those partial counts into a single partition and performs the final aggregation.
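The two stages map directly onto the physical plan. A quick way to see this (path hypothetical) is to print the plan for an equivalent aggregation; in Spark 2.x you should see a partial HashAggregate below an Exchange and a final HashAggregate above it, one per stage:

```scala
// groupBy().count() produces the same aggregate plan that Dataset.count()
// is planned into, but as a DataFrame we can call explain() on it.
spark.read
  .parquet("/tmp/example.parquet") // hypothetical path
  .groupBy()
  .count()
  .explain()
// Expect two HashAggregate operators (partial_count and count) separated
// by an Exchange, and a FileScan whose ReadSchema is the empty struct<>.
```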
Thanks to Nong Li (the author of the SpecificParquetRecordReaderBase.java commit) for validating!
To provide additional context on the bridge between Dataset.count() and Parquet, the flow of the internal logic surrounding this is:

- Because a row count needs no columns, the schema handed to the VectorizedParquetRecordReader is actually an empty Parquet message.
- To work with the Parquet file format, Apache Spark internally wraps this logic with an iterator that returns an InternalRow; more information can be found in InternalRow.scala.

Ultimately, the count() aggregate function interacts with the underlying Parquet data source using this iterator. BTW, this is true for both the vectorized and non-vectorized Parquet readers.
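Because the same iterator contract backs both readers, you can check the behavior under each (path hypothetical); whether vectorized or not, the count is still served from the footer metadata:

```scala
// Toggle the vectorized Parquet reader off and back on; the count is
// computed from row-group metadata either way.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
println(spark.read.parquet("/tmp/example.parquet").count())

spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true")
println(spark.read.parquet("/tmp/example.parquet").count())
```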
Therefore, to bridge the Dataset.count() with the Parquet reader, the path is:

1. The Dataset.count() call is planned into an aggregate operator with a single count() aggregate function.
2. At execution time, that aggregate operator consumes the iterator of InternalRows described above, which the Parquet reader populates from the row-count metadata rather than from column data.

For more information, please refer to Parquet Count Metadata Explanation.
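If you want to see step 1 for yourself, a small sketch (path hypothetical) is to inspect the query execution of the equivalent aggregation; the optimized logical plan shows a single Aggregate carrying one count aggregate function:

```scala
val qe = spark.read
  .parquet("/tmp/example.parquet") // hypothetical path
  .groupBy()
  .count()
  .queryExecution

println(qe.optimizedPlan) // Aggregate with a single count(1) function
println(qe.executedPlan)  // the physical operators actually run
```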