Question:
In Spark, is there a fast way to get an approximate count of the number of elements in a Dataset? That is, faster than Dataset.count() does.
Could we perhaps derive this information from the number of partitions of the Dataset?
Answer 1:
You could try countApprox on the RDD API. Although this also launches a Spark job, it should be faster, since it only gives you an estimate of the true count for a given amount of time you are willing to spend (milliseconds) and a confidence level (i.e. the probability that the true value lies within the returned range).
Example usage:
val cntInterval = df.rdd.countApprox(timeout = 1000L, confidence = 0.90)
val (lowCnt, highCnt) = (cntInterval.initialValue.low, cntInterval.initialValue.high)
You will have to play a bit with the parameters timeout and confidence: the higher the timeout, the more accurate the estimated count.
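For completeness, a minimal self-contained sketch (the local SparkSession and the spark.range data are illustrative assumptions, not part of the question): countApprox returns a PartialResult, whose initialValue is the estimate available once the timeout expires and whose getFinalValue() blocks until the exact count is ready.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("approxCount").getOrCreate()
val df = spark.range(0L, 100000000L).toDF("id")  // illustrative data

// Wait at most 1 second and accept a 90% confidence level.
val partial = df.rdd.countApprox(timeout = 1000L, confidence = 0.90)
val est = partial.initialValue  // BoundedDouble: mean, low, high, confidence
println(s"~${est.mean} rows, within [${est.low}, ${est.high}] at confidence ${est.confidence}")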
Answer 2:
If you have a truly enormous number of records, you can get an approximate count using something like HyperLogLog, and this might be faster than count(). However, you won't be able to get any result without kicking off a job.
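One concrete way to do this in Spark SQL is approx_count_distinct, which is backed by HyperLogLog++. Note that it approximates the number of distinct values, so it only approximates the row count when aggregated over a column that is (nearly) unique; the unique id column here is an assumption for illustration.

import org.apache.spark.sql.functions.approx_count_distinct

// Assumes df has a (nearly) unique "id" column; rsd is the maximum
// relative standard deviation allowed for the HyperLogLog++ estimate.
val approxCnt = df.select(approx_count_distinct("id", rsd = 0.05)).first().getLong(0)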
When using Spark there are two kinds of RDD operations: transformations and actions. Roughly speaking, transformations take an RDD and return a new RDD, while actions compute or produce some result. Transformations are lazily evaluated, so they don't kick off a job until an action is called at the end of a sequence of transformations.
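To make the distinction concrete, a small illustration (the data and names are arbitrary):

// Transformation: lazily describes a new RDD; no job is launched here.
val doubled = spark.sparkContext.range(0L, 1000000L).map(_ * 2)

// Action: only now does Spark schedule and run a job.
val total = doubled.count()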
Because Spark is a distributed programming framework, there is a lot of overhead in running jobs. If you need something that feels more like "real time", whatever that means, either use basic Scala (or Python) if your data is small enough, or move to a streaming approach and do something like updating a counter as new records flow through.
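As a rough sketch of the streaming idea (the socket source, host, and port are assumptions for illustration), Structured Streaming can maintain a global running count that updates as records arrive:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("runningCount").getOrCreate()

// Hypothetical source: text lines arriving on a local socket.
val stream = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// A single global running count, re-emitted to the console on each update.
stream.groupBy().count()
  .writeStream
  .outputMode("complete")
  .format("console")
  .start()
  .awaitTermination()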
Source: https://stackoverflow.com/questions/44273870/in-spark-how-to-estimate-the-number-of-elements-in-a-dataframe-quickly