In Spark, how to estimate the number of elements in a DataFrame quickly

臣服心动 2021-01-03 12:26

In Spark, is there a fast way to get an approximate count of the number of elements in a Dataset? That is, faster than Dataset.count()?


2 Answers
  • 2021-01-03 13:10

    If you have a truly enormous number of records, you can get an approximate count using something like HyperLogLog and this might be faster than count(). However you won't be able to get any result without kicking off a job.
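    To make the idea concrete: HyperLogLog keeps a small array of registers and estimates distinct cardinality from bit patterns of hashed values, so if your records carry a unique key, a distinct-count estimate approximates the row count. Below is a minimal plain-Scala sketch of the technique (an illustrative toy, not Spark's internal implementation):

```scala
import scala.util.hashing.MurmurHash3

// Toy HyperLogLog: estimates the number of distinct items seen,
// using O(2^p) bytes of state regardless of input size.
class HyperLogLog(p: Int = 12) {
  private val m = 1 << p                            // number of registers
  private val registers = new Array[Int](m)
  private val alpha = 0.7213 / (1.0 + 1.079 / m)    // bias-correction constant

  def add(item: String): Unit = {
    val h = MurmurHash3.stringHash(item) & 0xffffffffL // unsigned 32-bit hash
    val idx = (h & (m - 1)).toInt                   // low p bits pick a register
    val rest = h >>> p                              // remaining hash bits
    // Position of the first 1-bit in the remaining bits (geometric variable).
    val rank =
      if (rest == 0) 33 - p
      else java.lang.Long.numberOfTrailingZeros(rest) + 1
    if (rank > registers(idx)) registers(idx) = rank
  }

  def estimate: Double = {
    val z = 1.0 / registers.map(r => math.pow(2.0, -r)).sum
    var e = alpha * m * m * z
    if (e <= 2.5 * m) {                             // small-range correction
      val zeros = registers.count(_ == 0)
      if (zeros > 0) e = m * math.log(m.toDouble / zeros)
    }
    e
  }
}

val hll = new HyperLogLog()
(1 to 100000).foreach(i => hll.add(i.toString))
println(hll.estimate)  // close to 100000, typically within a few percent
```

    With p = 12 (4096 registers) the standard error is roughly 1.6%, which is why a sketch like this can be much cheaper than an exact count over a huge dataset.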

    When using Spark there are two kinds of RDD operations: transformations and actions. Roughly speaking, a transformation takes an RDD and returns a new RDD, while an action computes or produces some result. Transformations are lazily evaluated, so they don't kick off a job until an action is called at the end of a sequence of transformations.

    Because Spark is a distributed programming framework, there is a lot of overhead for running jobs. If you need something that feels more like "real time" whatever that means, either use basic Scala (or Python) if your data is small enough, or move to a streaming approach and do something like update a counter as new records flow through.
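    As a non-Spark illustration of that last suggestion, here is a plain-Scala running counter updated as records "flow through" (assuming your ingestion code can call increment() per record; LongAdder is thread-safe, so concurrent producers can share it):

```scala
import java.util.concurrent.atomic.LongAdder

// A running count maintained outside of any Spark job: reading it is
// instantaneous, because no distributed computation is triggered.
val counter = new LongAdder

// Simulate a stream of incoming records; in a real pipeline each
// consumer thread would call increment() as records arrive.
(1 to 100000).foreach { _ => counter.increment() }

println(counter.sum())  // prints 100000
```

    The trade-off is that the counter only reflects records your own code has seen, but reading it costs nothing, which is what "real time" usually demands.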

  • 2021-01-03 13:13

    You could try countApprox on the RDD API. Although this also launches a Spark job, it should be faster, since it only gives you an estimate of the true count for a given amount of time you want to spend (in milliseconds) and a confidence level, i.e. the probability that the true value lies within the returned range:

    Example usage:

    val cntInterval = df.rdd.countApprox(timeout = 1000L, confidence = 0.90)
    val (lowCnt, highCnt) = (cntInterval.initialValue.low, cntInterval.initialValue.high)
    

    You have to play a bit with the parameters timeout and confidence. The higher the timeout, the more accurate the estimated count. If you need the exact value afterwards, the returned PartialResult also offers getFinalValue(), which blocks until the underlying job has finished.
