How to calculate or approximate the median of a list without storing the list

2020-11-28 22:24

I'm trying to calculate the median of a set of values, but I don't want to store all the values, as that could blow memory requirements. Is there a way of calculating or approximating the median without storing all the values?

10 Answers
  • 2020-11-28 22:51

    I picked up the idea of iterative quantile calculation. It is important to have good values for the starting point and for eta; these may come from the mean and sigma. So I programmed this:

    // xy(a, b) computes a^b; signum_smooth is a smoothed signum function
    Function QuantileIterative(Var x : Array of Double; n : Integer; p, mean, sigma : Double) : Double;
    Var eta, quantile, q1, dq : Double;
        i : Integer;
    Begin
      quantile := mean + 1.25*sigma*(p - 0.5);  // starting point from mean and sigma
      q1 := quantile;
      eta := 0.2*sigma/xy(1 + n, 0.75);         // should not be too large! sets accuracy
      For i := 1 to n Do
         quantile := quantile + eta * (signum_smooth(x[i] - quantile, eta) + 2*p - 1);
      dq := abs(q1 - quantile);
      If dq > eta
         then Begin
              If dq < 3*eta then eta := eta/4;  // refine with a smaller step
              For i := 1 to n Do
                 quantile := quantile + eta * (signum_smooth(x[i] - quantile, eta) + 2*p - 1);
         end;
      QuantileIterative := quantile
    end;
    
    

    As the median of two elements would be their mean, I used a smoothed signum function, and xy() is x^y. Are there ideas to make it better? Of course, if we have some more a priori knowledge, we can add code using the min and max of the array, the skew, etc. For big data you would perhaps not use an array, but for testing it is easier.
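    For comparison, here is a Python transcription of the routine above. The post does not define the smoothed signum, so the form used here (v / (|v| + eta)) is an assumption; the demo data and tolerances are likewise just illustrative:

```python
import random
import statistics

def signum_smooth(v, eta):
    # Assumed smoothing: approaches sign(v) when |v| >> eta (not defined in the post).
    return v / (abs(v) + eta)

def quantile_iterative(x, p, mean, sigma):
    n = len(x)
    quantile = mean + 1.25 * sigma * (p - 0.5)   # starting point from mean and sigma
    q1 = quantile
    eta = 0.2 * sigma / (1 + n) ** 0.75          # step size sets the accuracy
    for xi in x:
        quantile += eta * (signum_smooth(xi - quantile, eta) + 2 * p - 1)
    dq = abs(q1 - quantile)
    if dq > eta:                                 # second, finer pass if the first moved a lot
        if dq < 3 * eta:
            eta /= 4
        for xi in x:
            quantile += eta * (signum_smooth(xi - quantile, eta) + 2 * p - 1)
    return quantile

random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
est = quantile_iterative(data, 0.5, statistics.mean(data), statistics.pstdev(data))
print(est)   # close to 0, the median of N(0, 1)
```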

  • 2020-11-28 22:52

    There is the 'remedian' statistic. It works by first setting up k arrays, each of length b. Data values are fed into the first array and, when it is full, its median is calculated, stored in the first position of the next array, and the first array is re-used. When the second array is full, the median of its values is stored in the first position of the third array, and so on. You get the idea :)

    It's simple and pretty robust. The reference is here...

    http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/Remedian.pdf
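    A minimal Python sketch of the scheme described above. The values b=11 and k=3 are arbitrary example choices, and the final step here simply takes the median of all leftover values, which is a simplification of the paper's weighted rule:

```python
import statistics

def remedian(stream, b=11, k=3):
    # k arrays of length b; when array i fills, its median is pushed into
    # array i+1 and array i is re-used. The top array may overflow b if the
    # stream is longer than b**k; this sketch ignores that case.
    arrays = [[] for _ in range(k)]
    for x in stream:
        arrays[0].append(x)
        i = 0
        while i < k - 1 and len(arrays[i]) == b:
            m = statistics.median(arrays[i])
            arrays[i] = []
            arrays[i + 1].append(m)
            i += 1
    # Simplified final step (assumption): plain median of whatever is left.
    leftovers = [v for arr in arrays for v in arr]
    return statistics.median(leftovers)

# With exactly b**k sorted values the estimate is exact:
print(remedian(range(11**3)))   # 665, the true median of 0..1330
```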

    Hope this helps

    Michael

  • 2020-11-28 22:55

    This is tricky to get right in general, especially to handle degenerate series that are already sorted, or have a bunch of values at the "start" of the list but the end of the list has values in a different range.

    The basic idea of making a histogram is most promising. This lets you accumulate distribution information and answer queries (like median) from it. The median will be approximate since you obviously don't store all values. The storage space is fixed so it will work with whatever length sequence you have.

    But you can't just build a histogram from, say, the first 100 values and use that histogram forever; later data may make it invalid. So you need a dynamic histogram that can change its range and bins on the fly.

    Make a structure which has N bins. You'll store the X value of each slot transition (N+1 values total) as well as the population of the bin.

    Stream in your data. Record the first N+1 values. If the stream ends before this, great: you have all the values loaded and can find the exact median and return it. Otherwise, use those values to define your first histogram: just sort them and use them as the bin edges, each bin starting with a population of 1. It's OK to have duplicates (zero-width bins).

    Now stream in new values. For each one, binary search to find the bin it belongs to. In the common case, you just increment the population of that bin and continue. If your sample is beyond the histogram's edges (highest or lowest), just extend the end bin's range to include it. When your stream is done, you find the median sample value by finding the bin which has equal population on both sides of it, and linearly interpolating the remaining bin-width.

    But that's not enough: you still need to ADAPT the histogram to the data as it streams in. When a bin becomes over-full, you're losing information about that bin's sub-distribution. You can fix this by adapting based on some heuristic. The easiest and most robust one: if a bin's population reaches a certain threshold (something like 10*v/N, where v is the number of values seen so far in the stream and N is the number of bins), SPLIT that over-full bin. Add a new edge at the midpoint of the bin and give each side half of the original bin's population.

    But now you have too many bins, so you need to DELETE a bin. A good heuristic is to find the bin with the smallest product of population and width, and merge it with whichever of its left or right neighbors itself has the smaller product of width and population. Done! Note that merging or splitting bins loses information, but that's unavoidable: you only have fixed storage.

    This algorithm is nice in that it will deal with all types of input streams and give good results. If you have the luxury of choosing sample order, a random sample is best, since that minimizes splits and merges.

    The algorithm also allows you to query any percentile, not just median, since you have a complete distribution estimate.

    I use this method in my own code in many places, mostly for debugging logs.. where some stats that you're recording have unknown distribution. With this algorithm you don't need to guess ahead of time.

    The downside is that the unequal bin widths mean a binary search for each sample, so processing n samples with N bins costs O(n log N).
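    The steps above can be sketched in Python as follows. The class and method names are mine, and the exact split threshold and merge tie-breaking are the heuristics described above taken literally; treat it as an illustration, not a tuned implementation:

```python
import bisect
import random

class StreamingHistogram:
    """Fixed-memory histogram: N bins, N+1 edges, split over-full bins and
    merge the least informative bin to keep the bin count constant."""

    def __init__(self, n_bins=32):
        self.n = n_bins
        self.edges = []    # N+1 bin edges once initialized
        self.counts = []   # N bin populations
        self.buffer = []   # first N+1 values, kept exactly
        self.seen = 0

    def add(self, x):
        self.seen += 1
        if self.edges:
            self._insert(x)
            return
        self.buffer.append(x)
        if len(self.buffer) == self.n + 1:       # first N+1 values define the bins
            self.buffer.sort()
            self.edges = list(self.buffer)
            self.counts = [1.0] * self.n
            self.buffer = []

    def _insert(self, x):
        if x <= self.edges[0]:                   # extend the end bins as needed
            self.edges[0] = x
            i = 0
        elif x >= self.edges[-1]:
            self.edges[-1] = x
            i = self.n - 1
        else:
            i = bisect.bisect_right(self.edges, x) - 1
        self.counts[i] += 1
        if self.counts[i] > 10 * self.seen / self.n:   # heuristic split threshold
            self._split(i)
            self._merge()

    def _split(self, i):
        mid = (self.edges[i] + self.edges[i + 1]) / 2
        self.edges.insert(i + 1, mid)
        half = self.counts[i] / 2
        self.counts[i] = half
        self.counts.insert(i + 1, half)

    def _merge(self):
        # Delete the bin with the smallest population*width product, merging it
        # into whichever neighbor has the smaller product itself.
        def score(j):
            return self.counts[j] * (self.edges[j + 1] - self.edges[j])
        i = min(range(len(self.counts)), key=score)
        if i == 0:
            j = 1
        elif i == len(self.counts) - 1:
            j = i - 1
        else:
            j = i - 1 if score(i - 1) <= score(i + 1) else i + 1
        lo, hi = min(i, j), max(i, j)
        self.counts[lo] += self.counts[hi]
        del self.counts[hi]
        del self.edges[hi]   # drop the shared edge between the two bins

    def quantile(self, p):
        if not self.edges:                       # still buffering: exact answer
            s = sorted(self.buffer)
            return s[int(p * (len(s) - 1))]
        target = p * sum(self.counts)
        acc = 0.0
        for i, c in enumerate(self.counts):
            if acc + c >= target:                # interpolate within this bin
                frac = (target - acc) / c if c else 0.0
                return self.edges[i] + frac * (self.edges[i + 1] - self.edges[i])
            acc += c
        return self.edges[-1]

random.seed(1)
h = StreamingHistogram(32)
for _ in range(50_000):
    h.add(random.random())
print(h.quantile(0.5))   # close to 0.5 for uniform data on [0, 1)
```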

  • 2020-11-28 22:56

    I use these incremental/recursive mean and median estimators, which both use constant storage:

    mean += eta * (sample - mean)
    median += eta * sgn(sample - median)
    

    where eta is a small learning rate parameter (e.g. 0.001), and sgn() is the signum function which returns one of {-1, 0, 1}. (Use a constant eta if the data is non-stationary and you want to track changes over time; otherwise, for stationary sources you can use something like eta=1/n for the mean estimator, where n is the number of samples seen so far... unfortunately, this does not appear to work for the median estimator.)

    This type of incremental mean estimator seems to be used all over the place, e.g. in unsupervised neural network learning rules, but the median version seems much less common, despite its benefits (robustness to outliers). It seems that the median version could be used as a replacement for the mean estimator in many applications.

    Also, I modified the incremental median estimator to estimate arbitrary quantiles. In general, a quantile function tells you the value that divides the data into two fractions: p and 1-p. The following estimates this value incrementally:

    quantile += eta * (sgn(sample - quantile) + 2.0 * p - 1.0)
    

    The value p should be within [0,1]. This essentially shifts the sgn() function's symmetrical output {-1,0,1} to lean toward one side, partitioning the data samples into two unequally-sized bins (fractions p and 1-p of the data are less than/greater than the quantile estimate, respectively). Note that for p=0.5, this reduces to the median estimator.
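    A runnable sketch of the quantile rule above; the zero starting estimate and the eta value are arbitrary choices for the demo, not prescribed by the update rule:

```python
import random

def sgn(x):
    return (x > 0) - (x < 0)   # signum: one of {-1, 0, 1}

def running_quantile(samples, p, eta=0.01):
    q = 0.0   # arbitrary starting estimate (assumption)
    for x in samples:
        q += eta * (sgn(x - q) + 2.0 * p - 1.0)
    return q

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200_000)]
print(running_quantile(data, 0.5))   # close to 0, the median of N(0, 1)
print(running_quantile(data, 0.9))   # close to 1.28, the 90th percentile of N(0, 1)
```

    Note that with a constant eta the estimate never settles exactly; it fluctuates around the true quantile, which is what lets it track non-stationary data.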

    I would love to see an incremental mode estimator of a similar form...

    (Note: I also posted this to a similar topic here: "On-line" (iterator) algorithms for estimating statistical median, mode, skewness, kurtosis?)
