What is the fastest search method for a sorted array?

伪装坚强ぢ 2021-02-01 04:57

While answering another question, I wrote the program below to compare different search methods on a sorted array. Basically, I compared two implementations of interpolation search.

8 Answers
  • 2021-02-01 05:35

    Benchmarked on Win32 Core2 Quad Q6600, gcc v4.3 msys. Compiling with g++ -O3, nothing fancy.

    Observation: the assert, timing, and loop overhead amounts to about 40% of the run time, so any gains listed below should be divided by 0.6 to get the actual improvement in the algorithms under test.

    Simple answers:

    1. On my machine, replacing the int64_t with int for "low", "high" and "mid" in interpolationSearch gives a 20% to 40% speed-up. This is the fastest easy method I could find. It takes about 150 cycles per look-up on my machine (for an array size of 100000). That's roughly the same number of cycles as a cache miss, so in real applications, looking after your cache is probably going to be the biggest factor. (A sketch of this variant appears after this list.)

    2. Replacing binarySearch's "/2" with ">>1" gives a 4% speed-up.

    3. Using the STL's std::binary_search algorithm on a vector containing the same data as "arr" is about the same speed as the hand-coded binarySearch, although for the smaller values of "size" the STL version is considerably slower (around 40%).
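
    The original program is not reproduced in this excerpt, so the following is only a sketch of what point 1 could look like: the usual proportional-probe interpolation search with plain int for "low", "high" and "mid", widening just the probe arithmetic to long long to avoid overflow. Names are illustrative.

    // Interpolation search with int indices (a sketch, not the question's exact code).
    int interpolationSearch(const int sortedArray[], int toFind, int len)
    {
        if (len <= 0) return -1;

        int low = 0;
        int high = len - 1;

        while (toFind >= sortedArray[low] && toFind <= sortedArray[high] &&
               sortedArray[low] != sortedArray[high]) {
            // Probe proportionally to where toFind lies between the endpoint values.
            // The products can exceed int range, so widen only this expression.
            long long span = (long long)sortedArray[high] - sortedArray[low];
            int mid = low + (int)(((long long)toFind - sortedArray[low]) * (high - low) / span);

            if (sortedArray[mid] < toFind)
                low = mid + 1;
            else if (sortedArray[mid] > toFind)
                high = mid - 1;
            else
                return mid;
        }

        if (toFind == sortedArray[low])
            return low;
        return -1; // not found
    }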

  • 2021-02-01 05:43

    The implementation of the binary search that was used for comparisons can be improved. The key idea is to "normalize" the range initially so that, after the first step, the target is always greater than a minimum and less than a maximum. This increases the termination delta size. It also special-cases targets that are less than the first element or greater than the last element of the sorted array. Expect approximately a 15% improvement in search time. Here is what the code might look like in C++.

    int binarySearch(int * &array, int target, int min, int max)
    { // binarySearch
      // normalize min and max so that we know the target is > min and < max
      if (target <= array[min]) // if min not normalized
      { // target <= array[min]
          if (target == array[min]) return min;
          return -1;
      } // end target <= array[min]
      // min is now normalized
    
      if (target >= array[max]) // if max not normalized
      { // target >= array[max]
          if (target == array[max]) return max;
          return -1;
      } // end target >= array[max]
        // max is now normalized
    
      while (min + 1 < max)
      { // delta >=2
        int tempi = min + ((max - min) >> 1); // point to index approximately in the middle between min and max
        int atempi = array[tempi]; // just in case the compiler does not optimize this
        if (atempi > target) max = tempi;      // the target is smaller, so we can decrease max and it is still normalized
        else if (atempi < target) min = tempi; // the target is bigger, so we can increase min and it is still normalized
        else return tempi;                     // if we found the target, return the index
        // Note that it is important that this test for equality is last because it rarely occurs.
      } // end delta >=2
      return -1; // nothing in between normalized min and max
    } // end binarySearch
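
    For reference, a hypothetical call site under this convention passes the first and last valid indices (arr, searched and length are the names used elsewhere in the thread); note that the int*& parameter binds to a pointer lvalue, not directly to an array name:

    int *data = arr; // int*& cannot bind to a raw array name, so use a pointer lvalue
    int pos = binarySearch(data, searched[j], 0, length - 1);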
    
  • 2021-02-01 05:44

    One way of approaching this is to use a space-versus-time trade-off. There are any number of ways this could be done. The extreme version would be to make a lookup array whose size is the maximum value in the sorted array and initialize each position with the corresponding index into sortedArray. The search would then simply be O(1).
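
    The answer gives no code for the extreme version, so here is a hedged sketch assuming non-negative values and a maximum small enough to allocate; the names are illustrative.

    #include <cstdlib>

    // table[v] holds an index of value v in sortedArray, or -1 if v does not occur,
    // so every subsequent search is a single array access: O(1).
    int *buildDirectTable(const int sortedArray[], int len, int maxValue)
    {
        int *table = (int *)std::malloc(((size_t)maxValue + 1) * sizeof(int));
        for (int v = 0; v <= maxValue; v++)
            table[v] = -1;
        for (int i = 0; i < len; i++)
            table[sortedArray[i]] = i;
        return table; // caller frees
    }

    // Lookup:
    //   int pos = (toFind >= 0 && toFind <= maxValue) ? table[toFind] : -1;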

    The following version, however, might be a little more realistic and possibly be useful in the real world. It uses a "helper" structure that is initialized on the first call. It maps the search space down to a smaller space by dividing by a number that I pulled out of the air without much testing. It stores the index of the lower bound for a group of values in sortedArray into the helper map. The actual search divides the toFind number by the chosen divisor and extracts the narrowed bounds of sortedArray for a normal binary search.

    For example, if the sorted values range from 1 to 1000 and the divisor is 100, then the lookup array might contain 10 "sections". To search for value 250, it would divide it by 100 to yield integer index position 250/100=2. map[2] would contain the sortedArray index for values 200 and larger. map[3] would have the index position of values 300 and larger thus providing a smaller bounding position for a normal binary search. The rest of the function is then an exact copy of your binary search function.

    The initialization of the helper map might be made more efficient by using a binary search to fill in the positions rather than a simple scan, but it is a one-time cost so I didn't bother testing that (a possible sketch of such a fill follows below). This mechanism works well for the given test numbers, which are evenly distributed. As written, it would not be as good if the distribution were uneven. I think this method could be used with floating point search values too. However, extrapolating it to generic search keys might be harder. For example, I am unsure what the method would be for character data keys. It would need some kind of O(1) lookup/hash that mapped to a specific array position to find the index bounds. It's unclear to me at the moment what that function would be or whether it exists.
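
    A possible shape for that binary-search-based fill, shown only as a sketch (std::lower_bound does the per-section search; map, numSections and divisor mirror the helper fields used in the code below):

    #include <algorithm>

    // For each section boundary i*divisor, find the first element >= that threshold.
    void fillMapWithBinarySearch(int *map, int numSections, int divisor,
                                 const int sortedArray[], int len)
    {
        for (int i = 1; i <= numSections; i++) {
            const int *pos = std::lower_bound(sortedArray, sortedArray + len, i * divisor);
            map[i] = (pos == sortedArray + len) ? len - 1 : (int)(pos - sortedArray);
        }
    }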

    I kludged the setup of the helper map in the following implementation pretty quickly. It is not pretty and I'm not 100% sure it is correct in all cases but it does show the idea. I ran it with a debug test to compare the results against your existing binarySearch function to be somewhat sure it works correctly.

    The following are example numbers:

    100000 * 10000 : cycles binary search          = 10197811
    100000 * 10000 : cycles interpolation uint64_t = 9007939
    100000 * 10000 : cycles interpolation float    = 8386879
    100000 * 10000 : cycles binary w/helper        = 6462534
    

    Here is the quick-and-dirty implementation:

    #define REDUCTION 100  // pulled out of the air
    typedef struct {
        int init;  // have we initialized it?
        int numSections;
        int *map;
        int divisor;
    } binhelp;
    
    int binarySearchHelp( binhelp *phelp, int sortedArray[], int toFind, int len)
    {
        // Returns index of toFind in sortedArray, or -1 if not found
        int low;
        int high;
        int mid;
    
        if ( !phelp->init && len > REDUCTION ) {
            int i;
            int numSections = len / REDUCTION;
            int divisor = (( sortedArray[len-1] - 1 ) / numSections ) + 1;
            int threshold;
            int arrayPos;
    
            phelp->init = 1;
            phelp->divisor = divisor;
            phelp->numSections = numSections;
            phelp->map = (int*)malloc((numSections+2) * sizeof(int));
            phelp->map[0] = 0;
            phelp->map[numSections+1] = len-1;
            arrayPos = 0;
            // Scan through the array and set up the mapping positions.  Simple linear
            // scan but it is a one-time cost.
            for ( i = 1; i <= numSections; i++ ) {
                threshold = i * divisor;
                while ( arrayPos < len && sortedArray[arrayPos] < threshold )
                    arrayPos++;
                if ( arrayPos < len )
                    phelp->map[i] = arrayPos;
                else
                    // kludge to take care of aliasing
                    phelp->map[i] = len - 1;
            }
        }
    
        if ( phelp->init ) {
            int section = toFind / phelp->divisor;
            if ( section > phelp->numSections )
                // it is bigger than all values
                return -1;
    
            low = phelp->map[section];
            if ( section == phelp->numSections )
                high = len - 1;
            else
                high = phelp->map[section+1];
        } else {
            // use normal start points
            low = 0;
            high = len - 1;
        }
    
        // the following is a direct copy of Kriss's binarySearch
        int l = sortedArray[low];
        int h = sortedArray[high];
    
        while (l <= toFind && h >= toFind) {
            mid = (low + high)/2;
    
            int m = sortedArray[mid];
    
            if (m < toFind) {
                l = sortedArray[low = mid + 1];
            } else if (m > toFind) {
                h = sortedArray[high = mid - 1];
            } else {
                return mid;
            }
        }
    
        if (sortedArray[low] == toFind)
            return low;
        else
            return -1; // Not found
    }
    

    The helper structure needs to be initialized (and memory freed):

        help.init = 0;
        unsigned long long totalcycles4 = 0;
        ... make the calls same as for the other ones but pass the structure ...
            binarySearchHelp(&help, arr,searched[j],length);
        if ( help.init )
            free( help.map );
        help.init = 0;
    
  • 2021-02-01 05:45

    Look first at the data, and at whether a big gain can be had from a data-specific method rather than a general one.

    For large static sorted datasets, you can create an additional index to provide partial pigeonholing, based on the amount of memory you're willing to use. For example, say we create a 256x256 two-dimensional array of ranges, which we populate with the start and end positions in the search array of elements whose high-order bytes match that cell. When we come to search, we use the high-order bytes of the key to find the range / subset of the array we need to search. If we had ~20 comparisons in our binary search of 100,000 elements (O(log2(n))), we're now down to ~4 comparisons for 16 elements, or O(log2(n/15)). The memory cost here is about 512 KB.
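
    A minimal sketch of that scheme, assuming unsigned 32-bit keys and a flat 256x256 table of half-open [start, end) ranges indexed by the two high-order bytes; the names and exact representation are illustrative, not taken from the answer.

    #include <cstdint>
    #include <vector>
    #include <algorithm>

    struct Range { int start, end; }; // half-open [start, end) into the sorted array

    // Populate one range per possible pair of high-order bytes.
    std::vector<Range> buildHighByteIndex(const std::vector<uint32_t> &sorted)
    {
        std::vector<Range> table(256 * 256);
        std::size_t i = 0;
        for (int bucket = 0; bucket < 256 * 256; ++bucket) {
            int start = (int)i;
            while (i < sorted.size() && (sorted[i] >> 16) == (uint32_t)bucket)
                ++i;
            table[bucket] = Range{start, (int)i};
        }
        return table;
    }

    // Use the high-order bytes of the key to pick the subset, then binary search it.
    int searchWithIndex(const std::vector<uint32_t> &sorted,
                        const std::vector<Range> &table, uint32_t key)
    {
        Range r = table[key >> 16];
        const uint32_t *first = sorted.data() + r.start;
        const uint32_t *last  = sorted.data() + r.end;
        const uint32_t *pos   = std::lower_bound(first, last, key);
        return (pos != last && *pos == key) ? (int)(pos - sorted.data()) : -1;
    }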

    Another method, again suited to data that doesn't change much, is to divide the data into arrays of commonly sought items and rarely sought items. For example, if you leave your existing search in place, run a large number of real-world cases over a protracted testing period, and log the details of the item being sought, you may well find that the distribution is very uneven, i.e. some values are sought far more often than others. If this is the case, break your array into a much smaller array of commonly sought values and a larger remaining array, and search the smaller array first (a sketch follows below). If the data is right (big if!), you can often achieve broadly similar improvements to the first solution without the memory cost.
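
    A hedged sketch of that split, assuming the commonly sought values have already been identified offline and copied into their own sorted array (names are illustrative):

    #include <algorithm>
    #include <vector>

    // Search the small "hot" array of commonly sought values first, then the large remainder.
    // Returns true and sets *pos to the match's position within whichever array holds it.
    bool searchHotThenCold(const std::vector<int> &hot, const std::vector<int> &cold,
                           int key, int *pos)
    {
        auto it = std::lower_bound(hot.begin(), hot.end(), key);
        if (it != hot.end() && *it == key) { *pos = (int)(it - hot.begin()); return true; }

        it = std::lower_bound(cold.begin(), cold.end(), key);
        if (it != cold.end() && *it == key) { *pos = (int)(it - cold.begin()); return true; }

        return false;
    }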

    There are many other data-specific optimizations that score far better than trying to improve on tried, tested, and far more widely used general solutions.

  • 2021-02-01 05:47

    If you have some control over the in-memory layout of the data, you might want to look at Judy arrays.

    Or to put a simpler idea out there: a binary search always cuts the search space in half. An optimal cut point can be found with interpolation (the cut point should NOT be the place where the key is expected to be, but the point which minimizes the statistical expectation of the search space for the next step). This minimizes the number of steps, but not all steps have equal cost. Hierarchical memories allow executing a number of tests in the same time as a single test, if locality can be maintained. Since a binary search's first M steps touch at most 2**M unique elements, storing these together can yield a much better reduction of search space per cacheline fetch (not per comparison), which translates to higher performance in the real world.

    n-ary trees work on that basis, and then Judy arrays add a few less important optimizations.
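
    One concrete way to "store these together" (not prescribed by the answer, so treat it as an assumption) is a breadth-first, level-order layout of the sorted data, so that the elements touched by the first few search steps sit in one contiguous, cache-friendly prefix:

    #include <cstddef>
    #include <vector>

    // Rearrange a sorted vector into level order: slot 1 is the root, slots 2-3 the next
    // level, and so on (slot 0 is unused).  An in-order walk of the implicit tree assigns
    // the sorted values, which makes it a valid search tree.
    void buildLevelOrder(const std::vector<int> &sorted, std::vector<int> &tree,
                         std::size_t &next, std::size_t k = 1)
    {
        if (k < tree.size()) {
            buildLevelOrder(sorted, tree, next, 2 * k);     // left subtree
            tree[k] = sorted[next++];                       // this node
            buildLevelOrder(sorted, tree, next, 2 * k + 1); // right subtree
        }
    }

    // Search the implicit tree; returns the slot holding 'key', or 0 if it is absent.
    std::size_t searchLevelOrder(const std::vector<int> &tree, int key)
    {
        std::size_t k = 1;
        while (k < tree.size()) {
            if (tree[k] == key)
                return k;
            k = 2 * k + (tree[k] < key); // 2k = left child, 2k+1 = right child
        }
        return 0;
    }

    // Typical setup (one extra slot because slot 0 is unused):
    //   std::vector<int> tree(sorted.size() + 1);
    //   std::size_t next = 0;
    //   buildLevelOrder(sorted, tree, next);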

    Bottom line: even "Random Access Memory" (RAM) is faster when accessed sequentially than randomly. A search algorithm should use that fact to its advantage.

  • 2021-02-01 05:50

    Unless your data is known to have special properties, pure interpolation search has the risk of taking linear time. If you expect interpolation to help with most data but don't want it to hurt in the case of pathological data, I would use a (possibly weighted) average of the interpolated guess and the midpoint, ensuring a logarithmic bound on the run time.
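
    A hedged sketch of that guard, using an unweighted average of the interpolated guess and the binary-search midpoint (the weighting is left open in the answer); names are illustrative.

    // Each probe lands in roughly the middle half of [low, high], so the range shrinks
    // by at least ~25% per step (logarithmic bound), while the interpolation component
    // usually lands much closer to the target than a plain midpoint would.
    int guardedInterpolationSearch(const int sortedArray[], int toFind, int len)
    {
        int low = 0, high = len - 1;

        while (low <= high && toFind >= sortedArray[low] && toFind <= sortedArray[high]) {
            long long span = (long long)sortedArray[high] - sortedArray[low];
            int guess = (span == 0)
                ? low
                : low + (int)(((long long)toFind - sortedArray[low]) * (high - low) / span);
            int mid   = low + (high - low) / 2;
            int probe = guess + (mid - guess) / 2; // average of interpolated guess and midpoint

            if (sortedArray[probe] < toFind)
                low = probe + 1;
            else if (sortedArray[probe] > toFind)
                high = probe - 1;
            else
                return probe;
        }
        return -1; // not found
    }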
