Best case Big O complexity

走了就别回头了 2021-01-29 11:54

The question:
How can you limit the input data to achieve a better Big O complexity? Describe an algorithm for handling this limited data to find if there are an

2 Answers
  • 2021-01-29 12:04

    You can achieve a better Big O complexity if you know the maximum value the integers in your array can take. Let's call it m. The algorithm is a variation of bucket sort: a boolean bucket marks each value that has already been seen. The time complexity is O(n + m), with O(m) extra space for the buckets. Source code of the algorithm:

    public static boolean hasDuplicates(int[] arr, int m)
    {
        // bucket[v] is set to true once value v has been seen;
        // values are assumed to lie in the range [0, m]
        boolean[] bucket = new boolean[m + 1];

        for (int elem : arr)
        {
            if (bucket[elem])
            {
                return true; // a duplicate was found
            }

            bucket[elem] = true;
        }
        return false;
    }
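
    A quick usage sketch (hypothetical data; assumes every element lies in [0, m]):

    int[] data = {1, 4, 2, 4};
    boolean result = hasDuplicates(data, 5); // true, since 4 appears twice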
    
  • 2021-01-29 12:26

    Assume that sorting is our problem.

    We know that sorting with only comparisons requires Ω(n*log(n)) time, and we can achieve O(n*log(n)) with, for example, merge sort.

    However, if we limit n to some constant, say n < 10^6, then we can sort any such input in O(10^6 * log(10^6)) time, which is O(1) in Big-O terms.

    The bottom line is: if you want to measure performance in terms of Big-O notation, you cannot assume any size limit on the input.
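
    For comparison with the bucket approach in the first answer, here is a minimal sketch (not from the original answer) of the comparison-based route to the duplicate question: sort the array, then scan adjacent elements. The helper name hasDuplicatesBySorting is hypothetical; the whole check runs in O(n*log(n)).

    import java.util.Arrays;

    public static boolean hasDuplicatesBySorting(int[] arr)
    {
        int[] copy = arr.clone();   // avoid mutating the caller's array
        Arrays.sort(copy);          // O(n log n) comparison-based sort
        for (int i = 1; i < copy.length; i++)
        {
            if (copy[i] == copy[i - 1])
            {
                return true;        // adjacent equal values mean a duplicate
            }
        }
        return false;
    }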
