Empirically estimating big-oh time efficiency

清酒与你 2020-12-23 16:52

Background

I'd like to estimate the big-oh performance of some methods in a library through benchmarks. I don't need precision -- it suffices to show that someth
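A minimal timing harness for this kind of benchmark might look like the following; `measure` and `sort_n` are illustrative names of my own, not part of any library mentioned here. The idea is simply to time the method at exponentially growing problem sizes and keep the best of a few runs to reduce noise:

```python
import random
import timeit

def measure(fn, sizes, repeats=3):
    """Time fn(n) for each problem size n, keeping the best of several runs."""
    results = []
    for n in sizes:
        t = min(timeit.repeat(lambda: fn(n), number=1, repeat=repeats))
        results.append((n, t))
    return results

# Example workload: sorting, which should scale roughly as n log n.
def sort_n(n):
    data = [random.random() for _ in range(n)]
    data.sort()

timings = measure(sort_n, [2 ** k for k in range(10, 15)])
for n, t in timings:
    print(f"n={n:6d}  t={t:.6f}s")
```

The (size, time) pairs produced this way are the raw input for the curve-fitting approaches discussed in the answers below.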

10 Answers
  •  时光说笑
    2020-12-23 17:20

    Wanted to share my experiments as well. Nothing new from the theoretical standpoint, but it's a fully functional Python module that can easily be extended.

    Main points:

    • It's based on the SciPy library's curve_fit function, which fits a given function to a set of points by minimizing the sum of squared differences;

    • Since the tests increase the problem size exponentially, points near the start carry a disproportionate weight, which hurts identification of the correct approximation; a simple linear interpolation that redistributes the points evenly seems to help;

    • The set of approximations we are trying to fit is fully under our control; I've added the following ones:

            import numpy as np

            # Candidate complexity classes to fit against the measurements
            def fn_linear(x, k, c):      # O(n)
                return k * x + c

            def fn_squared(x, k, c):     # O(n^2)
                return k * x ** 2 + c

            def fn_pow3(x, k, c):        # O(n^3)
                return k * x ** 3 + c

            def fn_log(x, k, c):         # O(log n)
                return k * np.log10(x) + c

            def fn_nlogn(x, k, c):       # O(n log n)
                return k * x * np.log10(x) + c
    

    The full module is here: https://gist.github.com/gubenkoved/d9876ccf3ceb935e81f45c8208931fa4, along with some pictures it produces (note: 4 graphs per sample, each with a different axis scale).
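    The pipeline described above (resample the exponentially spaced points evenly, then least-squares-fit each candidate and keep the best) can be sketched as follows. `best_fit` and the synthetic data are my own illustration and not the gist's actual API:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fn_linear(x, k, c):
        return k * x + c

    def fn_squared(x, k, c):
        return k * x ** 2 + c

    def best_fit(xs, ys, candidates):
        """Return the name of the candidate with the smallest squared error."""
        xs, ys = np.asarray(xs, float), np.asarray(ys, float)
        # Redistribute the exponentially spaced samples evenly
        # via linear interpolation, as described above.
        even_x = np.linspace(xs[0], xs[-1], len(xs) * 4)
        even_y = np.interp(even_x, xs, ys)
        best_name, best_err = None, np.inf
        for name, fn in candidates.items():
            try:
                popt, _ = curve_fit(fn, even_x, even_y, maxfev=10000)
            except RuntimeError:
                continue  # fit did not converge; skip this candidate
            err = np.sum((fn(even_x, *popt) - even_y) ** 2)
            if err < best_err:
                best_name, best_err = name, err
        return best_name

    # Synthetic quadratic timings measured at exponentially growing sizes
    xs = [2 ** k for k in range(4, 12)]
    ys = [3e-7 * x ** 2 + 0.001 for x in xs]
    print(best_fit(xs, ys, {"O(n)": fn_linear, "O(n^2)": fn_squared}))
    ```

    On real measurements the residuals of several candidates can be close, so it's worth printing all of them rather than just the winner.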
