What's the theory behind computing variance of an image?

孤城傲影 2021-02-14 21:41

I am trying to compute the blurriness of an image using LaplacianFilter.

According to this article: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-open

2 Answers
  •  旧巷少年郎
    2021-02-14 22:38

    One-sentence description:

    A blurred image's edges are smoothed, so its variance is small.


    1. How the variance is calculated

    The core function of the post is:

    def variance_of_laplacian(image):
        # compute the Laplacian of the image and then return the focus
        # measure, which is simply the variance of the Laplacian
        return cv2.Laplacian(image, cv2.CV_64F).var()
    

    Since OpenCV-Python uses numpy.ndarray to represent images, let's have a look at numpy.var:

    Help on function var in module numpy.core.fromnumeric:
    
    var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>)

    2. Applying it to an image

    That is to say, var is computed on the flattened Laplacian image, i.e. over the flattened 1-D array.
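A quick check that `var()` with the default `axis=None` indeed treats the array as if it were flattened (the small array below is a stand-in for a Laplacian response):

```python
import numpy as np

# stand-in for a Laplacian response image
lap = np.array([[0.0, 2.0], [-2.0, 0.0]])

# var() with axis=None equals var() of the flattened array
print(lap.var())          # 2.0
print(lap.ravel().var())  # 2.0
```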

    The variance of an array x is calculated as:

    var = mean(abs(x - x.mean())**2)


    For example:

    >>> x = np.array([[1, 2], [3, 4]])
    >>> x.var()
    1.25
    >>> np.mean(np.abs(x - x.mean())**2)
    1.25
    

    The Laplacian image is an edge image. To see the effect, blur the source image with GaussianBlur at different radii r, run the Laplacian filter on each result, and compute the variances.

    The more blurred the image, the smoother its edges, and the smaller the variance.
