Is there a way to detect if an image is blurry?


I was wondering if there is a way to determine if an image is blurry or not by analyzing the image data.

12 Answers
  • 2020-11-22 15:16

    During some work with an auto-focus lens, I came across this very useful set of algorithms for detecting image focus. It's implemented in MATLAB, but most of the functions are quite easy to port to OpenCV with filter2D.

    It's basically a survey implementation of many focus measurement algorithms. If you want to read the original papers, references to the authors of the algorithms are provided in the code. The 2012 paper by Pertuz et al., "Analysis of focus measure operators for shape-from-focus", gives a great review of all of these measures as well as their performance (both in terms of speed and accuracy as applied to SFF).

    EDIT: Added MATLAB code just in case the link dies.

    function FM = fmeasure(Image, Measure, ROI)
    %This function measures the relative degree of focus of 
    %an image. It may be invoked as:
    %
    %   FM = fmeasure(Image, Method, ROI)
    %
    %Where 
    %   Image,  is a grayscale image and FM is the computed
    %           focus value.
    %   Method, is the focus measure algorithm as a string.
    %           see 'operators.txt' for a list of focus 
    %           measure methods. 
    %   ROI,    Image ROI as a rectangle [xo yo width height].
    %           if an empty argument is passed, the whole
    %           image is processed.
    %
    %  Said Pertuz
    %  Abr/2010
    
    
    if ~isempty(ROI)
        Image = imcrop(Image, ROI);
    end
    
    WSize = 15; % Size of local window (only some operators)
    
    switch upper(Measure)
        case 'ACMO' % Absolute Central Moment (Shirvaikar2004)
            if ~isinteger(Image), Image = im2uint8(Image);
            end
            FM = AcMomentum(Image);
    
        case 'BREN' % Brenner's (Santos97)
            [M N] = size(Image);
            DH = Image;
            DV = Image;
            DH(1:M-2,:) = diff(Image,2,1);
            DV(:,1:N-2) = diff(Image,2,2);
            FM = max(DH, DV);        
            FM = FM.^2;
            FM = mean2(FM);
    
        case 'CONT' % Image contrast (Nanda2001)
            ImContrast = inline('sum(abs(x(:)-x(5)))');
            FM = nlfilter(Image, [3 3], ImContrast);
            FM = mean2(FM);
    
        case 'CURV' % Image Curvature (Helmli2001)
            if ~isinteger(Image), Image = im2uint8(Image);
            end
            M1 = [-1 0 1;-1 0 1;-1 0 1];
            M2 = [1 0 1;1 0 1;1 0 1];
            P0 = imfilter(Image, M1, 'replicate', 'conv')/6;
            P1 = imfilter(Image, M1', 'replicate', 'conv')/6;
            P2 = 3*imfilter(Image, M2, 'replicate', 'conv')/10 ...
                -imfilter(Image, M2', 'replicate', 'conv')/5;
            P3 = -imfilter(Image, M2, 'replicate', 'conv')/5 ...
                +3*imfilter(Image, M2', 'replicate', 'conv')/10;
            FM = abs(P0) + abs(P1) + abs(P2) + abs(P3);
            FM = mean2(FM);
    
        case 'DCTE' % DCT energy ratio (Shen2006)
            FM = nlfilter(Image, [8 8], @DctRatio);
            FM = mean2(FM);
    
        case 'DCTR' % DCT reduced energy ratio (Lee2009)
            FM = nlfilter(Image, [8 8], @ReRatio);
            FM = mean2(FM);
    
        case 'GDER' % Gaussian derivative (Geusebroek2000)        
            N = floor(WSize/2);
            sig = N/2.5;
            [x,y] = meshgrid(-N:N, -N:N);
            G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
            Gx = -x.*G/(sig^2);Gx = Gx/sum(Gx(:));
            Gy = -y.*G/(sig^2);Gy = Gy/sum(Gy(:));
            Rx = imfilter(double(Image), Gx, 'conv', 'replicate');
            Ry = imfilter(double(Image), Gy, 'conv', 'replicate');
            FM = Rx.^2+Ry.^2;
            FM = mean2(FM);
    
        case 'GLVA' % Graylevel variance (Krotkov86)
            FM = std2(Image);
    
        case 'GLLV' %Graylevel local variance (Pech2000)        
            LVar = stdfilt(Image, ones(WSize,WSize)).^2;
            FM = std2(LVar)^2;
    
        case 'GLVN' % Normalized GLV (Santos97)
            FM = std2(Image)^2/mean2(Image);
    
        case 'GRAE' % Energy of gradient (Subbarao92a)
            Ix = Image;
            Iy = Image;
            Iy(1:end-1,:) = diff(Image, 1, 1);
            Ix(:,1:end-1) = diff(Image, 1, 2);
            FM = Ix.^2 + Iy.^2;
            FM = mean2(FM);
    
        case 'GRAT' % Thresholded gradient (Santos97)
            Th = 0; %Threshold
            Ix = Image;
            Iy = Image;
            Iy(1:end-1,:) = diff(Image, 1, 1);
            Ix(:,1:end-1) = diff(Image, 1, 2);
            FM = max(abs(Ix), abs(Iy));
            FM(FM<Th)=0;
            FM = sum(FM(:))/sum(sum(FM~=0));
    
        case 'GRAS' % Squared gradient (Eskicioglu95)
            Ix = diff(Image, 1, 2);
            FM = Ix.^2;
            FM = mean2(FM);
    
        case 'HELM' %Helmli's mean method (Helmli2001)        
            MEANF = fspecial('average',[WSize WSize]);
            U = imfilter(Image, MEANF, 'replicate');
            R1 = U./Image;
            R1(Image==0)=1;
            index = (U>Image);
            FM = 1./R1;
            FM(index) = R1(index);
            FM = mean2(FM);
    
        case 'HISE' % Histogram entropy (Krotkov86)
            FM = entropy(Image);
    
        case 'HISR' % Histogram range (Firestone91)
            FM = max(Image(:))-min(Image(:));
    
    
        case 'LAPE' % Energy of laplacian (Subbarao92a)
            LAP = fspecial('laplacian');
            FM = imfilter(Image, LAP, 'replicate', 'conv');
            FM = mean2(FM.^2);
    
        case 'LAPM' % Modified Laplacian (Nayar89)
            M = [-1 2 -1];        
            Lx = imfilter(Image, M, 'replicate', 'conv');
            Ly = imfilter(Image, M', 'replicate', 'conv');
            FM = abs(Lx) + abs(Ly);
            FM = mean2(FM);
    
        case 'LAPV' % Variance of laplacian (Pech2000)
            LAP = fspecial('laplacian');
            ILAP = imfilter(Image, LAP, 'replicate', 'conv');
            FM = std2(ILAP)^2;
    
        case 'LAPD' % Diagonal laplacian (Thelen2009)
            M1 = [-1 2 -1];
            M2 = [0 0 -1;0 2 0;-1 0 0]/sqrt(2);
            M3 = [-1 0 0;0 2 0;0 0 -1]/sqrt(2);
            F1 = imfilter(Image, M1, 'replicate', 'conv');
            F2 = imfilter(Image, M2, 'replicate', 'conv');
            F3 = imfilter(Image, M3, 'replicate', 'conv');
            F4 = imfilter(Image, M1', 'replicate', 'conv');
            FM = abs(F1) + abs(F2) + abs(F3) + abs(F4);
            FM = mean2(FM);
    
        case 'SFIL' %Steerable filters (Minhas2009)
            % Angles = [0 45 90 135 180 225 270 315];
            N = floor(WSize/2);
            sig = N/2.5;
            [x,y] = meshgrid(-N:N, -N:N);
            G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
            Gx = -x.*G/(sig^2);Gx = Gx/sum(Gx(:));
            Gy = -y.*G/(sig^2);Gy = Gy/sum(Gy(:));
            R(:,:,1) = imfilter(double(Image), Gx, 'conv', 'replicate');
            R(:,:,2) = imfilter(double(Image), Gy, 'conv', 'replicate');
            R(:,:,3) = cosd(45)*R(:,:,1)+sind(45)*R(:,:,2);
            R(:,:,4) = cosd(135)*R(:,:,1)+sind(135)*R(:,:,2);
            R(:,:,5) = cosd(180)*R(:,:,1)+sind(180)*R(:,:,2);
            R(:,:,6) = cosd(225)*R(:,:,1)+sind(225)*R(:,:,2);
            R(:,:,7) = cosd(270)*R(:,:,1)+sind(270)*R(:,:,2);
            R(:,:,8) = cosd(315)*R(:,:,1)+sind(315)*R(:,:,2);
            FM = max(R,[],3);
            FM = mean2(FM);
    
        case 'SFRQ' % Spatial frequency (Eskicioglu95)
            Ix = Image;
            Iy = Image;
            Ix(:,1:end-1) = diff(Image, 1, 2);
            Iy(1:end-1,:) = diff(Image, 1, 1);
            FM = mean2(sqrt(double(Iy.^2+Ix.^2)));
    
        case 'TENG'% Tenengrad (Krotkov86)
            Sx = fspecial('sobel');
            Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
            Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
            FM = Gx.^2 + Gy.^2;
            FM = mean2(FM);
    
        case 'TENV' % Tenengrad variance (Pech2000)
            Sx = fspecial('sobel');
            Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
            Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
            G = Gx.^2 + Gy.^2;
            FM = std2(G)^2;
    
        case 'VOLA' % Vollath's correlation (Santos97)
            Image = double(Image);
            I1 = Image; I1(1:end-1,:) = Image(2:end,:);
            I2 = Image; I2(1:end-2,:) = Image(3:end,:);
            Image = Image.*(I1-I2);
            FM = mean2(Image);
    
        case 'WAVS' %Sum of Wavelet coeffs (Yang2003)
            [C,S] = wavedec2(Image, 1, 'db6');
            H = wrcoef2('h', C, S, 'db6', 1);   
            V = wrcoef2('v', C, S, 'db6', 1);   
            D = wrcoef2('d', C, S, 'db6', 1);   
            FM = abs(H) + abs(V) + abs(D);
            FM = mean2(FM);
    
        case 'WAVV' % Variance of wavelet coefficients (Yang2003)
            [C,S] = wavedec2(Image, 1, 'db6');
            H = abs(wrcoef2('h', C, S, 'db6', 1));
            V = abs(wrcoef2('v', C, S, 'db6', 1));
            D = abs(wrcoef2('d', C, S, 'db6', 1));
            FM = std2(H)^2+std2(V)^2+std2(D)^2;
    
        case 'WAVR'
            [C,S] = wavedec2(Image, 3, 'db6');
            H = abs(wrcoef2('h', C, S, 'db6', 1));   
            V = abs(wrcoef2('v', C, S, 'db6', 1));   
            D = abs(wrcoef2('d', C, S, 'db6', 1)); 
            A1 = abs(wrcoef2('a', C, S, 'db6', 1));
            A2 = abs(wrcoef2('a', C, S, 'db6', 2));
            A3 = abs(wrcoef2('a', C, S, 'db6', 3));
            A = A1 + A2 + A3;
            WH = H.^2 + V.^2 + D.^2;
            WH = mean2(WH);
            WL = mean2(A);
            FM = WH/WL;
        otherwise
            error('Unknown measure %s',upper(Measure))
    end
     end
    %************************************************************************
    function fm = AcMomentum(Image)
    [M N] = size(Image);
    Hist = imhist(Image)/(M*N);
    Hist = abs((0:255)-255*mean2(Image))'.*Hist;
    fm = sum(Hist);
    end
    
    %******************************************************************
    function fm = DctRatio(M)
    MT = dct2(M).^2;
    fm = (sum(MT(:))-MT(1,1))/MT(1,1);
    end
    
    %************************************************************************
    function fm = ReRatio(M)
    M = dct2(M);
    fm = (M(1,2)^2+M(1,3)^2+M(2,1)^2+M(2,2)^2+M(3,1)^2)/(M(1,1)^2);
    end
    %******************************************************************
    

    A few examples of OpenCV versions:

    // OpenCV port of 'LAPM' algorithm (Nayar89)
    double modifiedLaplacian(const cv::Mat& src)
    {
        cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
        cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
    
        cv::Mat Lx;
        cv::sepFilter2D(src, Lx, CV_64F, M, G);
    
        cv::Mat Ly;
        cv::sepFilter2D(src, Ly, CV_64F, G, M);
    
        cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
    
        double focusMeasure = cv::mean(FM).val[0];
        return focusMeasure;
    }
    
    // OpenCV port of 'LAPV' algorithm (Pech2000)
    double varianceOfLaplacian(const cv::Mat& src)
    {
        cv::Mat lap;
        cv::Laplacian(src, lap, CV_64F);
    
        cv::Scalar mu, sigma;
        cv::meanStdDev(lap, mu, sigma);
    
        double focusMeasure = sigma.val[0]*sigma.val[0];
        return focusMeasure;
    }
    
    // OpenCV port of 'TENG' algorithm (Krotkov86)
    double tenengrad(const cv::Mat& src, int ksize)
    {
        cv::Mat Gx, Gy;
        cv::Sobel(src, Gx, CV_64F, 1, 0, ksize);
        cv::Sobel(src, Gy, CV_64F, 0, 1, ksize);
    
        cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
    
        double focusMeasure = cv::mean(FM).val[0];
        return focusMeasure;
    }
    
    // OpenCV port of 'GLVN' algorithm (Santos97)
    double normalizedGraylevelVariance(const cv::Mat& src)
    {
        cv::Scalar mu, sigma;
        cv::meanStdDev(src, mu, sigma);
    
        double focusMeasure = (sigma.val[0]*sigma.val[0]) / mu.val[0];
        return focusMeasure;
    }
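
    For reference, a minimal usage sketch for the ports above (my own addition, not part of the original answer): it loads a grayscale image and prints the four scores. The absolute values are scene dependent, so any "blurry" threshold has to be calibrated on your own sharp/blurred samples.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main(int argc, char** argv)
    {
        if (argc < 2) { std::cerr << "usage: focus <image>\n"; return 1; }

        // Load the input as a single-channel grayscale image.
        cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        if (gray.empty()) { std::cerr << "Could not read " << argv[1] << "\n"; return 1; }

        std::cout << "LAPM: " << modifiedLaplacian(gray)           << "\n"
                  << "LAPV: " << varianceOfLaplacian(gray)         << "\n"
                  << "TENG: " << tenengrad(gray, 3)                << "\n"
                  << "GLVN: " << normalizedGraylevelVariance(gray) << "\n";

        // No universal cut-off exists: compare scores between frames of the same
        // scene, or calibrate thresholds on known sharp/blurred samples.
        return 0;
    }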
    

    No guarantees on whether or not these measures are the best choice for your problem, but if you track down the papers associated with these measures, they may give you more insight. Hope you find the code useful! I know I did.

  • 2020-11-22 15:17

    One method I'm currently using measures the spread of edges in the image. Look for this paper:

    @ARTICLE{Marziliano04perceptualblur,
        author  = {Pina Marziliano and Frederic Dufaux and Stefan Winkler and Touradj Ebrahimi},
        title   = {Perceptual blur and ringing metrics: Application to JPEG2000},
        journal = {Signal Processing: Image Communication},
        year    = {2004},
        pages   = {163--172}
    }
    

    It's usually behind a paywall but I've seen some free copies around. Basically, they locate vertical edges in an image, and then measure how wide those edges are. Averaging the widths gives the final blur estimate for the image. Wider edges correspond to blurry images, and vice versa.

    This problem belongs to the field of no-reference image quality estimation. If you look it up on Google Scholar, you'll get plenty of useful references.
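
    To illustrate the idea, here is my own rough OpenCV sketch (not the authors' reference implementation): detect strong vertical edges, then for each edge pixel walk left and right along the row until the intensity stops falling/rising, and take that distance as the edge width. The Sobel-based edge selection and the 20% response threshold are arbitrary choices you would need to tune.

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Rough edge-width blur estimate in the spirit of Marziliano et al.:
    // a wider average edge => a blurrier image.
    double edgeWidthBlur(const cv::Mat& gray)
    {
        CV_Assert(gray.type() == CV_8UC1);

        // Vertical edges show up as a horizontal intensity gradient.
        cv::Mat gx;
        cv::Sobel(gray, gx, CV_32F, 1, 0, 3);

        // Keep only strong edges (20% of the maximum response; arbitrary).
        cv::Mat agx = cv::abs(gx);
        double maxAbs = 0.0;
        cv::minMaxLoc(agx, nullptr, &maxAbs);
        const float thresh = 0.2f * static_cast<float>(maxAbs);

        double widthSum = 0.0;
        int edgeCount = 0;

        for (int y = 0; y < gray.rows; ++y)
        {
            const uchar* row = gray.ptr<uchar>(y);
            const float* g = gx.ptr<float>(y);
            for (int x = 1; x < gray.cols - 1; ++x)
            {
                if (std::abs(g[x]) < thresh) continue;

                // Walk to the nearest local extrema on both sides of the edge.
                int left = x;
                while (left > 0 && (row[left - 1] - row[left]) * g[x] < 0) --left;
                int right = x;
                while (right < gray.cols - 1 && (row[right + 1] - row[right]) * g[x] > 0) ++right;

                widthSum += right - left;
                ++edgeCount;
            }
        }
        // Average edge width in pixels (divide by image width to compare sizes).
        return edgeCount ? widthSum / edgeCount : 0.0;
    }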

    EDIT

    Here's a plot of the blur estimates obtained for the 5 images in nikie's post. Higher values correspond to greater blur. I used a fixed-size 11x11 Gaussian filter and varied the standard deviation (using ImageMagick's convert command to obtain the blurred images).

    [plot: blur estimates for the five test images]

    If you compare images of different sizes, don't forget to normalize by the image width, since larger images will have wider edges.

    Finally, a significant problem is distinguishing between artistic blur and undesired blur (caused by missed focus, compression, or relative motion between the subject and the camera), but that is beyond simple approaches like this one. For an example of artistic blur, have a look at the Lenna image: Lenna's reflection in the mirror is blurry, but her face is perfectly in focus. This contributes to a higher blur estimate for the Lenna image.

  • 2020-11-22 15:24

    Building off of nikie's answer: it's straightforward to implement the Laplacian-based method with OpenCV:

    short GetSharpness(char* data, unsigned int width, unsigned int height)
    {
        // assumes that your image is already in planar YUV or 8-bit grayscale
        IplImage* in = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,1);
        IplImage* out = cvCreateImage(cvSize(width,height),IPL_DEPTH_16S,1);
        memcpy(in->imageData,data,width*height);
    
        // aperture size of 1 corresponds to the correct matrix
        cvLaplace(in, out, 1);
    
        short maxLap = -32767;
        short* imgData = (short*)out->imageData;
        for(int i =0;i<(out->imageSize/2);i++)
        {
            if(imgData[i] > maxLap) maxLap = imgData[i];
        }
    
        cvReleaseImage(&in);
        cvReleaseImage(&out);
        return maxLap;
    }
    

    This returns a short indicating the maximum sharpness detected which, based on my tests on real-world samples, is a pretty good indicator of whether a camera is in focus or not. Not surprisingly, normal values are scene dependent, but much less so than with the FFT method, which had too high a false-positive rate to be useful in my application.
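
    If you are on the modern C++ API rather than the legacy IplImage one, the same idea looks roughly like this (my sketch, not a drop-in replacement; the CV_16S depth and aperture size 1 mirror the snippet above):

    #include <opencv2/opencv.hpp>

    // Maximum Laplacian response as a sharpness score, mirroring the
    // legacy-API snippet above but using cv::Mat.
    double maxLaplacianSharpness(const cv::Mat& gray8u)
    {
        CV_Assert(gray8u.type() == CV_8UC1);

        cv::Mat lap;
        cv::Laplacian(gray8u, lap, CV_16S, 1); // aperture size 1, as in the original

        double minVal = 0.0, maxVal = 0.0;
        cv::minMaxLoc(lap, &minVal, &maxVal);

        // The original keeps the (signed) maximum response.
        return maxVal;
    }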

  • 2020-11-22 15:24

    I came up with a totally different solution. I needed to analyse video still frames to find the sharpest one in every X frames. This way, I would detect motion blur and/or out-of-focus images.

    I ended up using Canny edge detection, and I got very, very good results with almost every kind of video (with nikie's method, I had problems with digitized VHS videos and heavily interlaced videos).

    I optimized the performance by setting a region of interest (ROI) on the original image.

    Using EmguCV:

    //Convert image using Canny
    using (Image<Gray, byte> imgCanny = imgOrig.Canny(225, 175))
    {
        //Count the number of pixel representing an edge
        int nCountCanny = imgCanny.CountNonzero()[0];
    
        //Compute a sharpness grade:
        //< 1.5 = blurred, in motion
        //1.5 to 6 = acceptable
        //> 6 = stable, sharp
        double dSharpness = (nCountCanny * 1000.0 / (imgCanny.Cols * imgCanny.Rows));
    }
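
    For anyone using plain OpenCV in C++ rather than EmguCV, an equivalent sketch of the same edge-density grade (my port of the snippet above; the Canny thresholds and the cut-off values are the ones quoted above and still need tuning for your own footage):

    #include <opencv2/opencv.hpp>

    // Edge-density sharpness grade, mirroring the EmguCV snippet above:
    // count Canny edge pixels per 1000 image pixels.
    double cannySharpnessGrade(const cv::Mat& gray)
    {
        cv::Mat edges;
        cv::Canny(gray, edges, 175, 225);         // same thresholds as the C# version

        int edgePixels = cv::countNonZero(edges); // pixels flagged as edges

        // Grade interpretation from the answer above:
        //   < 1.5  -> blurred / in motion
        //   1.5..6 -> acceptable
        //   > 6    -> stable, sharp
        return edgePixels * 1000.0 / (static_cast<double>(edges.cols) * edges.rows);
    }
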
    
  • 2020-11-22 15:29

    MATLAB code for two methods that have been published in a highly regarded journal (IEEE Transactions on Image Processing) is available here: https://ivulab.asu.edu/software

    Check the CPBDM and JNBM algorithms. If you look at the code, it is not very hard to port, and incidentally it is based on Marziliano's method as its basic feature.

  • 2020-11-22 15:30

    The answers above elucidate many things, but I think it is useful to make a conceptual distinction.

    What if you take a perfectly in-focus picture of a blurred image?

    The blur-detection problem is only well posed when you have a reference. If you need to design, e.g., an auto-focus system, you compare a sequence of images taken with different degrees of blurring, or smoothing, and you try to find the point of minimum blurring within this set. In other words, you need to cross-reference the various images using one of the techniques illustrated above (basically, with various possible levels of refinement in the approach, looking for the one image with the highest high-frequency content).
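
    As a concrete illustration of that cross-referencing idea, here is a minimal sketch (my own) that picks the sharpest frame out of a window of frames by comparing a focus measure across the set rather than thresholding a single absolute value. It assumes a function such as varianceOfLaplacian from the earlier answer is in scope.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Return the index of the frame with the highest focus measure.
    size_t sharpestFrameIndex(const std::vector<cv::Mat>& grayFrames)
    {
        CV_Assert(!grayFrames.empty());

        size_t best = 0;
        double bestScore = -1.0;
        for (size_t i = 0; i < grayFrames.size(); ++i)
        {
            // Any of the measures above works; only the relative ordering matters.
            double score = varianceOfLaplacian(grayFrames[i]);
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best;
    }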
