How could I make the discontinuous contour of an image consistent?

孤街浪徒 2021-02-04 15:32

In my task, I got a discontinuous edge image. How could I make it closed, in other words, make the curve continuous? The shape could be of any kind, because this is the contour of a shadow.

6 Answers
  • 2021-02-04 16:03

    Here is a slightly different approach to finish it off using ImageMagick.

    1) Threshold and dilate the contour

    convert 8y0KL.jpg -threshold 50% -morphology dilate disk:11 step1.gif
    

    2) Erode by a smaller amount

    convert step1.gif -morphology erode disk:8 step2.gif
    

    3) Pad 1 pixel all around with black, flood-fill the outside with white, then shave off the 1-pixel padding

    convert step2.gif -bordercolor black -border 1 -fill white -draw "color 0,0 floodfill" -alpha off -shave 1x1 step3.gif
    

    4) Erode by a smaller amount and get the edge on the white side of the transition. Note that we started with a dilate of 11, then eroded by 8, and now erode by 3. Since 8+3=11, this should get us back to about the center line.

    convert step3.gif -morphology erode disk:3 -morphology edgein diamond:1 step4.gif
    

    5) Create animation to compare

    convert -delay 50 8y0KL.jpg step4.gif -loop 0 animation.gif
    

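    If you would rather do the same thing in Python, here is a rough OpenCV/NumPy sketch of the pipeline above (dilate, erode, flood-fill the outside, erode back, take the inner edge). It assumes the contour is white on black; the disk radii are taken from the commands above, not tuned values.

    import cv2
    import numpy as np

    def disk(r):
        # elliptical structuring element roughly matching ImageMagick's disk:r
        return cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2*r + 1, 2*r + 1))

    img = cv2.imread('8y0KL.jpg', cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)   # -threshold 50%

    step1 = cv2.dilate(bw, disk(11))                          # 1) dilate
    step2 = cv2.erode(step1, disk(8))                         # 2) erode by less

    # 3) pad 1 px with black, flood-fill the outside with white, shave the pad
    padded = cv2.copyMakeBorder(step2, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    mask = np.zeros((padded.shape[0] + 2, padded.shape[1] + 2), np.uint8)
    cv2.floodFill(padded, mask, (0, 0), 255)
    step3 = padded[1:-1, 1:-1]

    # 4) erode 3 more (8 + 3 = 11) and keep the edge on the white side
    step4 = cv2.erode(step3, disk(3))
    edge = cv2.subtract(step4, cv2.erode(step4, disk(1)))

    cv2.imwrite('result.png', edge)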
  • 2021-02-04 16:04

    Here are a few ideas that may get you started. I don't feel like coding and debugging a load of C++ in OpenCV. Oftentimes folks ask questions and never log in again, or you spend hours working on something only to be told that the single sample image they provided was not at all representative of their actual images, and that the method you have just spent 25 minutes explaining is completely inappropriate.


    One idea is morphological dilation - you can do that at the command line like this with ImageMagick:

    convert gappy.jpg -threshold 50% -morphology dilate disk:5 result.png
    

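    For reference, roughly the same threshold-and-dilate step in OpenCV (a minimal sketch; gappy.jpg is the file name used in the command above):

    import cv2

    img = cv2.imread('gappy.jpg', cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)          # -threshold 50%
    disk5 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))   # ~ disk:5
    cv2.imwrite('result.png', cv2.dilate(bw, disk5))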

    Another idea might be to locate all the "line end" pixels with Hit-and-Miss morphology. This is available in OpenCV, but I am doing it with ImageMagick to save coding/debugging. The structuring elements are like this:

    Hopefully you can see that the first (leftmost) structuring element represents the West end of an East-West line, and that the second one represents the North end of a North-South line, and so on. If you still haven't got it, the last one is the South-West end of a North-East to South-West line.

    Basically, I find the line ends and then dilate them with blue pixels and overlay that onto the original:

    convert gappy.jpg -threshold 50%  \
       \( +clone -morphology hmt lineends -morphology dilate disk:1 -fill blue -opaque white -transparent black \) \
       -flatten result.png
    

    Here's a close-up of before and after:

    You can also find the singleton pixels with no neighbours, using a "peaks" structuring element like this:

    and then you can find all the peaks and dilate them with red pixels like this:

    convert gappy.jpg -threshold 50% \
        \( +clone -morphology hmt Peaks:1.9 -fill red -morphology dilate disk:2  -opaque white -transparent black \) \
        -flatten result.png
    

    Here is a close-up of before and after:

    Depending on how your original images look, you may be able to apply the above ideas iteratively till your contour is whole - maybe you could detect that by flood filling and seeing if your contour "holds water" without the flood fill "leaking" out everywhere.

    Obviously you would do the red peaks and the blue line ends both in white to complete your contour - I am just doing it in colour to illustrate my technique.
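    If you want to experiment with the same idea in Python, here is a small sketch that marks line ends and isolated pixels by counting each pixel's 8-neighbours with a 3x3 convolution (a plain NumPy/OpenCV alternative to the hit-and-miss kernels above; the threshold and file name are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread('gappy.jpg', cv2.IMREAD_GRAYSCALE)
    binary = (img > 127).astype(np.uint8)                  # 0/1 contour image

    # count the 8-neighbours of every pixel
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.float32)
    neighbours = cv2.filter2D(binary, -1, kernel)

    line_ends = (binary == 1) & (neighbours == 1)          # exactly one neighbour
    peaks     = (binary == 1) & (neighbours == 0)          # no neighbours at all

    # dilate the markers a little and paint them over the original:
    # blue for line ends, red for peaks (BGR order)
    out = cv2.cvtColor(binary * 255, cv2.COLOR_GRAY2BGR)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    out[cv2.dilate(line_ends.astype(np.uint8), se) > 0] = (255, 0, 0)
    out[cv2.dilate(peaks.astype(np.uint8), se) > 0] = (0, 0, 255)
    cv2.imwrite('result.png', out)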

  • 2021-02-04 16:04

    Here's another suggestion that is more "computer vision literature" oriented.

    As a rule of thumb preprocessing step, it is usually a good idea to thin all the edges to make sure they are about 1 pixel thick. A popular edge thinning method is non-maximal suppression (NMS).

    Then I would start off by analyzing the image and finding all the connected components in it. OpenCV already provides the connectedComponents function. Once the groups of connected components are determined, you can fit a Bezier curve to each group. An automatic method of fitting Bezier curves to a set of 2D points is available in the Graphics Gems book; there's also C code available for their method. The goal of fitting a Bezier curve is to get as much high-level understanding of each component group as possible.
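    As a concrete starting point for the grouping step, a minimal OpenCV sketch could look like the following (the thinning step is omitted and the file name is an assumption); each group of points is what you would then hand to the Bezier fitting code:

    import cv2
    import numpy as np

    img = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # label the connected groups of edge pixels (8-connected by default)
    num_labels, labels = cv2.connectedComponents(binary)

    # collect the (x, y) points of each component; label 0 is the background
    groups = []
    for label in range(1, num_labels):
        ys, xs = np.where(labels == label)
        groups.append(np.column_stack((xs, ys)))

    print(len(groups), 'component groups to fit Bezier curves to')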

    Next, you need to join those Bezier curves together. A method of joining lines using endpoint clustering is available in the work of Shpitalni and Lipson. In that paper, take a look at their adaptive clustering method in the section named "Entity Linking and Endpoint Clustering".

    Finally, with all the curves grouped together you can fit a final Bezier curve to all the points that you have to get a nice and natural-looking edge map.

    As a side note, you can take a look at the work of Ming-Ming Cheng on cartoon curve extraction. There's OpenCV-based code available for that method here too, but it will output the following once applied to your image:

    Disclaimer:
    I can attest to the performance of the Bezier curve fitting algorithm, as I've personally used it and it works pretty well. Cheng's curve extraction algorithm works well too; however, it will create bad-looking "blobs" on thin contours due to its use of gradient detection (which has a tendency to make thin lines thick!). If you could find a way to work around this "thickening" effect, you can skip Bezier curve fitting and jump right into endpoint clustering to join the curves together.

    Hope this helps!

  • 2021-02-04 16:17

    My proposal:

    • find the endpoints; these are the pixels with at most one neighbor, after a thinning step to discard "thick" endpoints. Endpoints should come in pairs.

    • from all endpoints, grow a digital disk until you meet another endpoint which is not the peer.

    Instead of growing a disk, you can preprocess the set of endpoints and prepare it for a nearest-neighbor search (2D-tree for instance). You will need to modify the search to avoid hitting the peer.

    This approach does not rely on standard functions, but it has the advantage of respecting the original outline.

    On the picture, the original pixels are in white or in green when they are endpoints. The yellow pixels are digital line segments drawn between the nearest endpoint pairs.
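    Here is a minimal Python sketch of the endpoint-pairing idea, using SciPy's cKDTree for the nearest-neighbour search instead of growing disks (the endpoint test is the simple "exactly one 8-neighbour" rule, and the threshold and file name are assumptions):

    import cv2
    import numpy as np
    from scipy.spatial import cKDTree

    img = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)
    binary = (img > 127).astype(np.uint8)

    # endpoints: foreground pixels with exactly one 8-neighbour
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], np.float32)
    neighbours = cv2.filter2D(binary, -1, kernel)
    ys, xs = np.where((binary == 1) & (neighbours == 1))
    endpoints = np.column_stack((xs, ys))

    # label the curve pieces so we know which endpoints are peers
    _, labels = cv2.connectedComponents(binary)
    piece = labels[ys, xs]

    # link every endpoint to the nearest endpoint belonging to a different piece
    tree = cKDTree(endpoints)
    out = binary * 255
    for i, p in enumerate(endpoints):
        _, idxs = tree.query(p, k=len(endpoints))
        for j in np.atleast_1d(idxs):
            if piece[j] != piece[i]:
                cv2.line(out, (int(p[0]), int(p[1])),
                         (int(endpoints[j][0]), int(endpoints[j][1])), 255, 1)
                break
    cv2.imwrite('closed.png', out)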

  • 2021-02-04 16:23

    Mark Setchell's answer is a fun way to learn new stuff along the way. My approach is rather simple and straightforward.

    I got the following solution off the top of my head. It involves a simple blurring operation sandwiched between two morphological operations.

    I have explained what I have done alongside the code:

    #---- I converted the image to gray scale and then performed inverted binary threshold on it. ----
    
    import cv2
    import numpy as np

    img = cv2.imread('leaf.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 127, 255, 1)   # 1 = cv2.THRESH_BINARY_INV
    

    #---- Next I performed morphological erosion with a rectangular structuring element of kernel size 7 ----
    
    kernel = np.ones((7, 7),np.uint8)
    erosion = cv2.morphologyEx(thresh, cv2.MORPH_ERODE, kernel, iterations = 2)
    cv2.imshow('erosion', erosion)
    

    #---- I then inverted this image and blurred it with a kernel size of 15. The reason for such a huge kernel is to obtain a smooth leaf edge ----
    
    ret, thresh1 = cv2.threshold(erosion, 127, 255, 1)
    blur = cv2.blur(thresh1, (15, 15))
    cv2.imshow('blur', blur)
    

    #---- I again performed another threshold on this image to get the central portion of the edge ----
    
    ret, thresh2 = cv2.threshold(blur, 145, 255, 0)
    
    #---- And then performed morphological erosion to thin the edge. For this I used an ellipse structuring element of kernel size 5 ----
    
    kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
    final = cv2.morphologyEx(thresh2, cv2.MORPH_ERODE, kernel1, iterations = 2)
    cv2.imshow('final', final)
    cv2.waitKey(0)   # keep the windows open
    

    Hope this helps :)

  • 2021-02-04 16:25

    You can try using a distance transform.

    % im is the RGB edge image (e.g. loaded with imread)
    % binarize
    im=rgb2gray(im); im=im>100;
    
    % distance of every pixel to the nearest edge pixel
    bd=bwdist(im);
    % keep only the region at least maxDist away from any edge pixel
    maxDist = 5;
    bd(bd<maxDist)=0;
    % outline of that region, discarding the part touching the image border
    bw=bwperim(bd); bw=imclearborder(bw);
    % fill the enclosed area
    bw=imfill(bw,'holes');
    % thin back by about maxDist and take the outline again
    bw=bwperim(bwmorph(bw,'thin',maxDist));
    
    % display the result (1) together with the original edges (2)
    figure,imagesc(bw+2*im),axis image
    
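    A rough Python equivalent, using the SciPy/scikit-image counterparts of those MATLAB calls (a sketch only; the file name is an assumption, and the threshold and maxDist are copied from the snippet above):

    import cv2
    import numpy as np
    from scipy import ndimage
    from skimage.morphology import thin
    from skimage.segmentation import clear_border

    rgb = cv2.imread('edges.jpg')
    im = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY) > 100          # binarize

    # distance of every pixel to the nearest edge pixel (like bwdist)
    bd = ndimage.distance_transform_edt(~im)
    max_dist = 5

    # region at least max_dist away from any edge pixel
    region = bd >= max_dist

    # outline of that region (like bwperim), dropping parts touching the border
    bw = region ^ ndimage.binary_erosion(region)
    bw = clear_border(bw)                                     # like imclearborder
    bw = ndimage.binary_fill_holes(bw)                        # like imfill 'holes'

    # thin back by about max_dist and take the outline again
    bw = thin(bw, max_dist)
    result = bw ^ ndimage.binary_erosion(bw)

    cv2.imwrite('result.png', result.astype(np.uint8) * 255)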
