Question
I want to do some planar rectification, to convert from left to right:
I have the code to do the correction, but I need the 4 corner coords.
I'm using the following code to find them:
import cv2
image = cv2.imread('input.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 120, 255, 1)
corners = cv2.goodFeaturesToTrack(canny,4,0.5,50)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (int(x), int(y)), 5, (36, 255, 12), -1)

cv2.imshow("result", image)
cv2.waitKey()
It reads the image and transforms it to grayscale plus a Canny edge map, but the resulting corners (found by cv2.goodFeaturesToTrack) aren't the desired ones:
I need the external corners of the card. Any clue on how to achieve that?
Thanks
This is the input.png:
Answer 1:
Canny is a tool for edge detection, and if correctly tuned it does what it says on the tin.
Once you get the edges, you must define what a corner is. For instance, is it a sharp turn in an edge?
You'd like to use cv2.goodFeaturesToTrack, which is supposed to be a corner-detection tool, but once again, what is a corner? It uses the Shi-Tomasi algorithm to find the N "best" corners in an image, where "best" comes down to a quality threshold and a minimum distance between points.
In the end, it will almost never give you exactly the four corners you want. You should try these alternatives and stick with the best option:
try to get more corners and geometrically determine the four "outermost" ones (see the sketch after this list).
combine your method with some other transformation, or object-matching. For instance, if you are looking for a rectangular-ish image, try to match it against a template, compute the transform matrix and resolve edges after transformation.
use a different edge detection method, or a combination of methods.
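As a minimal sketch of the first alternative (the file name and the goodFeaturesToTrack parameters are placeholders, not tuned values), you can request many candidate corners and then keep the four geometrically outermost ones with the usual sum/difference ordering trick:

import cv2
import numpy as np

img = cv2.imread('input.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# ask for far more than 4 candidates so the true card corners are likely among them
pts = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01, minDistance=10)
pts = pts.reshape(-1, 2)

s = pts.sum(axis=1)                 # x + y
d = np.diff(pts, axis=1).ravel()    # y - x
outer = np.array([
    pts[np.argmin(s)],   # top-left: smallest x + y
    pts[np.argmin(d)],   # top-right: smallest y - x
    pts[np.argmax(s)],   # bottom-right: largest x + y
    pts[np.argmax(d)],   # bottom-left: largest y - x
])

for (x, y) in outer:
    cv2.circle(img, (int(x), int(y)), 5, (36, 255, 12), -1)
cv2.imshow('outermost candidates', img)
cv2.waitKey(0)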
Note that a card doesn't have sharp corners like a piece of paper, so any "corner" you pick on the rounded edges will either crop the card or skew it. To avoid the skew you would instead have to locate points outside the actual "white" of the card, i.e. inscribe the card in a sharp-cornered rectangle, and note that Canny is not effective for that.
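One pragmatic way to get such a sharp-cornered rectangle is to let cv2.minAreaRect supply it from the card's contour. A rough sketch, assuming the card is the largest bright region (the threshold value and file name are placeholders):

import cv2
import numpy as np

img = cv2.imread('input.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]

# the largest external contour is assumed to be the card
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
card = max(contours, key=cv2.contourArea)

rect = cv2.minAreaRect(card)   # (center, (w, h), angle) of the enclosing rotated rectangle
box = cv2.boxPoints(rect)      # its four sharp corners, as float32 points
print('Sharp rectangle corners:', box)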
Answer 2:
Update: Added four point perspective transform.
I had originally skipped the perspective transform, since the question is about finding the right corners.
You can skip the loop by grabbing the contour with the maximum area and processing just that one (a short sketch follows below). Some blurring may help further. Press the Esc key (or any key) to step to the next image output.
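A minimal sketch of that shortcut (the blur kernel and threshold value are just placeholders):

import cv2

img = cv2.imread('resources/KSuVq.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # optional smoothing
ret, thresh = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

contours, h = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)          # process only the largest contour
print('Largest contour area:', cv2.contourArea(cnt))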
Another useful reference: how to find corner points of a shape in an image in OpenCV?
Output Images
Code
"""
Task: Detect card corners and fix perspective
"""
import cv2
import numpy as np
img = cv2.imread('resources/KSuVq.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,0)
cv2.imshow('Thresholded original',thresh)
cv2.waitKey(0)
## Get contours
contours,h = cv2.findContours(thresh,cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
## only draw contours that have big areas
imx = img.shape[0]
imy = img.shape[1]
lp_area = (imx * imy) / 10
#################################################################
# Four point perspective transform
# https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
#################################################################
def order_points(pts):
    # initialize a list of coordinates that will be ordered
    # such that the first entry in the list is the top-left,
    # the second entry is the top-right, the third is the
    # bottom-right, and the fourth is the bottom-left
    rect = np.zeros((4, 2), dtype = "float32")

    # the top-left point will have the smallest sum, whereas
    # the bottom-right point will have the largest sum
    s = pts.sum(axis = 1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]

    # now, compute the difference between the points, the
    # top-right point will have the smallest difference,
    # whereas the bottom-left will have the largest difference
    diff = np.diff(pts, axis = 1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]

    # return the ordered coordinates
    return rect


def four_point_transform(image, pts):
    # obtain a consistent order of the points and unpack them
    # individually
    rect = order_points(pts)
    (tl, tr, br, bl) = rect

    # compute the width of the new image, which will be the
    # maximum distance between bottom-right and bottom-left
    # x-coordinates or the top-right and top-left x-coordinates
    widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
    widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
    maxWidth = max(int(widthA), int(widthB))

    # compute the height of the new image, which will be the
    # maximum distance between the top-right and bottom-right
    # y-coordinates or the top-left and bottom-left y-coordinates
    heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
    heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
    maxHeight = max(int(heightA), int(heightB))

    # now that we have the dimensions of the new image, construct
    # the set of destination points to obtain a "birds eye view",
    # (i.e. top-down view) of the image, again specifying points
    # in the top-left, top-right, bottom-right, and bottom-left
    # order
    dst = np.array([
        [0, 0],
        [maxWidth - 1, 0],
        [maxWidth - 1, maxHeight - 1],
        [0, maxHeight - 1]], dtype = "float32")

    # compute the perspective transform matrix and then apply it
    M = cv2.getPerspectiveTransform(rect, dst)
    warped = cv2.warpPerspective(image, M, (maxWidth, maxHeight))

    # return the warped image
    return warped
#################################################################
## Get only rectangles given exceeding area
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
    ## calculate number of vertices
    #print(len(approx))

    if len(approx) == 4 and cv2.contourArea(cnt) > lp_area:
        print("rectangle")

        tmp_img = img.copy()
        cv2.drawContours(tmp_img, [cnt], 0, (0, 255, 255), 6)
        cv2.imshow('Contour Borders', tmp_img)
        cv2.waitKey(0)

        tmp_img = img.copy()
        cv2.drawContours(tmp_img, [cnt], 0, (255, 0, 255), -1)
        cv2.imshow('Contour Filled', tmp_img)
        cv2.waitKey(0)

        # Make a hull around the contour and draw it on the original image
        tmp_img = img.copy()
        mask = np.zeros((img.shape[:2]), np.uint8)
        hull = cv2.convexHull(cnt)
        cv2.drawContours(mask, [hull], 0, (255, 255, 255), -1)
        cv2.imshow('Convex Hull Mask', mask)
        cv2.waitKey(0)

        # Draw minimum area rectangle
        tmp_img = img.copy()
        rect = cv2.minAreaRect(cnt)
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        cv2.drawContours(tmp_img, [box], 0, (0, 0, 255), 2)
        cv2.imshow('Minimum Area Rectangle', tmp_img)
        cv2.waitKey(0)

        # Draw bounding rectangle
        tmp_img = img.copy()
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(tmp_img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('Bounding Rectangle', tmp_img)
        cv2.waitKey(0)

        # Bounding Rectangle and Minimum Area Rectangle
        tmp_img = img.copy()
        rect = cv2.minAreaRect(cnt)
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        cv2.drawContours(tmp_img, [box], 0, (0, 0, 255), 2)
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(tmp_img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('Bounding Rectangle', tmp_img)
        cv2.waitKey(0)

        # determine the most extreme points along the contour
        # https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/
        tmp_img = img.copy()
        extLeft = tuple(cnt[cnt[:, :, 0].argmin()][0])
        extRight = tuple(cnt[cnt[:, :, 0].argmax()][0])
        extTop = tuple(cnt[cnt[:, :, 1].argmin()][0])
        extBot = tuple(cnt[cnt[:, :, 1].argmax()][0])
        cv2.drawContours(tmp_img, [cnt], -1, (0, 255, 255), 2)
        cv2.circle(tmp_img, extLeft, 8, (0, 0, 255), -1)
        cv2.circle(tmp_img, extRight, 8, (0, 255, 0), -1)
        cv2.circle(tmp_img, extTop, 8, (255, 0, 0), -1)
        cv2.circle(tmp_img, extBot, 8, (255, 255, 0), -1)
        print("Corner Points: ", extLeft, extRight, extTop, extBot)
        cv2.imshow('img contour drawn', tmp_img)
        cv2.waitKey(0)
        #cv2.destroyAllWindows()

        ## Perspective Transform
        tmp_img = img.copy()
        pts = np.array([extLeft, extRight, extTop, extBot])
        warped = four_point_transform(tmp_img, pts)
        cv2.imshow("Warped", warped)
        cv2.waitKey(0)

cv2.destroyAllWindows()
References
https://docs.opencv.org/4.5.0/dd/d49/tutorial_py_contour_features.html
https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/
https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
Answer 3:
Here is one way to find the corners in Python/OpenCV. Note that this is more complicated because the green dots on the input complicate the issue, and they likely would not be present in the real input image. One could simply threshold on the green dots using cv2.inRange() to find them, but I will assume that is not really what you want.
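For completeness, a minimal sketch of that cv2.inRange() idea (the HSV bounds are rough guesses, not values measured from the image):

import cv2

img = cv2.imread('hello.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))   # assumed "green-ish" range

# each green dot becomes a small blob; its centroid approximates a corner
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    M = cv2.moments(c)
    if M['m00'] > 0:
        print(int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))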
- Read the input
- Convert to gray
- Threshold
- Get the largest contour and draw it on the input
- Reduce the number of vertices in the contour as a polygon and draw the polygon on the input.
- The polygon has 5 vertices and two are virtually the same. Normally one would get 4 vertices if the green dots were not there. So draw a white filled polygon on a black background.
- Get the corners from the white polygon on black background and draw on these vertices
- Save the results
Input:
import cv2
import numpy as np
import time
# load image
img = cv2.imread("hello.png")
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
# get the largest contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
peri = cv2.arcLength(big_contour, True)
# draw contour on input in red
result = img.copy()
result2 = np.zeros_like(img)
cv2.drawContours(result, [big_contour], 0, (0,0,255), 1)
cv2.drawContours(result2, [big_contour], 0, (0,0,255), 1)
# reduce to fewer vertices on polygon
poly = cv2.approxPolyDP(big_contour, 0.1 * peri, False)
# draw polygon on input in green
cv2.polylines(result, [poly], False, (0,255,0), 1)
cv2.polylines(result2, [poly], False, (0,255,0), 1)
# list polygon points
print("Polygon Points:")
for p in poly:
    px = p[0][0]
    py = p[0][1]
    print(px, py)

print('')
# draw white filled polygon on black background
result3 = np.zeros_like(thresh)
cv2.fillPoly(result3,[poly],255)
# get corners
corners = cv2.goodFeaturesToTrack(result3,4,0.01,50,useHarrisDetector=True,k=0.04)
# print corner coords and draw circles
result3 = cv2.merge([result3,result3,result3])
print("Corners:")
for c in corners:
    x, y = c.ravel()
    print(int(x), int(y))
    cv2.circle(result3, (int(x), int(y)), 3, (0, 0, 255), -1)
# save result
cv2.imwrite("hello_contours.png", result)
cv2.imwrite("hello_polygon.png", result2)
cv2.imwrite("hello_corners.png", result3)
# display it
cv2.imshow("thresh", thresh)
cv2.imshow("result", result)
cv2.imshow("result2", result2)
cv2.imshow("result3", result3)
cv2.waitKey(0)
Contours and Polygon on input image:
Contours and Polygon on black background:
Polygon Vertices:
227 69
41 149
114 284
307 167
228 70
Note the first and last vertices are within one pixel of each other (a small deduplication sketch is given at the end of this answer).
Corners on white polygon on black background:
Corner Vertices:
306 167
42 149
114 283
227 69
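If you wanted to drop such near-duplicate vertices directly from the polygon (instead of re-detecting corners on the filled mask), a rough sketch using the points printed above and an assumed 2-pixel tolerance:

import numpy as np

poly_pts = np.array([[227, 69], [41, 149], [114, 284], [307, 167], [228, 70]], dtype=float)

merged = []
for p in poly_pts:
    # keep a point only if it is clearly distinct from every point kept so far
    if all(np.linalg.norm(p - q) > 2 for q in merged):
        merged.append(p)
print(np.array(merged, dtype=int))   # leaves the four distinct corners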
Source: https://stackoverflow.com/questions/64860785/opencv-using-canny-and-shi-tomasi-to-detect-round-corners-of-a-playing-card