How to extract rectangles of varying edge intensity from images?


Question


I am trying to extract the account number from an image of a cheque. My approach is to find the rectangle that contains the account number, slice out that bounding rectangle, and feed the slice into an OCR engine to get the text out of it.

The problem I am facing is that when the rectangle is not very prominent and is light in colour, I cannot get the rectangle contour, since the thresholded edges are not fully connected.

How can I overcome this? Things I tried that did not work:

  1. I cannot increase the number of erosion iterations, because then the edges connect with the surrounding black pixels and form a different shape.
  2. Reducing the threshold offset might help, but it seems inefficient, since the code has to work with several types of images. I could start with an offset of 10 and keep incrementing it until a rectangle is found (see the sketch after this list), but that would add a lot of time for cheques with prominent rectangles that already work well at an offset of 20 or more; and since I have no condition to check whether the rectangle edges are prominent, the loop would have to run on every cheque.
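
For reference, here is a rough sketch of that incremental-offset loop, only to illustrate why it is slow; the stop value and step size are arbitrary, and the contour filter is the same one used in the code further down:

# Sketch of the incremental-offset loop from point 2 (not part of my current code).
import cv2
import numpy as np
from skimage.filters import threshold_adaptive

def find_account_rectangle(gray, start=10, stop=40, step=5):
    # Try increasingly large offsets until a rectangular contour shows up.
    for offset in range(start, stop + 1, step):
        binary = threshold_adaptive(gray, 251, offset=offset).astype("uint8") * 255
        binary = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=5)
        (_, cnts, _) = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        rects = [c for c in cnts
                 if len(cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)) == 4]
        if rects:
            return max(rects, key=cv2.contourArea)
    return None  # no rectangle found at any offset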

Keeping the above points in mind, can someone help me with a solution to this problem?

Libraries used and versions

scikit-image==0.13.1
opencv-python==3.3.0.10

Code

from skimage.filters import threshold_adaptive, threshold_local
import cv2
import numpy as np

Step 1:

image = cv2.imread('cropped.png')

Step 2:

Use adaptive thresholding from skimage to remove the background, so that I can isolate the account-number rectangle. This works fine for cheques where the rectangle is pronounced, but when the rectangle edges are thin or lighter in colour, the threshold produces unconnected edges and I cannot find the contour. Examples of this are attached further down in the question.

# cv2.imread returns BGR, so convert with COLOR_BGR2GRAY
account_number_block = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
account_number_block = threshold_adaptive(account_number_block, 251, offset=20)
account_number_block = account_number_block.astype("uint8") * 255
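
As a side note, threshold_adaptive is already deprecated in this scikit-image version and is removed in later releases; a rough equivalent using threshold_local (which returns per-pixel thresholds instead of a binary image) would be:

# Rough equivalent using threshold_local (newer scikit-image); it returns a
# per-pixel threshold surface, so the binary image comes from a comparison.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
local_thresh = threshold_local(gray, 251, offset=20)
account_number_block = (gray > local_thresh).astype("uint8") * 255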

Step 3:

Erode the image a bit to try to connect small disconnections in the edges

kernel = np.ones((3,3), np.uint8)

account_number_block = cv2.erode(account_number_block, kernel, iterations=5)

Find the contours

(_, cnts, _) = cv2.findContours(account_number_block.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# cnts = sorted(cnts, key=cv2.contourArea)[:3]
rect_cnts = [] # Rectangular contours 
for cnt in cnts:
    approx = cv2.approxPolyDP(cnt,0.01*cv2.arcLength(cnt,True),True)
    if len(approx) == 4:
        rect_cnts.append(cnt)


rect_cnts = sorted(rect_cnts, key=cv2.contourArea, reverse=True)[:1]
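
To finish the pipeline described at the top (slice out the bounding rectangle and feed it to an OCR engine), a minimal sketch could look like this; pytesseract is only an example OCR backend here, not something the code above depends on:

# Sketch: crop the detected rectangle and OCR it (assumes pytesseract is installed).
import pytesseract
from PIL import Image

if rect_cnts:
    x, y, w, h = cv2.boundingRect(rect_cnts[0])
    crop = image[y:y + h, x:x + w]
    # image came from cv2.imread, so it is BGR; convert before handing it to PIL
    text = pytesseract.image_to_string(Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)))
    print(text)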

Working Example

Step 1: Original Image

Step 2: After thresholding to remove the background.

Step 3: Finding contours to find rectangle box of the account number.

Failing example - light rectangular boundary.

Step 1: Read original image

Step 2: After thresholding to remove the background. Notice that the edges of the rectangle are not connected, which is why I cannot get the contour out of it.

Step 3: Finding contours to find rectangle box of the account number.


Answer 1:


import numpy as np
import cv2
import pytesseract as pt
from PIL import Image


# run main
if __name__ == "__main__":

    image = cv2.imread("image.jpg", -1)

    # resize image to speed up computation
    rows,cols,_ = image.shape
    image = cv2.resize(image, (np.int32(cols/2),np.int32(rows/2)))

    # convert to gray and binarize
    gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    binary_img = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)

    # note: erosion and dilation work on a white foreground
    binary_img = cv2.bitwise_not(binary_img)

    # dilate the image to fill the gaps
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
    dilated_img = cv2.morphologyEx(binary_img, cv2.MORPH_DILATE, kernel,iterations=2)

    # find contours, discard contours which do not belong to a rectangle
    (_, cnts, _) = cv2.findContours(dilated_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rect_cnts = [] # Rectangular contours 
    for cnt in cnts:
        approx = cv2.approxPolyDP(cnt,0.01*cv2.arcLength(cnt,True),True)
        if len(approx) == 4:
            rect_cnts.append(cnt)

    # sort contours based on area
    rect_cnts = sorted(rect_cnts, key=cv2.contourArea, reverse=True)[:1]

    # find bounding rectangle of biggest contour
    box = cv2.boundingRect(rect_cnts[0])
    x,y,w,h = box[:]

    # extract rectangle from the original image
    newimg = image[y:y+h,x:x+w]

    # use 'pytesseract' to get the text in the new image
    text = pt.image_to_string(Image.fromarray(newimg))
    print(text)

    cv2.namedWindow('Image', cv2.WINDOW_NORMAL)
    cv2.imshow('Image', newimg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

result: 03541140011724

result: 34785736216



Source: https://stackoverflow.com/questions/50445459/how-to-extract-rectangles-of-varying-edge-intensity-from-images
