How to improve image quality to extract text from an image using Tesseract

Submitted by 天大地大妈咪最大 on 2020-02-25 06:37:16

Question


I'm trying to use Tesseract in the code below to extract the two lines of text from the image. I tried to improve the image quality, but even so it didn't work.

Can anyone help me?

from PIL import Image, ImageEnhance, ImageFilter
import pytesseract

# load the source image
img = Image.open(r'C:\ocr\test00.jpg')
# upscale it 4x with antialiasing before running OCR
new_size = tuple(4*x for x in img.size)
img = img.resize(new_size, Image.ANTIALIAS)
img.save(r'C:\\test02.jpg', 'JPEG')

print(pytesseract.image_to_string(img))

Answer 1:


Given the comment by @barny I don't know if this will work, but you can try the code below. I created a script that selects the display area and warps it into a straight image. Next, a threshold creates a black-and-white mask of the characters, and the result is cleaned up a bit.

Try it and see whether recognition improves. If it does, also look at the intermediate stages so you understand everything that happens.

Update: it seems Tesseract prefers black text on a white background, so I inverted and dilated the result.

Result:

Updated result:

Code:

import numpy as np 
import cv2
# load image
image = cv2.imread('disp.jpg')

# create grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY)

# find contours; [-2:] keeps (contours, hierarchy) under both OpenCV 3.x and 4.x
contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# select the largest contour
largest_area = 0
for cnt in contours:
    if cv2.contourArea(cnt) > largest_area:
        cont = cnt
        largest_area = cv2.contourArea(cnt)

# find the rectangle (and the corner points of that rectangle) that surrounds the contour / photo
rect = cv2.minAreaRect(cont)
box = cv2.boxPoints(rect)
box = box.astype(np.intp)  # same integer type as the old np.int0, which was removed in NumPy 2.0

#### Warp the display area to a straight 500x110 rectangle
# assign corner points of the region of interest
pts1 = np.float32([box[2],box[3],box[1],box[0]])
# provide new coordinates for those corner points
pts2 = np.float32([[0,0],[500,0],[0,110],[500,110]])

# determine and apply the transformation matrix
M = cv2.getPerspectiveTransform(pts1,pts2)
tmp = cv2.warpPerspective(image,M,(500,110))

# create grayscale
gray_image2 = cv2.cvtColor(tmp, cv2.COLOR_BGR2GRAY)
# perform inverted threshold (characters become white on black)
retr, mask2 = cv2.threshold(gray_image2, 160, 255, cv2.THRESH_BINARY_INV)

# remove noise / close gaps
kernel =  np.ones((5,5),np.uint8)
result = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)

# draw the detected rectangle on the original image
cv2.drawContours(image, [box], 0, (255,0,0), 2)

# dilate result to make characters more solid
kernel2 =  np.ones((3,3),np.uint8)
result = cv2.dilate(result,kernel2,iterations = 1)

# invert to get black text on a white background
result = cv2.bitwise_not(result)

# show the result and the original image
cv2.imshow("Result", result)
cv2.imshow("Image", image)

cv2.waitKey(0)
cv2.destroyAllWindows()
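
The script above stops at the cleaned-up mask, so to answer the actual question the result still has to go through Tesseract. Here is a minimal sketch of that last step, assuming pytesseract is installed (it accepts NumPy arrays directly); the --psm 6 page-segmentation mode, which treats the image as a single uniform block of text, is my assumption for these two short lines and not part of the original answer:

import pytesseract

# run OCR on the inverted, dilated mask (black text on white background)
# --psm 6: assume a single uniform block of text (assumed setting, tune as needed)
text = pytesseract.image_to_string(result, config='--psm 6')
print(text)

If your pytesseract version does not accept arrays, write the mask to disk with cv2.imwrite first and pass the filename instead.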


Source: https://stackoverflow.com/questions/54497882/how-improve-image-quality-to-extract-text-from-image-using-tesseract
