Text detection on Seven Segment Display via Tesseract OCR

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-03 06:46:14

Question


The problem I am running into is extracting text from an image, and for this I have used Tesseract v3.02. The sample images from which I have to extract text are meter readings. Some of them have a solid sheet background and some have an LED display. I have trained a dataset for the solid sheet background, and the results are reasonably effective.

The major problem I have now is the text images with an LED/LCD background, which Tesseract does not recognize, so no training set can be generated for them.

Can anyone point me in the right direction on how to use Tesseract with a seven-segment display (LCD/LED background), or is there another alternative I can use instead of Tesseract?


Answer 1:


This seems like an image-preprocessing task. Tesseract really prefers its images to be white-on-black text in bitmap format. If you give it something else, it will do its best to convert it to that format, but it is not very smart about how. Using an image-manipulation tool (I happen to like ImageMagick), you need to make the images more to Tesseract's satisfaction. An easy first pass might be a small-radius Gaussian blur, a threshold at a fairly low value (you're trying to keep only black, so 15% seems right), and then inverting the image.

The hard part then becomes knowing which preprocessing to do. If you have metadata telling you what sort of display you're dealing with, great. If not, I suspect you could look at the image's color histogram to at least figure out whether the text is white-on-black or black-on-color. If those are the only two scenarios (white-on-black is always the solid background, black-on-color is always the seven-segment display), then you're done. If not, you'll have to be clever. Good luck, and please let us know what you come up with.




Answer 2:


https://github.com/upupnaway/digital-display-character-rec/blob/master/digital_display_ocr.py

I did this using OpenCV, Tesseract, and the "letsgodigital" trained data.

The steps include edge detection and extracting the display using the largest contour, then thresholding the image with Otsu's method (or plain binarization) and passing it through pytesseract's image_to_string function.




Answer 3:


Take a look at this project:

https://github.com/arturaugusto/display_ocr

There you can download trained data for a seven-segment font, along with a Python script that has some preprocessing capabilities.
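As a sketch, once the trained data is downloaded you might call it from pytesseract roughly like this (the language name "ssd" and the tessdata path are placeholders; match them to the actual `.traineddata` file name from the repo):

```python
def seven_segment_config(tessdata_dir, lang="ssd"):
    # "ssd" is a placeholder language name: use the downloaded
    # .traineddata file's name, minus the extension.
    # --psm 7 treats the image as a single text line, which suits readings.
    return f"--tessdata-dir {tessdata_dir} -l {lang} --psm 7"

def read_seven_segment(image_path, tessdata_dir, lang="ssd"):
    """Requires tesseract plus the downloaded trained data."""
    import pytesseract
    from PIL import Image
    return pytesseract.image_to_string(
        Image.open(image_path),
        config=seven_segment_config(tessdata_dir, lang),
    ).strip()
```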



Source: https://stackoverflow.com/questions/17672705/text-detection-on-seven-segment-display-via-tesseract-ocr
