The problem I am working on is extracting the text out of an image, and for this I have used Tesseract v3.02. The sample images from which I have to extract text are relat…
Take a look at this project:
https://github.com/arturaugusto/display_ocr
There you can download trained data for a seven-segment font and a Python script with some preprocessing capabilities.
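As a minimal sketch of using such trained data through pytesseract (this assumes the downloaded traineddata file has been copied into Tesseract's tessdata directory under the name "letsgodigital"; the image path is a placeholder):

```
from PIL import Image
import pytesseract

# "letsgodigital" must match the traineddata file name in tessdata/
text = pytesseract.image_to_string(Image.open('display.png'), lang='letsgodigital')
print(text)
```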
https://github.com/upupnaway/digital-display-character-rec/blob/master/digital_display_ocr.py
This was done using OpenCV and Tesseract with the "letsgodigital" trained data.

The steps include edge detection and extracting the display using the largest contour, then thresholding the image with Otsu binarization and passing it through pytesseract's image_to_string function.
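A rough sketch of those steps (the file name and Canny thresholds are my assumptions, not values from the linked script):

```
import cv2
import pytesseract

img = cv2.imread('display.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge detection, then take the largest contour as the display region.
edges = cv2.Canny(gray, 50, 150)
contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 3.x and 4.x
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
display = gray[y:y + h, x:x + w]

# Otsu binarization, then hand the crop to Tesseract.
_, thresh = cv2.threshold(display, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(pytesseract.image_to_string(thresh, lang='letsgodigital'))
```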
This seems like an image preprocessing task. Tesseract really prefers its images to be white-on-black text in bitmap format. If you give it something that isn't, it will do its best to convert it, but it is not very smart about how. Using some image manipulation tool (I happen to like ImageMagick), you need to make the images more to Tesseract's liking. An easy first pass might be a small-radius Gaussian blur, a threshold at a fairly low value (you're trying to keep only black, so 15% seems right), and then inverting the image.
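As a concrete starting point, here is a rough Python/OpenCV equivalent of that pass; the small blur radius and the 15% cutoff come from the description above, while the kernel size and file names are assumptions:

```
import cv2

gray = cv2.imread('display.jpg', cv2.IMREAD_GRAYSCALE)

# Small-radius Gaussian blur to knock down noise.
blurred = cv2.GaussianBlur(gray, (3, 3), 0)

# Threshold low (~15% of 255) so only near-black pixels stay black,
# then invert to get white-on-black text for Tesseract.
_, binary = cv2.threshold(blurred, int(0.15 * 255), 255, cv2.THRESH_BINARY)
result = cv2.bitwise_not(binary)
cv2.imwrite('preprocessed.png', result)
```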
The hard part then becomes knowing which preprocessing to apply. If you have metadata telling you what sort of display you're dealing with, great. If not, I suspect you could look at the image's color histogram to at least figure out whether your text is white-on-black or black-on-color. If those are the only two scenarios (white-on-black always means a solid background, black-on-color always means a seven-segment display), then you're done. If not, you'll have to be clever. Good luck, and please let us know what you come up with.
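For the histogram check, something along these lines might work as a first cut; the 50/50 brightness split is an untested assumption, not a calibrated value:

```
import cv2

gray = cv2.imread('display.jpg', cv2.IMREAD_GRAYSCALE)

# Fraction of pixels in the darker half of the grayscale range.
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
dark_fraction = hist[:128].sum() / hist.sum()

if dark_fraction > 0.5:
    print('mostly dark -> probably white-on-black, solid background')
else:
    print('mostly bright -> probably black-on-color, seven-segment display')
```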