Question
I'm trying to process an image with the ABBYY OCR SDK, using the sample code posted in this question, but I can't get the coordinates right for a specific word, say "OCR", on the screenshot below.
I want to draw an overlay (a yellow rectangle) over the word "OCR", but sometimes the rectangle is placed very far away from the actual word.
Answer 1:
The XML you get is synthesised according to this schema. For each recognized character it will contain an instance of the charParams element, as shown in the answer you linked to. That element contains the coordinates in page pixels. The same XML also contains a page element:
<page width="..." height="..." resolution="..." originalCoords="...">
where the image width and height are stored. So l and r for each charParams element are in the range 0..width-1 of the corresponding page, and t and b for each charParams element are in the range 0..height-1 of the corresponding page.
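To make that concrete, here is a minimal sketch of collecting those rectangles on iOS, assuming the result XML follows the schema above (charParams elements carrying l, t, r, b attributes in page pixels); the class and its names are mine for illustration, not part of the ABBYY SDK:

import Foundation
import CoreGraphics

// Collects the bounding box of every recognized character from an
// ABBYY result XML. Assumes charParams elements carry l/t/r/b
// attributes in page pixels, as described above.
final class CharBoxParser: NSObject, XMLParserDelegate {
    private(set) var pageSize = CGSize.zero
    private(set) var charBoxes: [CGRect] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        switch elementName {
        case "page":
            // Page dimensions in pixels, from the page element shown above.
            pageSize = CGSize(width: Double(attributeDict["width"] ?? "") ?? 0,
                              height: Double(attributeDict["height"] ?? "") ?? 0)
        case "charParams":
            guard let l = Double(attributeDict["l"] ?? ""),
                  let t = Double(attributeDict["t"] ?? ""),
                  let r = Double(attributeDict["r"] ?? ""),
                  let b = Double(attributeDict["b"] ?? "") else { return }
            // l/r lie in 0..width-1, t/b in 0..height-1 (page pixels).
            charBoxes.append(CGRect(x: l, y: t, width: r - l, height: b - t))
        default:
            break
        }
    }
}

// Usage:
// let parser = XMLParser(data: xmlData)
// let delegate = CharBoxParser()
// parser.delegate = delegate
// parser.parse()
// delegate.charBoxes now holds one rectangle per recognized character.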
It's also worth mentioning explicitly that all coordinates are in pixels and are completely resolution-agnostic. This is why, whenever you highlight anything on an image, you have to take zoom into account: the image will likely not be displayed as-is by your device software but will be downscaled, so you have to map page coordinates onto your zoomed-out image coordinates and highlight accordingly.
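In other words, the mapping from page pixels to screen coordinates is just a scale factor. A minimal sketch, assuming the image is displayed aspect-fit with no centering offset (if the displayed image is centered or panned, add those offsets on top; the function is a hypothetical helper):

import CoreGraphics

// Map a character rectangle from page pixels to the coordinate space
// of a view that displays the image scaled to fit.
func mapToView(_ box: CGRect, pageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Aspect-fit uses one uniform scale on both axes.
    let scale = min(viewSize.width / pageSize.width,
                    viewSize.height / pageSize.height)
    return CGRect(x: box.origin.x * scale,
                  y: box.origin.y * scale,
                  width: box.size.width * scale,
                  height: box.size.height * scale)
}

Because the scale is uniform, a rectangle taken straight from the XML lands on the same spot of the downscaled image once multiplied through.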
Answer 2:
Have you checked the DPI of the original image? Also check the documentation to make sure the OCR engine is using the same DPI and is not returning coordinates in points or some other measurement system.
It could also be that the rectangle you are drawing in iOS is not based on pixels but on some other measurement system (iOS view geometry, for instance, is expressed in points, not pixels).
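If that is the culprit, the conversion is a division by the screen scale. A minimal sketch, assuming the image is rendered pixel-for-pixel so one image pixel corresponds to one screen pixel (the helper name is mine):

import UIKit

// iOS view geometry is in points, not pixels. If the OCR coordinates
// are in pixels, divide by the screen scale (2x/3x on Retina displays)
// when positioning an overlay in a view's point space.
func pixelsToPoints(_ rect: CGRect,
                    screenScale: CGFloat = UIScreen.main.scale) -> CGRect {
    return CGRect(x: rect.origin.x / screenScale,
                  y: rect.origin.y / screenScale,
                  width: rect.width / screenScale,
                  height: rect.height / screenScale)
}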
You just need to work through the process, testing as you go, and work out where the problem comes from. It is most likely a uniform scaling issue: the offset from the actual word will be proportional to the word's distance from the top left of the page.
Source: https://stackoverflow.com/questions/8679106/incorrect-coordinates-retrieved-from-image-using-abbyy-ocr-sdk