Incorrect coordinates retrieved from image using ABBYY OCR SDK

sharptooth

The XML you get is synthesised according to this schema.

For each recognized character it will contain an instance of the charParams element, as shown in the answer you linked to. That element holds the character's coordinates in page pixels. The same XML also contains a page element:

<page width="..." height="..." resolution="..." originalCoords="...">

where the image width and height are stored. So l and r for each charParams element are in the range 0..width-1 of the corresponding page, and t and b for each charParams element are in the range 0..height-1 of the corresponding page.
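As a minimal sketch of reading those values with Foundation's XMLParser (the element and attribute names are the ones shown above; the class name and overall structure are just illustrative, not ABBYY-provided code):

import Foundation
import CoreGraphics

// Collects the page size and one CGRect per recognized character
// from the exported XML. l/t/r/b are pixel coordinates on the page.
final class AbbyyCharCollector: NSObject, XMLParserDelegate {
    private(set) var pageSize = CGSize.zero
    private(set) var charRects: [CGRect] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        switch elementName {
        case "page":
            // Page dimensions, in pixels of the recognized image.
            let w = Double(attributeDict["width"] ?? "") ?? 0
            let h = Double(attributeDict["height"] ?? "") ?? 0
            pageSize = CGSize(width: w, height: h)
        case "charParams":
            // l and r lie in 0..width-1, t and b in 0..height-1.
            guard let l = Double(attributeDict["l"] ?? ""),
                  let t = Double(attributeDict["t"] ?? ""),
                  let r = Double(attributeDict["r"] ?? ""),
                  let b = Double(attributeDict["b"] ?? "") else { return }
            charRects.append(CGRect(x: l, y: t, width: r - l, height: b - t))
        default:
            break
        }
    }
}

You would set an instance of this class as the delegate of an XMLParser created over the exported XML data and call parse().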

It's also worth mentioning explicitly that all coordinates are in pixels - they are completely resolution-agnostic. This is why, whenever you try to highlight anything on an image, you have to take zoom into account: the image will likely not always be displayed as is by your device software, but will be downscaled, so you have to map page coordinates onto your zoomed-out image coordinates and highlight accordingly.
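A sketch of that mapping, assuming a uniform scale (the function name and the displayedSize parameter are illustrative; if your view letterboxes the image with aspect-fit, you would also add the resulting offsets):

import CoreGraphics

// Map a rectangle given in page pixels onto an image that is drawn
// at a different (typically smaller) size.
func mapToDisplay(_ pageRect: CGRect, pageSize: CGSize, displayedSize: CGSize) -> CGRect {
    let scaleX = displayedSize.width / pageSize.width
    let scaleY = displayedSize.height / pageSize.height
    return CGRect(x: pageRect.origin.x * scaleX,
                  y: pageRect.origin.y * scaleY,
                  width: pageRect.size.width * scaleX,
                  height: pageRect.size.height * scaleY)
}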

Have you checked the DPI of the original image? Also check the documentation to make sure the OCR engine is using the same DPI and is not returning coordinates in points or some other measurement system.

It could also be that the rectangle you are drawing in iOS is based not on pixels but on some other measurement system (UIKit coordinates, for example, are in points rather than pixels).
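For instance, a rough sketch of that extra points-vs-pixels conversion (assuming the UIImage wraps the same bitmap that was sent to the OCR engine; the function name is illustrative):

import UIKit

// UIImage.size is in points; multiplying by UIImage.scale gives the pixel
// dimensions the OCR coordinates refer to, so dividing an OCR pixel rect
// by that scale yields a rect in the point-based coordinate system.
func pixelRectToPoints(_ pixelRect: CGRect, in image: UIImage) -> CGRect {
    let scale = image.scale   // e.g. 2.0 or 3.0 on Retina devices
    return CGRect(x: pixelRect.origin.x / scale,
                  y: pixelRect.origin.y / scale,
                  width: pixelRect.width / scale,
                  height: pixelRect.height / scale)
}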

You just need to work through the process, testing as you go, and work out where the problem is coming from. It is most likely a uniform scaling issue: the offset from the actual word is proportional to the word's distance from the top left of the page. For example, if a word near the top left is highlighted only slightly off while a word near the bottom right is off by a large amount, a missing scale factor is the likely culprit.
