PDF text extraction from given coordinates

2020-11-27 10:43

I would like to extract text from a portion (using coordinates) of PDF using Ghostscript.

Can anyone help me out?

3 Answers
  • 2020-11-27 11:18

    Debenu Quick PDF Library can extract text from a defined area on a page. The SetTextExtractionArea function lets you specify the x and y coordinates of the area as well as its width and height.

    • Left = The horizontal coordinate of the left edge of the area
    • Top = The vertical coordinate of the top edge of the area
    • Width = The width of the area
    • Height = The height of the area

    Then the GetPageText function can be called immediately after this to extract the text from that defined area.

    Here's an example using C# (though the library is multi-platform and can be used with many different programming languages):

    DPL.LoadFromFile(@"Sample.pdf", "");
    DPL.SetOrigin(1); // Sets 0,0 coordinate position to top left of page, default is bottom left
    DPL.SetTextExtractionArea(35, 35, 229, 30); // Left, Top, Width, Height
    string ExtractedContent = DPL.GetPageText(8);
    Console.WriteLine(ExtractedContent);
    

    GetPageText can return either just the text located in that area, or that text together with information about its font, such as name, color and size.

  • 2020-11-27 11:20

    I'm not sure Ghostscript can accept coordinates, but you can convert the PDF to an image and send it to an OCR engine, either as a subimage cropped to the given coordinates or as the whole image along with the coordinates. Some OCR APIs accept a rectangle parameter to narrow the region for OCR.

    Look at VietOCR for a working example, which uses Tesseract as its OCR engine and Ghostscript as the PDF-to-image converter.
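
    A minimal sketch of this route, assuming Ghostscript, ImageMagick and Tesseract are installed (file names, resolution and crop geometry are placeholders you would adjust to your PDF):

     # render page 8 of the PDF to a 300 dpi PNG with Ghostscript
     gs -dBATCH -dNOPAUSE -sDEVICE=png16m -r300 \
        -dFirstPage=8 -dLastPage=8 \
        -sOutputFile=page8.png input.pdf

     # crop the region of interest with ImageMagick (WIDTHxHEIGHT+XOFF+YOFF, in pixels)
     convert page8.png -crop 900x120+140+140 +repage region.png

     # run Tesseract on the cropped image; this writes region.txt
     tesseract region.png region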

  • 2020-11-27 11:25

    Yes, with Ghostscript, you can extract text from PDFs. But no, it is not the best tool for the job. And no, you cannot do it in "portions" (parts of single pages). What you can do: extract the text of a certain range of pages only.

    First: Ghostscript's txtwrite output device (not so good)

     gs \
       -dBATCH \
       -dNOPAUSE \
       -sDEVICE=txtwrite \
       -dFirstPage=3 \
       -dLastPage=5 \
       -sOutputFile=- \
       /path/to/your/pdf
    

    This will output all text contained on pages 3-5 to stdout. If you want output to a text file, use

       -sOutputFile=textfilename.txt
    

    gs Update:

    Recent versions of Ghostscript have seen major improvements and bug fixes in the txtwrite device. See the Ghostscript changelogs (search for txtwrite) for details.


    Second: Ghostscript's ps2ascii.ps PostScript utility (better)

    This one requires you to download the latest version of the file ps2ascii.ps from the Ghostscript Git source code repository. You'd have to convert your PDF to PostScript first (see the conversion sketch at the end of this section), then run this command on the PS file:

    gs \
      -q \
      -dNODISPLAY \
      -P- \
      -dSAFER \
      -dDELAYBIND \
      -dWRITESYSTEMDICT \
      -dSIMPLE \
       /path/to/ps2ascii.ps \
       input.ps \
      -c quit
    

    If the -dSIMPLE parameter is not defined, each output line contains, beyond the pure text content, some additional info about the fonts and font sizes used.

    If you replace that parameter by -dCOMPLEX, you'll get additional info about the colors and images used.

    Read the comments inside ps2ascii.ps to learn more about this utility. It's not comfortable to use, but it worked for me in most of the cases where I needed it....
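
    For the PDF-to-PostScript conversion step mentioned above, Ghostscript's bundled pdf2ps wrapper (or its ps2write device directly) should do; a minimal sketch with placeholder file names:

     # using the pdf2ps wrapper shipped with Ghostscript
     pdf2ps input.pdf input.ps

     # ...or calling the ps2write device directly
     gs -dBATCH -dNOPAUSE -sDEVICE=ps2write -sOutputFile=input.ps input.pdf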

    Third: XPDF's pdftotext CLI utility (more comfortable than Ghostscript)

    A more comfortable way to do text extraction: use pdftotext (available for Windows as well as Linux/Unix or Mac OS X). This utility is based either on Poppler or on XPDF. This is a command you could try:

     pdftotext \
       -f 13 \
       -l 17 \
       -layout \
       -opw supersecret \
       -upw secret \
       -eol unix \
       -nopgbrk \
       /path/to/your/pdf \
       - |less
    

    This will display pages 13 through 17 of a double-password protected PDF file (user password secret, owner password supersecret), preserving the layout, using the Unix EOL convention, without inserting page breaks between PDF pages, piped through less...

    pdftotext -h displays all available commandline options.

    Of course, both tools only work on the text parts of PDFs (if there are any). Oh, and mathematical formulas also won't come out too well... ;-)


    pdftotext Update:

    Recent versions of Poppler's pdftotext now have options to extract "a portion (using coordinates) of PDF" pages, like the OP asked for. The parameters are:

    • -x <int> : top left corner's x-coordinate of crop area
    • -y <int> : top left corner's y-coordinate of crop area
    • -W <int> : crop area's width in pixels (defaults to 0)
    • -H <int> : crop area's height in pixels (defaults to 0)

    It works best when combined with the -layout parameter.
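
    A minimal sketch of such a call (page number and crop values are placeholders; this extracts the 229x30 area whose top left corner sits at (35,35) on page 13 and writes it to stdout):

     pdftotext -f 13 -l 13 -layout \
       -x 35 -y 35 -W 229 -H 30 \
       /path/to/your/pdf -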


    Fourth: MuPDF's mutool draw command can also extract text

    The cross-platform, open-source MuPDF application (made by the same company that also develops Ghostscript) comes bundled with a command-line tool, mutool. To extract text from a PDF with this tool, run:

    mutool draw -F txt the.pdf
    

    This will emit the extracted text to <stdout>. Use -o filename.txt to write it into a file instead.
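
    For example, to write only the text of pages 1-5 into a file (the trailing argument uses mutool draw's page-range syntax):

     mutool draw -F txt -o out.txt the.pdf 1-5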

    Fifth: PDFLib's Text Extraction Toolkit (TET) (best of all... but it is PayWare)

    TET, the Text Extraction Toolkit from the PDFlib family of products, can find the x-y coordinates of text content in a PDF file (and much more). TET has a command-line interface, and it's the most powerful of all the text extraction tools I'm aware of. (It can even handle ligatures...) A quote from their website:

    Geometry
    TET provides precise metrics for the text, such as the position on the page, glyph widths, and text direction. Specific areas on the page can be excluded or included in the text extraction, e.g. to ignore headers and footers or margins.

    In my experience, while it does not sport the most straightforward CLI interface you can imagine, once you get used to it, it will do what it promises to do, for most PDFs you throw at it...


    And there are even more options:

    1. podofotxtextract (CLI tool) from the PoDoFo project (Open Source)
    2. calibre (normally a GUI program to handle eBooks, Open Source) includes a command-line tool that can extract text from PDFs (see the sketch after this list)
    3. AbiWord (a GUI word processor, Open Source) can import PDFs and save its files as .txt: abiword --to=txt --to-name=output.txt input.pdf
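
    For the calibre route, its ebook-convert command-line tool handles the PDF-to-text conversion; a minimal sketch with placeholder file names:

     ebook-convert input.pdf output.txt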