The PDF format, from its inception more than 20 years ago, was never intended to host extractable, meaningfully structured data.
Its purpose was to be a reliable visual representation of the text, images and diagrams in a document -- a kind of digital paper (one that would also transfer reliably to real paper via printing). Only later in its development were features added to help extract data again (search for "Tagged PDF").
For some examples of the problems posed when scraping tabular data from PDFs, see this article:
- Why Updating Dollars for Docs Was So Difficult
Contradicting my first point above, I now say this: for an amazing family of tools for extracting tabular data from PDFs (as long as they are not scanned pages), improving from week to week, see these links:
- Introducing Tabula: Upload a PDF, get back tabular CSV data. Poof!
- Tabula-Extractor: A Command Line Interface to Tabula
- Tabula source code repository
- Tabula API (upcoming, not ready yet)
So: go look at Tabula. If any tool can do what you want, Tabula is currently probably among the best for the job!
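If you prefer to drive the same extraction engine from Python instead of the Ruby CLI, there is also the third-party tabula-py wrapper (a separate project from the links above; it shells out to Tabula's Java engine, so a Java runtime must be installed). A minimal sketch, assuming a placeholder file `report.pdf` that contains tables:

```python
# Minimal sketch using the third-party tabula-py wrapper
# (pip install tabula-py; requires a Java runtime).
# "report.pdf" is a placeholder file name, not from the original answer.
import tabula

# read_pdf() returns a list of pandas DataFrames, one per detected table.
tables = tabula.read_pdf("report.pdf", pages="all", multiple_tables=True)

for i, df in enumerate(tables):
    print(f"Table {i}: {df.shape[0]} rows x {df.shape[1]} columns")
    print(df.head())
```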
Update
I've recently created an asciinema screencast demonstrating the use of the Tabula command line interface to extract a big table from a PDF as CSV:
(If the screencast runs too fast for you to read all the text, use the player's "Pause" button (the || symbol).)
It is hosted here:
- https://asciinema.org/a/22761
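For the equivalent of the screencast's workflow (PDF in, CSV file out) from Python rather than the CLI, tabula-py also exposes a one-call conversion; a sketch, again with placeholder file names:

```python
# One-step PDF-to-CSV conversion via the third-party tabula-py wrapper;
# "report.pdf" and "report.csv" are placeholder file names.
import tabula

tabula.convert_into("report.pdf", "report.csv", output_format="csv", pages="all")
```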