I want to extract different content from a PDF file in Java:
Is it also possible to get the items listed below?
Yes, Alp, iText does offer the functionality you mentioned.
READING PDFS
iText isn't a PDF viewer, it can't convert PDF to an image, and it can't be used to print a PDF, but the PdfReader class can give you access to the objects that form a PDF document and to the content stream of each page. That content stream can be parsed, and if the content wasn't added as rasterized text, you can convert a page to plain text. Note that iText doesn't do OCR.
Use the com.itextpdf.text.pdf.PdfReader class.
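A minimal sketch (assuming iText 5.x; the file name is just a placeholder) of opening a document and peeking at a page's content stream:

import com.itextpdf.text.pdf.PdfReader;

PdfReader reader = new PdfReader("input.pdf"); // example path
System.out.println("Pages: " + reader.getNumberOfPages());
// getPageContent returns the decoded content stream bytes of a page
byte[] content = reader.getPageContent(1);
System.out.println(new String(content));
reader.close();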
Apache PDFBox comes to the rescue, once again.
You can also use JPedal for all these extraction tasks.
iText is my PDF tool of choice these days.
- The complete visible text
"Visible" is a tough one. You can parse out all the parsable text with the com.itextpdf.text.pdf.parse package's classes... but those classes don't know about CLIPPING. You can constrain the parser to the page size easily enough.
// all text on the page, regardless of position
PdfTextExtractor.getTextFromPage(reader, pageNum);
You'd actually need the override that takes a TextExtractionStrategy, the filtered strategy. It gets interesting fairly quickly, but I think you can get everything you want here "out of the box".
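As a rough sketch of that filtered-strategy route (assuming iText 5.x; the file name and page number are placeholders), you can constrain extraction to the page's media box with a RegionTextRenderFilter:

import java.awt.geom.Rectangle2D;
import com.itextpdf.text.Rectangle;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.FilteredTextRenderListener;
import com.itextpdf.text.pdf.parser.LocationTextExtractionStrategy;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;
import com.itextpdf.text.pdf.parser.RegionTextRenderFilter;
import com.itextpdf.text.pdf.parser.RenderFilter;
import com.itextpdf.text.pdf.parser.TextExtractionStrategy;

PdfReader reader = new PdfReader("input.pdf"); // example path
// only accept text positioned inside the page's media box
Rectangle mediaBox = reader.getPageSize(1);
RenderFilter filter = new RegionTextRenderFilter(new Rectangle2D.Float(
        mediaBox.getLeft(), mediaBox.getBottom(), mediaBox.getWidth(), mediaBox.getHeight()));
TextExtractionStrategy strategy =
        new FilteredTextRenderListener(new LocationTextExtractionStrategy(), filter);
// only text that passes the filter ends up in the result
String onPageText = PdfTextExtractor.getTextFromPage(reader, 1, strategy);
System.out.println(onPageText);
reader.close();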
- images
Yep, via the same package classes. Image listeners aren't as well supported as text listeners, but do exist.
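For instance, a sketch of a render listener that dumps every image on every page to disk (assuming iText 5.x; the input path and output file names are arbitrary):

import java.io.FileOutputStream;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.ImageRenderInfo;
import com.itextpdf.text.pdf.parser.PdfImageObject;
import com.itextpdf.text.pdf.parser.PdfReaderContentParser;
import com.itextpdf.text.pdf.parser.RenderListener;
import com.itextpdf.text.pdf.parser.TextRenderInfo;

public class ImageDumper implements RenderListener {
    private int counter = 0;

    public void beginTextBlock() {}
    public void endTextBlock() {}
    public void renderText(TextRenderInfo renderInfo) {}

    public void renderImage(ImageRenderInfo renderInfo) {
        try {
            PdfImageObject image = renderInfo.getImage();
            if (image == null) return; // some inline images can't be decoded
            FileOutputStream out = new FileOutputStream(
                    "image-" + (counter++) + "." + image.getFileType());
            out.write(image.getImageAsBytes());
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("input.pdf"); // example path
        PdfReaderContentParser parser = new PdfReaderContentParser(reader);
        for (int page = 1; page <= reader.getNumberOfPages(); page++) {
            parser.processContent(page, new ImageDumper());
        }
        reader.close();
    }
}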
- links
Yes. Links are "annotations" to various PDF pages. Finding them is a simple matter of looping through each page's "annotations array" and picking out the link annotations.
PdfDictionary pageDict = myReader.getPageN(1);
PdfArray annots = pageDict.getAsArray(PdfName.ANNOTS);
ArrayList<String> dests = new ArrayList<String>();
if (annots != null) {
    for (int i = 0; i < annots.size(); ++i) {
        PdfDictionary annotDict = annots.getAsDict(i);
        PdfName subType = annotDict.getAsName(PdfName.SUBTYPE);
        if (subType != null && PdfName.LINK.equals(subType)) {
            PdfDictionary action = annotDict.getAsDict(PdfName.A);
            if (action != null && PdfName.URI.equals(action.getAsName(PdfName.S))) {
                dests.add(action.getAsString(PdfName.URI).toString());
            } // else { it's an internal link, meh }
        }
    }
}
You can find the PDF specification (ISO 32000-1) on Adobe's website.
- input elements
Definitely. For either XFA (LiveCycle Designer) or the older-tech "AcroForm" forms, iText can find all the fields and their values.
AcroFields fields = myReader.getAcroFields();
Set<String> fieldNames = fields.getFields().keySet();
for (String fldName : fieldNames) {
System.out.println( fldName + ": " + fields.getField( fldName ) );
}
Multi-select lists wouldn't be handled all that well. You'll get a blank space after the colon for empty text fields and for buttons. None too informative... but that'll get you started.
- document meta tags like title, description or author
Pretty trivial. Yes.
Map<String, String> info = myPdfReader.getInfo();
System.out.println( info );
In addition to the basic author/title/etc., there's a fairly involved XML schema (the XMP metadata) you can access via reader.getMetadata().
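A quick sketch (again assuming iText 5.x and a placeholder file name) of reading both the info dictionary and the XMP stream:

import java.nio.charset.StandardCharsets;
import java.util.Map;
import com.itextpdf.text.pdf.PdfReader;

PdfReader reader = new PdfReader("input.pdf"); // example path
Map<String, String> info = reader.getInfo();
System.out.println(info.get("Title"));   // keys come from the document's info dictionary
System.out.println(info.get("Author"));
byte[] xmp = reader.getMetadata();       // may be null if the document has no XMP stream
if (xmp != null) {
    System.out.println(new String(xmp, StandardCharsets.UTF_8));
}
reader.close();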
- only headlines
A custom RenderFilter can ignore text based on whatever criteria you wish. Font size sounds about right based on your comment.
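As a sketch of that idea (assuming iText 5.x; the 14 pt threshold and the file name are arbitrary guesses), a filter that keeps only text rendered above a minimum size might look like this:

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.FilteredTextRenderListener;
import com.itextpdf.text.pdf.parser.LocationTextExtractionStrategy;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;
import com.itextpdf.text.pdf.parser.RenderFilter;
import com.itextpdf.text.pdf.parser.TextRenderInfo;

// keeps only text drawn above a given size; everything else is dropped
class FontSizeFilter extends RenderFilter {
    private final float minSize;
    FontSizeFilter(float minSize) { this.minSize = minSize; }

    @Override
    public boolean allowText(TextRenderInfo renderInfo) {
        // approximate the rendered size from the ascent-to-descent distance
        float size = renderInfo.getAscentLine().getStartPoint().get(1)
                   - renderInfo.getDescentLine().getStartPoint().get(1);
        return size >= minSize;
    }
}

public class HeadlineExtractor {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("input.pdf"); // example path
        String headlines = PdfTextExtractor.getTextFromPage(reader, 1,
                new FilteredTextRenderListener(new LocationTextExtractionStrategy(),
                        new FontSizeFilter(14f)));
        System.out.println(headlines);
        reader.close();
    }
}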
Most of this you can do with our PDF Library extended edition as well.
Whichever solution you go for, bear in mind that for certain PDF documents, text extraction is impossible due to the way the PDF is constructed (the glyphs on the page sometimes don't have any semantic meaning associated with them).
The quick way to check this is to open the document in Acrobat and try copying/pasting the text. If it comes up as gibberish there, chances are it will come up as gibberish in any other PDF extractor.