I am trying to write a bash script to merge all the PDF files in a directory into one single PDF file. The command pdfunite *.pdf output.pdf achieves this successfully.
You can embed the result of a command using $(), so you can do the following:
$ pdfunite $(ls -v *.pdf) output.pdf
or
$ pdfunite $(ls *.pdf | sort -n) output.pdf
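To see why the sort matters: a plain *.pdf glob expands in lexical order, so 10.pdf lands before 2.pdf, while ls -v applies a natural (version) sort. A minimal sketch, using hypothetical filenames in a scratch directory:

```shell
# Sketch with hypothetical files: compare glob order to ls -v order.
dir=$(mktemp -d)
cd "$dir"
touch 1.pdf 2.pdf 10.pdf

lex="$(echo *.pdf)"                     # glob expands in lexical order
nat="$(ls -v *.pdf | paste -sd' ' -)"   # ls -v uses natural (version) sort

echo "glob : $lex"   # glob : 1.pdf 10.pdf 2.pdf
echo "ls -v: $nat"   # ls -v: 1.pdf 2.pdf 10.pdf
```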
However, note that this does not work when a filename contains special characters such as whitespace.
In that case you can do the following:
ls -v *.pdf | bash -c 'IFS=$'"'"'\n'"'"' read -d "" -ra x;pdfunite "${x[@]}" output.pdf'
Although it looks a little complicated, it is just a combination of setting IFS to a newline and reading the listed filenames into a bash array.
Note that you cannot use xargs here, since pdfunite expects the input PDFs in the middle of its argument list, with the output file last.
I avoided readarray since it is not supported in older bash versions, but you can use it instead of IFS=.. read -ra .. if you have a newer bash.
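With a newer bash, the same whitespace-safe approach can be sketched with readarray. The filenames below are hypothetical, and printf stands in for the final pdfunite call so the sketch can be run anywhere:

```shell
# Sketch: whitespace-safe collection of PDFs in natural order (bash 4+).
dir=$(mktemp -d)
cd "$dir"
touch 1.pdf 2.pdf 10.pdf "file with space.pdf"

# readarray splits on newlines, so spaces in names are preserved
# (this still breaks on names containing a literal newline).
readarray -t files < <(ls -v -- *.pdf)

printf '%s\n' "${files[@]}"   # here you would run: pdfunite "${files[@]}" output.pdf
```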
You can rename your documents, e.g. 001.pdf, 002.pdf, and so on.
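That renaming step can be sketched as follows, assuming the files have purely numeric names (the directory and filenames here are hypothetical):

```shell
# Sketch: zero-pad numeric filenames so a plain lexical sort gives numeric order.
dir=$(mktemp -d)
cd "$dir"
touch 1.pdf 2.pdf 10.pdf

for f in *.pdf; do
  n=${f%.pdf}                       # strip the .pdf extension
  printf -v padded '%03d.pdf' "$n"  # zero-pad the number to three digits
  mv -- "$f" "$padded"
done

echo *.pdf   # now the glob itself expands in numeric order
```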
Do it in multiple steps. I am assuming you have files numbered from 1 to 99.
pdfunite $(find ./ -regex ".*[^0-9][0-9][^0-9].*" | sort) out1.pdf
pdfunite out1.pdf $(find ./ -regex ".*[^0-9]1[0-9][^0-9].*" | sort) out2.pdf
pdfunite out2.pdf $(find ./ -regex ".*[^0-9]2[0-9][^0-9].*" | sort) out3.pdf
and so on.
The final file will contain all your PDFs in numerical order.
!!! Be careful to write the output to a different file in each step (out1.pdf, out2.pdf, etc.); otherwise pdfunite will overwrite the previous file !!!
Edit: Sorry I was missing the [^0-9] in each regex. Corrected it in the above commands.
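To sanity-check which files each regex picks up, here is a sketch with hypothetical numeric filenames. Note that find -regex matches the whole path, and the [^0-9] guards are what keep, say, 15.pdf out of the single-digit batch:

```shell
# Sketch: verify the single-digit regex selects only 1.pdf … 9.pdf.
dir=$(mktemp -d)
cd "$dir"
touch {1..25}.pdf   # hypothetical files 1.pdf … 25.pdf

# The whole path (e.g. "./5.pdf") must contain: nondigit, one digit, nondigit.
singles="$(find ./ -regex ".*[^0-9][0-9][^0-9].*" | sort)"
echo "$singles"
```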