Question
I have a PostgreSQL script that automatically imports CSV files into my database. The script can detect and remove duplicate records and perform a proper upsert, but it still cannot handle everything. The CSV files are exported from other systems, which append extra information at the beginning and end of the file, e.g.:
Total Count: 2956
Avg Time: 13ms
Column1, Column2, Column3
... ... ...
What I want to do is skip those initial rows, as well as any rows at the bottom of the file. Is there any way to do this in PostgreSQL, via COPY or some other route? Can I, for instance, call operating system commands from PostgreSQL?
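On the last point: from PostgreSQL 9.3 onward, `COPY ... FROM PROGRAM` runs a shell command on the database server and reads its standard output, so the trimming can happen inside the database itself. It requires superuser privileges (or, from PostgreSQL 11, membership in the `pg_execute_server_program` role). A sketch, where the table name and file path are assumptions:

```sql
-- Sketch only: 'my_table' and the path are placeholders.
-- The command runs on the database server, not the client machine.
COPY my_table FROM PROGRAM 'tail -n +3 /path/to/file.csv | head -n -1'
    WITH (FORMAT csv, HEADER);
```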
Answer 1:
For Linux, use tail and head to crop the file and pipe the result to your script:
tail -n +3 file.csv | head -n -1 | psql -f my_script.sql my_database
Here tail -n +3 drops the first two lines, and head -n -1 (GNU head) drops the last line.
Then your script will copy from STDIN. Since the header row ("Column1, Column2, Column3") survives the trimming, tell COPY to expect CSV with a header:
copy my_table from STDIN with (format csv, header);
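To see what the pipeline actually feeds to psql, the file shape from the question can be reproduced and trimmed locally (the trailing summary line is an assumption, since the question only shows the leading junk):

```shell
# Recreate the shape described in the question: two summary lines,
# a header row, data rows, and an assumed trailing summary line.
printf '%s\n' \
  'Total Count: 2956' \
  'Avg Time: 13ms' \
  'Column1, Column2, Column3' \
  'a, b, c' \
  'd, e, f' \
  'Total Time: 38ms' > file.csv

# Drop the first two lines and the last line; what remains
# (header + data) is exactly what psql receives on STDIN.
tail -n +3 file.csv | head -n -1
```

Note that `head -n -1` ("all but the last line") is a GNU coreutils feature; on BSD/macOS you would need an alternative such as `sed '$d'`.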
Source: https://stackoverflow.com/questions/16037624/postgresql-csv-importation-that-skips-rows