I am reading a CSV file in my SQL script and copying its data into a PostgreSQL table. The line of code is below:
\copy participants_2013 from 'C:/Users/
Every encoding has numeric ranges of valid codes. Are you sure your data really are in WIN1252 encoding?
Postgres is very strict and will not import a file with broken encoding. You can use iconv, which can run in a tolerant mode that removes broken characters. After cleaning the file with iconv you can import it.
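As a sketch (the file names here are placeholders, not from the question), iconv's -c flag gives the tolerant mode described above: bytes that are invalid in the source encoding are silently dropped instead of aborting the conversion:

```shell
# Strip bytes that are not valid UTF-8 (-c discards unconvertible characters),
# writing a cleaned copy that Postgres can then import.
# File names are hypothetical examples.
iconv -f UTF-8 -t UTF-8 -c participants_2013.csv > participants_2013.clean.csv
```

The same command with different -f/-t values converts between encodings (e.g. -f WINDOWS-1252 -t UTF-8) rather than just stripping invalid bytes.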
I had this problem today, and it was because a TEXT column contained fancy (curly) quotes that had been copy/pasted from an external source. The problem is that 0x9D is not a valid byte value in WIN1252.
There's a table here: https://en.wikipedia.org/wiki/Windows-1252
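For instance, the right curly quote (U+201D) encodes in UTF-8 as the three bytes e2 80 9d, and that final 0x9D byte has no assigned character in Windows-1252. A quick shell check (assuming od is available) makes this visible:

```shell
# Dump the UTF-8 bytes of a right curly quotation mark (U+201D).
# The last byte shown, 9d, is unassigned in Windows-1252, which is
# why a WIN1252 client chokes on it.
printf '\342\200\235' | od -An -tx1
```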
The problem may be that you are importing a UTF-8 file and PostgreSQL is defaulting to WIN1252 (which I believe is the default on many Windows systems). You need to change the code page on your Windows command line before running the script, using chcp (chcp 65001 switches the console to UTF-8). Or in PostgreSQL you can run:
SET CLIENT_ENCODING TO 'utf8';
Before importing the file.
Simply specify encoding 'UTF8'
in the \copy
command, e.g. (broken into two lines here for readability, but keep it all on one line):
\copy dest_table from 'C:/src-data.csv'
(format csv, header true, delimiter ',', encoding 'UTF8');
More details:
The problem is that the client encoding is set to WIN1252,
most likely because you are running on a Windows machine, but the file contains UTF-8
characters.
You can check the Client Encoding with
SHOW client_encoding;
client_encoding
-----------------
WIN1252
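Alongside checking the client encoding, it can help to check what encoding the file itself is in. On a Unix-like shell the file utility can guess it (the file name below is a placeholder, and --mime-encoding assumes GNU file):

```shell
# Report the detected character encoding of the CSV,
# e.g. utf-8 or iso-8859-1; the file name is hypothetical.
file -b --mime-encoding participants_2013.csv
```

If this reports utf-8 while client_encoding is WIN1252, the mismatch explains the import failure.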