What is the fastest, easiest tool or method to convert text files between character sets?
Specifically, I need to convert from UTF-8 to ISO-8859-15 and vice versa.
There is also a web tool to convert file encoding: https://webtool.cloud/change-file-encoding
It supports a wide range of encodings, including some rare ones, such as IBM code page 37.
Assuming you don't know the input encoding and still want to automate most of the conversion, here is a one-liner put together from the previous answers:
iconv -f "$(chardetect input.text | awk '{print $2}')" -t utf-8 -o output.text input.text
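If you would rather keep the detection and conversion in one place, here is a minimal Python sketch of the same idea. It assumes the chardet package (the library behind the chardetect command) is installed; the file names are placeholders.

import chardet

# Read the raw bytes so chardet can guess the source encoding.
with open("input.text", "rb") as f:
    raw = f.read()

guess = chardet.detect(raw)                      # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.87, ...}
source_encoding = guess["encoding"] or "utf-8"   # fall back if detection fails

# Decode with the guessed encoding and re-encode as UTF-8.
text = raw.decode(source_encoding)
with open("output.text", "w", encoding="utf-8") as f:
    f.write(text)

As with the one-liner, the detection is only a guess; check the confidence value before trusting it on short files.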
Use this Python script: https://github.com/goerz/convert_encoding.py. It works on any platform and requires Python 2.7.
Get-Content -Encoding UTF8 FILE-UTF8.TXT | Out-File -Encoding UTF7 FILE-UTF7.TXT
The shortest version, if you can assume that the input BOM is correct:
gc FILE.TXT | Out-File -en utf7 file-utf7.txt
iconv(1)
iconv -f FROM-ENCODING -t TO-ENCODING file.txt
There are also iconv-based tools in many languages.
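For instance, the specific conversion from the question (UTF-8 to ISO-8859-15 and back) can be done with Python's built-in codecs, no iconv needed. This is only a sketch with placeholder file names:

# Convert UTF-8 -> ISO-8859-15 (Latin-9).
with open("file-utf8.txt", "r", encoding="utf-8") as src, \
     open("file-latin9.txt", "w", encoding="iso-8859-15") as dst:
    for line in src:
        dst.write(line)

The reverse direction just swaps the two encoding arguments. Characters that have no ISO-8859-15 equivalent raise UnicodeEncodeError; pass errors="replace" to the output open() call if you prefer substitution over failure.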
If macOS GUI applications are your bread and butter, SubEthaEdit is the text editor I usually go to for encoding-wrangling — its "conversion preview" allows you to see all invalid characters in the output encoding, and fix/remove them.
And it's open source now, so yay for them.