I am compressing JavaScript files, and the compressor is complaining that my files have a byte order mark (BOM) character in them.
How can I search for these characters and remove them?
Using tail might be easier:
tail --bytes=+4 filename > new_filename
This starts output at byte 4, skipping the three-byte UTF-8 BOM at the start of the file.
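If you want to verify the result, the UTF-8 BOM is the three-byte sequence EF BB BF, so you can dump the first bytes before and after stripping. A sketch with hypothetical file names; `tail -c +4` is the portable spelling of `--bytes=+4`:

```shell
# Create a sample file that starts with a UTF-8 BOM (hypothetical name).
printf '\xef\xbb\xbfvar x = 1;\n' > bom.js

# The first three bytes are the BOM: ef bb bf.
head -c 3 bom.js | od -An -tx1

# Start output at byte 4, dropping the three BOM bytes.
tail -c +4 bom.js > nobom.js

# The stripped copy now begins with the actual source text.
head -c 3 nobom.js | od -An -tx1
```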
I suggest using the dos2unix tool; try running dos2unix ./thefile.js.
If necessary, use something like this for multiple files:
find . -type f -exec dos2unix {} +
(The -exec ... + form also avoids the word-splitting problems of looping over $(find ...).)
Regards.
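To find which files actually contain a BOM before converting them, you can grep for the raw byte sequence. A sketch: the file names and the `*.js` glob are assumptions, and note this matches a BOM anywhere in the file, not only at the start:

```shell
# Two sample files: one with a leading UTF-8 BOM, one without (hypothetical names).
printf '\xef\xbb\xbfalert(1);\n' > with_bom.js
printf 'alert(1);\n' > without_bom.js

# List files containing the UTF-8 BOM byte sequence EF BB BF.
grep -l "$(printf '\xef\xbb\xbf')" *.js
```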
@tripleee's solution didn't work for me, but changing the file encoding to ASCII and back to UTF-8 did the trick :-)
I've used vimgrep for this:
:vim "[\uFEFF]" *
and also the normal Vim search command:
/[\uFEFF]
perl -pi~ -CSD -e 's/^\x{FEFF}//' file1.js path/to/file2.js
(The BOM code point is U+FEFF; U+FFFE is its byte-swapped form and would not match.)
I would assume the tool will break if you have other UTF-8 in your files, but if not, perhaps this workaround can help you. (Untested ...)
Edit: added the -CSD option, as per tchrist's comment.
The 'file' command shows whether a BOM is present.
For example, 'file myfile.xml' displays: "XML 1.0 document, UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators"
dos2unix will remove the BOM.
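As a quick check of both points, you can compare what 'file' reports before and after stripping the BOM. A sketch: the sample file name is made up, and it uses `tail -c +4` to drop the three BOM bytes in case dos2unix isn't installed:

```shell
# A sample text file that starts with a UTF-8 BOM (hypothetical name).
printf '\xef\xbb\xbfhello\n' > sample.txt

# 'file' should mention the BOM, e.g. "UTF-8 Unicode (with BOM) text".
file sample.txt

# After stripping the three BOM bytes, 'file' reports plain text.
tail -c +4 sample.txt > clean.txt
file clean.txt
```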