I wrote a program that displays Unicode text loaded from outside the program. I am using Windows XP and Eclipse. When I run the program in the IDE it shows the Unicode correctly, but when I exported it to a jar file it no longer does.
You have the code already (I didn't follow the links), but you may want to compare it with How to import a font: the registerFont call is crucial.
Also, inside a jar file all paths are case-sensitive. You can inspect the jar with 7-Zip or WinZip to verify the resource paths.
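If it turns out to be a font problem (the glyphs show in the IDE but not once packaged), a minimal sketch of loading a bundled font from the classpath and registering it might look like this; the resource path /fonts/unicode.ttf, the MyApp class and the label variable are made-up placeholders:

final InputStream fontStream = MyApp.class.getResourceAsStream("/fonts/unicode.ttf"); // works inside a jar too
final Font font = Font.createFont(Font.TRUETYPE_FONT, fontStream);
GraphicsEnvironment.getLocalGraphicsEnvironment().registerFont(font); // make the font available to AWT/Swing
label.setFont(font.deriveFont(14f)); // derive a usable point size for the component showing the text

Loading via getResourceAsStream instead of a File path is what keeps this working after the program is exported to a jar.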
While not answering the question directly, here is a small howto on how to read from and write to text files correctly, in an OS-independent way.
The first thing to know is that the JVM has a file.encoding system property. It defines the default encoding used for all file read/write operations and for every reader or writer created without an explicit encoding.
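You can see what your JVM actually defaults to, for example:

System.out.println(System.getProperty("file.encoding"));
System.out.println(Charset.defaultCharset());

On a Western-locale Windows XP machine this will typically print Cp1252 rather than UTF-8, which is exactly why relying on the default is fragile.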
As such, you don't want to rely on the default constructors; specify the encoding each time. In Java, the class which embodies an encoding is Charset. If you want UTF-8, you will use StandardCharsets.UTF_8 (Java 7+), Charset.forName("UTF-8") (Java 6 and earlier), or Charsets.UTF_8 (if you use Guava).

In order to read a file correctly, open an InputStream to that file, then an InputStreamReader over that InputStream (in the code samples below, UTF8 is the UTF-8 charset obtained from one of the methods above):
final InputStream in = new FileInputStream(...);
final Reader reader = new InputStreamReader(in, UTF8);
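From there you will usually wrap the reader in a BufferedReader to read line by line; a short sketch:

final BufferedReader br = new BufferedReader(reader);
String line;
while ((line = br.readLine()) != null) {
    // each line is already decoded as UTF-8 here
}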
In order to write a file correctly, open an OutputStream to it, then an OutputStreamWriter over that OutputStream:
final OutputStream out = new FileOutputStream(...);
final Writer writer = new OutputStreamWriter(out, UTF8);
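Likewise, a BufferedWriter on top makes writing text convenient, for example:

final BufferedWriter bw = new BufferedWriter(writer);
bw.write("héllo wörld"); // encoded as UTF-8 regardless of the platform default
bw.newLine();
bw.flush();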
And, of course, do not forget to .close() the streams, readers and writers in a finally block (or, on Java 7+, in a try-with-resources statement). Hint: if you can't use Java 7, use Guava 14.0+ and its Closer. It is the safest way to deal with multiple I/O resources and ensure they are all released correctly.
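As an illustration only (file names are made up, and the usual java.io imports plus the UTF8 constant from above are assumed), here is how a read-and-rewrite could look with Java 7's try-with-resources; on Java 6 the same structure goes inside a try/finally or a Guava Closer block:

try (
    final InputStream in = new FileInputStream("in.txt");
    final Reader reader = new InputStreamReader(in, UTF8);
    final OutputStream out = new FileOutputStream("out.txt");
    final Writer writer = new OutputStreamWriter(out, UTF8)
) {
    final char[] buf = new char[4096];
    int n;
    while ((n = reader.read(buf)) != -1) {
        writer.write(buf, 0, n); // copy the text, decoding and re-encoding as UTF-8
    }
} // all four resources are closed automatically, even if an exception is thrown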