Question
How can I efficiently read from a large file and write bulk data into a file using the Java NIO framework?
I'm working with ByteBuffer and FileChannel and have tried something like the following:
public static void main(String[] args)
{
    String inFileStr = "screen.png";
    String outFileStr = "screen-out.png";
    long startTime, elapsedTime;
    int bufferSizeKB = 4;
    int bufferSize = bufferSizeKB * 1024;

    // Check file length
    File fileIn = new File(inFileStr);
    System.out.println("File size is " + fileIn.length() + " bytes");
    System.out.println("Buffer size is " + bufferSizeKB + " KB");
    System.out.println("Using FileChannel with an indirect ByteBuffer of " + bufferSizeKB + " KB");

    try ( FileChannel in = new FileInputStream(inFileStr).getChannel();
          FileChannel out = new FileOutputStream(outFileStr).getChannel() )
    {
        // Allocate an indirect ByteBuffer
        ByteBuffer bytebuf = ByteBuffer.allocate(bufferSize);

        startTime = System.nanoTime();
        int bytesCount = 0;
        // Read data from file into ByteBuffer
        while ((bytesCount = in.read(bytebuf)) > 0) {
            // flip the buffer which set the limit to current position, and position to 0.
            bytebuf.flip();
            out.write(bytebuf); // Write data from ByteBuffer to file
            bytebuf.clear();    // For the next read
        }
        elapsedTime = System.nanoTime() - startTime;
        System.out.println("Elapsed Time is " + (elapsedTime / 1000000.0) + " msec");
    }
    catch (IOException ex) {
        ex.printStackTrace();
    }
}
Can anybody tell, should I follow the same procedure if my file size is more than 2 GB?
And what approach should I follow if I want to do something similar for writing, i.e. when the write operations are in bulk?
Answer 1:
Note that you can simply use Files.copy(Paths.get(inFileStr), Paths.get(outFileStr), StandardCopyOption.REPLACE_EXISTING) to copy the file as your example code does, just likely faster and with only one line of code.
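For illustration, a minimal sketch of that one-liner (the file names are the ones from your example; the class name is just a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FilesCopyExample {
    public static void main(String[] args) throws IOException {
        // Copies the whole file, overwriting the target if it already exists
        Files.copy(Paths.get("screen.png"),
                   Paths.get("screen-out.png"),
                   StandardCopyOption.REPLACE_EXISTING);
    }
}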
Otherwise, if you have already opened the two file channels, you can just use in.transferTo(0, in.size(), out) to transfer the entire contents of the in channel to the out channel. Note that this method lets you specify a range within the source file that will be transferred to the target channel’s current position (which is initially zero), and that there is also a method for the opposite direction, i.e. out.transferFrom(in, 0, in.size()), which transfers data from the source channel’s current position to an absolute range within the target file.
Together, they allow almost every imaginable nontrivial bulk transfer in an efficient way without the need to copy the data into a Java side buffer. If that’s not solving your needs, you have to be more specific in your question.
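As a sketch of what that looks like inside your existing try-with-resources block (in and out are the channels you already open; either line on its own replaces the whole read/flip/write/clear loop):

// Pushes the entire source file into the target channel:
in.transferTo(0, in.size(), out);
// ...or, equivalently, pulls it from the target side:
out.transferFrom(in, 0, in.size());

Note that transferTo/transferFrom may transfer fewer bytes than requested (the return value tells you how many were actually moved), so a fully robust program would loop until the whole count has been transferred.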
By the way, since Java 7 you can open a FileChannel directly, without the FileInputStream/FileOutputStream detour.
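Putting both points together, a minimal sketch of the whole copy using FileChannel.open (the class name and the open options for the output file are assumptions, not something from your code):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ChannelTransferExample {
    public static void main(String[] args) {
        try (FileChannel in = FileChannel.open(Paths.get("screen.png"),
                                               StandardOpenOption.READ);
             FileChannel out = FileChannel.open(Paths.get("screen-out.png"),
                                                StandardOpenOption.CREATE,
                                                StandardOpenOption.WRITE,
                                                StandardOpenOption.TRUNCATE_EXISTING)) {
            // Transfer the whole source file to the target channel's current position (0)
            in.transferTo(0, in.size(), out);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}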
Answer 2:
while ((bytesCount = in.read(bytebuf)) > 0) {
    // flip the buffer which set the limit to current position, and position to 0.
    bytebuf.flip();
    out.write(bytebuf); // Write data from ByteBuffer to file
    bytebuf.clear();    // For the next read
}
Your copy loop is not correct: write() is not guaranteed to drain the buffer in a single call, so any bytes left unwritten must be preserved with compact() rather than discarded by clear(), and the loop has to keep running until the buffer is empty after the final read. It should be:
while ((bytesCount = in.read(bytebuf)) > 0 || bytebuf.position() > 0) {
    // flip the buffer which set the limit to current position, and position to 0.
    bytebuf.flip();
    out.write(bytebuf);  // Write data from ByteBuffer to file
    bytebuf.compact();   // For the next read
}
Can anybody tell, should I follow the same procedure if my file size is more than 2 GB?
Yes. The file size doesn't make any difference.
Source: https://stackoverflow.com/questions/41115869/reading-and-writting-a-large-file-using-java-nio