Question
I'm trying to write front-end JavaScript that can copy very large files (i.e. read them from a file input element and 'download' them using StreamSaver.js).
This is the actual code:
<html>
<head>
  <title>File copying</title>
</head>
<body>
  <script src="https://cdn.jsdelivr.net/npm/web-streams-polyfill@2.0.2/dist/ponyfill.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/streamsaver@2.0.3/StreamSaver.min.js"></script>
  <script type="text/javascript">
    const streamSaver = window.streamSaver;

    async function copyFile() {
      const fileInput = document.getElementById("fileInput");
      const file = fileInput.files[0];
      if (!file) {
        alert('select a (large) file');
        return;
      }
      const newName = file.name + " - Copy";
      let remaining = file.size;
      let written = 0;
      const chunkSize = 1048576; // 1MB
      const writeStream = streamSaver.createWriteStream(newName);
      const writer = writeStream.getWriter();
      while (remaining > 0) {
        let readSize = chunkSize > remaining ? remaining : chunkSize;
        let blob = file.slice(written, readSize);
        let aBuff = await blob.arrayBuffer();
        await writer.write(new Uint8Array(aBuff));
        written += readSize;
        remaining -= readSize;
      }
      await writer.close();
    }
  </script>
  <input type="file" id="fileInput"/>
  <button onclick="copyFile()">Copy file</button>
</body>
</html>
It seems that during the second iteration of the while loop, the aBuff variable (the result of blob.arrayBuffer()) is an empty ArrayBuffer.
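The symptom is easy to reproduce with a quick probe (a minimal sketch; probeSlices is an illustrative helper, not part of the page, and it assumes a File object from the same input):

function probeSlices(file) {
  const chunkSize = 1048576; // 1MB, same as above
  let written = 0;
  let remaining = file.size;
  while (remaining > 0) {
    const readSize = chunkSize > remaining ? remaining : chunkSize;
    // Same arguments as the loop above. Note that Blob.slice takes
    // (start, end) byte offsets; blob.size shows how many bytes each
    // call actually produces.
    const blob = file.slice(written, readSize);
    console.log("slice(" + written + ", " + readSize + ") -> " + blob.size + " bytes");
    written += readSize;
    remaining -= readSize;
  }
}

On any file larger than 1MB, every slice after the first logs 0 bytes.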
Am I reading the file the wrong way? My intent is to read a (potentially huge) file chunk by chunk and do something with each chunk (in this case, just write it to the downloaded file via StreamSaver.js). What better approach is available in today's browsers?
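For comparison, here is a minimal sketch of the chunked pattern in current browsers that implement Blob.stream(), assuming the same streamSaver global set up by the scripts above (copyFileByChunks is an illustrative name, not from the original code):

async function copyFileByChunks() {
  const file = document.getElementById("fileInput").files[0];
  if (!file) return;
  const writer = streamSaver.createWriteStream(file.name + " - Copy").getWriter();
  // Blob.stream() returns a ReadableStream over the file's bytes;
  // each read() resolves with the next chunk as a Uint8Array.
  const reader = file.stream().getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // ...do something with each chunk here...
    await writer.write(value);
  }
  await writer.close();
}

When no per-chunk processing is needed, file.stream().pipeTo(writeStream) gives the same result with less bookkeeping.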
Source: https://stackoverflow.com/questions/62346764/file-slice-fails-second-time