How to set InputStream content length

Submitted by 。_饼干妹妹 on 2020-08-01 09:24:11

Question


I am uploading files to an Amazon S3 bucket. The files are uploaded successfully, but I get the following warning:

WARNING: No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.

So I added the following line to my code

metaData.setContentLength(IOUtils.toByteArray(input).length);

but then I got the following message. I don't even know whether it is a warning or an error.

Data read has a different length than the expected: dataLength=0; expectedLength=111992; includeSkipped=false; in.getClass()=class sun.net.httpserver.FixedLengthInputStream; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0

How can I set the contentLength on the metadata of an InputStream? Any help would be greatly appreciated.


Answer 1:


When you read the data with IOUtils.toByteArray, this consumes the InputStream. When the AWS API then tries to read it, the stream is already exhausted, so zero bytes are read.
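This stream-consumption behavior can be demonstrated with only standard Java; the class name below is made up for the demo:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ConsumeDemo {
    public static void main(String[] args) throws Exception {
        InputStream input = new ByteArrayInputStream("hello".getBytes());

        byte[] first = input.readAllBytes();  // reads all 5 bytes, consuming the stream
        byte[] second = input.readAllBytes(); // nothing left to read: returns an empty array

        System.out.println(first.length + " " + second.length); // prints "5 0"
    }
}
```

Passing `input` to the S3 client after calling IOUtils.toByteArray on it is exactly the second read above.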

Read the contents into a byte array and provide an InputStream wrapping that array to the API:

byte[] bytes = IOUtils.toByteArray(input);
metadata.setContentLength(bytes.length);
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, byteArrayInputStream, metadata);
client.putObject(putObjectRequest);

You should consider using the multipart upload API to avoid loading the whole InputStream into memory. For example:

// Note: S3 rejects non-final parts smaller than 5 MB, so BUFFER_SIZE must be at least that
byte[] bytes = new byte[BUFFER_SIZE];
String uploadId = client.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

int partNumber = 1;
List<UploadPartResult> results = new ArrayList<>();
int bytesRead;
while ((bytesRead = input.read(bytes)) != -1) {
    UploadPartRequest part = new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withInputStream(new ByteArrayInputStream(bytes, 0, bytesRead))
        .withPartSize(bytesRead);
    results.add(client.uploadPart(part));
    partNumber++;
}
CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartETags(results);
client.completeMultipartUpload(completeRequest);
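The fixed-size chunking pattern in the loop above can be exercised without S3 at all. This standalone sketch (class name and sizes are made up) reads a stream in BUFFER_SIZE chunks the same way, collecting the part sizes instead of uploading them:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    static final int BUFFER_SIZE = 4; // toy size; a real S3 part must be >= 5 MB except the last

    public static void main(String[] args) throws Exception {
        InputStream input = new ByteArrayInputStream(new byte[10]);
        byte[] bytes = new byte[BUFFER_SIZE];
        List<Integer> partSizes = new ArrayList<>();

        int bytesRead;
        while ((bytesRead = input.read(bytes)) != -1) {
            // in the real upload, each chunk becomes one UploadPartRequest
            partSizes.add(bytesRead);
        }

        System.out.println(partSizes); // [4, 4, 2]
    }
}
```

Only one BUFFER_SIZE chunk is in memory at a time, which is the whole point of the multipart approach.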



Answer 2:


Note that by buffering the stream into a byte array yourself, you are simply doing manually what the AWS SDK already did for you automatically: the entire stream still ends up in memory, so this is no better than the original code that triggered the SDK's warning.

You can only avoid the memory problem if you have another way of knowing the length of the stream, for instance when you create the stream from a file:

void uploadFile(String bucketName, File file) throws IOException {
    try (final InputStream stream = new FileInputStream(file)) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(file.length());
        s3client.putObject(
                new PutObjectRequest(bucketName, file.getName(), stream, metadata)
        );
    }
}



Answer 3:


Breaking News! AWS SDK 2.0 has built-in support for uploading files:

        s3client.putObject(
                (builder) -> builder.bucket(myBucket).key(file.getName()),
                RequestBody.fromFile(file)
        );

There are also RequestBody factory methods that take Strings or byte buffers and set the Content-Length automatically and efficiently. Only when you have some other kind of InputStream do you still need to provide the length yourself; with all the other options available, that case should now be rare.
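For that remaining case, SDK 2.x provides RequestBody.fromInputStream(stream, contentLength). A minimal sketch, with the actual putObject call left as a comment so the snippet runs without AWS credentials (bucket and key names would be your own):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class LengthDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical temp file standing in for the upload source
        Path tmp = Files.createTempFile("upload", ".bin");
        Files.write(tmp, new byte[1234]);

        long contentLength = Files.size(tmp); // known up front, so no buffering is needed
        try (InputStream stream = Files.newInputStream(tmp)) {
            // With the SDK 2.x client you would pass the stream plus its length:
            // s3client.putObject(b -> b.bucket("my-bucket").key("my-key"),
            //         RequestBody.fromInputStream(stream, contentLength));
            System.out.println(contentLength); // prints "1234"
        }
        Files.delete(tmp);
    }
}
```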



Source: https://stackoverflow.com/questions/36201759/how-to-set-inputstream-content-length
