How do I client-side upload a viewable file to Amazon S3?

别那么骄傲 2020-12-13 15:20

Let me start off by saying that I am normally very reluctant to post this question as I always feel that there's an answer to everything SOMEWHERE on the internet. After sp

7 Answers
  • 2020-12-13 16:00

    Update

    I have bad news. According to the release notes of SDK 2.1.6 at http://aws.amazon.com/releasenotes/1473534964062833:

    "The SDK will now throw an error if ContentLength is passed into an 
    Amazon S3 presigned URL (AWS.S3.getSignedUrl()). Passing a 
    ContentLength is not supported by the SDK, since it is not enforced on 
    S3's side given the way the SDK is currently generating these URLs. 
    See GitHub issue #457."
    

    I have found that on some occasions ContentLength must be included (specifically if your client passes it, so the signatures will match), while on other occasions getSignedUrl will complain if you include ContentLength, with a parameter error: "contentlength is not supported in presigned urls". I noticed that the behavior would change when I changed the machine that was making the call. Presumably the other machine connected to a different Amazon server in the farm.

    I can only guess why the behavior exists in some cases but not in others. Perhaps not all of Amazon's servers have been fully upgraded? In either case, to handle this problem, I now make an attempt using ContentLength, and if it gives me the parameter error I call getSignedUrl again without it. This is a work-around for this strange SDK behavior.

    A little example... not very pretty to look at but you get the idea:

    MediaBucketManager.getPutSignedUrl = function ( params, next ) {
        var _self = this;
        _self._s3.getSignedUrl('putObject', params, function ( error, data ) {
            if (error) {
                console.log("An error occurred retrieving a signed url for putObject", error);
                // TODO: build contextual error
                if (error.code == "UnexpectedParameter" && error.message.search("ContentLength") > -1) {
                    if (params.ContentLength) delete params.ContentLength;
                    // retry with the same (params, next) signature, minus ContentLength
                    MediaBucketManager.getPutSignedUrl(params, function ( error, data ) {
                        if (error) {
                            console.log("An error occurred retrieving a signed url for putObject", error);
                            return next(error);
                        } else {
                            console.log("Retrieved a signed url for putObject:", data);
                            return next(null, data);
                        }
                    });
                } else {
                    return next(error);
                }
            } else {
                console.log("Retrieved a signed url for putObject:", data);
                return next(null, data);
            }
        });
    };
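    To see the fallback behave end-to-end without calling AWS, the function can be exercised against a stub getSignedUrl. The stub below is purely illustrative (it mimics the inconsistent server behavior described above), and the retry logic is a condensed version of the example:

```javascript
// A stub that mimics the inconsistent server behavior described above: it
// rejects any request whose params include ContentLength, as some endpoints did.
var fakeS3 = {
  getSignedUrl: function (operation, params, callback) {
    if (params.ContentLength !== undefined) {
      callback({
        code: "UnexpectedParameter",
        message: "ContentLength is not supported in pre-signed URLs."
      });
    } else {
      callback(null, "https://example-bucket.s3.amazonaws.com/key?signed");
    }
  }
};

// Condensed version of the retry logic above, wired to the stub.
var MediaBucketManager = { _s3: fakeS3 };
MediaBucketManager.getPutSignedUrl = function (params, next) {
  this._s3.getSignedUrl('putObject', params, function (error, data) {
    if (error) {
      if (error.code == "UnexpectedParameter" && error.message.search("ContentLength") > -1) {
        delete params.ContentLength;
        MediaBucketManager.getPutSignedUrl(params, next); // retry without ContentLength
      } else {
        return next(error);
      }
    } else {
      return next(null, data);
    }
  });
};

// First attempt includes ContentLength, gets rejected, and the retry succeeds.
var result;
MediaBucketManager.getPutSignedUrl(
  { Bucket: "example-bucket", Key: "uploads/img.png", ContentLength: 1024 },
  function (error, data) { result = data; }
);
```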
    

    So, the old answer below is not entirely correct (it will work in some cases but give you the parameter error in others), but it might help you get started.

    Old Answer

    It seems that (for a signed URL to PUT a file to S3 with only a public-read ACL) there are a few headers that will be compared when a PUT request is made to S3. They are compared against what was passed to getSignedUrl:

    CacheControl: 'STRING_VALUE',
    ContentDisposition: 'STRING_VALUE',
    ContentEncoding: 'STRING_VALUE',
    ContentLanguage: 'STRING_VALUE',
    ContentLength: 0,
    ContentMD5: 'STRING_VALUE',
    ContentType: 'STRING_VALUE',
    Expires: new Date || 'Wed De...'
    

    see the full list here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

    When you call getSignedUrl you pass a 'params' object (fairly clear in the documentation) that includes the Bucket, Key, and Expires data. Here is a (Node.js) example:

    var params = { Bucket:bucket, Key:key, Expires:expires };
    s3.getSignedUrl('putObject', params, function ( error, data ) {
        if (error) {
            // handle error
        } else {
            // handle data
        }
    });
    

    Less clear is setting the ACL to 'public-read':

    var params = { Bucket:bucket, Key:key, Expires:expires, ACL:'public-read' };
    

    Much more obscure is the notion of passing headers that you expect the client, using the signed URL, to pass along with the PUT operation to S3:

    var params = {
        Bucket:bucket,
        Key:key,
        Expires:expires,
        ACL:'public-read',
        ContentType:'image/png',
        ContentLength:7469
    };
    

    In my example above, I have included ContentType and ContentLength because those two headers are included when using XMLHttpRequest in JavaScript, and in the case of Content-Length cannot be changed. I suspect the same is true for other HTTP clients, like cURL, because these headers are required when submitting HTTP requests that include a body of data.

    If the client does not include the ContentType and ContentLength of the file when requesting a signed URL, then when it comes time to PUT the file to S3 (with that signed URL) the S3 service will find those headers in the client's request (because they are required headers) but the signature will not have included them - so the signatures will not match and the operation will fail.

    So, it appears that you will have to know, in advance of making your getSignedUrl call, the content type and content length of the file to be PUT to S3. This wasn't a problem for me because I exposed a REST endpoint to allow our clients to request a signed url just before making the PUT operation to S3. Since the client has access to the file to be submitted (at the moment they are ready to submit), it was a trivial operation for the client to access the file size and type and request a signed url with that data from my endpoint.
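    Since the client already has the file in hand when it asks for the signed URL, collecting those two values is trivial. A minimal sketch (the helper name and bucket/key values here are hypothetical, not from the original code):

```javascript
// Collect the two values S3 will check against the signature before
// requesting a signed URL. The helper name and bucket/key are hypothetical.
function signedUrlRequestParams(file, bucket, key) {
  return {
    Bucket: bucket,
    Key: key,
    ContentType: file.type,   // e.g. "image/png"
    ContentLength: file.size  // exact byte length the PUT will carry
  };
}

// Example with an in-memory Blob standing in for a user-selected File
// (Blob is global in browsers and in Node 18+):
var blob = new Blob(["hello world"], { type: "text/plain" });
var params = signedUrlRequestParams(blob, "example-bucket", "uploads/hello.txt");
```

    These params can then be sent to your own REST endpoint, which passes them on to getSignedUrl.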

  • 2020-12-13 16:01

    As per @Reinsbrain's request, this is the Node.js version of implementing client-side uploads to the server with "public-read" rights.

    BACKEND (NODE.JS)

    var AWS = require('aws-sdk');
    var AWS_ACCESS_KEY_ID = process.env.S3_ACCESS_KEY;
    var AWS_SECRET_ACCESS_KEY = process.env.S3_SECRET;
    AWS.config.update({accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY});
    var s3 = new AWS.S3();
    var moment = require('moment');
    var S3_BUCKET = process.env.S3_BUCKET;
    var crypto = require('crypto');
    var POLICY_EXPIRATION_TIME = 10; // policy expires in 10 minutes
    var S3_DOMAIN = process.env.S3_DOMAIN;
    
    exports.writePolicy = function (filePath, contentType, maxSize, redirect, callback) {
      var readType = "public-read";
    
      var expiration = moment().add(POLICY_EXPIRATION_TIME, 'm'); // OPTIONAL: only if you don't want the default 15 minute expiry
    
      var s3Policy = {
        "expiration": expiration,
        "conditions": [
          ["starts-with", "$key", filePath],
          {"bucket": S3_BUCKET},
          {"acl": readType},
          ["content-length-range", 2048, maxSize], //min 2kB to maxSize
          {"redirect": redirect},
          ["starts-with", "$Content-Type", contentType]
        ]
      };
    
      // stringify and encode the policy
      var stringPolicy = JSON.stringify(s3Policy);
      var base64Policy = Buffer.from(stringPolicy, "utf-8").toString("base64");
    
      // sign the base64 encoded policy
      var testbuffer = Buffer.from(base64Policy, "utf-8");
    
      var signature = crypto.createHmac("sha1", AWS_SECRET_ACCESS_KEY)
        .update(testbuffer).digest("base64");
    
      // build the results object to send to calling function
      var credentials = {
        url: S3_DOMAIN,
        key: filePath,
        AWSAccessKeyId: AWS_ACCESS_KEY_ID,
        acl: readType,
        policy: base64Policy,
        signature: signature,
        redirect: redirect,
        content_type: contentType,
        expiration: expiration
      };
    
      callback(null, credentials);
    };
    

    FRONTEND (assuming the values from the server are in input fields, and that you're submitting images via a form submission, i.e. POST, since I couldn't get PUT to work):

    function dataURItoBlob(dataURI, contentType) {
      var binary = atob(dataURI.split(',')[1]);
      var array = [];
      for(var i = 0; i < binary.length; i++) {
        array.push(binary.charCodeAt(i));
      }
      return new Blob([new Uint8Array(array)], {type: contentType});
    }
    
    function submitS3(callback) {
      var base64Data = $("#file").val();//your file to upload e.g. img.toDataURL("image/jpeg")
      var contentType = $("#contentType").val();
      var xmlhttp = new XMLHttpRequest();
      var blobData = dataURItoBlob(base64Data, contentType);
    
      var fd = new FormData();
      fd.append('key', $("#key").val());
      fd.append('acl', $("#acl").val());
      fd.append('Content-Type', contentType);
      fd.append('AWSAccessKeyId', $("#accessKeyId").val());
      fd.append('policy', $("#policy").val());
      fd.append('signature', $("#signature").val());
      fd.append("redirect", $("#redirect").val());
      fd.append("file", blobData);
    
      xmlhttp.onreadystatechange = function () {
        if (xmlhttp.readyState == 4) {
          // do whatever you want on completion
          callback();
        }
      };
      var someBucket = "your_bucket_name";
      var S3_DOMAIN = "https://" + someBucket + ".s3.amazonaws.com/";
      xmlhttp.open('POST', S3_DOMAIN, true);
      xmlhttp.send(fd);
    }
    

    Note: I was uploading more than one image per submission, so I added multiple iframes (each with the FRONTEND code above) to do simultaneous multi-image uploads.
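    The dataURItoBlob helper above can also be sanity-checked outside the browser, since recent Node versions (18+) expose atob and Blob as globals:

```javascript
// Same conversion as the dataURItoBlob helper in the FRONTEND code above.
function dataURItoBlob(dataURI, contentType) {
  var binary = atob(dataURI.split(',')[1]);
  var array = [];
  for (var i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  return new Blob([new Uint8Array(array)], { type: contentType });
}

// "aGVsbG8=" is "hello" base64-encoded, so the resulting blob is 5 bytes.
var blob = dataURItoBlob("data:text/plain;base64,aGVsbG8=", "text/plain");
```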

  • 2020-12-13 16:07

    step 1: set the S3 policy:

    {
        "expiration": "2040-01-01T00:00:00Z",
        "conditions": [
                        {"bucket": "S3_BUCKET_NAME"},
                        ["starts-with","$key",""],
                        {"acl": "public-read"},
                        ["starts-with","$Content-Type",""],
                        ["content-length-range",0,524288000]
                      ]
    }
    

    step 2: prepare the AWS keys, policy, and signature; in this example they are all stored in an s3_tokens dictionary.

    The trick here is in the policy and signature. 1) Save the step-1 policy and dump it to a JSON file. 2) Base64-encode the JSON (s3_policy_json):

    #python
    # note: in Python 3, b64encode expects bytes, so encode first if s3_policy_json is a str
    policy = base64.b64encode(s3_policy_json.encode("utf-8"))
    

    signature:

    #python
    # Python 3: hmac.new requires a bytes key (policy is already bytes from b64encode)
    s3_tokens_dict['signature'] = base64.b64encode(hmac.new(AWS_SECRET_ACCESS_KEY.encode("utf-8"), policy, hashlib.sha1).digest())
    

    step 3: from your js

    $scope.upload_file = function(file_to_upload,is_video) {
        var file = file_to_upload;
        var key = $scope.get_file_key(file.name,is_video);
        var filepath = null;
        if ($scope.s3_tokens['use_s3'] == 1){
           var fd = new FormData();
           fd.append('key', key);
           fd.append('acl', 'public-read'); 
           fd.append('Content-Type', file.type);      
           fd.append('AWSAccessKeyId', $scope.s3_tokens['aws_key_id']);
           fd.append('policy', $scope.s3_tokens['policy']);
           fd.append('signature',$scope.s3_tokens['signature']);
           fd.append("file",file);
           var xhr = new XMLHttpRequest();
           var target_url = 'http://s3.amazonaws.com/<bucket>/';
           target_url = target_url.replace('<bucket>',$scope.s3_tokens['bucket_name']);
           xhr.open('POST', target_url, false); // third argument false = synchronous request; MUST BE LAST LINE BEFORE YOU SEND
           var res = xhr.send(fd);
           filepath = target_url.concat(key);
        }
        return filepath;
    };
    
  • 2020-12-13 16:09

    It sounds like you don't really need a signed URL, just for your uploads to be publicly viewable. If that's the case, go to the AWS console, choose the bucket you want to configure, and click on Permissions. Then click the button that says 'Add bucket policy' and input the following rule:

    {
        "Version": "2008-10-17",
        "Id": "http referer policy example",
        "Statement": [
            {
                "Sid": "readonly policy",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::BUCKETNAME/*"
            }
        ]
    }
    

    where BUCKETNAME should be replaced with your own bucket's name. The contents of that bucket will now be readable by anyone who has a direct link to a specific file.

  • 2020-12-13 16:13

    Are you using the official AWS Node.js SDK? http://aws.amazon.com/sdkfornodejs/

    Here's how I'm using it...

     var data = {
            Bucket: "bucket-xyz",
            Key: "uploads/" + filename,
            Body: buffer,
            ACL: "public-read",
            ContentType: mime.lookup(filename) // from the "mime" npm package
        };
     s3.putObject(data, callback);
    

    And my uploaded files are publicly readable. Hope it helps.

  • 2020-12-13 16:25

    Could you just upload using your PUT pre-signed URL without worrying about permissions, then immediately create another pre-signed URL with a GET method and a very long expiration, and provide that to the viewing public? (Note that expiration can't actually be infinite: with Signature Version 4, a pre-signed URL is valid for at most 7 days.)
