Getting 403 (Forbidden) when uploading to S3 with a signed URL

礼貌的吻别 2021-02-07 03:30

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:

let s3 = new          


        
6 Answers
  • 2021-02-07 03:37

    Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:

    var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
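
    For completeness, a minimal sketch of the matching browser request, assuming axios is used on the client (signedUrl, file and fileType are placeholders for your own variables):

    // (inside an async function) send the PUT with exactly the signed headers;
    // both headers must match what was signed or S3 responds with 403.
    const options = {
        headers: {
            'Content-Type': fileType,
            'x-amz-acl': 'public-read'
        }
    };
    await axios.put(signedUrl, file, options);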
    
  • 2021-02-07 03:44

    If you're trying to use an ACL, make sure that your Lambda's IAM role has the s3:PutObjectAcl permission for the given bucket, and also that your bucket policy allows s3:PutObjectAcl for the uploading principal (the user/role/account that is uploading).
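
    Purely as an illustration, an IAM policy statement for the signing/uploading role might look roughly like this (the bucket name is a placeholder and your real policy will differ):

    {
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }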

    This is what fixed it for me after double checking all my headers and everything else.

    Inspired by this answer https://stackoverflow.com/a/53542531/2759427

  • 2021-02-07 03:49

    Receiving a 403 Forbidden error for a pre-signed S3 PUT upload can also happen for a couple of reasons that are not immediately obvious:

    1. It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.

    2. It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.

    In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.

    For example, generating a pre-signed url from your api:

    const AWS = require('aws-sdk')
    const uuid = require('uuid/v4')
    
    async function getSignedUrl(contentType) {
        const s3 = new AWS.S3({
            accessKeyId: process.env.AWS_KEY,
            secretAccessKey: process.env.AWS_SECRET_KEY
        })
        const signedUrl = await s3.getSignedUrlPromise('putObject', {
            Bucket: 'mybucket',
            Key: `uploads/${uuid()}`,
            ContentType: contentType
        })
    
        return signedUrl
    }
    

    And then sending an upload request from the browser:

    import Uppy from '@uppy/core'
    import AwsS3 from '@uppy/aws-s3'
    
    this.uppy = Uppy({
        restrictions: {
            allowedFileTypes: ['image/*'],
            maxFileSize: 5242880, // 5 Megabytes
            maxNumberOfFiles: 5
        }
    }).use(AwsS3, {
        // getUploadParameters may return a promise, so it can simply be async.
        // getSignedUrl here stands for a call to your own API endpoint
        // (the one shown above) that returns the pre-signed URL.
        async getUploadParameters(file) {
            const signedUrl = await getSignedUrl(file.type)
            return {
                method: 'PUT',
                url: signedUrl
            }
        }
    })
    

    For further reference, also see these two Stack Overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type

  • 2021-02-07 03:51

    Had the same issue; here is how to solve it:

    1. Extract the filename portion of the signed URL. Print it to verify that you are extracting the filename portion, with its query-string parameters, correctly. This is critical.
    2. URI-encode that filename portion, query-string parameters included (see the sketch after this list).
    3. Return the URL, with the encoded filename and the rest of the path, from your Lambda or from your Node service.
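
    A rough sketch of steps 1–2 (fileName, baseUrl and signedQuery are hypothetical names standing in for the pieces you extracted from the signed URL):

    // Hypothetical example: URI-encode only the filename segment,
    // then rebuild the URL around the signed query string.
    const encodedFileName = encodeURIComponent(fileName)
    const uploadUrl = `${baseUrl}/${encodedFileName}?${signedQuery}`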

    Now send the request from axios with that URL, and it will work.

    EDIT 1: Your signature will also be invalid if you pass in the wrong content type.

    Please ensure that the content type you use when creating the pre-signed URL is the same as the one you send with the PUT.

    Hope it helps.

  • 2021-02-07 03:52

    1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:

    var s3 = new AWS.S3({
      signatureVersion: 'v4'
    });
    

    2) Do not add new headers or modify existing headers. The request must be exactly as signed.

    3) Make sure that the url generated matches what is being sent to AWS.

    4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:

      ContentType: req.body.fileType,
      ACL: 'public-read'
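
    In other words, a stripped-down signing call for the test might look roughly like this (bucket and key are placeholders; your real call will have more options):

    // Temporary test: sign with nothing but bucket, key and expiry,
    // and send the PUT with no extra headers.
    const url = s3.getSignedUrl('putObject', {
      Bucket: 'mybucket',
      Key: 'some-key',
      Expires: 300
    });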
    
  • 2021-02-07 03:59

    This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:

    const s3 = new AWS.S3({
      region: region,
      accessKeyId: process.env.AWS_ACCESS_KEY,
      secretAccessKey: process.env.AWS_SECRET_KEY,
    })
    

    The fix was simply to add signatureVersion: 'v4'.

    const s3 = new AWS.S3({
      signatureVersion: 'v4',
      region: region,
      accessKeyId: process.env.AWS_ACCESS_KEY,
      secretAccessKey: process.env.AWS_SECRET_KEY,
    })
    

    Why? I'm not entirely sure, but newer AWS regions only accept Signature Version 4, while buckets in older regions also accept the legacy v2 signature, so the SDK's default can work for an old bucket and fail for a new one.
