Question
I am using a bucket policy that denies any non-SSL communications and UnEncryptedObjectUploads.
{
  "Id": "Policy1361300844915",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnSecureCommunications",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": false
        }
      },
      "Principal": {
        "AWS": "*"
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Action": "s3:PutObject",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": {
        "AWS": "*"
      }
    }
  ]
}
This policy works for applications that support SSL and SSE settings, but only for the objects being uploaded.
I ran into these issues:
- CloudBerry Explorer and S3 Browser failed to RENAME folders and files in the bucket with that bucket policy. After I applied only the SSL requirement in the bucket policy, those browsers completed file/folder renaming successfully.
CloudBerry Explorer was able to RENAME objects under the full SSL/SSE bucket policy only after I enabled, in Options – Amazon S3, Copy/Move through the local computer (which is slower and costs money).
All copy/move operations inside Amazon S3 failed because of that restrictive policy.
That means we cannot control a copy/move process that does not originate from the application manipulating local objects; at least the CloudBerry option mentioned above proved that.
But I might be wrong, which is why I am posting this question.
- In my case, with that bucket policy enabled, the S3 Management Console becomes useless: users cannot create folders or delete them; all they can do is upload files.
Is there something wrong with my bucket policy? I do not know which mechanisms Amazon S3 uses internally to manipulate objects.
Does Amazon S3 treat external requests (API/HTTP headers) and internal requests differently?
Is it possible to apply this policy only to uploads and not to Amazon S3's internal GET/PUT operations? I have tried an http referer condition with the bucket URL, to no avail.
The bucket policy with SSL/SSE requirements is mandatory for my implementation.
Any ideas would be appreciated.
Thank you in advance.
Answer 1:
IMHO there is no way to automatically tell Amazon S3 to turn on SSE for every PUT request. So here is what I would investigate:
- write a script that lists your bucket
- for each object, get the metadata
- if SSE is not enabled, use the PUT COPY API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE: "(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"
- if the COPY operation succeeded, use the DELETE Object API to delete the original object
Then run that script on an hourly or daily basis, depending on your business requirements. You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.
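A minimal sketch of the steps above, assuming boto3 (the successor of the boto library linked above) and the `my-bucket` name from the question; `needs_sse` and `encrypt_existing_objects` are hypothetical helper names. Note that copying an object onto its own key with a new SSE setting replaces it in place, so a separate DELETE pass is not needed in that case:

```python
BUCKET = "my-bucket"  # bucket name from the question

def needs_sse(head_response):
    """True if a HEAD Object response carries no AES256 SSE header."""
    return head_response.get("ServerSideEncryption") != "AES256"

def encrypt_existing_objects(s3, bucket=BUCKET):
    """List the bucket and re-copy every unencrypted object onto itself with SSE."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if needs_sse(s3.head_object(Bucket=bucket, Key=key)):
                # PUT COPY onto the same key with the SSE header set;
                # object metadata is preserved by default.
                s3.copy_object(
                    Bucket=bucket,
                    Key=key,
                    CopySource={"Bucket": bucket, "Key": key},
                    ServerSideEncryption="AES256",
                )

if __name__ == "__main__":
    import boto3  # imported lazily so the helpers above are testable offline
    encrypt_existing_objects(boto3.client("s3"))
```

You could schedule this with cron at whatever hourly or daily cadence your requirements dictate.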
If this "change-after-write" solution is not valid for you business-wise, you can work at a different level:
- use a proxy between your API client and the S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header to every PUT/POST request. Developers must go through the proxy and must not be authorised to issue requests against the S3 API endpoints directly.
- write a wrapper library that adds the SSE metadata automatically, and oblige developers to use your library on top of the SDK.
The latter two are a matter of discipline in the organisation, as they are not easy to enforce at a technical level.
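A sketch of the wrapper-library idea, assuming boto3-style keyword arguments; `with_sse` and `put_object_encrypted` are hypothetical names, not part of any SDK:

```python
def with_sse(put_kwargs, algorithm="AES256"):
    """Return a copy of PutObject keyword arguments with SSE enforced.

    A caller's explicit ServerSideEncryption value is kept; the header is
    only filled in when missing, which is what the bucket policy checks.
    """
    kwargs = dict(put_kwargs)
    kwargs.setdefault("ServerSideEncryption", algorithm)
    return kwargs

def put_object_encrypted(s3_client, **kwargs):
    """Drop-in replacement for s3_client.put_object that always sets SSE."""
    return s3_client.put_object(**with_sse(kwargs))
```

Developers would then call `put_object_encrypted(client, Bucket=..., Key=..., Body=...)` instead of the raw SDK method, so the x-amz-server-side-encryption header is always present on uploads.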
Seb
Source: https://stackoverflow.com/questions/17262370/amazon-s3-server-side-encryption-bucket-policy-problems