s3cmd

How to delete or purge old files on S3?

醉酒当歌 submitted on 2019-11-30 01:09:22
Question: Are there existing solutions to delete any files older than x days?

Answer (Ravi Bhatt): Amazon recently introduced object expiration. From the announcement "Amazon S3 Announces Object Expiration": Amazon S3 announced a new feature, Object Expiration, that allows you to schedule the deletion of your objects after a pre-defined time period. Using Object Expiration to schedule periodic removal of objects eliminates the need for you to identify objects for deletion and submit delete requests to Amazon S3. You can define Object Expiration rules for a set of objects in your bucket. Each Object Expiration rule allows you to …
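
As a concrete illustration of the feature described above, recent s3cmd releases expose lifecycle rules directly through an expire command. A minimal sketch, assuming a hypothetical bucket mybucket and a logs/ prefix:

    # Delete objects under logs/ 30 days after they were created
    s3cmd expire s3://mybucket --expiry-days=30 --expiry-prefix=logs/

    # Inspect the lifecycle rule now stored on the bucket
    s3cmd getlifecycle s3://mybucket

The rule runs entirely server-side, so no client ever has to enumerate or delete the old objects itself.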

Difference between s3cmd, boto and AWS CLI

风流意气都作罢 submitted on 2019-11-29 15:53:51
Question: I am thinking about redeploying my static website to Amazon S3. I need to automate the deployment, so I was looking for an API for such tasks. I'm a bit confused over the different options. What is the difference between s3cmd, the Python library boto, and the AWS CLI?

Answer 1: s3cmd and the AWS CLI are both command-line tools. They're well suited if you want to script your deployment through shell scripting (e.g. bash). The AWS CLI gives you simple file-copying abilities through the "s3" command, …
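
For a static-site deployment specifically, the two command-line tools end up nearly interchangeable. A hedged sketch, assuming a hypothetical local build directory ./public and bucket example-bucket:

    # s3cmd: recursive upload, also removing remote files deleted locally
    s3cmd sync --delete-removed ./public/ s3://example-bucket/

    # AWS CLI equivalent via the high-level "s3" command
    aws s3 sync ./public/ s3://example-bucket/ --delete

boto (today boto3) is the underlying Python library instead: the right choice when the deployment logic lives inside a Python program rather than a shell script.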

Zip an entire directory on S3

依然范特西╮ submitted on 2019-11-29 14:03:23
Question: If I have a directory with ~5,000 small files on S3, is there a way to easily zip up the entire directory and leave the resulting zip file on S3? I need to do this without having to manually access each file myself. Thanks!

Answer 1: No, there is no magic bullet. (As an aside, you have to realize that there is no such thing as a "directory" in S3. There are only objects with paths. You can get directory-like listings, but the '/' character isn't magic: you can get prefixes by any character you …
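
In practice the workaround is the one the answer implies: copy the objects down, zip locally, and upload the archive. A sketch, assuming a hypothetical reports/ prefix in example-bucket:

    # Pull the prefix down, zip it, push the archive back
    s3cmd sync s3://example-bucket/reports/ ./reports/
    zip -r reports.zip reports/
    s3cmd put reports.zip s3://example-bucket/archives/reports.zip

Every byte still passes through the machine running the commands; S3 offers no server-side zip operation.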

Creating a folder via s3cmd (Amazon S3)

社会主义新天地 submitted on 2019-11-29 05:36:16
Question: I am using s3cmd to upload files to my S3 server. My problem is that when a directory on the server does not exist, the upload fails. How can I tell s3cmd to create the folder if it does not exist? I am using PHP.

Answer 1: I believe you should try something like s3cmd put file.jpg s3://bucket/folder/file.jpg. S3 doesn't have the concept of directories; the whole folder/file.jpg is the file name. If, using a GUI tool or similar, you delete the file.jpg from inside the folder, you will most …
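
A small sketch of that behavior, with a hypothetical example-bucket; note there is no mkdir-style step because none is needed:

    # The "folder" springs into existence with the object itself
    s3cmd put file.jpg s3://example-bucket/new/nested/folder/file.jpg

    # Directory-like listings are derived from key prefixes on the fly
    s3cmd ls s3://example-bucket/new/nested/

Delete the last object under a prefix and the "folder" disappears along with it.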

S3 moving files between buckets on different accounts?

风格不统一 submitted on 2019-11-28 17:04:22
Question: I'm doing some work for a client that has two separate AWS accounts. We need to move all the files in a bucket on one of their S3 accounts to a new bucket on the second account. We thought s3cmd would allow this, using the format: s3cmd cp s3://bucket1 s3://bucket2 --recursive However, this only lets me use the keys of one account; I can't specify the credentials of the second account. Is there a way to do this without downloading the files and uploading them again to the second account?

Answer (Robs): You don't have to open permissions to everyone. Use the bucket policies below on the source and destination …
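
One common shape for that setup, sketched with the AWS CLI and hypothetical names (source-bucket, dest-bucket; DEST_ACCOUNT_ID is a placeholder): the source account grants the destination account read access, then the copy runs under the destination account's credentials so the new objects are owned by the destination account.

    # Run with source-account credentials: let the destination account read
    aws s3api put-bucket-policy --bucket source-bucket --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::DEST_ACCOUNT_ID:root"},
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::source-bucket",
                     "arn:aws:s3:::source-bucket/*"]
      }]
    }'

    # Run with destination-account credentials: server-side bucket-to-bucket copy
    aws s3 sync s3://source-bucket s3://dest-bucket

The sync happens inside S3, so nothing is downloaded to the machine issuing the command.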

How to add cache control in AWS S3?

喜欢而已 submitted on 2019-11-27 06:29:20
Question: I have moved 20,000 files to AWS S3 with the s3cmd command. Now I want to add Cache-Control headers for all images (.jpg). These files are located under s3://bucket-name/images/. How can I add Cache-Control for all images with s3cmd, or is there another way to add the header? Thanks.

Answer (user3440362): Please try the current upstream master branch (https://github.com/s3tools/s3cmd), as it now has a modify command, used as follows: ./s3cmd --recursive modify --add-header="Cache-Control:max-age=86400" s3://yourbucket/

Answer 2: Also possible with AWS's own client (sync is recursive by default): aws s3 sync /path s3://yourbucket/ --cache-control max-age…
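
To scope this to just the images and then confirm the header stuck, a sketch using the bucket layout from the question (the .jpg filename is hypothetical):

    # Rewrite the header in place; modify copies each object onto itself
    s3cmd --recursive modify --add-header="Cache-Control:max-age=86400" s3://bucket-name/images/

    # Spot-check one object's stored headers
    s3cmd info s3://bucket-name/images/some-photo.jpg

Because modify re-copies objects, be aware that it also refreshes their Last-Modified timestamps.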

What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB?

馋奶兔 submitted on 2019-11-26 06:53:11
Question: Files uploaded to Amazon S3 that are smaller than 5GB have an ETag that is simply the MD5 hash of the file, which makes it easy to check whether your local files are the same as what you put on S3. But if your file is larger than 5GB, Amazon computes the ETag differently. For example, I did a multipart upload of a 5,970,150,664-byte file in 380 parts. Now S3 shows it to have an ETag of 6bcf86bed8807b8e78f0fc6e0a53079d-380. My local file has an MD5 hash of 702242d3703818ddefe6bf7da2bed757. I think the number after the dash is the number of parts in the multipart upload. I also suspect that …
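
The commonly reported rule, which AWS has not officially documented, is: take the MD5 of each uploaded part, concatenate the raw binary digests, MD5 that concatenation, and append "-<part count>". A sketch assuming GNU coreutils plus xxd, a hypothetical local file bigfile.bin, and a 15 MB part size (s3cmd's default --multipart-chunk-size-mb; other tools differ, e.g. the AWS CLI defaults to 8 MB, so the part size must match whatever performed the upload):

    #!/usr/bin/env bash
    # Recompute a multipart ETag: md5 of the concatenated binary md5
    # digests of the parts, suffixed with "-<number of parts>".
    file="bigfile.bin"     # hypothetical local file
    part_size_mb=15        # must match the uploader's chunk size

    workdir=$(mktemp -d)
    split -b "${part_size_mb}m" "$file" "$workdir/part."

    digests=""
    count=0
    for part in "$workdir"/part.*; do
      digests+=$(md5sum "$part" | cut -d' ' -f1)   # hex digest of this part
      count=$((count + 1))
    done

    # Hex-decode the concatenated digests and md5 the raw bytes
    etag="$(printf '%s' "$digests" | xxd -r -p | md5sum | cut -d' ' -f1)-$count"
    echo "$etag"
    rm -r "$workdir"

If the printed value matches the ETag S3 reports, the local file and the uploaded object agree.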
