Unless I'm missing something, it seems that none of the APIs I've looked at will tell you how many objects are in an S3 bucket or folder. Is there any way to get a count?
If you're looking for specific files, let's say .jpg images, you can do the following:
aws s3 ls s3://your_bucket | grep jpg | wc -l
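Note that grep jpg matches anywhere in each listing line, not just the extension, and aws s3 ls without --recursive only covers the top level. A slightly tighter sketch, counting keys that end in .jpg anywhere in the bucket:
aws s3 ls s3://your_bucket --recursive | grep -c '\.jpg$'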
This can also be done with gsutil du (yes, a Google Cloud tool, which works against S3 as well):
gsutil du s3://mybucket/ | wc -l
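For gsutil to talk to S3 at all, your AWS credentials need to be in its boto config; a minimal sketch, assuming the default ~/.boto location and placeholder keys:
cat >> ~/.boto <<'EOF'
[Credentials]
aws_access_key_id = <YOUR_ACCESS_KEY>
aws_secret_access_key = <YOUR_SECRET_KEY>
EOF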
To count the objects, use AWS CloudWatch's metrics,
or:
aws s3api list-objects --bucket <BUCKET_NAME> --prefix "<FOLDER_NAME>" | wc -l
or:
aws s3 ls s3://<BUCKET_NAME>/<FOLDER_NAME>/ --recursive --summarize --human-readable | grep "Total Objects"
or with s4cmd:
s4cmd ls -r s3://<BUCKET_NAME>/<FOLDER_NAME>/ | wc -l
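Bear in mind that the wc -l variants above count lines of whatever output format your CLI happens to produce, which may not equal the number of objects. For an exact count you can let the s3api call do the arithmetic with a JMESPath query (a sketch; the CLI paginates through results automatically, but Contents is absent on an empty prefix, in which case the query fails):
aws s3api list-objects --bucket <BUCKET_NAME> --prefix "<FOLDER_NAME>" --query 'length(Contents[])' --output text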
To get the total size (this query returns the object count as well):
aws s3api list-objects --bucket <BUCKET_NAME> --output json --query "[sum(Contents[].Size), length(Contents[])]" | awk 'NR!=2 {print $0;next} NR==2 {print $0/1024/1024/1024" GB"}'
or:
aws s3 ls s3://<BUCKET_NAME>/<FOLDER_NAME>/ --recursive --summarize --human-readable | grep "Total Size"
or with s4cmd:
s4cmd du s3://<BUCKET_NAME>
or with CloudWatch metrics:
aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2020-10-20T16:00:00Z --end-time 2020-10-22T17:00:00Z --period 3600 --statistics Average --unit Bytes --dimensions Name=BucketName,Value=<BUCKET_NAME> Name=StorageType,Value=StandardStorage --output json | grep "Average"
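CloudWatch can also return the object count directly via the NumberOfObjects metric (note that S3 storage metrics are only reported about once a day; the dates below are placeholders, as above):
aws cloudwatch get-metric-statistics --metric-name NumberOfObjects --namespace AWS/S3 --start-time 2020-10-20T16:00:00Z --end-time 2020-10-22T17:00:00Z --period 86400 --statistics Average --dimensions Name=BucketName,Value=<BUCKET_NAME> Name=StorageType,Value=AllStorageTypes --output json | grep "Average"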
If you use the s3cmd command-line tool, you can get a recursive listing of a particular bucket and write it to a text file.
s3cmd ls -r s3://logs.mybucket/subfolder/ > listing.txt
Then on Linux you can run wc -l on the file to count the lines (one line per object).
wc -l listing.txt
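If you don't need to keep the listing, you can of course pipe it straight through:
s3cmd ls -r s3://logs.mybucket/subfolder/ | wc -l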
Old thread, but still relevant as I was looking for the answer until I just figured this out. I wanted a file count using a GUI-based tool (i.e. no code). I happen to already use a tool called 3Hub for drag & drop transfers to and from S3. I wanted to know how many files I had in a particular bucket (I don't think billing breaks it down by buckets).
So, using 3Hub,
- list the contents of the bucket (looks basically like a finder or explorer window)
- go to the bottom of the list, click 'show all'
- select all (ctrl+a)
- choose copy URLs from right-click menu
- paste the list into a text file (I use TextWrangler for Mac)
- look at the line count
I had 20521 files in the bucket and did the file count in less than a minute.
You can download and install S3 Browser from http://s3browser.com/. When you select a bucket, you can see the number of files it contains in the center-right corner. Note, however, that the size it shows is incorrect in the current version.