I am using the AWS CLI to list the files in an S3 bucket with the following command (documentation):
aws s3 ls s3://mybucket --recursive --human-readable --summarize
How can I get only the filenames (the object keys), without the date, time and size columns?
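For reference, each line of aws s3 ls --recursive output has the shape date, time, size, key. The block below just prints a fabricated sample of that shape (the keys and sizes are made up), so the answers that follow are easier to trace:

```shell
# Fabricated sample of what `aws s3 ls s3://mybucket --recursive` prints;
# the goal of every answer below is to keep only the key (last column)
cat <<'EOF'
2013-09-02 21:32:57         23 foo/bar/.baz/a
2013-09-02 21:32:58       1024 foo/bar/.baz/b
EOF
```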
Simple Way
aws s3 ls s3://mybucket --recursive --human-readable --summarize|cut -c 29-
A simple filter would be:
aws s3 ls s3://mybucket --recursive | perl -pe 's/^(?:\S+\s+){3}//'
This removes the date, time and size, leaving only the full path of the file. It also works without --recursive, and it should work with filenames containing spaces.
An S3 bucket may contain not only files but also keys under prefixes. With --recursive, the listing includes zero-byte prefix placeholder entries as well as the files themselves. If you only care about the files within the bucket (or only the prefixes), this should work:
aws s3 ls s3://$S3_BUCKET/$S3_OPTIONAL_PREFIX/ --recursive | awk '{ if($3 >0) print $4}'
awk's $3 is the size field; for a prefix entry it is 0, so those lines are skipped. Note that this will also skip empty (zero-byte) files.
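To see the filter in action, here is a fabricated listing with one zero-byte prefix placeholder and one real file (the key names and sizes are made up):

```shell
# The prefix placeholder (size 0) is dropped; only the real file's key is printed
printf '%s\n' \
  '2013-09-02 21:32:57          0 images/' \
  '2013-09-02 21:32:58       1024 images/cat.jpg' |
  awk '{ if ($3 > 0) print $4 }'
# prints: images/cat.jpg
```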
I would suggest not depending on the spacing and fetching the 4th field. You really want the last field, regardless of which position it is in, so it's safer to use rev to your advantage.
rev reverses its input character by character, so when you pipe the aws s3 ls output to rev, everything is reversed, including the positions of the fields, and the last field always becomes the first field. Instead of figuring out where the last field is, you rev, take the first field, then rev again (because the characters within the field are reversed as well).
For example, 2013-09-02 21:32:57 23 Bytes foo/bar/.baz/a becomes a/zab./rab/oof setyB 32 75:23:12 20-90-3102; then cut -d" " -f1 retrieves the first field, a/zab./rab/oof, and rev again gives back foo/bar/.baz/a:
aws s3 ls s3://mybucket --recursive | rev | cut -d" " -f1 | rev
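Tracing the pipeline on a fabricated --human-readable line (where the size spans two fields, "23 Bytes", and would break any fixed field number):

```shell
# rev flips the line, cut grabs the (reversed) last field, rev restores it
printf '2013-09-02 21:32:57   23 Bytes foo/bar/.baz/a\n' |
  rev | cut -d" " -f1 | rev
# prints: foo/bar/.baz/a
```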
For only the file names, I find the easiest to be:
aws s3 ls s3://path/to/bucket/ | cut -d " " -f 4
This cuts the returned output at the spaces (cut -d " ") and returns the fourth column (-f 4), which is the list of file names.
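The mechanics can be checked on a fabricated line. Note that cut counts every single space as a delimiter, so any extra padding before the size column would shift the field number; the sample below assumes single-space separation:

```shell
# Fabricated listing line with single spaces between fields;
# field 4 (counting space-delimited fields) is the file name
printf '2019-01-01 12:00:00 42 report.csv\n' | cut -d " " -f 4
# prints: report.csv
```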
You can't do this with just the aws command, but you can easily pipe it to another command to strip out the portion you don't want. You'll also want to remove the --human-readable flag to get output that's easier to work with, and the --summarize flag to drop the summary data at the end.
Try this:
aws s3 ls s3://mybucket --recursive | awk '{print $4}'
Edit: to take spaces in filenames into account:
aws s3 ls s3://mybucket --recursive | awk '{$1=$2=$3=""; print $0}' | sed 's/^[ \t]*//'
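Checked against a fabricated line with a space in the key name:

```shell
# Blank out the first three fields (date, time, size), then strip the
# leading whitespace that awk leaves behind when it rebuilds the record
printf '2013-09-02 21:32:57       23 foo/my file.txt\n' |
  awk '{$1=$2=$3=""; print $0}' |
  sed 's/^[ \t]*//'
# prints: foo/my file.txt
```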