Is there a way to create a presigned URL for objects in an S3 bucket using the AWS CLI?
I know this can be done with the SDK, but is it possible with the CLI?
I found that wildcards are now supported.
E.g. aws s3 presign s3://mybucket/*
If you do aws s3 presign s3://bucket-address/my-file.csv
you get back a URL. When you pass it to wget, make sure you wrap it in apostrophes:
wget 'https://bucket-address.s3.aws.com/xbxxxxxxxxxxxxxxx'
If you run it without apostrophes, you will get a 403:
wget https://bucket-address.s3.aws.com/xbxxxxxxxxxxxxxxx
I describe this in more detail in https://blog.eq8.eu/til/transfer-file-to-server.html
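For example, a minimal sketch (bucket name, object key, and output filename are placeholders) that captures the presigned URL in a shell variable and quotes it, so the & characters in the query string are not interpreted by the shell:
url=$(aws s3 presign s3://bucket-address/my-file.csv --expires-in 300)
wget -O my-file.csv "$url"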
Did you try aws s3 presign?
Generate a pre-signed URL for an Amazon S3 object. This allows anyone who receives the pre-signed URL to retrieve the S3 object with an HTTP GET request. For sigv4 requests the region needs to be configured explicitly.
This will generate a URL that will expire in 3600 seconds (default)
aws s3 presign s3://mybucket/myobject
This will generate a URL that will expire in 300 seconds
aws s3 presign s3://mybucket/myobject --expires-in 300
Output
https://mybucket.s3.amazonaws.com/myobject?AWSAccessKeyId=AKIAJXXXXXXXXXXXXXXX&Expires=1503602631&Signature=ibOGfAovnhIF13DALdAgsdtg2s%3D
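As the help text above says, for SigV4 requests the region needs to be configured explicitly; a minimal sketch (the region value here is only a placeholder):
aws s3 presign s3://mybucket/myobject --expires-in 300 --region eu-west-1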
So the command for a pre-signed URL is:
aws s3 presign s3://bucket-address/<object-key> --expires-in 300
The caveat is that pre-signed URLs work at the individual file/object level, not at the directory level.
Happy to be corrected if wrong.
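To illustrate the caveat (bucket and prefix names are placeholders): presigning a prefix does not sign the objects under it, it just signs a key with that literal name, so fetching the resulting URL typically returns an error such as NoSuchKey or AccessDenied:
# this signs the literal key "somedir/", not the objects inside it
aws s3 presign s3://bucket-address/somedir/ --expires-in 300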
@Michael and @Shabbir,
Yes, aws s3 presign does not accept globbing/wildcards, --include, --exclude, or --recursive.
aws s3 ls does not accept -1; it behaves like ls -lp or ls -lph.
A loop works (awk '{print $NF}' keeps only the object key, the last column of the listing):
for file in $(aws s3 ls s3://mybucket --profile myprofile \
    --endpoint-url <my-endpoint> | awk '{print $NF}'); do
  aws s3 presign --expires-in 300 "s3://mybucket/$file" \
    --profile myprofile --endpoint-url <my-endpoint>
done
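A variant (just a sketch, using the same placeholder bucket, profile, and endpoint) that also covers nested keys by listing recursively; the key is the 4th column of the listing, so object names containing spaces would need extra handling:
aws s3 ls s3://mybucket --recursive --profile myprofile --endpoint-url <my-endpoint> \
  | awk '{print $4}' \
  | while read -r key; do
      aws s3 presign --expires-in 300 "s3://mybucket/$key" \
        --profile myprofile --endpoint-url <my-endpoint>
    done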