boto

Pre-signed URLs and x-amz-acl

不羁的心 submitted on 2019-12-22 03:56:51
Question: I want to create a so-called "pre-signed" URL for uploading a particular object (PUT) to an Amazon S3 bucket. So far so good. I am using the Python library boto to create a URL that contains all the necessary pieces (expiry, signature, and so on). The URL looks like this: https://<bucketname>.s3.amazonaws.com/<key>?Signature=<sig>&Expires=<expires>&AWSAccessKeyId=<my key id>&x-amz-acl=public-read Note the last parameter. This, at least as I understand it, limits whoever uses this URL to uploading an…
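
A minimal sketch of producing such a URL with boto 2's generate_url; the bucket and key names below are placeholders. Note that because x-amz-acl is folded into the signature, whoever performs the PUT must send the same x-amz-acl: public-read header, or S3 will reject the signature:

```python
import boto

conn = boto.connect_s3()  # credentials from environment variables or ~/.boto

# Sign a PUT for one specific key, valid for one hour, with the
# x-amz-acl header included in the signed request.
url = conn.generate_url(
    expires_in=3600,
    method='PUT',
    bucket='my-bucket',          # placeholder
    key='uploads/example.bin',   # placeholder
    headers={'x-amz-acl': 'public-read'},
)
print(url)
```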

How do you set “Content-Type” when saving to S3 using django-storages with S3boto backend?

情到浓时终转凉″ submitted on 2019-12-22 03:41:50
Question: I am using django-storages with s3boto as a backend. I have one bucket with two folders, one for static and one for media; I achieve this using django-s3-folder-storage. As well as saving to S3 via a model, I also want to implement an image-resize-and-cache function that saves files to S3. To do this I interact directly with my S3 bucket. The code works, but the Content-Type isn't set on S3. In IPython: In [2]: from s3_folder_storage.s3 import DefaultStorage In [3]: s3media =…
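
A minimal sketch, assuming direct boto 2 access to the bucket, of passing Content-Type explicitly at upload time; bucket name and key paths are placeholders:

```python
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('my-media-bucket')  # placeholder

key = Key(bucket)
key.key = 'media/resized/example.jpg'        # placeholder destination

# Passing Content-Type in the headers dict stores it as the object's
# metadata, so S3 serves the file with the right MIME type later.
key.set_contents_from_filename(
    'example.jpg',
    headers={'Content-Type': 'image/jpeg'},
)
```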

python boto for aws s3, how to get sorted and limited files list in bucket?

孤人 submitted on 2019-12-22 01:15:11
Question: If there are too many files in a bucket and I want only the 100 newest, how can I get just that list? s3.bucket.list does not seem to have that function. Does anybody know how? Please let me know, thanks. Answer 1: There is no way to do this type of filtering on the service side; the S3 API does not support it. You might be able to accomplish something like it by using prefixes in your object names. For example, if you named all of your objects using a pattern like this: YYYYMMDD…
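
A sketch of the prefix approach the answer describes, assuming keys are named with a YYYYMMDD date prefix; the prefix narrows the listing server-side, and the final sort and slice happen client-side:

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')  # placeholder

# List only keys under a date prefix to limit the scan, then sort
# client-side by last_modified (an ISO 8601 string, so lexical order
# matches chronological order) and keep the 100 newest.
keys = list(bucket.list(prefix='20191222'))  # assumes YYYYMMDD key names
newest = sorted(keys, key=lambda k: k.last_modified, reverse=True)[:100]
for k in newest:
    print(k.name, k.last_modified)
```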

Why does default_storage.exists() with django-storages using the S3Boto backend cause a memory error with a large S3 bucket?

雨燕双飞 submitted on 2019-12-21 22:54:19
Question: I am experiencing what looks like a memory leak in django-storages with the S3Boto backend when running default_storage.exists(). I'm following the docs here: http://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html Here is the relevant part of my settings file: DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' Here is what I do to reproduce the issue: ./manage.py shell from django.core.files.storage import default_storage # Check default storage is right default…
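
One commonly reported cause is the backend preloading metadata for every key in the bucket (e.g. with AWS_PRELOAD_METADATA enabled), which exhausts memory on large buckets. A minimal workaround sketch that bypasses the storage layer and checks a single key with one HEAD request via boto directly; bucket name and key are placeholders:

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-big-bucket')  # placeholder

# bucket.get_key() issues a single HEAD request for one key instead of
# listing the whole bucket, so memory use stays flat regardless of
# how many objects the bucket contains.
exists = bucket.get_key('media/uploads/example.jpg') is not None
print(exists)
```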

Boto3 EMR - Hive step

百般思念 submitted on 2019-12-21 12:46:36
Question: Is it possible to carry out Hive steps using boto3? I have been doing so with the AWS CLI, but from the docs (http://boto3.readthedocs.org/en/latest/reference/services/emr.html#EMR.Client.add_job_flow_steps) it seems like only JARs are accepted. If Hive steps are possible, where are the resources? Thanks. Answer 1: I was able to get this to work using Boto3: # First create your hive command line arguments hive_args = "hive -v -f s3://user/hadoop/hive.hql" # Split the hive args to a list hive_args…
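
A sketch completing that answer, assuming a release-label EMR cluster where command-runner.jar is available to run arbitrary commands on the master node; the cluster id is a placeholder:

```python
import boto3

emr = boto3.client('emr')

# Build the Hive invocation and split it into the argument list
# that command-runner.jar expects.
hive_args = "hive -v -f s3://user/hadoop/hive.hql"
step = {
    'Name': 'Run Hive script',
    'ActionOnFailure': 'CONTINUE',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': hive_args.split(),
    },
}
response = emr.add_job_flow_steps(
    JobFlowId='j-XXXXXXXXXXXXX',  # placeholder cluster id
    Steps=[step],
)
print(response['StepIds'])
```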

Upload a file using boto

廉价感情. submitted on 2019-12-21 12:46:35
Question: import boto conn = boto.connect_s3('', '') mybucket = conn.get_bucket('data_report_321') I can download a file from the bucket using the following code: for b in mybucket: print b.name b.get_contents_to_filename('0000_part_00', headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None) But I am not able to upload a file. I get an error: AttributeError: 'str' object has no attribute 'tell' Neither send_file nor the set_contents functions are working…
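
That AttributeError typically means a filename string was handed to an API that expects an open file object (boto calls fp.tell() on it). A minimal sketch of both working upload paths; the destination key name is a placeholder:

```python
import boto
from boto.s3.key import Key

conn = boto.connect_s3()  # credentials from environment variables or ~/.boto
bucket = conn.get_bucket('data_report_321')

key = Key(bucket, 'uploaded_part_00')  # placeholder destination key

# set_contents_from_file() expects an open file object; passing a
# string here is what triggers the 'str' object has no attribute
# 'tell' error.
with open('0000_part_00', 'rb') as fp:
    key.set_contents_from_file(fp)

# Alternatively, pass a path and let boto open the file itself:
key.set_contents_from_filename('0000_part_00')
```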

AWS: Boto3: AssumeRole example which includes role usage

ぐ巨炮叔叔 submitted on 2019-12-21 07:37:39
Question: I'm trying to use AssumeRole in such a way that I'm traversing multiple accounts and retrieving assets for those accounts. I've made it to this point: import boto3 sts_client = boto3.client('sts') assumedRoleObject = sts_client.assume_role( RoleArn="arn:aws:iam::account-of-role-to-assume:role/name-of-role", RoleSessionName="AssumeRoleSession1") Great, I have the assumedRoleObject. But now I want to use that to list things like ELBs or something that isn't a built-in low-level resource. How…
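
A sketch of the usual continuation: feed the temporary credentials from the assume_role response into a new Session, then create any client from it; the role ARN is taken from the question and the ELB listing is just an example service:

```python
import boto3

sts_client = boto3.client('sts')
assumed = sts_client.assume_role(
    RoleArn="arn:aws:iam::account-of-role-to-assume:role/name-of-role",
    RoleSessionName="AssumeRoleSession1",
)
creds = assumed['Credentials']

# Every client created from this Session operates in the assumed
# role's account until the temporary credentials expire.
session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
elb = session.client('elb')  # classic ELB; use 'elbv2' for ALB/NLB
for lb in elb.describe_load_balancers()['LoadBalancerDescriptions']:
    print(lb['LoadBalancerName'])
```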

Boto [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed while connecting to S3

假如想象 submitted on 2019-12-21 07:15:30
Question: I am trying to connect to S3 using boto, but it seems to fail. I've tried some workarounds, but they don't seem to work. Can anyone please help me with this? Below is the code: import boto if not boto.config.has_section('Credentials'): boto.config.add_section('Credentials') boto.config.set('Credentials', 'aws_access_key_id', AWS_KEY) boto.config.set('Credentials', 'aws_secret_access_key', AWS_SECRET_KEY) if not boto.config.has_section('Boto'): boto.config.add_section('Boto') boto.config.set(…
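
The snippet is cut off, but the truncated line was likely setting a Boto option such as https_validate_certificates, which hides the error at the cost of TLS validation. This failure is most often reported for bucket names containing dots, which break the *.s3.amazonaws.com wildcard certificate under the default subdomain calling format. A safer sketch that keeps validation and uses path-style addressing instead; the credentials and bucket name are placeholders:

```python
import boto
from boto.s3.connection import OrdinaryCallingFormat

# OrdinaryCallingFormat addresses the bucket in the URL path rather
# than the hostname, so a dotted bucket name no longer conflicts with
# the wildcard SSL certificate.
conn = boto.connect_s3(
    'AWS_KEY', 'AWS_SECRET_KEY',           # placeholders
    calling_format=OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('my.dotted.bucket')  # placeholder
print(bucket.name)
```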