boto

Getting HTTP 403 when saving files through django-s3-storage (but can save using boto in a shell)

Submitted by 南笙酒味 on 2019-12-23 10:43:50

Question: I have been trying to save user-uploaded files to my S3 bucket from my Django application. I'm using the django-s3-storage middleware, but I keep getting:

S3ResponseError: 403 Forbidden (Access Denied)

I'm using these settings:

```python
MEDIAFILES_LOCATION = 'media'
AWS_S3_CUSTOM_DOMAIN = 'my-bucket.s3-website-eu-west-1.amazonaws.com'
AWS_S3_HOST = 's3-website-eu-west-1.amazonaws.com'
MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION)
DEFAULT_FILE_STORAGE = 'django_s3_storage…
```
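One likely culprit in the settings quoted above: `s3-website-eu-west-1.amazonaws.com` is S3's static-*website* endpoint, which only serves anonymous reads; authenticated API writes must go through the REST endpoint. A hedged sketch of corrected settings, keeping the question's setting names (the bucket name, region, and storage-backend path are assumptions, not confirmed by the question):

```python
# Sketch only: point boto at the S3 REST endpoint, not the s3-website-* one,
# which rejects authenticated API calls such as uploads.
MEDIAFILES_LOCATION = 'media'
AWS_S3_HOST = 's3-eu-west-1.amazonaws.com'                      # REST endpoint
AWS_S3_CUSTOM_DOMAIN = 'my-bucket.s3-eu-west-1.amazonaws.com'   # assumed bucket name
MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION)
DEFAULT_FILE_STORAGE = 'django_s3_storage.storage.S3Storage'    # assumed backend path
```

If the shell session with plain boto works against the same bucket, the credentials themselves are fine and the endpoint/host setting is the first thing to check.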

List EC2 volumes in Boto

Submitted by 风格不统一 on 2019-12-23 10:25:14

Question: I want to list all volumes attached to my EC2 instance.

```python
conn = EC2Connection()
attribute = get_instance_metadata()
region = attribute['local-hostname'].split('.')[1]
inst_id = attribute['instance-id']
aws = boto.ec2.connect_to_region(region)
volume = attribute['local-hostname']
volumes = str(aws.get_all_volumes(filters={'attachment.instance-id': inst_id}))
```

With my code I can list the volumes, but the result looks like this:

[vol-35b0b5fa, Volume:vol-6cbbbea3]

I need something like:

vol-35b0b5fa
vol-6cbbbea3

I…
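The bracketed output above is the `repr` of boto `Volume` objects; each object's `id` attribute holds the bare volume ID, so the fix is to iterate rather than `str()` the whole list. A minimal sketch (the `FakeVolume` class is a stand-in for boto's `Volume` so the snippet runs offline):

```python
def volume_ids(volumes):
    """Extract bare IDs from boto Volume objects."""
    return [v.id for v in volumes]

# In the question's code, instead of str(...):
#   for vid in volume_ids(aws.get_all_volumes(filters={'attachment.instance-id': inst_id})):
#       print(vid)

# Offline demonstration with stand-in objects:
class FakeVolume:
    def __init__(self, vid):
        self.id = vid

vols = [FakeVolume('vol-35b0b5fa'), FakeVolume('vol-6cbbbea3')]
print('\n'.join(volume_ids(vols)))
```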

400 Bad Request while pulling instances with Amazon

Submitted by 陌路散爱 on 2019-12-23 09:07:42

Question: Can anybody say why I'm getting this error? It happens while pulling instances after connecting to the Amazon server.

```python
import boto
con = boto.connect_ec2(aws_access_key_id='XXX', aws_secret_access_key='XXX')
con.get_all_instances()
```

```
Traceback (most recent call last):
  File "getAllinstanc.py", line 7, in <module>
    reservations = ec2conn.get_all_instances()
  File "c:\jiva\py26\lib\site-packages\boto-2.3.0-py2.6.egg\boto\ec2\connection.py", line 467, in get_all_instances
    [('item', Reservation)],…
```

Get instance by instance-id

Submitted by ↘锁芯ラ on 2019-12-23 07:27:48

Question: I need to get an instance by instance-id. Is it possible to do this without requesting a list of all instances? I've tried:

```python
ec2_conn = boto.connect_ec2(aws_access_key_id=key, aws_secret_access_key=access)
ec2_conn.get_all_instances([instanceId])
```

It works, but is there some other way to get the instance? The reason I'm asking is that I received UnauthorizedOperation for the get_all_instances request, so I would prefer to change the request rather than the security settings.

Answer 1: Maybe boto has evolved since the time…
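One detail worth keeping in mind with this call: `get_all_instances` returns `Reservation` objects, not instances; the instances sit under each reservation's `instances` attribute. A minimal sketch (`FakeReservation` is a stand-in for boto's `Reservation` so the snippet runs offline; the instance ID in the comment is hypothetical):

```python
def instances_from(reservations):
    """Flatten boto Reservation objects into a single list of instances."""
    return [inst for r in reservations for inst in r.instances]

# With a real connection, filtering server-side by ID:
#   reservations = ec2_conn.get_all_instances(instance_ids=['i-12345678'])
#   instance = instances_from(reservations)[0]

# Offline demonstration with stand-in objects:
class FakeReservation:
    def __init__(self, instances):
        self.instances = instances

rs = [FakeReservation(['i-a']), FakeReservation(['i-b', 'i-c'])]
print(instances_from(rs))
```

Passing `instance_ids` still goes through the same DescribeInstances API action, so it will not by itself avoid an UnauthorizedOperation that denies that action.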

What are the requests per second of the AWS API queries?

Submitted by 不想你离开。 on 2019-12-23 04:48:32

Question: What are the requests per second and average (round-trip) response times of the following API calls made by boto 2.38/boto3?

```python
conn = EC2(aws_access_key, aws_secret_key_id)
images = conn.get_all_images(owners=['self'])               # Q1
instances = conn.get_only_instances()                       # Q2
snapshots = conn.get_all_snapshots(owner='self')            # Q3
snapshot = conn.create_snapshot(volume_id, description)     # Q4
instance = conn.launch_instance(<params>)                   # Q5
image = conn.create_image(instance_id, name, description)   # Q6
conn.deregister_image(image_id…                             # Q7
```
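AWS does not publish fixed per-call rate limits for these EC2 actions; when a caller exceeds its allowance the API throttles with a `RequestLimitExceeded` error, and the usual practice is to retry with exponential backoff rather than budget against a known number. A generic sketch of that pattern (not tied to boto's own retry machinery; the `flaky` function is a stand-in for a throttled API call):

```python
import random
import time

def with_backoff(fn, attempts=5, base=0.5):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:  # in real code, catch the throttling error specifically
            if i == attempts - 1:
                raise
            time.sleep(base * (2 ** i) + random.random() * 0.1)

# Offline demonstration: a call that fails twice, then succeeds.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('RequestLimitExceeded')
    return 'ok'

print(with_backoff(flaky, base=0.01))  # succeeds on the third attempt
```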

How to access image by url on s3 using boto3?

Submitted by 久未见 on 2019-12-22 18:37:36

Question: What I want to accomplish is to generate a link to view a file (e.g. an image or PDF). The item is not accessible by URL (https://[bucket].s3.amazonaws.com/img_name.jpg), I think because it's private rather than public? (I'm not the owner of the bucket, but he gave me the access key and secret key.) For now, all I can do is download a file with this code:

```python
s3.Bucket('mybucket').download_file('upload/nocturnes.png', 'dropzone/static/pdf/download_nocturnes.png')
```

I want to access an image on S3, so I…
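For a private object, the standard answer is a presigned URL: the holder of the access/secret key signs a time-limited GET link that anyone can open. A sketch of the request, with the boto3 call left as a comment since it needs real credentials (bucket and key names are taken from the question; the helper is purely illustrative, not part of boto3):

```python
def presign_request(bucket, key, expires=3600):
    """Build the arguments for boto3's generate_presigned_url."""
    return {
        'ClientMethod': 'get_object',
        'Params': {'Bucket': bucket, 'Key': key},
        'ExpiresIn': expires,  # seconds the link stays valid
    }

# With boto3 (assumed configured with the keys the bucket owner provided):
#   import boto3
#   s3 = boto3.client('s3')
#   req = presign_request('mybucket', 'upload/nocturnes.png')
#   url = s3.generate_presigned_url(**req)

print(presign_request('mybucket', 'upload/nocturnes.png'))
```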

Boto Glacier - Upload file larger than 4 GB using multipart upload

Submitted by 不羁岁月 on 2019-12-22 11:13:31

Question: I am periodically uploading a file to AWS Glacier using boto as follows:

```python
# Import boto's layer2
import boto.glacier.layer2

# Create a Layer2 object to connect to Glacier
l = boto.glacier.layer2.Layer2(aws_access_key_id=awsAccess, aws_secret_access_key=awsSecret)

# Get a vault based on vault name (assuming you created it already)
v = l.get_vault(vaultName)

# Create an archive from a local file on the vault
archiveID = v.create_archive_from_file(fileName)
```

However, this fails for files that are…
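Glacier caps a single-request upload at 4 GB; larger archives must use multipart upload, whose part size must be a power-of-two number of MiB (1 MiB to 4 GiB) with at most 10,000 parts. A sketch of choosing a valid part size, with boto's concurrent uploader left as a comment (its use here is an assumption about the fix, not something stated in the question):

```python
def glacier_part_size(file_size, max_parts=10000):
    """Smallest power-of-two part size (bytes) fitting the file in <= max_parts
    parts, per Glacier's multipart rules (1 MiB .. 4 GiB)."""
    size = 1024 * 1024  # start at 1 MiB
    while size * max_parts < file_size and size < 4 * 1024 ** 3:
        size *= 2
    return size

# With boto (assumed API), driving the multipart upload explicitly:
#   from boto.glacier.concurrent import ConcurrentUploader
#   import os
#   part = glacier_part_size(os.path.getsize(fileName))
#   uploader = ConcurrentUploader(l.layer1, vaultName, part_size=part)
#   archiveID = uploader.upload(fileName, 'periodic backup')  # description is illustrative

print(glacier_part_size(6 * 1024 ** 3))  # 6 GiB file fits in 10,000 x 1 MiB parts
```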

How can I add a tag to a key in boto (Amazon S3)?

Submitted by 跟風遠走 on 2019-12-22 05:39:17

Question: I am trying to tag a key that I've uploaded to S3. In the sample below I just create a file from a string. Once I have the key, I'm not sure how to tag the file. I've tried Tag as well as TagSet.

```python
from boto.s3.bucket import Bucket
from boto.s3.key import Key
from boto.s3.tagging import Tag, TagSet

k = Key(bucket)
k.key = 'foobar/somefilename'
k.set_contents_from_string('some data in file')
Tag(k, 'the_tag')
```

Answer 1: As far as I can see in the docs, a set_tags method is only available on a bucket…
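As the answer notes, boto's classic S3 API only exposes tagging at the bucket level; per-object tagging is available through boto3's `put_object_tagging`, which takes a `TagSet` structure. A sketch of building that structure (the boto3 call is commented out since it needs real credentials; bucket, key, and tag values are taken from the question or invented for illustration):

```python
def tag_set(tags):
    """Build the TagSet structure boto3's put_object_tagging expects."""
    return {'TagSet': [{'Key': k, 'Value': v} for k, v in sorted(tags.items())]}

# With boto3 (assumed available and configured):
#   import boto3
#   s3 = boto3.client('s3')
#   s3.put_object_tagging(Bucket='mybucket', Key='foobar/somefilename',
#                         Tagging=tag_set({'the_tag': 'some-value'}))

print(tag_set({'env': 'prod'}))
```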

How do I connect to an existing CloudSearch domain in boto?

Submitted by 那年仲夏 on 2019-12-22 05:14:45

Question: I'm just starting to work with boto to connect to Amazon CloudSearch. I got the examples working, but I can't find any examples of connecting to an existing domain; all the examples create a new one. Poking around, I found get_domain, but that fails if I call it on the connection object.

```python
>>> conn.get_domain('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Layer2' object has no attribute 'get_domain'
```

Any suggestions as to how I can connect to an…
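The traceback suggests `Layer2` simply has no `get_domain`; boto's CloudSearch layer exposes domain listing instead, so one approach is to list the domains and pick the one you want by name. A sketch under that assumption (the region and domain name are placeholders, and `FakeDomain` is a stand-in so the snippet runs offline):

```python
def find_domain(domains, name):
    """Pick the domain whose name matches, or None if absent."""
    return next((d for d in domains if getattr(d, 'name', None) == name), None)

# With boto (assumed API shape; Layer2 lists domains rather than fetching one):
#   import boto.cloudsearch
#   conn = boto.cloudsearch.connect_to_region('us-east-1')  # hypothetical region
#   domain = find_domain(conn.list_domains(), 'foo')

# Offline demonstration with stand-in objects:
class FakeDomain:
    def __init__(self, name):
        self.name = name

print(find_domain([FakeDomain('foo'), FakeDomain('bar')], 'foo').name)
```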

"Unable to read instance data, giving up" error in Python boto

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-22 04:00:54

Question: I am trying to access Amazon S3 using the boto library, to reach the Common Crawl data available in Amazon's 'aws-publicdatasets'. I created an access config file in ~/.boto:

```ini
[Credentials]
aws_access_key_id = "my key"
aws_secret_access_key = "my_secret"
```

While creating the connection to Amazon S3, I see the error below in the logs:

```
2014-01-23 16:28:16,318 boto [DEBUG]:Retrieving credentials from metadata server.
2014-01-23 16:28:17,321 boto [ERROR]:Caught exception reading instance data
Traceback (most recent call…
```
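The DEBUG line shows boto falling back to the EC2 instance metadata server, which it only does when it finds no usable credentials elsewhere; outside EC2 that fallback then fails with "Caught exception reading instance data". A likely cause in the file above is the quoting: boto reads `~/.boto` with a standard INI parser, so the quotes become part of the value and the keys no longer match. A sketch of the expected format (the key values are AWS's documented example placeholders, not real credentials):

```ini
; ~/.boto - values are taken literally, so do not quote them
[Credentials]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```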