boto3

Athena query fails with boto3 (S3 location invalid)

Submitted by 别等时光非礼了梦想 on 2020-08-24 17:32:09
Question: I'm trying to execute a query in Athena, but it fails. Code:

    client.start_query_execution(
        QueryString="CREATE DATABASE IF NOT EXISTS db;",
        QueryExecutionContext={'Database': 'db'},
        ResultConfiguration={
            'OutputLocation': 's3://my-bucket/',
            'EncryptionConfiguration': {'EncryptionOption': 'SSE-S3'},
        },
    )

But it raises the following exception:

    botocore.errorfactory.InvalidRequestException: An error occurred
    (InvalidRequestException) when calling the StartQueryExecution
    operation: The S3 location …
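This error typically means the OutputLocation bucket doesn't exist, or the Athena client is talking to a different region than the bucket lives in. A minimal sketch of one likely fix, assuming the (placeholder) bucket my-bucket already exists and us-east-1 is its region:

    import boto3

    # Assumption: the results bucket must already exist, and the Athena
    # client should be pinned to the bucket's region; both the bucket
    # name and the region here are placeholders.
    client = boto3.client('athena', region_name='us-east-1')

    response = client.start_query_execution(
        QueryString="CREATE DATABASE IF NOT EXISTS db;",
        ResultConfiguration={
            # A trailing-slash s3:// URI; a sub-prefix keeps results tidy.
            'OutputLocation': 's3://my-bucket/athena-results/',
            'EncryptionConfiguration': {'EncryptionOption': 'SSE-S3'},
        },
    )
    print(response['QueryExecutionId'])

Note that QueryExecutionContext can be omitted for a CREATE DATABASE statement, since the database being created doesn't exist yet.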

How to list available regions with Boto3 (Python)

Submitted by ﹥>﹥吖頭↗ on 2020-08-22 03:29:52
Question: As AWS expands and adds new regions, I'd like my code to detect that automatically. Currently the region selection is hard-coded, but I would like to parse the following for just the RegionName values.

    import boto3
    ec2 = boto3.client('ec2')
    regions = ec2.describe_regions()
    print(regions)

My output is JSON like so:

    {'Regions': [{'Endpoint': 'ec2.ap-south-1.amazonaws.com', 'RegionName': 'ap-south-1'},
                 {'Endpoint': 'ec2.eu-west-1.amazonaws.com', 'RegionName': 'eu-west-1'},
                 {'Endpoint': 'ec2 …
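A minimal sketch of one way to pull just the RegionName values out of that response:

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # describe_regions() returns {'Regions': [{'Endpoint': ..., 'RegionName': ...}, ...]};
    # a list comprehension extracts only the region names.
    region_names = [r['RegionName'] for r in ec2.describe_regions()['Regions']]
    print(region_names)

Alternatively, boto3.session.Session().get_available_regions('ec2') returns the region names known to the installed botocore data files without making an API call, though it won't reflect regions newer than the installed SDK.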

How to get more than 1000 objects from S3 by using list_objects_v2?

Submitted by 泪湿孤枕 on 2020-08-22 03:02:35
Question: I have more than 500,000 objects on S3 and am trying to get the size of each object, using the following Python code:

    import boto3
    bucket = 'bucket'
    prefix = 'prefix'
    contents = boto3.client('s3').list_objects_v2(
        Bucket=bucket, MaxKeys=1000, Prefix=prefix)["Contents"]
    for c in contents:
        print(c["Size"])

But it only gave me the sizes of the first 1,000 objects. Based on the documentation, we can't get more than 1,000 per call. Is there any way I can get more than that?

Answer 1: Use the ContinuationToken returned …
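The answer is cut off above; a minimal sketch of the usual pattern, using boto3's built-in paginator, which forwards ContinuationToken between calls automatically (bucket and prefix are the placeholders from the question):

    import boto3

    s3 = boto3.client('s3')
    paginator = s3.get_paginator('list_objects_v2')

    total = 0
    # Each page holds up to 1,000 keys; the paginator keeps requesting
    # pages until the listing is exhausted.
    for page in paginator.paginate(Bucket='bucket', Prefix='prefix'):
        for obj in page.get('Contents', []):
            print(obj['Key'], obj['Size'])
            total += obj['Size']
    print('total bytes:', total)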

AWS - Using email template from S3 bucket

Submitted by 纵饮孤独 on 2020-08-11 02:48:28
Question: I can send email with Amazon SES in Python with boto3. I made my email template and pass it as a parameter inside my code. I want to upload my email template to an S3 bucket and integrate it with my existing code. I have searched the documentation but can't find any lead. How do I do this? Here is my code so far:

    import boto3
    from botocore.exceptions import ClientError

    SENDER = "************"
    RECIPIENT = "*************"
    AWS_REGION = "us-east-1"
    SUBJECT = "Amazon SES Test (SDK for Python)" …
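SES has no parameter that reads a template straight out of S3, so one approach is to fetch the template body with s3.get_object and pass it to send_email. A sketch under that assumption; the sender, recipient, bucket name, and key are all hypothetical:

    import boto3
    from botocore.exceptions import ClientError

    SENDER = "sender@example.com"        # placeholder
    RECIPIENT = "recipient@example.com"  # placeholder
    AWS_REGION = "us-east-1"
    SUBJECT = "Amazon SES Test (SDK for Python)"

    # Hypothetical location of the stored HTML template.
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket='my-bucket', Key='templates/email.html')
    body_html = obj['Body'].read().decode('utf-8')

    ses = boto3.client('ses', region_name=AWS_REGION)
    try:
        ses.send_email(
            Source=SENDER,
            Destination={'ToAddresses': [RECIPIENT]},
            Message={
                'Subject': {'Data': SUBJECT},
                'Body': {'Html': {'Data': body_html}},
            },
        )
    except ClientError as e:
        print(e.response['Error']['Message'])

SES also offers create_template and send_templated_email for server-side templates, which would avoid the S3 round trip entirely.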

Rename an incoming S3 file with a random directory structure

Submitted by *爱你&永不变心* on 2020-08-10 18:52:41
Question: I have an app that sends a file to an S3 bucket. Unfortunately I cannot change the path it writes to in S3, so I have to figure out a way to fetch the file afterwards:

    mys3bucket: /apps/region/020-07-14T22:24:34Z/details.csv

As you can see, the app places the date into the path. I am trying not to hard-code anything, to keep it flexible. What I want to do is take that details.csv file, rename it, and move it to another location within the same S3 bucket, basically its permanent location. What I …
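S3 has no rename operation, so the usual pattern is list, copy, delete: list the keys under the fixed part of the prefix, copy the match to its permanent key, then remove the original. A sketch along those lines; the permanent target key is hypothetical:

    import boto3

    s3 = boto3.client('s3')
    bucket = 'mys3bucket'      # placeholder from the question
    prefix = 'apps/region/'    # fixed part of the path; the date directory varies

    # Find details.csv without hard-coding the timestamped directory.
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp.get('Contents', []):
        key = obj['Key']
        if key.endswith('details.csv'):
            # Copy to the permanent location, then delete the original.
            s3.copy_object(Bucket=bucket,
                           CopySource={'Bucket': bucket, 'Key': key},
                           Key='permanent/details.csv')  # hypothetical target
            s3.delete_object(Bucket=bucket, Key=key)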

Failing to create S3 buckets in specific regions

Submitted by 喜夏-厌秋 on 2020-08-09 09:14:30
Question: I'm trying to create an S3 bucket in every AWS region with boto3 in Python, but it fails in four regions (af-south-1, eu-south-1, ap-east-1 and me-south-1). My Python code:

    def create_bucket(name, region):
        s3 = boto3.client('s3')
        s3.create_bucket(Bucket=name,
                         CreateBucketConfiguration={'LocationConstraint': region})

and the exception I get:

    botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint)
    when calling the CreateBucket operation: The specified …
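Those four are opt-in regions, and InvalidLocationConstraint is what CreateBucket tends to raise when the request goes to an endpoint in a different region than the LocationConstraint names. A sketch of one likely fix, assuming the opt-in regions have been enabled for the account: create the client in the target region itself.

    import boto3

    def create_bucket(name, region):
        # Pin the client to the target region so the request hits that
        # region's endpoint; opt-in regions (af-south-1, eu-south-1,
        # ap-east-1, me-south-1) must also be enabled for the account.
        s3 = boto3.client('s3', region_name=region)
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={'LocationConstraint': region})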

How to use a pretrained model from s3 to predict some data?

Submitted by 送分小仙女□ on 2020-08-09 05:41:05
Question: I have trained a semantic segmentation model using SageMaker, and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker. I know how to predict if I leave the notebook instance running after training, as it's just an easy deploy, but that doesn't really help if I want to use an older model. I have looked at these sources and been able to come up with something myself, but it doesn't work, hence me being here: https://course.fast.ai …
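One common approach with the SageMaker Python SDK is to wrap the saved model.tar.gz in a sagemaker.model.Model and deploy it to a fresh endpoint. A sketch under stated assumptions: the S3 path is hypothetical, and the built-in semantic-segmentation image is assumed to match the one used for training.

    import sagemaker
    from sagemaker.model import Model

    sess = sagemaker.Session()
    role = sagemaker.get_execution_role()  # works inside a notebook instance

    # Hypothetical output path written by the earlier training job.
    model_data = 's3://my-bucket/output/model.tar.gz'
    image_uri = sagemaker.image_uris.retrieve('semantic-segmentation',
                                              sess.boto_region_name)

    model = Model(image_uri=image_uri, model_data=model_data,
                  role=role, sagemaker_session=sess)
    predictor = model.deploy(initial_instance_count=1,
                             instance_type='ml.m5.xlarge')
    # predictor.predict(...) can then be called with image bytes.

Deleting the endpoint when finished (predictor.delete_endpoint()) avoids ongoing instance charges.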