boto3

Returning only a subset of attributes in a response from a DynamoDB scan

做~自己de王妃 submitted on 2020-12-13 03:28:20
Question: The use case I am trying to achieve is to check whether there is an existing bid on a load that has been posted. There are two different tables: "load", which has information on the load, and "bid", which has information on bids on loads. Now, I have read that DynamoDB does not support joins the way an RDS does. How can I check that there is an existing bid on a load? My approach here was to scan the "bid" table with load_id and bid_by, send the information to the front end, and do the rest of the operation there. Here is a
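Since DynamoDB has no joins, the existence check has to be a separate read against the "bid" table. If the front end only needs the two attributes mentioned in the question, a ProjectionExpression limits the response to that subset; a Query against a global secondary index keyed on load_id would be cheaper than a Scan, but the sketch below sticks with the Scan approach the question describes. The table and attribute names are taken from the question; everything else is an assumption:

    import boto3
    from boto3.dynamodb.conditions import Attr

    dynamodb = boto3.resource('dynamodb')
    bid_table = dynamodb.Table('bid')  # table name from the question

    def bids_for_load(load_id):
        """Return only load_id and bid_by for every bid on the given load."""
        kwargs = {
            'FilterExpression': Attr('load_id').eq(load_id),
            'ProjectionExpression': 'load_id, bid_by',
        }
        items = []
        while True:
            response = bid_table.scan(**kwargs)
            items.extend(response['Items'])
            # Scan is paginated; keep going until all matching items are read.
            if 'LastEvaluatedKey' not in response:
                return items
            kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']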

How do I create a Presigned URL to download a file from an S3 Bucket using Boto3?

大城市里の小女人 submitted on 2020-12-11 08:49:12
Question: I have to download a file from my S3 bucket onto my server for some processing. The bucket does not support direct connections and has to use a pre-signed URL. The Boto3 docs talk about using a presigned URL to upload but do not mention the same for download.

Answer 1:

    import boto3

    s3_client = boto3.client('s3')
    BUCKET = 'my-bucket'
    OBJECT = 'foo.jpg'

    url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': OBJECT},
        ExpiresIn=300)
    print(url)

For another example, see:
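Once the presigned URL has been generated, the server that needs the file can fetch it with an ordinary HTTP GET; no AWS credentials are required on that side because the URL itself carries the temporary authorization. A minimal sketch using only the standard library, where url is the value produced in the answer above and the local destination path is hypothetical:

    import urllib.request

    # A plain GET works until the URL expires (300 seconds in the answer above).
    local_path = '/tmp/foo.jpg'  # hypothetical destination on the server
    urllib.request.urlretrieve(url, local_path)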

ValueError: non-string names in Numpy dtype unpickling only on AWS Lambda

半世苍凉 submitted on 2020-12-07 04:48:29
Question: I am using pickle to save my trained ML model. For the learning part, I am using the scikit-learn library and building a RandomForestClassifier:

    rf = RandomForestClassifier(n_estimators=100, max_depth=20,
                                min_samples_split=2, max_features='auto',
                                oob_score=True, random_state=123456)
    rf.fit(X, y)

    fp = open('model.pckl', 'wb')
    pickle.dump(rf, fp, protocol=2)
    fp.close()

I uploaded this model to S3 and I am fetching it using the boto3 library in AWS Lambda.

    s3_client = boto3.client('s3')
    bucket =
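This error generally points to a mismatch between the NumPy (or Python) version that wrote the pickle and the one unpickling it inside the Lambda runtime, so pinning the same NumPy and scikit-learn versions in the deployment package as in the training environment is the usual fix. The fetch-and-unpickle step that the question truncates can look like the minimal sketch below; the bucket and key names are hypothetical, and Lambda only allows writes under /tmp:

    import pickle
    import boto3

    s3_client = boto3.client('s3')
    bucket = 'my-model-bucket'   # hypothetical bucket name
    key = 'model.pckl'

    # Lambda's only writable path is /tmp, so stage the pickle there first.
    s3_client.download_file(bucket, key, '/tmp/model.pckl')
    with open('/tmp/model.pckl', 'rb') as fp:
        rf = pickle.load(fp)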

Flask Upload Image to S3 without saving it to local file system

北慕城南 submitted on 2020-12-06 07:04:39
Question: I need to upload a user-submitted photo to an S3 bucket. However, I keep getting the following error:

    TypeError: expected str, bytes or os.PathLike object, not FileStorage

How am I able to store the file as string/bytes instead of FileStorage? The relevant code is as follows:

    @user_api.route('upload-profile-photo', methods=['PUT'])
    @Auth.auth_required
    def upload_profile_photo():
        """ Upload User Profile Photo """
        key = Auth.auth_user()
        bucket = 'profile-photos'
        content_type = request.mimetype
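The error suggests the FileStorage object is being passed to an API that expects a filename (such as upload_file). Rather than converting it to a string, the upload can go straight from the request to S3, because upload_fileobj accepts any file-like object and a werkzeug FileStorage is one. A minimal sketch under that assumption; the helper name is hypothetical and the route wiring from the question is omitted:

    import boto3
    from werkzeug.datastructures import FileStorage  # type returned by request.files[...]

    s3_client = boto3.client('s3')

    def put_profile_photo(photo: FileStorage, bucket: str, key: str) -> None:
        # upload_fileobj streams the file-like object to S3, so nothing is
        # written to the local filesystem and nothing is read fully into memory.
        s3_client.upload_fileobj(
            photo,
            bucket,
            key,
            ExtraArgs={'ContentType': photo.mimetype},
        )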

How to copy files between S3 buckets in 2 different accounts using boto3

女生的网名这么多〃 submitted on 2020-12-01 15:08:53
Question: I'm trying to copy files from a vendor's S3 bucket to my S3 bucket using boto3. I'm using the STS service to assume a role to access the vendor's S3 bucket. I'm able to connect to the vendor bucket and get a listing of the bucket. I run into a "CopyObject operation: Access Denied" error when copying to my bucket. Here is my script:

    session = boto3.session.Session(profile_name="s3_transfer")
    sts_client = session.client("sts", verify=False)
    assumed_role_object = sts_client.assume_role(
        RoleArn="arn:aws:iam:
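One common cause of this error is that CopyObject requires the calling credentials to have read permission on the source object and write permission on the destination bucket, and a vendor-provided role usually grants only the former. A workaround, sketched below under that assumption (the role ARN, bucket names, and key are hypothetical), is to read the object with the assumed-role credentials and write it with your own:

    import boto3

    session = boto3.session.Session(profile_name="s3_transfer")
    sts_client = session.client("sts")
    assumed = sts_client.assume_role(
        RoleArn="arn:aws:iam::111111111111:role/vendor-read-role",  # hypothetical ARN
        RoleSessionName="vendor-copy",
    )
    creds = assumed["Credentials"]

    # Client that reads from the vendor bucket using the assumed-role credentials.
    vendor_s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Client that writes to your own bucket using your own credentials.
    my_s3 = session.client("s3")

    body = vendor_s3.get_object(Bucket="vendor-bucket", Key="data/file.csv")["Body"]
    my_s3.upload_fileobj(body, "my-bucket", "data/file.csv")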

Delete all versions of an object in S3 using python?

夙愿已清 submitted on 2020-12-01 09:41:05
Question: I have a versioned bucket and would like to delete an object (and all of its versions) from the bucket. However, when I try to delete the object from the console, S3 simply adds a delete marker but does not perform a hard delete. Is it possible to delete all versions of the object (hard delete) with a particular key?

    s3resource = boto3.resource('s3')
    bucket = s3resource.Bucket('my_bucket')
    obj = bucket.Object('my_object_key')

    # I would like to delete all versions for the object like so:
    obj
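The Object resource by itself issues an ordinary DELETE, which is exactly what creates the delete marker. Removing the object permanently means deleting each version id (including any delete markers) explicitly, which the bucket's object_versions collection makes straightforward. A minimal sketch reusing the names from the question:

    import boto3

    s3resource = boto3.resource('s3')
    bucket = s3resource.Bucket('my_bucket')
    key = 'my_object_key'

    # Deleting by version id (rather than a plain DELETE) removes the data
    # permanently instead of adding another delete marker.
    for version in bucket.object_versions.filter(Prefix=key):
        if version.object_key == key:  # Prefix also matches longer keys
            version.delete()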

How to query cloudwatch logs using boto3 in python

跟風遠走 submitted on 2020-12-01 06:13:30
Question: I have a Lambda function that writes metrics to CloudWatch. While it writes metrics, it generates some logs in a log group:

    INFO:: username: simran+test@abc.com ClinicID: 7667 nodename: MacBook-Pro-2.local
    INFO:: username: simran+test2@abc.com ClinicID: 7667 nodename: MacBook-Pro-2.local
    INFO:: username: simran+test@abc.com ClinicID: 7668 nodename: MacBook-Pro-2.local
    INFO:: username: simran+test3@abc.com ClinicID: 7667 nodename: MacBook-Pro-2.local

I would like to query AWS logs in past x
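One way to pull events from a log group for a recent time window is the logs client's filter_log_events API, which takes startTime and endTime in milliseconds since the epoch plus an optional filter pattern. A minimal sketch under that assumption; the log group name is hypothetical:

    import time
    import boto3

    logs_client = boto3.client('logs')

    def recent_events(log_group, hours, pattern='INFO'):
        """Return events from the last `hours` hours whose message matches `pattern`."""
        now_ms = int(time.time() * 1000)
        start_ms = now_ms - hours * 3600 * 1000
        events = []
        # filter_log_events is paginated, so walk every page of results.
        paginator = logs_client.get_paginator('filter_log_events')
        for page in paginator.paginate(
            logGroupName=log_group,
            startTime=start_ms,
            endTime=now_ms,
            filterPattern=pattern,
        ):
            events.extend(page['events'])
        return events

    # e.g. everything the Lambda logged in the last 2 hours
    events = recent_events('/aws/lambda/my-function', hours=2)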