amazon-s3

Check if a folder exists in an S3 bucket

Submitted by 谁都会走 on 2021-02-16 13:16:12
Question: How can I check whether a folder exists in my S3 bucket using Ruby on Rails? I'm using the official AWS::S3 gem. After initializing the global connection with AWS::S3::Base.establish_connection!(:access_key_id => 'my_key_id', :secret_access_key => 'my_secret'), I have a bucket named myfirstbucket, with a folder inside it named my_folder, and a file inside my_folder named my_pic.jpg. When I check whether my_pic.jpg exists, it works just fine: s3object.exists? "/my_folder/my_pic.jpg", "myfirstbucket" => true. How can …
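The question uses the Ruby AWS::S3 gem; since S3 has no real folders, checking that a "folder" exists means checking whether any object key starts with that prefix. A minimal sketch of the same idea in Python with boto3 (the bucket and prefix names are placeholders, not the asker's code):

```python
import boto3

def folder_exists(bucket: str, prefix: str) -> bool:
    """Return True if at least one object key starts with the given prefix.

    S3 has no real folders, so "the folder exists" here means
    "some key lives under this prefix".
    """
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix.rstrip("/") + "/", MaxKeys=1)
    return resp.get("KeyCount", 0) > 0

print(folder_exists("myfirstbucket", "my_folder"))
```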

S3 permissions required to get bucket size?

Submitted by 元气小坏坏 on 2021-02-13 17:41:09
Question: I'm using boto3 to get the size of all objects in S3 and have granted the following permissions: s3:ListAllMyBuckets, s3:ListObjects, s3:GetObject. However, boto keeps throwing this error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied. I couldn't find any details in the docs or by looking at the boto source code. Does anyone know the minimum permissions necessary just to get the size of all objects in an S3 bucket? Answer 1: I created the following Lambda …
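For reference, the ListObjects API maps to the s3:ListBucket IAM action on the bucket ARN itself (arn:aws:s3:::my-bucket), which is why a policy listing s3:ListObjects is still denied. A minimal sketch of summing object sizes with boto3, with the bucket name as a placeholder:

```python
import boto3

# Sums the size of every object in a bucket. The IAM action needed for
# list_objects_v2 is s3:ListBucket on the bucket ARN; s3:GetObject is not
# required just to read sizes from the listing. "my-bucket" is a placeholder.
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]

print(f"Total size: {total_bytes} bytes")
```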

How to implement argparse in Python

Submitted by 南楼画角 on 2021-02-11 15:41:28
Question: I'm new to Python. I have a small script that uploads files to S3; at the moment I only hard-code a single file in the script, and the bucket name is also hard-coded. I wanted to integrate argparse into this script so that I can pass arguments myself and upload different files. For example, on the command line I could specify arguments to choose file_name x to upload to bucket_name xxx. I've been reading the documentation on how to set up argparse, but I can only make small changes and don't know how to …
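A minimal sketch of wiring argparse into a boto3 upload script; the argument names and defaults are illustrative, not taken from the original script:

```python
import argparse
import boto3

def main() -> None:
    # Argument names here are illustrative placeholders.
    parser = argparse.ArgumentParser(description="Upload a file to an S3 bucket")
    parser.add_argument("file_name", help="local path of the file to upload")
    parser.add_argument("bucket_name", help="destination S3 bucket")
    parser.add_argument("--key", default=None, help="object key (defaults to the file name)")
    args = parser.parse_args()

    s3 = boto3.client("s3")
    s3.upload_file(args.file_name, args.bucket_name, args.key or args.file_name)

if __name__ == "__main__":
    main()
```

It would then be run as, for example, python upload.py report.csv my-bucket --key backups/report.csv.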

How to process a file on an S3 event through AWS Lambda using C#

Submitted by 扶醉桌前 on 2021-02-11 15:29:49
Question: I am looking for C# code to read a file from S3 on a PUT event and upload it to another bucket. I am fairly new to C#, and most of the blog posts I find are written for Python or Java. Any help will be highly appreciated. Thanks. Answer 1: The flow would be: configure an Amazon S3 event to trigger the AWS Lambda function when a new object is created; details of the created object are passed to the Lambda function via the event; your Lambda function should then call CopyObject() to …
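The question asks for C#; as an illustration of the same flow (trigger on the PUT event, read the bucket and key from the event record, copy to another bucket), here is a sketch in Python, with the destination bucket name as a placeholder:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "my-destination-bucket"  # placeholder

def handler(event, context):
    # An S3 PUT event carries the source bucket and key in Records[].s3.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
```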

Referencing AWS S3 bucket name programmatically instead of hardcoded

Submitted by 六眼飞鱼酱① on 2021-02-11 15:23:35
Question: I'm working with AWS Amplify to develop an iOS application. I've added storage through S3 to host some assets and am trying to configure the application to download them. The only issue is that every example I see has the bucket name and path hardcoded. Because I have multiple environments, sometimes create new ones, and each bucket has the environment name appended to it, I don't want to rewrite the bucket name each time. For example, if I'm in my test environment the …
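In Amplify projects the generated amplifyconfiguration.json / awsconfiguration.json normally carries the bucket for the current environment, so the app can read it from there rather than from a literal string. As a language-neutral illustration of deriving the name instead of hardcoding it, here is a Python sketch that assumes the "base-name plus environment suffix" convention described in the question; all names are placeholders:

```python
import os

import boto3

# Assumes buckets follow a "<base-name>-<environment>" convention, as the
# question describes; BASE_NAME and the ENV_NAME variable are placeholders.
BASE_NAME = "my-app-assets"
env = os.environ.get("ENV_NAME", "dev")
bucket = f"{BASE_NAME}-{env}"

s3 = boto3.client("s3")
s3.download_file(bucket, "images/logo.png", "/tmp/logo.png")
```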

Reading from S3 in EMR

Submitted by 萝らか妹 on 2021-02-11 15:23:17
Question: I'm having trouble reading CSV files stored in my S3 bucket from EMR. I have read quite a few posts about it and have done the following to make it work: added an IAM policy allowing read and write access to S3, and tried to pass the URIs in the argument section of the spark-submit request. I thought querying S3 from EMR on a shared account was straightforward (it works locally after defining a fileSystem and providing AWS credentials), but when I run: df = spark.read.option( …
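For reference, a minimal PySpark sketch of reading a CSV from S3 on EMR; the bucket and key are placeholders, and the cluster's EC2 instance profile is assumed to carry the S3 read permission, so no credentials appear in the code:

```python
from pyspark.sql import SparkSession

# On EMR, EMRFS handles the "s3://" scheme directly and uses the cluster's
# instance-profile role for access. The path below is a placeholder.
spark = SparkSession.builder.appName("read-from-s3").getOrCreate()

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://my-bucket/path/to/data.csv")
)
df.show(5)
```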

How to stream a large gzipped .tsv file from S3, process it, and write back to a new file on S3?

Submitted by 我们两清 on 2021-02-11 14:34:19
Question: I have a large file s3://my-bucket/in.tsv.gz that I would like to load and process, then write the processed version back to an S3 output file s3://my-bucket/out.tsv.gz. How do I stream in.tsv.gz directly from S3 without loading the whole file into memory (it doesn't fit in memory)? How do I write the processed gzipped stream directly back to S3? In the following code, I show how I was thinking of loading the input gzipped dataframe from S3, and how I would write the .tsv if it were located locally …
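One common approach (not necessarily the asker's final one) is the smart_open library, which streams S3 objects and compresses/decompresses gzip transparently based on the file extension. A sketch using the paths from the question:

```python
from smart_open import open  # pip install smart_open[s3]

# Streams line by line; gzip is handled transparently because of the .gz
# extension, so the whole file is never held in memory. The processing
# step in the loop body is a placeholder.
with open("s3://my-bucket/in.tsv.gz", "r") as fin, \
     open("s3://my-bucket/out.tsv.gz", "w") as fout:
    for line in fin:
        fields = line.rstrip("\n").split("\t")
        # ... process the fields here ...
        fout.write("\t".join(fields) + "\n")
```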

Copying a file from S3 into my codebase when using Elastic Beanstalk

Submitted by 孤人 on 2021-02-11 13:56:00
Question: I have the following script: Parameters: bucket: Type: CommaDelimitedList Description: "Name of the Amazon S3 bucket that contains your file" Default: "my-bucket" fileuri: Type: String Description: "Path to the file in S3" Default: "https://my-bucket.s3.eu-west-2.amazonaws.com/oauth-private.key" authrole: Type: String Description: "Role with permissions to download the file from Amazon S3" Default: "aws-elasticbeanstalk-ec2-role" files: /var/app/current/storage/oauth-private.key: mode: …
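As an alternative sketch of the same goal (getting the key from S3 onto the instance at a fixed path), a boto3 download that could run from an application bootstrap step or a container_commands hook; the bucket name and key are taken from the question's defaults and are otherwise placeholders:

```python
import boto3

# Downloads the private key to the path used by the .ebextensions snippet
# above. Assumes the instance profile (e.g. aws-elasticbeanstalk-ec2-role)
# has s3:GetObject on the object.
s3 = boto3.client("s3")
s3.download_file(
    "my-bucket",
    "oauth-private.key",
    "/var/app/current/storage/oauth-private.key",
)
```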