aws-lambda

Lambda event returns empty object

倖福魔咒の · Posted on 2021-02-10 18:40:43

Question: I need to access event["pathParameters"], but the event comes back as an empty object. I created the function with the AWS Cloud9 IDE. Here is my simple function:

    import json

    def handler(event, context):
        return {
            'statusCode': 200,
            'body': json.dumps(event),
            'headers': {'Content-Type': 'application/json'}
        }

Answer 1: event is set by the payload you invoke the Lambda with. When you use API Gateway, that payload includes the key pathParameters, but when you test from the Lambda console you'll need to form a test event that contains pathParameters yourself.
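To illustrate the answer, here is a minimal runnable sketch: the handler mirrors the one in the question, and the test event mimics the shape of an API Gateway proxy payload (the resource path and id value are made up for the example):

```python
import json

def handler(event, context):
    # Echo the incoming event back so we can inspect what Lambda received.
    return {
        'statusCode': 200,
        'body': json.dumps(event),
        'headers': {'Content-Type': 'application/json'}
    }

# Outside API Gateway, the event contains only what you pass in, so a console
# test event has to include pathParameters itself, in the proxy-payload shape:
test_event = {
    "resource": "/users/{id}",          # hypothetical route
    "pathParameters": {"id": "42"},
}

response = handler(test_event, None)
body = json.loads(response['body'])
print(body["pathParameters"]["id"])  # -> 42
```

An event without that key (e.g. the default "Hello World" console template) would make `event["pathParameters"]` fail, which matches the behaviour described in the question.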

Access AWS S3 from Lambda within Default VPC

你。 · Posted on 2021-02-10 15:43:16

Question: I have a Lambda function that needs to access an EC2 instance over SSH, load files, and save them to S3. For that, I have put both the EC2 instance and the Lambda in the default VPC, in the same subnet. The problem is that the function can connect to EC2 but not to S3. It has been killing me since morning: when I remove the VPC settings, it uploads the files to S3, but then the connection to EC2 is lost. I tried to add a NAT gateway to the default VPC (although I am not sure whether I did it correctly, because I am new to
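A common alternative to a NAT gateway here is an S3 gateway VPC endpoint, which gives resources inside the VPC a route to S3 without traversing the public internet. A sketch of the boto3 call, shown as its request parameters (the VPC ID, route table ID, and region are placeholder assumptions):

```python
# Parameters for ec2_client.create_vpc_endpoint(**params); the IDs below are
# placeholders standing in for your default VPC and its main route table.
params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-00000000",            # placeholder
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "RouteTableIds": ["rtb-00000000"],  # placeholder
}

# With boto3 this would be invoked roughly as:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.create_vpc_endpoint(**params)
print(params["ServiceName"])  # -> com.amazonaws.us-east-1.s3
```

With the endpoint in place, the Lambda can stay in the same subnet as the EC2 instance (keeping SSH working) and still reach S3.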

sam package is reducing the size of my template

余生颓废 · Posted on 2021-02-10 14:36:42

Question: I have a SAM template that I am using to build 4 Lambda functions integrated with API Gateway:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: An AWS Serverless Specification template describing your function.
    # To avoid 'Stage' being created when deploying the API Gateway.
    Globals:
      Api:
        OpenApiVersion: 3.0.1
    Resources:
      # API Gateway model for all user methods
      ApiGatewayApi:
        Type: AWS::Serverless::Api
        Properties:
          Name: loadeo_user
          StageName:
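For context on the title's observation: `sam package` uploads each function's local CodeUri to S3 and rewrites the template so it points at the uploaded artifact, which can make the packaged template look quite different from the source template. A rough Python sketch of that rewriting step (the bucket name and resource names are hypothetical, and real `sam package` does considerably more than this):

```python
def rewrite_code_uris(template: dict, bucket: str) -> dict:
    # Mimic the core of `sam package`: replace local CodeUri paths
    # with S3 URIs pointing at the uploaded deployment artifacts.
    for name, resource in template.get("Resources", {}).items():
        props = resource.get("Properties", {})
        code_uri = props.get("CodeUri")
        if code_uri and not code_uri.startswith("s3://"):
            props["CodeUri"] = f"s3://{bucket}/{name}.zip"
    return template

template = {
    "Resources": {
        "UserFn": {   # hypothetical function resource
            "Type": "AWS::Serverless::Function",
            "Properties": {"CodeUri": "./user_fn/"},
        }
    }
}
packaged = rewrite_code_uris(template, "my-deploy-bucket")
print(packaged["Resources"]["UserFn"]["Properties"]["CodeUri"])
# -> s3://my-deploy-bucket/UserFn.zip
```

So a shrinking packaged template is often just local content being replaced by short S3 references, not resources being dropped.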

AWS: Delete Permanently S3 objects less than 30 days using 'Lifecycle Rule'

感情迁移 · Posted on 2021-02-10 14:30:58

Question: Is there a way to configure an S3 Lifecycle rule to delete objects after less than 30 days (say, permanently delete after 5 days), without moving them to any other storage class such as Glacier? Or should I go for an alternative such as Lambda? I believe S3 Lifecycle rules allow storage-class transitions only after 30 days.

Answer 1: You can use the expiration action: "Define when objects expire. Amazon S3 deletes expired objects on your behalf." You can set the expiration time to 5 days, 1 day, or whatever suits you. For example
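A sketch of the corresponding lifecycle configuration, in the shape boto3's `put_bucket_lifecycle_configuration` expects (the rule ID and bucket name are made up; the empty prefix applies the rule to all objects):

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-after-5-days",   # hypothetical rule name
            "Filter": {"Prefix": ""},      # apply to every object
            "Status": "Enabled",
            # Expiration (unlike a Transition to another storage class)
            # permanently deletes objects and has no 30-day minimum.
            "Expiration": {"Days": 5},
        }
    ]
}

# With boto3 this would be applied roughly as:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Expiration"]["Days"])  # -> 5
```

The 30-day minimum the question mentions applies to storage-class transitions (e.g. to STANDARD_IA), not to expiration, so no Lambda workaround is needed for plain deletion.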

How to create a codepipeline to build jar file from java code stored at github and deploy it to lambda function?

[亡魂溺海] · Posted on 2021-02-10 14:11:14

Question: I want to build a CodePipeline that gets the (Java) code from GitHub, builds a jar file, and deploys it to AWS Lambda (or stores the jar in a specific S3 bucket). I want to use only tools provided by the AWS platform. Using CodeBuild alone, I am able to build a jar from the GitHub code and store it in S3 (https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started.html), and I am using a deployer Lambda function to deploy the code to my service Lambda. Whenever there is any change in
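The deployer-Lambda pattern described above can be sketched as follows: the function is triggered by the S3 put of the freshly built jar and calls Lambda's `update_function_code` on the service function. The event parsing is runnable (the event follows the standard S3 notification shape); the boto3 call and function name are shown as commented assumptions:

```python
def deployer_handler(event, context):
    # Extract the bucket and key of the newly uploaded jar
    # from the S3 put-notification event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # With boto3, the deployment step would look roughly like:
    #   import boto3
    #   boto3.client("lambda").update_function_code(
    #       FunctionName="my-service-lambda",  # hypothetical name
    #       S3Bucket=bucket, S3Key=key)
    return {"bucket": bucket, "key": key}

# Sample notification, as CodeBuild's artifact upload to S3 would produce:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "build-artifacts"},
                "object": {"key": "service.jar"}}}
    ]
}
print(deployer_handler(sample_event, None))
# -> {'bucket': 'build-artifacts', 'key': 'service.jar'}
```

CodePipeline can wire the same stages together declaratively (Source: GitHub, Build: CodeBuild, Deploy: the Lambda invoke or a CloudFormation action), which removes the need for the hand-rolled S3 trigger.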

ENOSPC error on AWS Lambda

南笙酒味 · Posted on 2021-02-10 13:24:38

Question: Sorry for this loaded question.

I. TL;DR: The /tmp directory on AWS Lambda keeps filling up when it shouldn't, and subsequent requests fail with an ENOSPC error.

II. The TL version: I have a microservice built with Node.js (0.10.x) on AWS Lambda that does two things:

1. Given a list of URLs, it goes to the relevant sources (S3, CloudFront, thumbor, etc.) and downloads the physical files into the /tmp directory.
2. After downloading all of these files, it compresses them into a tarball and uploads it to S3.
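The usual cause is that /tmp survives between invocations on a warm container, so leftovers from earlier requests accumulate until the (roughly 512 MB) scratch space runs out. The service in the question is Node.js; the sketch below just illustrates the cleanup idea in Python, demonstrated on a throwaway directory standing in for /tmp:

```python
import os
import shutil
import tempfile

def clean_dir(path):
    # Remove everything under `path`, so leftovers from a previous (warm)
    # invocation cannot accumulate and eventually trigger ENOSPC.
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

# Demo with a temporary directory standing in for Lambda's /tmp:
work = tempfile.mkdtemp()
open(os.path.join(work, "leftover.tar"), "w").close()
os.mkdir(os.path.join(work, "downloads"))
clean_dir(work)
print(os.listdir(work))  # -> []
```

Running the equivalent cleanup at the start of each invocation (rather than trusting the end-of-run cleanup, which is skipped when the function errors or times out) makes the scratch usage bounded per request.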

AWS Systems Manager Parameter Store: Using StringList as Key Value Pairs in Java (Lambda)

◇◆丶佛笑我妖孽 · Posted on 2021-02-10 13:16:34

Question: I'm using API Gateway, AWS Lambda, and AWS RDS to build an API. My Lambda function code is Java. Currently I am successfully using the AWS Systems Manager Parameter Store to connect to my database: I created a parameter called "connection" of type String that holds my complete connection URL. In the Lambda function I can access this parameter successfully this way:

    GetParameterRequest parameterRequest = new GetParameterRequest().withName("connection").withWithDecryption(false)
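On the StringList part of the title: a StringList parameter is returned as one comma-separated string, so if each item is written as "key=value" it can be unpacked into a map client-side. The question uses Java; a Python sketch of the same parsing (the parameter value shown is a made-up example):

```python
def parse_string_list(value):
    # An SSM StringList arrives as a single comma-separated string;
    # if each item is "key=value", unpack it into a dict.
    pairs = (item.split("=", 1) for item in value.split(","))
    return {k.strip(): v.strip() for k, v in pairs}

# e.g. a hypothetical parameter "connection-parts" holding:
raw = "host=db.example.com,port=3306,name=mydb"
print(parse_string_list(raw))
# -> {'host': 'db.example.com', 'port': '3306', 'name': 'mydb'}
```

The same split-on-comma, split-on-first-equals logic translates directly to Java once `getParameter` has returned the raw StringList value.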
