amazon-dynamodb

Putting to local DynamoDB table with Python boto3 times out

元气小坏坏 submitted on 2021-01-29 05:54:15

Question: I am attempting to programmatically put data into a locally running DynamoDB container by triggering a Python Lambda function. I'm trying to follow the template provided here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.03.html. I am using the amazon/dynamodb-local image, which you can download here: https://hub.docker.com/r/amazon/dynamodb-local. I am running the container and the Lambda server on Ubuntu 18.04.2 LTS, using the AWS SAM CLI to run my Lambda API, with Docker version 18.09…
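A common cause of this timeout is that the SAM-invoked Lambda runs in its own container, so http://localhost:8000 points at the Lambda container itself rather than at the DynamoDB container. Below is a minimal boto3 sketch, assuming both containers share a Docker network (for example, started with sam local start-api --docker-network <network>) and the DynamoDB container is named dynamodb-local; that container name is an assumption, and the Movies table comes from the linked tutorial.

```python
import boto3

# A sketch, not the poster's code. "dynamodb-local" is a hypothetical
# container name on a shared Docker network; from inside the SAM-invoked
# Lambda container, "localhost" would point at that container itself.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://dynamodb-local:8000",
    region_name="us-west-2",          # any region works against DynamoDB Local
    aws_access_key_id="dummy",        # DynamoDB Local accepts any credentials
    aws_secret_access_key="dummy",
)

# Movies table and item shape are taken from the linked tutorial.
table = dynamodb.Table("Movies")
table.put_item(Item={"year": 2015, "title": "The Big New Movie"})
```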

How can I get spark on emr-5.2.1 to write to dynamodb?

浪子不回头ぞ submitted on 2021-01-29 03:16:29

Question: According to this article here, when I create an AWS EMR cluster that will use Spark to pipe data to DynamoDB, I need to preface it with the line: spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar. This line appears in numerous references, including from the Amazon devs themselves. However, when I run create-cluster with an added --jars flag, I get this error: Exception in thread "main" java.io.FileNotFoundException: File file:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar does not…
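For context, /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar exists on the EMR nodes themselves, not on the machine where aws emr create-cluster runs, which is consistent with the FileNotFoundException. Below is a hedged PySpark sketch of using the connector once the job actually runs on the cluster; the table names, region, and endpoint are placeholders, not values from the question.

```python
from pyspark import SparkContext

sc = SparkContext()

# Output configuration for the EMR DynamoDB connector; table name and
# endpoint are placeholders.
write_conf = {
    "dynamodb.output.tableName": "my_table",
    "dynamodb.servicename": "dynamodb",
    "dynamodb.endpoint": "dynamodb.us-east-1.amazonaws.com",
    "mapred.output.format.class":
        "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat",
}

# The connector works with (Text, DynamoDBItemWritable) pairs, so the
# simplest runnable sketch reads them from a source table first.
items = sc.hadoopRDD(
    inputFormatClass="org.apache.hadoop.dynamodb.read.DynamoDBInputFormat",
    keyClass="org.apache.hadoop.io.Text",
    valueClass="org.apache.hadoop.dynamodb.DynamoDBItemWritable",
    conf={**write_conf, "dynamodb.input.tableName": "my_source_table"},
)

items.saveAsHadoopDataset(conf=write_conf)
```

Submitted, for example, as a cluster step with spark-submit --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar my_job.py, so the --jars path is resolved on a node where the jar exists.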

Separate tables vs map lists - DynamoDB

淺唱寂寞╮ submitted on 2021-01-28 19:02:12

Question: I need your help; I am quite new to databases. I'm trying to set up a table in DynamoDB to store info about TV shows. It seems pretty simple and straightforward, but I am not sure if what I am doing is correct. So far I have this structure: I am trying to fit everything about the TV shows into one table, where seasons and episodes are contained within a list of maps within a list of maps. Is this too much layering? Would this present a problem in the future when some items get huge? Should I…
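One alternative to deep nesting, sketched below with boto3: store each episode as its own item under the show's partition key with a composite sort key. Individual items stay well under DynamoDB's 400 KB item limit, and a single Query can fetch a show, one season, or one episode. All names and the key design here are illustrative, not taken from the question.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table with partition key "show_id" and sort key "sort_key".
table = boto3.resource("dynamodb").Table("TVShows")

# One item per episode instead of nested lists of maps.
table.put_item(Item={
    "show_id": "SHOW#breaking-bad",
    "sort_key": "SEASON#01#EPISODE#03",
    "title": "...and the Bag's in the River",
    "air_date": "2008-02-10",
})

# One Query fetches all of season 1.
season_one = table.query(
    KeyConditionExpression=Key("show_id").eq("SHOW#breaking-bad")
    & Key("sort_key").begins_with("SEASON#01")
)
```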

AWS CDK Working with Existing DynamoDB and Streams

允我心安 submitted on 2021-01-28 11:43:53

Question: I'm migrating my cloud solution to the CDK. I can see how to add a stream to a new DynamoDB table in the constructor through the TableProps: const newTable = new dynamodb.Table(this, 'new Table', { tableName: 'streaming', partitionKey: { name: 'id', type: dynamodb.AttributeType.NUMBER }, stream: StreamViewType.NEW_AND_OLD_IMAGES, }) but there is no apparent way to enable a stream on an existing DynamoDB table. I can't seem to access the TableProps on an existing item. const sandpitTable = dynamodb.Table…
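CDK cannot modify a table it merely imports, so one workaround is to enable the stream outside CDK and then import the table together with its stream ARN via Table.fromTableAttributes, which accepts a tableStreamArn. A minimal boto3 sketch of the first step, assuming nothing beyond the table name from the question:

```python
import boto3

client = boto3.client("dynamodb")

# Enable a stream on the existing table; "streaming" is the table name
# from the question.
resp = client.update_table(
    TableName="streaming",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Feed this ARN into Table.fromTableAttributes as tableStreamArn.
print(resp["TableDescription"]["LatestStreamArn"])
```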

aws-cli dynamodb create table with multiple secondary indexes

做~自己de王妃 submitted on 2021-01-28 09:40:31

Question: I am trying to create a DynamoDB table with 2 local secondary indexes. I did the following, and only the latter index (index-2) was applied. What's the correct way of doing this? aws dynamodb create-table \ --table-name test_table_name \ --attribute-definitions \ AttributeName=type,AttributeType=S \ ... --key-schema \ AttributeName=type,KeyType=HASH \ AttributeName=id,KeyType=RANGE \ --provisioned-throughput \ ReadCapacityUnits=5,WriteCapacityUnits=5 \ --local-secondary-indexes \ 'IndexName=index-1…
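For comparison, here is a hedged boto3 equivalent that passes both local secondary indexes as a single list, which sidesteps the shell-quoting pitfalls of the CLI shorthand syntax. The ts1/ts2 range attributes are assumptions standing in for the truncated parts of the command.

```python
import boto3

client = boto3.client("dynamodb")
client.create_table(
    TableName="test_table_name",
    AttributeDefinitions=[
        {"AttributeName": "type", "AttributeType": "S"},
        {"AttributeName": "id", "AttributeType": "S"},
        {"AttributeName": "ts1", "AttributeType": "N"},  # assumed LSI range key
        {"AttributeName": "ts2", "AttributeType": "N"},  # assumed LSI range key
    ],
    KeySchema=[
        {"AttributeName": "type", "KeyType": "HASH"},
        {"AttributeName": "id", "KeyType": "RANGE"},
    ],
    # Both indexes go in one list; an LSI reuses the table's hash key.
    LocalSecondaryIndexes=[
        {
            "IndexName": "index-1",
            "KeySchema": [
                {"AttributeName": "type", "KeyType": "HASH"},
                {"AttributeName": "ts1", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        },
        {
            "IndexName": "index-2",
            "KeySchema": [
                {"AttributeName": "type", "KeyType": "HASH"},
                {"AttributeName": "ts2", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        },
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```

Note that local secondary indexes can only be created at table-creation time, so the whole set has to be supplied in one call either way.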

DynamoDB provisioned Write Capacity Units exceeded too often and unexpectedly

混江龙づ霸主 submitted on 2021-01-28 08:33:34

Question: I believe I understand write/read capacity units and how they work and are calculated in DynamoDB; I have read this article thoroughly, as well as the AWS documentation. That said, I'm experiencing unexpected behavior when writing items to my table. I have a DynamoDB table with the following settings, most notably 5 write/read capacity units. I'm putting into this table readings from sensors connected to a Raspberry Pi, which I collect and send to DynamoDB with Python 2.7 using my…
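Worth keeping in mind when debugging this: capacity is accounted per second, so a table with 5 WCU absorbs five writes of up to 1 KB each per second, and a burst of sensor readings sent within the same second gets throttled once the table's burst capacity is spent. A sketch of pacing such writes with a simple exponential backoff follows; boto3 also retries internally, and the table and function names here are illustrative.

```python
import time

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical name

def put_with_backoff(item, max_attempts=5):
    """Retry throttled puts with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            table.put_item(Item=item)
            return
        except ClientError as err:
            if (err.response["Error"]["Code"]
                    != "ProvisionedThroughputExceededException"):
                raise
            time.sleep(0.1 * 2 ** attempt)  # 0.1s, 0.2s, 0.4s, ...
    raise RuntimeError("still throttled after %d attempts" % max_attempts)
```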

sinon stub for Lambda using promises

天涯浪子 submitted on 2021-01-28 08:27:45

Question: I just started using sinon, and I had some initial success stubbing out DynamoDB calls: sandbox = sinon.createSandbox(); update_stub = sandbox.stub(AWS.DynamoDB.DocumentClient.prototype, 'update').returns({ promise: () => Promise.resolve(update_meeting_result) }). This works great, but I also need to stub Lambda, and the same approach isn't working: lambda_stub = sandbox.stub(AWS.Lambda.prototype, 'invoke').returns({ promise: () => Promise.resolve({lambda_invoke_result}) // }). With this, I get…
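The failing stub itself is specific to sinon and the JavaScript SDK, so no fix is attempted here. For comparison, the analogous pattern in Python (replacing an AWS call with a canned response in a test) uses botocore's Stubber; this is a different library, not a sinon fix, and the function name is hypothetical.

```python
import boto3
from botocore.stub import Stubber

client = boto3.client("lambda", region_name="us-east-1")
stubber = Stubber(client)

# Queue a canned invoke result and the parameters we expect to be passed.
stubber.add_response(
    "invoke",
    {"StatusCode": 200},
    {"FunctionName": "my-function"},  # hypothetical function name
)

with stubber:
    resp = client.invoke(FunctionName="my-function")
    assert resp["StatusCode"] == 200
```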

Describe Dynamodb Table at Local Installation using endpoint URL?

蓝咒 submitted on 2021-01-28 08:01:10

Question: I have installed DynamoDB Local on my machine and want to describe the DynamoDB tables created on this local instance. How can I achieve this?

Answer 1: You need to have the awscli installed on your machine (installation steps are in the AWS documentation). After installing it, run the command below to describe a DynamoDB table: aws dynamodb describe-table --table-name tableName --endpoint-url http://localhost:8000. Note: --endpoint-url is needed here because the target is a local DynamoDB installation; omit it to address the AWS service instead.
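The boto3 equivalent of the command in the answer, pointed at the same local endpoint; DynamoDB Local accepts any credentials, so the dummy values below are placeholders.

```python
import boto3

client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-west-2",      # any region works against DynamoDB Local
    aws_access_key_id="dummy",    # DynamoDB Local accepts any credentials
    aws_secret_access_key="dummy",
)
print(client.describe_table(TableName="tableName"))
```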

AWS DynamoDB Throttled Write Request Handling

旧城冷巷雨未停 submitted on 2021-01-28 07:11:21

Question: I have a table whose write requests get throttled at certain times. I want to understand more about how the AWS SDK handles them. My current understanding is that DynamoDB returns an error to my Lambda, which is why I see user errors in the DynamoDB table metrics. However, the AWS SDK has error handling and a retry strategy that retries the throttled requests and writes them to the table. Is that correct?

Answer 1: Every time your application sends a request that exceeds your capacity, you get a…
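In boto3, the retry behavior the answer describes can be made explicit through the client config. The mode and attempt count below are illustrative, the table and key shape are placeholders, and the adaptive mode requires a reasonably recent botocore.

```python
import boto3
from botocore.config import Config

# Up to 10 attempts, with client-side rate adjustment on throttling.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
table = boto3.resource("dynamodb", config=retry_config).Table("MyTable")

# If every attempt is throttled, this raises
# ProvisionedThroughputExceededException; until then the SDK retries
# internally and the caller never sees the individual throttled attempts.
table.put_item(Item={"id": "example"})  # key shape is a placeholder
```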

aws glue to access/crawl dynamodb from another aws account (cross account access)

独自空忆成欢 submitted on 2021-01-28 06:56:15

Question: I have written a Glue job which exports a DynamoDB table and stores it on S3 in CSV format. The Glue job and the table are in the same AWS account, but the S3 bucket is in a different AWS account. I have been able to access the cross-account S3 bucket from the Glue job by attaching the following bucket policy to it: { "Version": "2012-10-17", "Statement": [ { "Sid": "tempS3Access", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<AWS-ACCOUNT-ID>:role/<ROLE-PATH>" }, "Action": [ "s3:Get*",…
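For the DynamoDB side of the cross-account access in the title, the Glue DynamoDB connector supports assuming a role in the table's account. A hedged sketch follows; the role ARN and table name are placeholders, and the role must live in the table's account and trust the Glue job's account.

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read a table owned by another account by assuming a role there.
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="dynamodb",
    connection_options={
        "dynamodb.input.tableName": "cross_account_table",  # placeholder
        "dynamodb.sts.roleArn":
            "arn:aws:iam::<OTHER-ACCOUNT-ID>:role/<DDB-ACCESS-ROLE>",
    },
)
```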