dynamodb-queries

How to design key schema to have only one DynamoDB table per application?

Submitted by 别来无恙 on 2020-01-01 08:21:44
Question: According to the DynamoDB documentation (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html): "You should maintain as few tables as possible in a DynamoDB application. Most well designed applications require only one table." But in my experience you always end up doing the opposite because of partition key design. Consider the following situation: we have several user roles, for example "admin", "manager", and "worker". The usual workflow of an admin is to CRUD…
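One way the single-table guidance is usually reconciled with role-based access is to overload a generic PK/SK pair and add a sparse global secondary index keyed by role. The sketch below uses the AWS SDK for Java v1; the table name, key attribute names, and the GSI1 index are assumptions for illustration, not anything from the question.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

public class SingleTableByRole {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        // Items are stored as PK = "USER#<userId>", SK = "PROFILE",
        // with GSI1PK = "ROLE#admin" | "ROLE#manager" | "ROLE#worker".
        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":role", new AttributeValue().withS("ROLE#admin"));

        QueryRequest request = new QueryRequest()
                .withTableName("app-table")            // hypothetical table name
                .withIndexName("GSI1")                 // hypothetical index on GSI1PK
                .withKeyConditionExpression("GSI1PK = :role")
                .withExpressionAttributeValues(values);

        QueryResult result = client.query(request);
        result.getItems().forEach(item -> System.out.println(item.get("PK")));
    }
}

With this layout every role still lives in the same table; only the GSI partition key changes per access pattern.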

DynamoDB: query all records as per NoSQL design

Submitted by 回眸只為那壹抹淺笑 on 2019-12-24 18:57:39
Question: I know how to write queries in DynamoDB, but I am still new to it. For a project, rather than simply creating a table with a partition key and sort key, I have designed a NoSQL data model for my table. Accordingly, I have implemented list/add/update/delete DynamoDB queries. My question is how to query all records, for example all products. For the admin panel of my application, I need to display all the records of an entity type, for each type of entity. I know querying all…
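A common way to serve "all items of one entity type" in a single-table design is to write the entity type into its own attribute and index it with a GSI, then page through that index. A minimal sketch in Java SDK v1; the table name, the entityType attribute, and the GSI are assumptions, not the poster's actual schema.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

public class ListAllOfType {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":type", new AttributeValue().withS("PRODUCT"));   // assumed entity-type value

        List<Map<String, AttributeValue>> allProducts = new ArrayList<>();
        Map<String, AttributeValue> startKey = null;
        do {
            QueryRequest request = new QueryRequest()
                    .withTableName("app-table")             // hypothetical table name
                    .withIndexName("entityType-index")      // hypothetical GSI keyed on entityType
                    .withKeyConditionExpression("entityType = :type")
                    .withExpressionAttributeValues(values)
                    .withExclusiveStartKey(startKey);
            QueryResult result = client.query(request);
            allProducts.addAll(result.getItems());
            startKey = result.getLastEvaluatedKey();        // keep paging until exhausted
        } while (startKey != null && !startKey.isEmpty());

        System.out.println("Products: " + allProducts.size());
    }
}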

Scanning a DynamoDB table is not returning data

Submitted by 时间秒杀一切 on 2019-12-24 18:43:29
Question: I am trying to count failed tokens by scanning a DynamoDB table that has no indexes. The scan returns 0. I suspect it is not scanning the complete table. Below is my method; the dynamoDBClient is a working one and holds the connection details. I am posting only the scan query part:

public int getFailedAuthStatusCount() {
    Map<String,String> expressionAttributesNames = new HashMap<>();
    expressionAttributesNames.put("#status","auth_status");
    Map<String, AttributeValue…
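A Scan only returns up to 1 MB of data per call, so a count that ignores LastEvaluatedKey can come back as 0 or far too low. A completed, paginated version of the counting method could look like the sketch below (AWS SDK for Java v1; the table name and the "FAILED" status value are assumptions, since the original snippet is cut off).

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import com.amazonaws.services.dynamodbv2.model.Select;

public class FailedAuthCounter {

    private final AmazonDynamoDB dynamoDBClient;

    public FailedAuthCounter(AmazonDynamoDB dynamoDBClient) {
        this.dynamoDBClient = dynamoDBClient;
    }

    public int getFailedAuthStatusCount() {
        Map<String, String> names = new HashMap<>();
        names.put("#status", "auth_status");

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":failed", new AttributeValue().withS("FAILED"));   // assumed stored value

        int count = 0;
        Map<String, AttributeValue> startKey = null;
        do {
            ScanRequest request = new ScanRequest()
                    .withTableName("auth_tokens")                      // hypothetical table name
                    .withFilterExpression("#status = :failed")
                    .withExpressionAttributeNames(names)
                    .withExpressionAttributeValues(values)
                    .withSelect(Select.COUNT)
                    .withExclusiveStartKey(startKey);
            ScanResult result = dynamoDBClient.scan(request);
            count += result.getCount();                  // items matching the filter on this page
            startKey = result.getLastEvaluatedKey();     // null once the whole table has been scanned
        } while (startKey != null && !startKey.isEmpty());
        return count;
    }
}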

Querying many-to-many in DynamoDB with Node.js

Submitted by 时光怂恿深爱的人放手 on 2019-12-14 02:12:20
Question: I was reading about how to model many-to-many relationships in DynamoDB in this article: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-adjacency-graphs.html Let's say the requirement is to display a list of all Bills for a given Invoice, but you need to display all attributes for each Bill (the red circles in the image). I could query all the Bills for Invoice-92551 as follows:

var params = {
    TableName: "some-table",
    KeyConditionExpression: "#pk = :pk",…
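Under the adjacency-list layout from that guide, the invoice partition holds the invoice item plus one edge item per bill, so a query on the invoice's partition key returns the bill IDs but not necessarily every bill attribute; those either have to be duplicated onto the edge items or fetched from each bill's own partition afterwards. A minimal sketch of that first query, written with the Java SDK v1 for consistency with the other snippets on this page (the table name and the PK attribute are assumptions):

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;

public class BillsForInvoice {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        Map<String, String> names = new HashMap<>();
        names.put("#pk", "PK");                          // assumed partition key attribute name

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":pk", new AttributeValue().withS("Invoice-92551"));

        QueryRequest request = new QueryRequest()
                .withTableName("some-table")
                .withKeyConditionExpression("#pk = :pk")
                .withExpressionAttributeNames(names)
                .withExpressionAttributeValues(values);

        // Returns the invoice item and every Bill edge item stored in that partition.
        client.query(request).getItems().forEach(System.out::println);
    }
}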

How to query based on a condition in a DynamoDB table?

Submitted by 坚强是说给别人听的谎言 on 2019-12-13 03:49:02
Question: I have a table with the following items, where p_id is the primary partition key and p_type (String) is the primary sort key:

p_id    p_type    address    name    phone
1221    Men

I want to write a query in a Node.js function with this condition: select all where p_type = "men" and address.area = "abc". My address item is a map and looks like this:

"address": {
    "M": {
        "area": { "S": "abc" },
        "city": { "S": "Bengaluru" }
    }
}

How do I achieve this? Do I have to create a global secondary index? If yes, how do I create it?

Answer 1: You…
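Because p_type is the sort key rather than the partition key, "all items where p_type = ..." cannot be expressed as a base-table Query on its own; one common approach is a global secondary index with p_type as its partition key, then a FilterExpression on the nested map. A sketch in Java SDK v1 (the table name and index name are hypothetical; note the filter only drops items after they are read, and the p_type value must match the stored case exactly):

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;

public class MenInAreaQuery {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        // Aliases keep the expression safe even if an attribute name is reserved.
        Map<String, String> names = new HashMap<>();
        names.put("#type", "p_type");
        names.put("#addr", "address");
        names.put("#area", "area");

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":type", new AttributeValue().withS("Men"));   // "Men" vs "men": must match the stored value
        values.put(":area", new AttributeValue().withS("abc"));

        QueryRequest request = new QueryRequest()
                .withTableName("products")               // hypothetical table name
                .withIndexName("p_type-index")           // hypothetical GSI with p_type as its partition key
                .withKeyConditionExpression("#type = :type")
                .withFilterExpression("#addr.#area = :area")
                .withExpressionAttributeNames(names)
                .withExpressionAttributeValues(values);

        client.query(request).getItems().forEach(System.out::println);
    }
}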

How to use pagination in DynamoDB

Submitted by 北慕城南 on 2019-12-11 18:44:00
Question: How can you make a paginated request (limit, offset, and sort_by) using DynamoDB? In MySQL you can do:

SELECT ... ORDER BY created_date ASC LIMIT 10 OFFSET 1

I'm trying this using Node.js, and in this case created_date isn't the primary key. Can I query using created_date as a sort key? This is my users table:

{
    "user_id": "asa2311",
    "created_date": "2019/01/18 15:05:59",
    "status": "A",
    "rab_item_id": "0",
    "order_id": "1241241",
    "description": "testajabroo",
    "id": "e3f46600-1af7-11e9-ac22-8d3a3e79a693"…
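DynamoDB has no numeric OFFSET: a page is requested with Limit, and the next page starts from the LastEvaluatedKey returned by the previous one. Sorting by created_date requires created_date to be a sort key somewhere, typically in a GSI. The sketch below (Java SDK v1) assumes a hypothetical GSI with status as its partition key and created_date as its sort key; none of those index details come from the question.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

public class PagedUsers {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        Map<String, String> names = new HashMap<>();
        names.put("#s", "status");                        // "status" is a DynamoDB reserved word, so alias it

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":status", new AttributeValue().withS("A"));

        Map<String, AttributeValue> pageToken = null;     // pass the previous page's LastEvaluatedKey here
        QueryRequest page = new QueryRequest()
                .withTableName("users")
                .withIndexName("status-created_date-index")   // hypothetical GSI: PK = status, SK = created_date
                .withKeyConditionExpression("#s = :status")
                .withExpressionAttributeNames(names)
                .withExpressionAttributeValues(values)
                .withScanIndexForward(true)                   // ascending by created_date
                .withLimit(10)
                .withExclusiveStartKey(pageToken);

        QueryResult result = client.query(page);
        result.getItems().forEach(System.out::println);

        // Hand this back to the caller as the opaque "next page" token.
        Map<String, AttributeValue> nextToken = result.getLastEvaluatedKey();
    }
}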

How to return a JSON object from DynamoDB with AppSync?

Submitted by 萝らか妹 on 2019-12-09 01:05:20
Question: How can I get a JSON object in the response from DynamoDB? I store the data in the DB as an array of objects in JSON format. I have the following request mapping template:

{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "userId": { "S": "$context.identity.username" }
    },
    #set( $attrs = $util.dynamodb.toMapValues($ctx.args))
    #set( $attrs.categories = $util.dynamodb.toDynamoDB($ctx.args.categories))
    "attributeValues": $util.toJson($attrs)
}

and this response mapping template:

#set( $result = $ctx.result)
#set(…

Modeling Relational Data in DynamoDB (nested relationship)

Submitted by 大兔子大兔子 on 2019-12-06 08:15:30
Question: Entity model: I've read the AWS guide about modeling relational data in DynamoDB. It is still confusing for my access pattern.

Access pattern:

+--------------------------------------------+--------------+------------+
| Access Pattern                             | Params       | Conditions |
+--------------------------------------------+--------------+------------+
| Get TEST SUITE detail and check that       | TestSuiteID  |            |
| USER_ID belongs to project has test suite  | & UserId     |            |
+--------------------------------------------+--------------+------------+
…
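One hypothetical adjacency-list layout for that access pattern (not the poster's actual schema) stores the test-suite detail item and one edge item per associated user in the same partition, so the whole pattern becomes a single query plus a membership check, as sketched below in Java SDK v1.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;

public class TestSuiteAccess {

    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

    /** Loads everything under PK = TESTSUITE#<id> and checks that the user edge is present. */
    public boolean userBelongsToTestSuite(String testSuiteId, String userId) {
        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":pk", new AttributeValue().withS("TESTSUITE#" + testSuiteId));

        QueryRequest request = new QueryRequest()
                .withTableName("app-table")               // hypothetical single-table name
                .withKeyConditionExpression("PK = :pk")   // detail item: SK = "DETAIL"; edges: SK = "USER#<userId>"
                .withExpressionAttributeValues(values);

        List<Map<String, AttributeValue>> items = client.query(request).getItems();
        return items.stream().anyMatch(
                item -> item.containsKey("SK") && ("USER#" + userId).equals(item.get("SK").getS()));
    }
}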

How to get an optimal bulk insertion rate in DynamoDB through the Executor Framework in Java?

Submitted by 不打扰是莪最后的温柔 on 2019-12-06 05:02:15
I'm doing a POC on bulk writes (around 5.5k items) to a local DynamoDB using the DynamoDB SDK for Java. I'm aware that each batch write cannot have more than 25 write operations, so I am dividing the whole dataset into chunks of 25 items each. Then I'm passing these chunks as callable actions to the Executor framework. Still, I'm not getting a satisfactory result: the 5.5k records take more than 100 seconds to insert. I'm not sure how else I can optimize this. While creating the table I provisioned the WriteCapacityUnit as 400 (not sure what the maximum value I can give is) and experimented with…
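Each BatchWriteItem call accepts at most 25 put/delete requests and can return UnprocessedItems that must be resubmitted, so wall-clock time also depends on retrying those and, against the real service, on the table's write capacity (DynamoDB Local does not enforce provisioned throughput). A sketch of the chunk-and-submit approach with an ExecutorService, using the AWS SDK for Java v1; the table name and thread count are assumptions to tune, and a production version would back off before resubmitting unprocessed items.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.PutRequest;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class ParallelBatchWriter {

    private static final String TABLE = "poc-table";   // hypothetical table name
    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

    /** Splits the items into chunks of 25 and writes each chunk on its own thread. */
    public void write(List<Map<String, AttributeValue>> items) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);   // thread count is a guess to tune
        List<Callable<Void>> tasks = new ArrayList<>();

        for (int i = 0; i < items.size(); i += 25) {
            List<Map<String, AttributeValue>> chunk = items.subList(i, Math.min(i + 25, items.size()));
            tasks.add(() -> {
                writeChunk(chunk);
                return null;
            });
        }
        // Blocks until every chunk has been written; inspect the futures for failures in real code.
        List<Future<Void>> futures = pool.invokeAll(tasks);
        pool.shutdown();
    }

    private void writeChunk(List<Map<String, AttributeValue>> chunk) {
        List<WriteRequest> writes = new ArrayList<>();
        for (Map<String, AttributeValue> item : chunk) {
            writes.add(new WriteRequest().withPutRequest(new PutRequest().withItem(item)));
        }
        Map<String, List<WriteRequest>> requestItems = new HashMap<>();
        requestItems.put(TABLE, writes);

        // Keep resubmitting whatever DynamoDB reports as unprocessed.
        while (!requestItems.isEmpty()) {
            BatchWriteItemResult result = client.batchWriteItem(
                    new BatchWriteItemRequest().withRequestItems(requestItems));
            requestItems = result.getUnprocessedItems();
        }
    }
}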