How to get an optimal bulk insertion rate in DynamoDB through the Executor framework in Java?

Submitted by 房东的猫 on 2019-12-22 23:21:11

Question


I'm doing a POC on bulk writes (around 5.5k items) into local DynamoDB using the DynamoDB SDK for Java. I'm aware that a single BatchWriteItem call cannot contain more than 25 write operations, so I'm dividing the whole dataset into chunks of 25 items each and submitting those chunks as callable tasks to the Executor framework. Still, the result isn't satisfactory: the 5.5k records take more than 100 seconds to insert.

I'm not sure how else I can optimize this. While creating the table I set the WriteCapacityUnits to 400 (I'm not sure what the maximum value I can give is) and experimented with it a bit, but it never made any difference. I have also tried changing the number of threads in the executor.
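
For reference, the provisioned capacity of an existing table can also be changed without recreating it; this is only a minimal sketch using the v1 SDK's UpdateTableRequest, and the 100 read / 400 write units are placeholder values:

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
    import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

    // Minimal sketch: raise the provisioned throughput of the "Employee" table.
    static void raiseThroughput(AmazonDynamoDB dynamo) {
        dynamo.updateTable(new UpdateTableRequest()
                .withTableName("Employee")
                .withProvisionedThroughput(new ProvisionedThroughput(100L, 400L)));
    }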

This is the main code to perform the bulk write operation:


    public static void main(String[] args) throws Exception {

        // Client pointing at the local DynamoDB endpoint (dummy credentials are fine for DynamoDB Local).
        final AmazonDynamoDB aws = new AmazonDynamoDBClient(new BasicAWSCredentials("x", "y"));
        aws.setEndpoint("http://localhost:8000");

        JSONArray employees = readFromFile();
        Iterator<JSONObject> iterator = employees.iterator();

        List<WriteRequest> batchList = new ArrayList<WriteRequest>();

        ExecutorService service = Executors.newFixedThreadPool(20);

        // Split the items into BatchWriteItemRequests of at most 25 write requests each
        // (the hard limit for a single BatchWriteItem call).
        List<BatchWriteItemRequest> listOfBatchItemsRequest = new ArrayList<>();
        while (iterator.hasNext()) {
            PutRequest putRequest = new PutRequest();
            putRequest.setItem(ItemUtils.fromSimpleMap((Map) iterator.next()));
            WriteRequest writeRequest = new WriteRequest();
            writeRequest.setPutRequest(putRequest);
            batchList.add(writeRequest);

            // Flush a full batch of 25, or the final partial batch, so no items are dropped.
            if (batchList.size() == 25 || !iterator.hasNext()) {
                Map<String, List<WriteRequest>> batchTableRequests = new HashMap<String, List<WriteRequest>>();
                batchTableRequests.put("Employee", batchList);
                BatchWriteItemRequest batchWriteItemRequest = new BatchWriteItemRequest();
                batchWriteItemRequest.setRequestItems(batchTableRequests);
                listOfBatchItemsRequest.add(batchWriteItemRequest);
                batchList = new ArrayList<WriteRequest>();
            }
        }

        StopWatch watch = new StopWatch();
        watch.start();

        // Submit each batch to the thread pool so the BatchWriteItem calls run in parallel.
        List<Future<BatchWriteItemResult>> futureListOfResults = listOfBatchItemsRequest.stream()
                .map(batchItemsRequest -> service.submit(() -> aws.batchWriteItem(batchItemsRequest)))
                .collect(Collectors.toList());

        service.shutdown();
        // Wait for all submitted batches to finish instead of busy-spinning on isTerminated().
        service.awaitTermination(10, TimeUnit.MINUTES);

        watch.stop();
        System.out.println("Total time taken : " + watch.getTotalTimeSeconds());
    }
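
Each future resolves to a BatchWriteItemResult, so the results can also be drained and checked once the pool has shut down. A minimal sketch that reuses the futureListOfResults list from above (not part of the timing measurement):

        // Sketch: surface any exception thrown inside a worker thread (via get()) and report
        // write requests DynamoDB did not process. Against DynamoDB Local this map is
        // normally empty, but it matters on a real table.
        for (Future<BatchWriteItemResult> future : futureListOfResults) {
            BatchWriteItemResult result = future.get();
            Map<String, List<WriteRequest>> unprocessed = result.getUnprocessedItems();
            if (!unprocessed.isEmpty()) {
                System.out.println("Unprocessed write requests: " + unprocessed.get("Employee").size());
            }
        }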

This is the code used to create the DynamoDB table:

    public static void main(String[] args) throws Exception {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient().withEndpoint("http://localhost:8000");

        DynamoDB dynamoDB = new DynamoDB(client);
        String tableName = "Employee";
        try {
            System.out.println("Creating the table, wait...");
            Table table = dynamoDB.createTable(tableName, Arrays.asList(new KeySchemaElement("ID", KeyType.HASH)

            ), Arrays.asList(new AttributeDefinition("ID", ScalarAttributeType.S)),
                    new ProvisionedThroughput(1000L, 1000L));
            table.waitForActive();
            System.out.println("Table created successfully.  Status: " + table.getDescription().getTableStatus());

        } catch (Exception e) {
            System.err.println("Cannot create the table: ");
            System.err.println(e.getMessage());
        }
    }

Answer 1:


DynamoDB Local is provided as a tool for developers who need to develop offline against DynamoDB; it is not designed for scale or performance. As such it is not intended for scale testing, and if you need to test bulk loads or other high-velocity workloads it is best to use a real table. The actual cost incurred from dev testing on a live table is usually quite minimal, as the table only needs to be provisioned for high capacity during the test runs.
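
A minimal sketch of how each batch could be retried when running against a real table, where throttled writes typically come back in BatchWriteItemResult.getUnprocessedItems() rather than as an exception (the fixed 100 ms backoff is just a placeholder):

    // Sketch: write one batch (max 25 items) and resubmit whatever comes back unprocessed.
    private static void writeBatchWithRetry(AmazonDynamoDB dynamo,
                                            Map<String, List<WriteRequest>> requestItems) throws InterruptedException {
        Map<String, List<WriteRequest>> remaining = requestItems;
        while (!remaining.isEmpty()) {
            BatchWriteItemResult result =
                    dynamo.batchWriteItem(new BatchWriteItemRequest().withRequestItems(remaining));
            remaining = result.getUnprocessedItems();
            if (!remaining.isEmpty()) {
                Thread.sleep(100); // placeholder backoff before retrying the leftovers
            }
        }
    }

Each worker submitted to the executor would call something like this instead of a bare batchWriteItem, so throttled items are eventually written; a real implementation would use exponential backoff rather than a fixed sleep. Note also that DynamoDB Local ignores provisioned throughput settings, which is consistent with the WriteCapacityUnits change having no visible effect.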



Source: https://stackoverflow.com/questions/56811061/how-to-get-optimal-bulk-insertion-rate-in-dynamodb-through-executor-framework-in
