Traceback (most recent call last):
  File "batchpy.py", line 61, in <module>
    obj.batch_w1()
  File "batchpy.py", line 49, in batch_w1
    batch.put_
There are two ways you can handle this problem:
Increase the provisioned throughput (for this option you have to pay more).
Implement the retry logic at the application level, which is what we usually end up doing at some point: call DynamoDB, check for a throughput-exceeded exception, and if it occurs, sleep for a few seconds and issue the same request again (this is what we have implemented in our app).
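A minimal sketch of that retry-and-sleep logic, written as a generic helper so it can wrap any DynamoDB call. The table name, item, and wiring to boto3 in the comment are hypothetical; the backoff numbers are just illustrative defaults:

```python
import time

def retry_with_backoff(fn, is_throttle_error, max_attempts=5, base_delay=0.5):
    """Call fn(); if it raises a throttling error, sleep with
    exponential backoff and retry, up to max_attempts tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            # Re-raise anything that isn't throttling, or if we're out of tries.
            if not is_throttle_error(exc) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# With boto3 it would be used roughly like this (table name is hypothetical):
# import boto3
# from botocore.exceptions import ClientError
# table = boto3.resource("dynamodb").Table("my-table")
# retry_with_backoff(
#     lambda: table.put_item(Item={"pk": "1"}),
#     lambda e: isinstance(e, ClientError)
#               and e.response["Error"]["Code"] == "ProvisionedThroughputExceededException",
# )
```

Exponential backoff is preferable to a fixed sleep because repeated fixed-interval retries can keep the table saturated and prolong the throttling.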
DynamoDB uses a provisioned throughput model for both reads and writes. That means your application will receive errors if it tries to perform more reads or writes than you have allocated to the table.
AWS has done a number of things to help out with this:
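For example, the AWS SDKs already retry throttled requests with backoff on your behalf, and in boto3 this is tunable through the client config. A sketch of that configuration (the attempt count and mode shown are just an illustration, not a recommendation):

```python
import boto3
from botocore.config import Config

# Ask botocore to retry throttled/failed requests with built-in backoff.
# "adaptive" mode also rate-limits the client to match available capacity.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

dynamodb = boto3.client("dynamodb", config=retry_config)
```

With this in place, many transient `ProvisionedThroughputExceededException` errors are absorbed by the SDK before your application ever sees them.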
Depending on the type of application you are building, there are several things you can do to deal with these errors: