Is it averaged per second? Per minute? Per hour?
If I pay for 10 "read units" which allows for 10 highly consistent reads per second, will I be throttled if I try to do 20 reads in a single second, even if it was the only 20 reads that occurred in the last hour?
Yes - this follows from the very concept of Amazon DynamoDB: fast and predictable performance with seamless scalability. The quoted FAQ already addresses this correctly (i.e. you have to take operations/second literally), though the calculation is better illustrated in Provisioned Throughput in Amazon DynamoDB:
A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size. Similarly, a unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent reads per second) of items of up to 1KB in size. Larger items will require more capacity. You can calculate the number of units of read and write capacity you need by estimating the number of reads or writes you need to do per second and multiplying by the size of your items (rounded up to the nearest KB).
Units of Capacity required for writes = Number of item writes per second x item size (rounded up to the nearest KB)
Units of Capacity required for reads* = Number of item reads per second x item size (rounded up to the nearest KB)

*If you use eventually consistent reads you'll get twice the throughput in terms of reads per second.
[emphasis mine]
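To make the arithmetic concrete, here is a minimal sketch of those two formulas in Java. This is an assumed helper for illustration, not an AWS API, and the numbers in the usage example are hypothetical:

```java
// Sketch of the capacity formulas quoted above (assumed helper, not an AWS API).
public final class CapacityCalculator {

    // Write units = writes per second x item size rounded up to the nearest KB.
    static long writeUnits(long writesPerSecond, double itemSizeKB) {
        return writesPerSecond * (long) Math.ceil(itemSizeKB);
    }

    // Read units = reads per second x item size rounded up to the nearest KB;
    // eventually consistent reads need only half as many units.
    static long readUnits(long readsPerSecond, double itemSizeKB, boolean eventuallyConsistent) {
        long units = readsPerSecond * (long) Math.ceil(itemSizeKB);
        return eventuallyConsistent ? (units + 1) / 2 : units;
    }

    public static void main(String[] args) {
        // Hypothetical workload: 50 reads/second of 3.2KB items, 10 writes/second of 0.5KB items.
        System.out.println(readUnits(50, 3.2, false)); // 200 strongly consistent read units
        System.out.println(readUnits(50, 3.2, true));  // 100 eventually consistent read units
        System.out.println(writeUnits(10, 0.5));       // 10 write units (0.5KB rounds up to 1KB)
    }
}
```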
Getting these calculations right for real-world use cases can be complex, though, so make sure to check further details such as the Provisioned Throughput Guidelines in Amazon DynamoDB as well.
From AWS:

DynamoDB provides some flexibility in the per-partition throughput provisioning. When you are not fully utilizing a partition's throughput, DynamoDB retains a portion of your unused capacity for later bursts of throughput usage. DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed very quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table. However, do not design your application so that it depends on burst capacity being available at all times: DynamoDB can and does use burst capacity for background maintenance and other tasks without prior notice.
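As a back-of-the-envelope illustration (my arithmetic, not AWS's): a table provisioned at 10 write units/second can bank up to 300 s x 10 = 3,000 unused write units, which a sudden burst could then drain much faster than the provisioned 10 writes/second.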
We set our write limit to 10 units/sec for one of the tables. The CloudWatch graph (see image) shows we exceeded this by one unit (11 writes/sec). I'm assuming there's a small amount of wiggle room (<= 10%). Again, I'm just assuming...
https://aws.amazon.com/blogs/developer/rate-limited-scans-in-amazon-dynamodb/

It is possible to limit the consumed capacity by using the RateLimiter class from the Google Guava library.
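A minimal sketch of that idea, assuming the AWS SDK for Java v1 and Guava are on the classpath; the table name and the 25-units/second rate are placeholders:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ReturnConsumedCapacity;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import com.google.common.util.concurrent.RateLimiter;
import java.util.Map;

public class RateLimitedScan {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // Cap the scan at 25 read capacity units per second (placeholder value).
        RateLimiter rateLimiter = RateLimiter.create(25.0);

        Map<String, AttributeValue> lastKey = null;
        do {
            ScanRequest request = new ScanRequest()
                    .withTableName("MyTable") // placeholder table name
                    .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                    .withExclusiveStartKey(lastKey);

            ScanResult result = client.scan(request);

            // Ask DynamoDB how much capacity this page actually consumed,
            // then block until the limiter can grant that many permits,
            // which paces the next page of the scan.
            double consumed = result.getConsumedCapacity().getCapacityUnits();
            rateLimiter.acquire(Math.max(1, (int) Math.ceil(consumed)));

            lastKey = result.getLastEvaluatedKey();
        } while (lastKey != null);
    }
}
```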
It appears that they track writes in a five-minute window and will throttle you when your average over the last five minutes exceeds your provisioned throughput.
I did some testing. I created a test table with throughput of 1 write/second. If I don't write to it for a while and then send a stream of requests, Amazon seems to accept about 300 before it starts throttling.
The caveat, of course, is that this is not stated in any official Amazon documentation and could change at any time.
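A rough sketch of that experiment, again assuming the AWS SDK for Java v1; the table name and key attribute are placeholders, and the table is assumed to be provisioned at 1 write unit/second:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import java.util.Collections;

public class BurstTest {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        int accepted = 0;
        try {
            // Fire writes as fast as possible against a 1 write-unit/second table
            // and count how many succeed before the first throttle. Note: the
            // SDK's default retry policy may retry throttled requests and so
            // delay the exception; disable retries for a cleaner measurement.
            while (true) {
                client.putItem("BurstTestTable", // placeholder table name
                        Collections.singletonMap("id",
                                new AttributeValue(String.valueOf(accepted))));
                accepted++;
            }
        } catch (ProvisionedThroughputExceededException e) {
            // With ~5 minutes of banked burst capacity, roughly 300 writes
            // were accepted in the informal test described above.
            System.out.println("Writes accepted before throttling: " + accepted);
        }
    }
}
```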
DynamoDB provides 'burst capacity', which allows for spikes in the amount of data read from a table. You can read more about it under: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Bursting

Basically it's what @abjennings noticed - it uses a 5-minute window to average the number of reads from a table.