I'm using the redis-cli tool to observe redis-server latency. Here's an example:
ubuntu:~$ redis-cli --latency -h 127.0.0.1 -p 6379
min: 0, max: 15, avg: 0.12 (2839 samples)
The --latency switch puts redis-cli into a special mode designed to help you measure the latency between the client and your Redis server. While it runs in that mode, redis-cli pings the server (using the Redis PING command) and keeps track of the minimum, maximum, and average response times it gets, in milliseconds.
This is a useful tool for ruling out network issues when you are using a remote Redis server.
The redis-cli --latency -h <host> -p <port> command is a tool that helps you troubleshoot and understand latency problems you may be experiencing with Redis. It does so by measuring the time, in milliseconds, the Redis server takes to respond to the Redis PING command.
In this context latency is the maximum delay between the time a client issues a command and the time the reply to the command is received by the client. Usually Redis processing time is extremely low, in the sub microsecond range, but there are certain conditions leading to higher latency figures.
-- Redis latency problems troubleshooting
So when we ran the command redis-cli --latency -h 127.0.0.1 -p 6379, redis-cli entered a special mode in which it continuously samples latency (by running PING).
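That sampling loop can be sketched in Python. This is a simplified illustration of the idea, not redis-cli's actual implementation; the ping callable here is a hypothetical stand-in for a real PING round trip to a Redis server:

```python
import time

def sample_latency(ping, n=100):
    """Call ping() n times and return (min, max, avg) latency in milliseconds.

    `ping` stands in for one Redis PING round trip; redis-cli performs the
    equivalent against a live server.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        ping()  # the round trip being timed
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), max(samples), sum(samples) / len(samples)

# A no-op stand-in for a real PING round trip:
lo, hi, avg = sample_latency(lambda: None)
print(f"min: {lo:.2f}, max: {hi:.2f}, avg: {avg:.2f}")
```

With a real server, the ping callable would send PING and wait for the +PONG reply, so the samples would include network round-trip time.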
Now let's break down the data it returns: min: 0, max: 15, avg: 0.12 (2839 samples)
What's (2839 samples)? This is the number of times redis-cli issued the PING command and received a response. In other words, this is your sample size. In our example, we recorded 2839 requests and responses.
What's min: 0? The min value represents the minimum delay between the time the CLI issued PING and the time the reply was received. In other words, this was the absolute best response time from our sampled data.
What's max: 15? The max value is the opposite of min. It represents the maximum delay between the time the CLI issued PING and the time the reply to the command was received. This is the longest response time from our sampled data. In our example of 2839 samples, the slowest round trip took 15ms.
What's avg: 0.12? The avg value is the average response time in milliseconds across all our sampled data. So on average, each of our 2839 samples took 0.12ms.
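Putting the three statistics together, the summary line itself is easy to reproduce. Here is a minimal sketch (latency_report is a hypothetical helper, not part of redis-cli) that formats a list of millisecond samples the way the tool reports them:

```python
def latency_report(samples_ms):
    """Summarize latency samples in the style of redis-cli --latency output."""
    lo = min(samples_ms)
    hi = max(samples_ms)
    avg = sum(samples_ms) / len(samples_ms)
    # min and max are shown as whole milliseconds, avg with two decimals
    return f"min: {lo:.0f}, max: {hi:.0f}, avg: {avg:.2f} ({len(samples_ms)} samples)"

print(latency_report([0, 1, 0, 15, 0]))
# → min: 0, max: 15, avg: 3.20 (5 samples)
```

A single 15ms outlier barely moves min but dominates max, which is why max is the number to watch for intermittent stalls.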
Basically, higher numbers for min, max, and avg are a bad thing.
Some good follow-up material on how to use this data: