The benchmark documentation says concurrency is how many requests are done simultaneously, while number of requests is the total number of requests. What I'm wondering is, if I put
It means a single test with a total of 100 requests, keeping 20 requests open at all times. I think the misconception you have is that requests all take the same amount of time, which is virtually never the case. Instead of issuing requests in batches of 20, ab simply starts with 20 requests and issues a new one each time an existing request finishes.
For example, testing with ab -n 10 -c 3 would start with 3 concurrent requests:
[1, 2, 3]
Let's say #2 finishes first, ab replaces it with a fourth:
[1, 4, 3]
... then #1 may finish, replaced by a fifth:
[5, 4, 3]
... Then #3 finishes:
[5, 4, 6]
... and so on, until a total of 10 requests have been made. (As requests 8, 9, and 10 complete, the concurrency tapers off to 0, of course.)
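The sliding-window behavior described above can be sketched as a small simulation. This is a model of the scheduling idea only, not ab's actual C code; the request durations are random placeholders:

```python
import heapq
import random

def simulate_ab(total_requests, concurrency, seed=42):
    """Model of ab's scheduling: keep `concurrency` requests in flight
    and start a new one whenever an existing one finishes."""
    rng = random.Random(seed)
    in_flight = []          # min-heap of (finish_time, request_id)
    started = 0
    completed = []
    # Start with a full window, like `ab -c` firing all slots at once.
    while started < total_requests and len(in_flight) < concurrency:
        started += 1
        heapq.heappush(in_flight, (rng.uniform(0.1, 1.0), started))
    peak = len(in_flight)
    while in_flight:
        finish_time, req_id = heapq.heappop(in_flight)
        completed.append(req_id)
        # Replace the finished request with a fresh one, if any remain;
        # as the last few complete, the concurrency tapers off to 0.
        if started < total_requests:
            started += 1
            heapq.heappush(in_flight, (finish_time + rng.uniform(0.1, 1.0), started))
            peak = max(peak, len(in_flight))
    return completed, peak

# e.g. the `ab -n 10 -c 3` example above
order, peak = simulate_ab(10, 3)
```

All 10 requests complete in some interleaved order, and the number in flight never exceeds 3.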
Make sense?
As to your question about why you see results with more failures than total requests... I don't know the answer to that. I can't say I've seen that. Can you post links or test cases that show this?
Update: In looking at the source, ab tracks four types of errors, which are detailed below the "Failed requests: ..." line:

- Connect (err_conn in source): incremented when ab fails to set up the HTTP connection
- Receive (err_recv in source): incremented when a read of the connection fails
- Length (err_length in source): incremented when the response length is different from the length of the first good response received
- Exceptions (err_except in source): incremented when ab sees an error while polling the connection socket (e.g. the connection is killed by the server?)

The logic around when these occur and how they are counted (and how the total bad count is tracked) is, of necessity, a bit complex. It looks like the current version of ab should only count a failure once per request, but perhaps the author of that article was using a prior version that was somehow counting more than one? That's my best guess.
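A simplified model of that bookkeeping may make the distinction clearer. This is a sketch in Python, not ab's real C code; the `RequestResult` flags and `tally` helper are hypothetical names. The key point is that each counter can be incremented independently, while the failed-request total counts each bad request at most once:

```python
from dataclasses import dataclass

@dataclass
class RequestResult:
    # Hypothetical per-request outcome flags mirroring ab's four counters.
    conn_failed: bool = False
    recv_failed: bool = False
    wrong_length: bool = False
    poll_error: bool = False

def tally(results):
    counters = {"Connect": 0, "Receive": 0, "Length": 0, "Exceptions": 0}
    failed = 0
    for r in results:
        if r.conn_failed:
            counters["Connect"] += 1
        if r.recv_failed:
            counters["Receive"] += 1
        if r.wrong_length:
            counters["Length"] += 1
        if r.poll_error:
            counters["Exceptions"] += 1
        # A request is "failed" at most once, even if it tripped
        # several of the error counters above.
        if r.conn_failed or r.recv_failed or r.wrong_length or r.poll_error:
            failed += 1
    return failed, counters
```

With this scheme, a run where every bad request trips three counters still reports a failure count equal to the number of bad requests, not three times it.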
If you're able to reproduce the behavior, definitely file a bug.
I see nothing wrong. Failed requests can increment more than one error each. That's how ab works.
There are various statically declared buffers of fixed length. Combined with the lazy parsing of the command-line arguments, the response headers from the server, and other external inputs, this might bite you.
You might notice, for example, that the previous node results have a similar count for 3 of the error counters. Most probably, of the 100 000 requests made, only 8409 failed, not 25227:
Receive: 8409, Length: 8409, Exceptions: 8409
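The arithmetic behind that reading can be spelled out. The per-request failure mode here is an assumption (e.g. a dropped connection failing the read, yielding a short body, and raising a poll exception all at once), but it shows how three counters can sum past the true failure count:

```python
# Assumed scenario: the same 8409 bad requests each trip the Receive,
# Length, and Exceptions counters, so the counters sum well past the
# number of requests that actually failed.
receive = length = exceptions = 8409
counter_sum = receive + length + exceptions   # naive sum of the counters
failed_requests = 8409                        # each bad request counted once
```

So 25227 is the sum of the counters, while the number of distinct failed requests is 8409.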