Question
I have a web service that receives data in JSON format, processes the data, and then returns the result to the requester.
I want to measure the request, response, and total time using cURL.
My example request looks like:
curl -X POST -d @file server:port
and I currently measure this using the time command in Linux:
time curl -X POST -d @file server:port
The time command only measures total time, though - which isn't quite what I am looking for.
Is there any way to measure request and response times using cURL?
Answer 1:
From this brilliant blog post... https://blog.josephscott.org/2011/10/14/timing-details-with-curl/
cURL supports formatted output for the details of the request (see the cURL manpage for details, under -w, --write-out <format>). For our purposes we'll focus just on the timing details that are provided.
Create a new file, curl-format.txt, and paste in:
   time_namelookup:  %{time_namelookup}\n
      time_connect:  %{time_connect}\n
   time_appconnect:  %{time_appconnect}\n
  time_pretransfer:  %{time_pretransfer}\n
     time_redirect:  %{time_redirect}\n
time_starttransfer:  %{time_starttransfer}\n
                   ----------\n
        time_total:  %{time_total}\n
Make a request:
curl -w "@curl-format.txt" -o /dev/null -s "http://wordpress.com/"
Or on Windows, it's...
curl -w "@curl-format.txt" -o NUL -s "http://wordpress.com/"
What this does:
-w "@curl-format.txt"
tells cURL to use our format file-o /dev/null
redirects the output of the request to /dev/null-s
tells cURL not to show a progress meter"http://wordpress.com/"
is
the URL we are requesting. Use quotes particularly if your URL has "&" query string parameters
And here is what you get back:
time_namelookup: 0.001
time_connect: 0.037
time_appconnect: 0.000
time_pretransfer: 0.037
time_redirect: 0.000
time_starttransfer: 0.092
----------
time_total: 0.164
Make a Windows shortcut (aka BAT file)
Put this command in CURLTIME.BAT (in the same folder as curl.exe)
curl -w "@%~dp0curl-format.txt" -o NUL -s %*
Then you can simply call...
curltime wordpress.org
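On Linux or macOS the same shortcut works as a shell alias; a minimal sketch, assuming curl-format.txt is saved in your home directory:
alias curltime='curl -w "@$HOME/curl-format.txt" -o /dev/null -s'
After that, curltime wordpress.org behaves the same way as the BAT file above.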
Answer 2:
Here is the answer:
curl -X POST -d @file server:port -w %{time_connect}:%{time_starttransfer}:%{time_total}
All of the variables that can be used with -w can be found in man curl.
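If you want labels and units so the three numbers are easier to read, the same command can be written with a more verbose format string (just a variation on the above; -o /dev/null -s hides the response body and progress meter):
curl -X POST -d @file server:port -o /dev/null -s -w 'connect: %{time_connect}s\nstarttransfer: %{time_starttransfer}s\ntotal: %{time_total}s\n'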
Answer 3:
Option 1. To measure the total time:
curl -o /dev/null -s -w 'Total: %{time_total}s\n' https://www.google.com
Option 2. To get the time to establish the connection, TTFB (time to first byte), and the total time:
curl -o /dev/null -s -w 'Establish Connection: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n' https://www.google.com
Ref: Get response time with curl
Answer 4:
A shortcut you can add to your .bashrc etc., based on other answers here:
function perf {
curl -o /dev/null -s -w "%{time_connect} + %{time_starttransfer} = %{time_total}\n" "$1"
}
Usage:
> perf stackoverflow.com
0.521 + 0.686 = 1.290
Answer 5:
The following is inspired by Simon's answer. It's self-contained (it doesn't require a separate format file), which makes it great for inclusion in .bashrc.
curl_time() {
curl -so /dev/null -w "\
namelookup: %{time_namelookup}s\n\
connect: %{time_connect}s\n\
appconnect: %{time_appconnect}s\n\
pretransfer: %{time_pretransfer}s\n\
redirect: %{time_redirect}s\n\
starttransfer: %{time_starttransfer}s\n\
-------------------------\n\
total: %{time_total}s\n" "$@"
}
Furthermore, it should work with all arguments that curl normally takes, since "$@" just passes them through. For example, you can do:
curl_time -X POST -H "Content-Type: application/json" -d '{"key": "val"}' https://postman-echo.com/post
Output:
namelookup: 0,125000s
connect: 0,250000s
appconnect: 0,609000s
pretransfer: 0,609000s
redirect: 0,000000s
starttransfer: 0,719000s
-------------------------
total: 0,719000s
Answer 6:
If you want to analyze or summarize the latency, you can try Apache Bench (ab):
ab -n [number of samples] [url]
For example:
ab -n 100 http://www.google.com/
It will show:
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.google.com (be patient).....done
Server Software: gws
Server Hostname: www.google.com
Server Port: 80
Document Path: /
Document Length: 12419 bytes
Concurrency Level: 1
Time taken for tests: 10.700 seconds
Complete requests: 100
Failed requests: 97
(Connect: 0, Receive: 0, Length: 97, Exceptions: 0)
Total transferred: 1331107 bytes
HTML transferred: 1268293 bytes
Requests per second: 9.35 [#/sec] (mean)
Time per request: 107.004 [ms] (mean)
Time per request: 107.004 [ms] (mean, across all concurrent requests)
Transfer rate: 121.48 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 20 22 0.8 22 26
Processing: 59 85 108.7 68 911
Waiting: 59 85 108.7 67 910
Total: 80 107 108.8 90 932
Percentage of the requests served within a certain time (ms)
50% 90
66% 91
75% 93
80% 95
90% 105
95% 111
98% 773
99% 932
100% 932 (longest request)
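Since the original question POSTs a JSON file, ab can do that too; a sketch along these lines should work, reusing the file and server:port from the question (-p supplies the request body, -T sets the Content-Type, and ab needs an explicit path on the URL):
ab -n 100 -p file -T 'application/json' http://server:port/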
Answer 7:
Another way is to configure ~/.curlrc like this:
-w "\n\n==== cURL measurements stats ====\ntotal: %{time_total} seconds \nsize: %{size_download} bytes \ndnslookup: %{time_namelookup} seconds \nconnect: %{time_connect} seconds \nappconnect: %{time_appconnect} seconds \nredirect: %{time_redirect} seconds \npretransfer: %{time_pretransfer} seconds \nstarttransfer: %{time_starttransfer} seconds \ndownloadspeed: %{speed_download} byte/sec \nuploadspeed: %{speed_upload} byte/sec \n\n"
So the output of curl is:
❯❯ curl -I https://google.com
HTTP/2 301
location: https://www.google.com/
content-type: text/html; charset=UTF-8
date: Mon, 04 Mar 2019 08:02:43 GMT
expires: Wed, 03 Apr 2019 08:02:43 GMT
cache-control: public, max-age=2592000
server: gws
content-length: 220
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
alt-svc: quic=":443"; ma=2592000; v="44,43,39"
==== cURL measurements stats ====
total: 0.211117 seconds
size: 0 bytes
dnslookup: 0.067179 seconds
connect: 0.098817 seconds
appconnect: 0.176232 seconds
redirect: 0.000000 seconds
pretransfer: 0.176438 seconds
starttransfer: 0.209634 seconds
downloadspeed: 0.000 byte/sec
uploadspeed: 0.000 byte/sec
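Because ~/.curlrc applies to every invocation, you can skip it for a single call with curl's -q (--disable) flag, which must come before any other argument:
curl -q -I https://google.com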
Answer 8:
Hey is better than Apache Bench and has fewer issues with SSL:
./hey https://google.com -more
Summary:
Total: 3.0960 secs
Slowest: 1.6052 secs
Fastest: 0.4063 secs
Average: 0.6773 secs
Requests/sec: 64.5992
Response time histogram:
0.406 [1] |
0.526 [142] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
0.646 [1] |
0.766 [6] |∎∎
0.886 [0] |
1.006 [0] |
1.126 [0] |
1.246 [12] |∎∎∎
1.365 [32] |∎∎∎∎∎∎∎∎∎
1.485 [5] |∎
1.605 [1] |
Latency distribution:
10% in 0.4265 secs
25% in 0.4505 secs
50% in 0.4838 secs
75% in 1.2181 secs
90% in 1.2869 secs
95% in 1.3384 secs
99% in 1.4085 secs
Details (average, fastest, slowest):
DNS+dialup: 0.1150 secs, 0.0000 secs, 0.4849 secs
DNS-lookup: 0.0032 secs, 0.0000 secs, 0.0319 secs
req write: 0.0001 secs, 0.0000 secs, 0.0007 secs
resp wait: 0.2068 secs, 0.1690 secs, 0.4906 secs
resp read: 0.0117 secs, 0.0011 secs, 0.2375 secs
Status code distribution:
[200] 200 responses
References
- https://github.com/rakyll/hey
Answer 9:
Another option, perhaps the simplest one in terms of the command line, is adding the built-in --trace-time option:
curl -X POST -d @file server:port --trace-time
Even though it technically does not output the timings of the various steps as requested by the OP, it does display timestamps for all steps of the request, as shown below. Using those, you can fairly easily calculate how long each step took (see the small awk sketch after the sample trace).
$ curl https://www.google.com --trace-time -v -o /dev/null
13:29:11.148734 * Rebuilt URL to: https://www.google.com/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
13:29:11.149958 * Trying 172.217.20.36...
13:29:11.149993 * TCP_NODELAY set
13:29:11.163177 * Connected to www.google.com (172.217.20.36) port 443 (#0)
13:29:11.164768 * ALPN, offering h2
13:29:11.164804 * ALPN, offering http/1.1
13:29:11.164833 * successfully set certificate verify locations:
13:29:11.164863 * CAfile: none
CApath: /etc/ssl/certs
13:29:11.165046 } [5 bytes data]
13:29:11.165099 * (304) (OUT), TLS handshake, Client hello (1):
13:29:11.165128 } [512 bytes data]
13:29:11.189518 * (304) (IN), TLS handshake, Server hello (2):
13:29:11.189537 { [100 bytes data]
13:29:11.189628 * TLSv1.2 (IN), TLS handshake, Certificate (11):
13:29:11.189658 { [2104 bytes data]
13:29:11.190243 * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
13:29:11.190277 { [115 bytes data]
13:29:11.190507 * TLSv1.2 (IN), TLS handshake, Server finished (14):
13:29:11.190539 { [4 bytes data]
13:29:11.190770 * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
13:29:11.190797 } [37 bytes data]
13:29:11.190890 * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
13:29:11.190915 } [1 bytes data]
13:29:11.191023 * TLSv1.2 (OUT), TLS handshake, Finished (20):
13:29:11.191053 } [16 bytes data]
13:29:11.204324 * TLSv1.2 (IN), TLS handshake, Finished (20):
13:29:11.204358 { [16 bytes data]
13:29:11.204417 * SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
13:29:11.204451 * ALPN, server accepted to use h2
13:29:11.204483 * Server certificate:
13:29:11.204520 * subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=www.google.com
13:29:11.204555 * start date: Oct 2 07:29:00 2018 GMT
13:29:11.204585 * expire date: Dec 25 07:29:00 2018 GMT
13:29:11.204623 * subjectAltName: host "www.google.com" matched cert's "www.google.com"
13:29:11.204663 * issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
13:29:11.204701 * SSL certificate verify ok.
13:29:11.204754 * Using HTTP2, server supports multi-use
13:29:11.204795 * Connection state changed (HTTP/2 confirmed)
13:29:11.204840 * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
13:29:11.204881 } [5 bytes data]
13:29:11.204983 * Using Stream ID: 1 (easy handle 0x55846ef24520)
13:29:11.205034 } [5 bytes data]
13:29:11.205104 > GET / HTTP/2
13:29:11.205104 > Host: www.google.com
13:29:11.205104 > User-Agent: curl/7.61.0
13:29:11.205104 > Accept: */*
13:29:11.205104 >
13:29:11.218116 { [5 bytes data]
13:29:11.218173 * Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
13:29:11.218211 } [5 bytes data]
13:29:11.251936 < HTTP/2 200
13:29:11.251962 < date: Fri, 19 Oct 2018 10:29:11 GMT
13:29:11.251998 < expires: -1
13:29:11.252046 < cache-control: private, max-age=0
13:29:11.252085 < content-type: text/html; charset=ISO-8859-1
13:29:11.252119 < p3p: CP="This is not a P3P policy! See g.co/p3phelp for more info."
13:29:11.252160 < server: gws
13:29:11.252198 < x-xss-protection: 1; mode=block
13:29:11.252228 < x-frame-options: SAMEORIGIN
13:29:11.252262 < set-cookie: 1P_JAR=2018-10-19-10; expires=Sun, 18-Nov-2018 10:29:11 GMT; path=/; domain=.google.com
13:29:11.252297 < set-cookie: NID=141=pzXxp1jrJmLwFVl9bLMPFdGCtG8ySQKxB2rlDWgerrKJeXxfdmB1HhJ1UXzX-OaFQcnR1A9LKYxi__PWMigjMBQHmI3xkU53LI_TsYRbkMNJNdxs-caQQ7fEcDGE694S; expires=Sat, 20-Apr-2019 10:29:11 GMT; path=/; domain=.google.com; HttpOnly
13:29:11.252336 < alt-svc: quic=":443"; ma=2592000; v="44,43,39,35"
13:29:11.252368 < accept-ranges: none
13:29:11.252408 < vary: Accept-Encoding
13:29:11.252438 <
13:29:11.252473 { [5 bytes data]
100 12215 0 12215 0 0 112k 0 --:--:-- --:--:-- --:--:-- 112k
13:29:11.255674 * Connection #0 to host www.google.com left intact
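To turn those timestamps into per-step durations, a small awk filter can compute the delta between consecutive trace lines. A rough sketch (it assumes the first field is the HH:MM:SS.ffffff timestamp that --trace-time prints, and that the run does not cross midnight; curl's verbose output goes to stderr, hence the 2>&1):
curl -sS https://www.google.com --trace-time -v -o /dev/null 2>&1 |
awk '
  $1 ~ /^[0-9][0-9]:[0-9][0-9]:[0-9][0-9]\./ {
      split($1, t, ":")
      now = t[1]*3600 + t[2]*60 + t[3]     # seconds since midnight
      if (prev != "") printf "+%.6fs  %s\n", now - prev, $0
      else            print "            " $0
      prev = now
      next
  }
  { print }    # pass through lines without a timestamp
'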
Answer 10:
curl -v --trace-time
Note that --trace-time only takes effect together with verbose (or trace) output, hence the -v.
Answer 11:
I made a friendly formatter for sniffing cURL requests to help with debugging (see the comments for usage). It contains every known output parameter you can write out, in an easy-to-read format.
https://gist.github.com/manifestinteractive/ce8dec10dcb4725b8513
Answer 12:
Here is a string you can use with -w; it contains all the options that curl -w supports.
{"contentType":"%{content_type}","filenameEffective":"%{filename_effective}","ftpEntryPath":"%{ftp_entry_path}","httpCode":"%{http_code}","httpConnect":"%{http_connect}","httpVersion":"%{http_version}","localIp":"%{local_ip}","localPort":"%{local_port}","numConnects":"%{num_connects}","numRedirects":"%{num_redirects}","proxySslVerifyResult":"%{proxy_ssl_verify_result}","redirectUrl":"%{redirect_url}","remoteIp":"%{remote_ip}","remotePort":"%{remote_port}","scheme":"%{scheme}","size":{"download":"%{size_download}","header":"%{size_header}","request":"%{size_request}","upload":"%{size_upload}"},"speed":{"download":"%{speed_download}","upload":"%{speed_upload}"},"sslVerifyResult":"%{ssl_verify_result}","time":{"appconnect":"%{time_appconnect}","connect":"%{time_connect}","namelookup":"%{time_namelookup}","pretransfer":"%{time_pretransfer}","redirect":"%{time_redirect}","starttransfer":"%{time_starttransfer}","total":"%{time_total}"},"urlEffective":"%{url_effective}"}
It outputs JSON.
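If you save that string to a file (say, curl-format.json) and pass it with @, the result can be piped straight into jq for pretty-printing or filtering; for example, to look at just the timing block:
curl -s -o /dev/null -w '@curl-format.json' https://example.com | jq '.time'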
Answer 13:
Here's a Bash one-liner to hit the same server repeatedly:
for i in {1..1000}; do curl -s -o /dev/null -w "%{time_total}\n" http://server/get_things; done
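If you also want a quick summary of those samples, the loop's output can be piped into awk (a small add-on to the one-liner above):
for i in {1..1000}; do curl -s -o /dev/null -w "%{time_total}\n" http://server/get_things; done | awk '{ sum += $1; if ($1 > max) max = $1 } END { printf "n=%d  avg=%.3fs  max=%.3fs\n", NR, sum/NR, max }'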
Answer 14:
This is a modified version of Simon's answer that puts the multi-line output on a single line. It also prepends the current timestamp, so it's easier to follow each line of output.
Sample format file:
$ cat time-format.txt
time_namelookup:%{time_namelookup} time_connect:%{time_connect} time_appconnect:%{time_appconnect} time_pretransfer:%{time_pretransfer} time_redirect:%{time_redirect} time_starttransfer:%{time_starttransfer} time_total:%{time_total}\n
Example command:
$ while [ 1 ];do echo -n "$(date) - " ; curl -w @time-format.txt -o /dev/null -s https://myapp.mydom.com/v1/endpt-http; sleep 1; done | grep -v time_total:0
Results:
Mon Dec 16 17:51:47 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:0.172 time_pretransfer:0.172 time_redirect:0.000 time_starttransfer:1.666 time_total:1.666
Mon Dec 16 17:51:50 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:0.175 time_pretransfer:0.175 time_redirect:0.000 time_starttransfer:3.794 time_total:3.795
Mon Dec 16 17:51:55 UTC 2019 - time_namelookup:0.004 time_connect:0.017 time_appconnect:0.175 time_pretransfer:0.175 time_redirect:0.000 time_starttransfer:1.971 time_total:1.971
Mon Dec 16 17:51:58 UTC 2019 - time_namelookup:0.004 time_connect:0.014 time_appconnect:0.173 time_pretransfer:0.173 time_redirect:0.000 time_starttransfer:1.161 time_total:1.161
Mon Dec 16 17:52:00 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:0.166 time_pretransfer:0.167 time_redirect:0.000 time_starttransfer:1.434 time_total:1.434
Mon Dec 16 17:52:02 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:0.177 time_pretransfer:0.177 time_redirect:0.000 time_starttransfer:5.119 time_total:5.119
Mon Dec 16 17:52:08 UTC 2019 - time_namelookup:0.004 time_connect:0.014 time_appconnect:0.172 time_pretransfer:0.172 time_redirect:0.000 time_starttransfer:30.185 time_total:30.185
Mon Dec 16 17:52:39 UTC 2019 - time_namelookup:0.004 time_connect:0.014 time_appconnect:0.164 time_pretransfer:0.164 time_redirect:0.000 time_starttransfer:30.175 time_total:30.176
Mon Dec 16 17:54:28 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:3.191 time_pretransfer:3.191 time_redirect:0.000 time_starttransfer:3.212 time_total:3.212
Mon Dec 16 17:56:08 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:1.184 time_pretransfer:1.184 time_redirect:0.000 time_starttransfer:1.215 time_total:1.215
Mon Dec 16 18:00:24 UTC 2019 - time_namelookup:0.004 time_connect:0.015 time_appconnect:0.181 time_pretransfer:0.181 time_redirect:0.000 time_starttransfer:1.267 time_total:1.267
I used the above to catch slow responses on the above endpoint.
Source: https://stackoverflow.com/questions/18215389/how-do-i-measure-request-and-response-times-at-once-using-curl