I have been using curl -i http://website.com
for a long time. It's great: it shows the response headers along with the body.
I also use jq, a command-line JSON processor.
Since I ran into the same problem today, I came up with:
curl -i http://some-server/get.json | awk '{ sub("\r$", ""); print }' | awk -v RS= 'NR==1{print > "/dev/stderr";next} 1' | jq .
Most likely not the best solution, but it works for me.
Explanation: the first awk program strips the trailing carriage return from each line, converting Windows (CRLF) line endings to Unix (LF) line endings.
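Here is the CRLF-stripping step on its own, fed with a made-up two-line sample so you can see the effect without a network call:

```shell
# Sample CRLF-terminated input (hypothetical); awk removes the
# trailing \r from each line, leaving plain LF line endings.
printf 'HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n' \
  | awk '{ sub("\r$", ""); print }'
```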
In the second program, -v RS=
instructs awk to use one or more blank lines as the record separator[1] (so the header block and the body become separate records). NR==1{print > "/dev/stderr";next}
prints the first record (the headers, NR==1) to stderr; the next statement makes awk immediately stop processing the current record and move on to the next one[2]. 1
is just shorthand for {print $0}
[3].
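To see the whole header/body split working end to end, you can replace curl with a printf that emits a minimal, made-up HTTP response. The headers land on stderr and only the JSON body reaches stdout (and would reach jq):

```shell
# Hypothetical HTTP response: headers, a blank line, then a JSON body.
# After CRLF stripping, paragraph mode (RS=) treats the blank line as
# the record separator; record 1 (headers) goes to stderr, record 2
# (the body) goes to stdout. Stderr is discarded here for clarity.
printf 'HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{"ok": true}\n' \
  | awk '{ sub("\r$", ""); print }' \
  | awk -v RS= 'NR==1{print > "/dev/stderr"; next} 1' \
  2>/dev/null
```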
[1] https://stackoverflow.com/a/33297878
[2] https://www.gnu.org/software/gawk/manual/html_node/Next-Statement.html
[3] https://stackoverflow.com/a/20263611