I have my config set up to handle a bunch of GET requests which render pixels; that works fine for analytics, and I parse the query strings for logging. With an additional third-party vendor now sending data via POST, I also need to log the POST request body.
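For reference, the existing GET pixel setup looks roughly like this (the names, paths, and fields below are illustrative, not my actual config):

log_format pixel '$remote_addr [$time_local] "$request" "$args"';   # declared in the http context

location = /pixel.gif {
    access_log /mnt/logs/nginx/pixel.access.log pixel;
    empty_gif;   # serves a 1x1 transparent GIF via ngx_http_empty_gif_module
}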
OK. So finally I was able to log the POST data and return a 200. It's kind of a hacky solution that I'm not too proud of, which basically overrides the natural behavior of error_page, but my inexperience with nginx plus timelines led me to this solution:
location /bk {
    if ($request_method != POST) {
        return 405;
    }
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;
    proxy_pass $scheme://127.0.0.1:$server_port/success;
    # NB: log_format really belongs in the http context (see the note further down)
    log_format my_tracking $request_body;
    access_log /mnt/logs/nginx/my_tracking.access.log my_tracking;
}

location /success {
    return 200;
}

error_page 500 502 503 504 /50x.html;

location = /50x.html {
    root /var/www/nginx-default;
    log_format my_tracking_2 $request_body;
    access_log /mnt/logs/nginx/my_tracking.access.log my_tracking_2;
}
Now according to that config, it would seem that the proxy_pass should return a 200 all the time. Occasionally I would get a 500, but when I threw in an error_log to see what was going on, all of my request_body data was in there and I couldn't see a problem. So I caught that case and wrote to the same log. Since nginx doesn't allow two log_format directives with the same name, I just used my_tracking_2 and wrote to the same log as when it returns a 200. Definitely not the most elegant solution, and I welcome anything better. I've seen the post module, but in my scenario I couldn't recompile from source.
FWIW, this config worked for me:
location = /logpush.html {
    if ($request_method = POST) {
        access_log /var/log/nginx/push.log push_requests;
        proxy_pass $scheme://127.0.0.1/logsink;
        break;
    }
    return 200 $scheme://$host/serviceup.html;
}

location /logsink {
    return 200;
}
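The push_requests log format referenced above isn't shown; it has to be defined at the http level. Something along these lines would do (the exact fields are my assumption, not part of the original answer):

# assumed definition of the push_requests format, placed in the http context
log_format push_requests '$remote_addr [$time_local] "$request" "$request_body"';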
nginx log format taken from here: http://nginx.org/en/docs/http/ngx_http_log_module.html. No need to install anything extra; this worked for me for both GET and POST requests:
# log_format has to be declared at the http level, not inside a location
log_format postdata '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $bytes_sent '
                    '"$http_referer" "$http_user_agent" "$request_body"';

upstream my_upstream {
    server upstream_ip:upstream_port;
}

location / {
    access_log /path/to/nginx_access.log postdata;
    proxy_set_header Host $http_host;
    proxy_pass http://my_upstream;
}
Just change upstream_ip and upstream_port.
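Since a location block can only live inside a server block, the full layout would look roughly like this (the listen port and upstream address are still placeholders):

http {
    log_format postdata '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" "$request_body"';

    upstream my_upstream {
        server upstream_ip:upstream_port;   # placeholder backend
    }

    server {
        listen 80;

        location / {
            access_log /path/to/nginx_access.log postdata;
            proxy_set_header Host $http_host;
            proxy_pass http://my_upstream;
        }
    }
}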
This solution works like a charm (updated in 2017 to reflect the fact that log_format needs to go in the http part of the nginx config):
log_format postdata $request_body;

server {
    # (...)

    location = /post.php {
        access_log /var/log/nginx/postdata.log postdata;
        fastcgi_pass php_cgi;
    }
}
I think the trick is making nginx believe that you are going to call a CGI script.
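php_cgi here is a backend name that isn't shown in the snippet; presumably it is an upstream pointing at PHP-FPM, something like the following (the socket path is my assumption):

# assumed definition of the php_cgi backend used by fastcgi_pass above
upstream php_cgi {
    server unix:/var/run/php/php-fpm.sock;   # or 127.0.0.1:9000
}

A working PHP location would normally also include fastcgi_params and set SCRIPT_FILENAME, but those don't affect the logging part.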
The solution below was the best format I found.
log_format postdata escape=json '$remote_addr - $remote_user [$time_local] '
                                '"$request" $status $bytes_sent '
                                '"$http_referer" "$http_user_agent" "$request_body"';

server {
    listen 80;
    server_name api.some.com;

    location / {
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://127.0.0.1:8080;
    }
}
For this input:
curl -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" -X POST http://api.deprod.com/postEndpoint
it generates this nicely escaped result:
201.23.89.149 - [22/Aug/2019:15:58:40 +0000] "POST /postEndpoint HTTP/1.1" 200 265 "" "curl/7.64.0" "{\"key1\":\"value1\", \"key2\":\"value2\"}"
Try echo_read_request_body.
"echo_read_request_body ... Explicitly reads request body so that the $request_body variable will always have non-empty values (unless the body is so big that it has been saved by Nginx to a local temporary file)."
log_format postdata $request_body;   # declared in the http context

location /log {
    access_log /mnt/logs/nginx/my_tracking.access.log postdata;
    echo_read_request_body;
}
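echo_read_request_body comes from the third-party echo module (OpenResty's echo-nginx-module), so nginx must either be built with it or load it dynamically. If it was built as a dynamic module, the load line would look roughly like this (the module filename depends on how it was packaged):

# assumed dynamic-module load for the third-party echo module, at the top of nginx.conf
load_module modules/ngx_http_echo_module.so;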