I have around 1300 vhosts in one nginx conf file, all with the following layout (they are listed one after another in the vhost file). Now my problem is that sometimes a request is served by the default (first) vhost instead of the one matching its domain.
Reference (how nginx handles request): http://nginx.org/en/docs/http/request_processing.html
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port.
In your case the default server is the first server block in the file, which is nginx's standard behaviour when no default_server flag is set.
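To illustrate, here is a hypothetical excerpt (the names a.example and b.example are made up): with no explicit default_server flag, requests whose Host header matches neither name, or is missing entirely, go to a.example simply because it is listed first for port 80.

```nginx
server {
    listen 80;
    server_name a.example;   # implicit default for *:80 — first in the file
    root /var/www/a;
}
server {
    listen 80;
    server_name b.example;
    root /var/www/b;
}
```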
Could you check the host header of those bad requests?
You can also create an explicit default server to catch all of these bad requests and log the request info (e.g. $http_host) to a separate log file for investigation.
# log_format must be declared in the http {} context, e.g.:
# log_format catchall '$remote_addr "$http_host" "$request"';
server {
    listen 80 default_server;
    server_name _;
    # $http_host is recorded by the access log, not the error log:
    access_log /path/to/the/default_server_access.log catchall;
    error_log  /path/to/the/default_server_error.log;
    return 444;  # close the connection without sending a response
}
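Once a catch-all server is logging, you can summarize which Host values are hitting it. A sketch, assuming the log line quotes the host as the first double-quoted field (the log path and exact field position are assumptions that depend on your log_format):

```shell
# Count distinct Host headers arriving at the default server,
# most frequent first. Assumes lines like:
#   203.0.113.7 "unknown.example" "GET / HTTP/1.1"
awk -F'"' '{print $2}' /path/to/the/default_server_access.log \
  | sort | uniq -c | sort -rn | head
```

This usually makes it obvious whether the bad requests are probes with bogus hosts, a missing Host header, or a real domain that is absent from the config.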
[UPDATE] As you are doing nginx -s reload and have so many domains in that nginx conf file, the following is possible:
A reload works like this: nginx starts new worker processes with the new configuration, then gracefully shuts down the old worker processes.
So old workers and new workers can co-exist for a while. For example, when you add a new server block (with a new domain name) to your config file, then during the reload the new workers will know the new domain and the old ones will not. When a request happens to be handled by an old worker process, the host is unknown to it and the request is served by the default server.
You said that it's done every 2 minutes. Could you run
ps aux | grep nginx
and check how long each worker has been running? If it is much more than 2 minutes, the reload may not be working as you expect (old workers can linger while they drain long-lived connections).
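To read the elapsed time per process directly, ps can print it in seconds (Linux procps; a sketch):

```shell
# Show each nginx process with its elapsed run time in seconds (etimes).
# Old workers whose etimes is far beyond the 2-minute reload interval
# indicate the graceful shutdown of the previous generation is stalled.
ps -o pid,etimes,args -C nginx
```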