I've been having random 500 Internal Server errors on my PHP/MySQL-based sites on various shared hosts. I'm using PHP 5.2.17 through CGI/FastCGI on a shared Linux host.
For anyone looking for more info: in my case it was a code issue. On an incoming HTTP request, the code made a call to an internal URL on the same site, creating a deadlock-like situation.
This resulted in hung PHP processes and brought the server down. We were using file_get_contents('URL') (or cURL) in our callback function. We replaced it with a simple Drupal function that fetched the values directly from the database.
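As a minimal sketch of the problematic pattern (the function names, URL, and table are hypothetical, Drupal 7-style): the page callback issues an HTTP request back into the same site, so when every FastCGI child is busy, the inner request queues behind the outer one and both hang.

```php
<?php
// Problematic: a page callback that calls an internal URL over HTTP.
// If all PHP children are busy serving requests, this inner request
// waits behind them and the outer request never completes.
function mymodule_page_callback() {
  // Self-request back into the same site -- can deadlock under FastCGI.
  $data = file_get_contents('http://example.com/internal/endpoint');
  return $data;
}

// Better: read the value directly from the database instead.
function mymodule_page_callback_fixed() {
  // db_query() is Drupal's database API; no extra HTTP round trip.
  $value = db_query('SELECT value FROM {mymodule_values} WHERE name = :name',
                    array(':name' => 'endpoint_data'))->fetchField();
  return $value;
}
```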
The New Relic tool helped me identify the function that was taking a long time to respond.
Another way to identify this would have been to log from our callback with drupal_watchdog and check the logs when the server crashed.
Hope this helps.
This issue is generally not just host-specific; it is developer-related as well, depending on the configuration. However, some hosts are rather strict with FastCGI and will limit your capabilities. It is generally easier to run mod_php rather than FastCGI unless your application specifically needs FastCGI.
To assist further, we would need to see your fcgi wrapper (the contents of /dev/shm/blackmou-php.fcgi) or the .htaccess that spawns FastCGI, along with the files and code involved when the issue occurs. Also, do your hosts use Apache, Lighttpd, or Nginx (or a combination)? At this point I strongly suggest updating to PHP 5.3.9+.
While this can be caused by any number of issues, FastCGI effectively protects your site/scripts from being taken down by a denial of service or crashing due to memory leaks, etc. (e.g., it handles 80,000 connections simply by dropping and limiting the number of requests, and escapes an endless loop by timing out and terminating the process).
This error in particular is generally caused by an idle timeout (30 seconds by default) or by hitting the max child-process limit. It can also be caused by someone starting a long-running script and closing their browser/connection before the script completes.
FastCGI launches its process wrapper, executes a command, and times out before the process completes; the connection is then seen as reset by peer.
Another example is reaching max children (maxProcesses): a lot of sites show 2 or 4 in their examples, when in reality you may need 20 or 50 depending on average traffic. If all children are active when an additional request/connection arrives, FastCGI will not share an active child (the pool is capped at maxProcesses), so it must either terminate a process and start a new child, or drop the request, depending on your configuration.
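For illustration, the corresponding mod_fastcgi directives in the Apache configuration might look like the following; the values shown are assumptions to tune for your own traffic, not recommendations:

```apache
# mod_fastcgi example (values are illustrative)
# -idle-timeout: seconds the server waits on FastCGI I/O before aborting
# -maxProcesses: cap on the total number of FastCGI child processes
FastCgiConfig -idle-timeout 60 -maxProcesses 20
```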
Here is some more information on the settings:
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html
http://www.fastcgi.com/drupal/node/10
Wrapper Example
#!/bin/sh
PHP_FCGI_CHILDREN=0   # 0 = no limit
export PHP_FCGI_CHILDREN
PHP_FCGI_MAX_REQUESTS=10000
export PHP_FCGI_MAX_REQUESTS
exec /usr/bin/php-cgi  # path to the PHP binary may vary by host
UPDATE
To add to this, the error can also be caused by the PHP memory limit.
If the above doesn't resolve your issue, update your php.ini to increase memory_limit.
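The relevant php.ini directive looks like this; the 256M value is only an illustrative assumption, so pick a limit appropriate to your scripts and what your host allows:

```ini
; php.ini -- maximum memory a single script may allocate
memory_limit = 256M
```

On shared hosts that allow it, the same setting can often be applied per-directory via .htaccess (`php_value memory_limit 256M` under mod_php) or a local php.ini, depending on how PHP is run.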