At the core of any answer should be the statement 'it depends on what you are trying to do'. So, for example, you might be trying to:
- Support 10k concurrent clients in a chat room
- Serve 1M static pages / month
- Provide a CMS like functionality to 3k users
- Serve a simple blog type site, used by a small business
- Support a write-heavy system, such as a game, with millions of users
- Build a consumer banking web site
- etc.
It's a commonly held belief that one should choose the right technologies for the problem at hand. Any discussion about the virtues of combining one technology with another, whether they inherently handle evented IO or not, is of limited value to the decision-making process without that context.
Another highly influential factor in choosing your technology stack is the skills and experience available to deliver on the goals of the project.
However...
Having used all of the technologies you are referring to, I'll give you some examples of problems we believe we have solved and why certain configurations were selected:
PHP + Nginx
Yes, PHP is blocking, but that hasn't been any impediment to Facebook and others using it as their core web application language. In the more traditional LAMP stack (the A being Apache), there is a well-known and long-standing issue that can arise under high traffic: a 1-to-1 correlation between web requests inbound to Apache and connections to the database. If you are expected to serve more than 1,000 concurrent clients and your DB has a 1,000-connection limit, I'd expect you to run into difficulties.
This kind of resource starvation creates a broken experience for users under overloaded conditions.
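To make the arithmetic concrete, here is a minimal sketch of where the two caps collide. The numbers and file paths are illustrative assumptions, not values from any real deployment:

```
# httpd.conf: Apache prefork with mod_php (illustrative numbers only)
# Up to 1024 worker processes, each potentially holding one PHP DB connection:
<IfModule prefork.c>
    MaxClients 1024
</IfModule>
```

```
# my.cnf: MySQL, but the database only accepts 1000 connections
[mysqld]
max_connections = 1000
```

Once the 1001st busy worker asks for a connection, requests start erroring out rather than queuing.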
Nginx in this situation can give you more bang for your buck, since its evented IO decouples inbound web requests from outbound PHP database connections. There's plenty of literature out there to corroborate this. Bear in mind, this isn't by magic; it's by dint of how you configure Nginx and PHP, and you can easily hang yourself by leaving the defaults enabled.
Assuming a thought-through configuration is in place, Nginx's evented IO has the net effect of buffering requests and farming them out to PHP at a rate the database can handle.
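As a minimal sketch of what "thought-through" means here (the socket path and pool sizes are assumptions, not our production values): Nginx holds many cheap client connections, while php-fpm caps how many PHP processes, and therefore DB connections, run at once.

```
# nginx.conf (excerpt)
events {
    # one evented worker can hold thousands of client connections
    worker_connections 10240;
}
http {
    server {
        listen 80;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # hand requests off to a deliberately small PHP pool
            fastcgi_pass unix:/var/run/php-fpm.sock;
        }
    }
}
```

```
; php-fpm pool config (www.conf): the real throttle
[www]
listen = /var/run/php-fpm.sock
pm = static
; at most 50 PHP processes (and hence DB connections) at any one time
pm.max_children = 50
```

Ten thousand clients can be connected, but the database only ever sees fifty of them at a time.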
PHP apps widely employ caches like Memcached to further support high volumes in read-heavy systems.
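A minimal cache-aside sketch in PHP, assuming the pecl Memcached extension; the key scheme and the load_page_from_db() helper are hypothetical:

```
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function get_page(Memcached $mc, $id) {
    $key = "page:$id";
    $html = $mc->get($key);
    if ($html === false) {                // cache miss: one trip to the database
        $html = load_page_from_db($id);   // hypothetical DB helper
        $mc->set($key, $html, 300);       // keep it for 5 minutes
    }
    return $html;                         // cache hits never touch the DB
}
```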
Node.js (pure HTTP)
Our reasons for choosing Node.js for the production solution it supports were:
- It wasn't mission-critical (Node.js is new and therefore you wouldn't want to run your banking system on it)
- We wanted something lightweight
- We wanted to learn and experiment; that kind of criterion usually isn't permitted in many business systems
The newness of Node and the subtlety of its evented IO programming concepts meant that we broke things quite a few times, and it took us longer to get the final solution squared away than it would have had we stuck to PHP.
The frameworks for web apps are somewhat in their infancy:
- Express.js
- Socketstream
- Backbone.js
- perhaps others...
Given how young they all are, they are still very much works in progress. Depending on what you are trying to achieve, they may serve you well or drain time and effort while you learn them.
For example, in Express.js, simple things like dealing with HTTP cache headers and GZIPing content are not entirely standard, so if that sort of thing is essential, you'll have to build bespoke solutions or look elsewhere.
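Here's roughly what you end up wiring together yourself. This sketch assumes the standalone compression middleware package from npm rather than anything bundled with Express:

```
var express = require('express');
var compression = require('compression');  // gzip is a bolt-on, not a default

var app = express();
app.use(compression());

app.get('/page', function (req, res) {
    // cache headers are set by hand too
    res.set('Cache-Control', 'public, max-age=3600');
    res.send('<h1>Hello</h1>');
});

app.listen(3000);
```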
There is nothing native to a Node.js installation, or to an app built with it, that ensures it is started at the appropriate run-levels, unlike Nginx or Apache. That means you have to figure out how to manage application autostart on reboot, and other recovery, using something in addition to Node.js. (We use Monit.)
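By way of illustration, a Monit stanza along these lines does the job; the paths, port and init script are assumptions about a typical setup rather than our exact config:

```
# /etc/monit/conf.d/node-app
# restart the Node.js process if it dies or stops answering HTTP
check process node-app with pidfile /var/run/node-app.pid
    start program = "/etc/init.d/node-app start"
    stop program  = "/etc/init.d/node-app stop"
    if failed port 3000 protocol http then restart
```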
Node.js + Nginx
You bet, why not? Nginx is the more mature of the two, and since we need to terminate SSL connections before farming requests out to Node.js, that's what we've done. The added benefit is that Nginx can now GZIP content passing through.
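A minimal sketch of that arrangement; the server name, certificate paths and upstream port are assumptions:

```
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    gzip on;                                  # Nginx compresses on the way out
    gzip_types text/css application/javascript application/json;

    location / {
        proxy_pass http://127.0.0.1:3000;     # plain HTTP to the local Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Node.js never has to speak SSL or gzip itself; it just serves plain HTTP on localhost.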