Is PHP or vanilla Perl CGI faster?

Submitted anonymously (unverified) on 2019-12-03 08:56:10

Question:

I'm developing a web app for an Apache shared hosting server. I have already written some code in Perl, but I recently found out, to my surprise, that the shared hosting provider does not provide mod_perl or a way to install it.

I'm a bit worried that running a Perl web app through CGI without mod_perl would make it very slow. Should I switch all of my code to PHP instead? Would that be faster?

The reason I chose Perl in the first place is that I'm much more familiar with Perl than with PHP. I also wanted to be able to use my Perl libraries outside the realm of web development.

So, if any of you are experienced with Apache web development, can you shed some light on which direction I should take?

For the sake of this question, let's say the web application will get 500+ hits a day.

Which would be faster: PHP, or Perl without mod_perl?

Thanks in advance for the help.

Answer 1:

At only 500 hits a day, you could write your code in just about anything and not have to worry about slowdowns. 500 hits a day works out to roughly one page every three minutes. Even assuming the hits aren't evenly spread throughout the day, you shouldn't worry about performance with such small traffic numbers.



Answer 2:

PHP would be faster.

However, with only 500 hits per day, using CGI would not be a problem. Not even with 500 hits an hour.



Answer 3:

Much depends on your architecture. Modern Perl frameworks aren't well suited to running as CGI (long start-up times). If you use CGI, Catalyst is probably a bad idea. That said, with a classical CGI-style architecture it should be quite manageable.



Answer 4:

Unless your shared host is running PHP as a CGI application (rather than via mod_php or FastCGI), PHP is almost[1] always going to be faster. While Perl running as CGI could probably handle your 500 hits a day, an application or page served via plain CGI is going to feel sluggish.

CGI works by spawning a new process to run your program for each request. Both mod_php and FastCGI mitigate this by spawning a set number of processes up front and then reusing them to run your application. In other words, a new process isn't being spawned for each request. (This is an oversimplified explanation; please don't use it in a CS term paper. See the mod_php and FastCGI docs for more info.)

  [1] You could come up with pathological examples where it wouldn't be, but then you'd be the kind of person who comes up with pathological examples of things, and no one wants that.
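
To make the per-request spawning concrete, here is a minimal plain-CGI sketch in Perl (the script name and output are purely illustrative). Under plain CGI, Apache starts a fresh perl process for every hit, compiles this whole file, runs it once, and throws the process away; that spawn-and-compile cycle is the overhead mod_php and FastCGI avoid.

```perl
#!/usr/bin/perl
# Minimal plain-CGI sketch. Every request gets a brand-new perl process:
# the interpreter starts, compiles this file, serves one response, and exits.
use strict;
use warnings;
use CGI;

my $q = CGI->new;
print $q->header('text/plain');
print "Hello from PID $$ - this process serves exactly one request\n";
```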


Answer 5:

Speed shouldn't be your concern. Both languages are suitable for web applications.



Answer 6:

For the volume of traffic you're looking at, Perl with vanilla CGI shouldn't be an issue, although I would second the earlier recommendations to check out FastCGI as another option which your hosting service may provide.

Or another option would be to look for a different hosting company...



Answer 7:

Expanding on what Alan Storm said, you might be able to use Perl with FCGI instead.

FCGI works by having a sort of stand-alone server, a daemon if you like, that connects with your web server via the FCGI protocol and delegates/dispatches requests.

This is faster than plain CGI because it emulates a sort of "servlet" model: the application is persistent, so there is no need for a fresh initialization on every call as there is with plain CGI.

I have not yet learned how to do this myself, but I believe Catalyst has this option, so it's just a matter of learning how to replicate it.

FastCGI/FCGI should be available on drastically more hosts than plain old mod_perl, as FCGI applications are not web-server specific, and some web servers implement PHP via an FCGI utility.

I've also experimented with FCGI web serving a little, and preliminary tests suggest it can handle at least 500 requests per second, far beyond the 500-per-day or 500-per-hour figures discussed above.
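
Here is a minimal sketch of that persistent model in Perl, using the CGI::Fast module from CPAN (assuming your host lets you run FastCGI scripts at all). The key difference from plain CGI is the while loop: the process stays alive between requests, so compilation and any expensive setup happen only once.

```perl
#!/usr/bin/perl
# Minimal FastCGI sketch with CGI::Fast. The loop blocks waiting for the next
# request; variables outside the loop survive between requests because the
# process never exits.
use strict;
use warnings;
use CGI::Fast;

my $served = 0;   # persists across requests, proving the process is reused

while (my $q = CGI::Fast->new) {
    $served++;
    print $q->header('text/plain');
    print "Request number $served handled by PID $$\n";
}
```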



Answer 8:

It's possible to hack FastCGI support into a hosting account that doesn't support it. I compiled the FastCGI library with the install prefix set to my home directory on the hosting account, synced it up, and set up Catalyst to use the small cgi-fcgi bridge. It worked well: nice and fast, because the cgi-fcgi bridge is just a tiny executable, and the Catalyst process persisted in the background just fine.



Answer 9:

The answer in everyone's mind is: who cares. 500 requests per day is nothing.

Just use whatever is fastest to implement and maintain, and move on.



Answer 10:

For lighter web frameworks that will work under CGI, have a look at....



Answer 11:

It depends mostly on how complex your code is and how it's put together. If you run it as CGI, perl will compile your script and modules on each invocation, and you'll have to reconnect to your database for each request. If your code is complex enough, this may take a few seconds per page view, which may hamper the user experience.

If your codebase and the modules you use aren't huge, though, there should be no problem at all.

You can run perl -c on your code to get a feel for how long perl's startup and your compile time are.
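
For example, here is a rough way to time that from Perl itself (a sketch only; myapp.cgi is a placeholder for your own entry script, and this measures compilation only, not things like database reconnects):

```perl
#!/usr/bin/perl
# Rough timing of compile-only startup, which approximates the per-request
# cost you pay under plain CGI before your code even starts running.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $start = [gettimeofday];
system($^X, '-c', 'myapp.cgi') == 0    # 'myapp.cgi' is a placeholder name
    or die "compile check failed: $?";
printf "compile-only startup: %.3f seconds\n", tv_interval($start);
```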


