What is the best way to handle this: large download via PHP + slow connection from client = script timeout before file is completely downloaded

Submitted by 泄露秘密 on 2019-12-03 12:30:20

Use X-SENDFILE. Most web servers support it either natively or through a plugin (Apache).

Using this header, you simply specify a local file path and exit the PHP script; the web server sees the header and serves that file itself.
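For example, with Apache's mod_xsendfile (nginx uses X-Accel-Redirect for the same purpose), a minimal sketch might look like this — the file path is a made-up example:

<?php
// Sketch assuming Apache with mod_xsendfile enabled; the path is hypothetical.
$path = '/var/files/large-archive.zip';

// ... authentication / authorization checks go here ...

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="large-archive.zip"');
header('X-Sendfile: ' . $path);
exit; // the web server takes over and streams the file; PHP finishes immediately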

The easy solution would be to disable the timeout. You can do this on a per-request basis with:

set_time_limit(0);

If your script is not buggy, this shouldn't be a problem – unless your server can't handle that many concurrent connections held open by slow clients.
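A rough sketch of that approach, streaming the file in chunks after disabling the limit (the path and chunk size are just assumptions):

<?php
// Sketch: disable the time limit and stream the file in chunks.
set_time_limit(0);                        // no execution time limit for this request
$path = '/var/files/large-archive.zip';   // hypothetical path

while (ob_get_level() > 0) {
    ob_end_clean();                       // drop output buffers so chunks reach the client
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="large-archive.zip"');
header('Content-Length: ' . filesize($path));

$fp = fopen($path, 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192);                // send 8 KB at a time
    flush();                              // push each chunk to the (possibly slow) client
}
fclose($fp);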

In that case, #1, #2 and #3 are all good solutions, and I would go with whichever is cheapest. Your concerns about #1 could be mitigated by generating download tokens that can only be used once, or only for a short period of time.
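A hedged sketch of such single-use, short-lived tokens, assuming a simple download_tokens table (the table and column names are made up):

<?php
// Sketch of single-use, short-lived download tokens; the schema is an assumption.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// When the download is approved: issue a token valid for 15 minutes.
$token = bin2hex(random_bytes(32));
$stmt  = $pdo->prepare('INSERT INTO download_tokens (token, file, expires_at) VALUES (?, ?, ?)');
$stmt->execute([$token, 'myfile.zip', date('Y-m-d H:i:s', time() + 900)]);

// In the download script: consume the token (deleting it is what makes it single-use).
$stmt = $pdo->prepare('DELETE FROM download_tokens WHERE token = ? AND expires_at > NOW()');
$stmt->execute([$_GET['token'] ?? '']);
$authorized = ($stmt->rowCount() === 1);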

Option #4, in my opinion, is not a great one. Download speed can vary greatly during a transfer, so any estimate you make up front is likely to be wrong.

I have reservations about #4 as well. An attacker could forge an AJAX request that sets your timeout to a very high value and effectively lock the script into a near-infinite loop (if that was your concern in the first place).

I would suggest a solution similar to @prodigitalson's. You can create directories named with hash values, e.g. /downloads/389a002392ag02/myfile.zip, each containing a symlink to the real file. Your PHP script redirects to that URL, the HTTP server serves the file, and the symlinks get deleted periodically.

The added benefit of creating a directory instead of a file is that the end user doesn't see a mangled file name.
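A minimal sketch of that idea (the directory layout and paths are assumptions):

<?php
// Sketch: create a token-named directory holding a symlink to the real file.
$realFile = '/var/private/myfile.zip';              // hypothetical protected location
$token    = bin2hex(random_bytes(16));
$dir      = '/var/www/html/downloads/' . $token;    // publicly served directory

mkdir($dir, 0755);
symlink($realFile, $dir . '/myfile.zip');           // the original file name stays visible

header('Location: /downloads/' . $token . '/myfile.zip');
exit;
// A cron job can later remove token directories older than, say, 15 minutes.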

I think the main problem is serving the file through a PHP script. Not only do you have the timeout problem; a web server (and PHP) process is also tied up for the whole time the file is being sent to the client.

I would recommend some variant of #1. It doesn't have to be a CDN, but the PHP script should redirect directly to the file. You could guard against people bypassing the script with a rewrite rule and a parameter that is checked against the current request time.

I think you could do something like #1, except keep it on your own servers and bypass serving it via PHP directly. After whatever auth/approval needs to happen in PHP, have that script create a temporary link to the file for download via plain HTTP. On *nix I'd do this with a symlink to the real file and have a cron job run every n minutes to clear out old links.

You can create a temp file on disk, or a symlink, and then redirect (using header()) to that temp file. A cron job can then remove "expired" temp files. The key is that every download should have its own unique temp file associated with it.
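A cleanup script that cron could run every few minutes might look like this (the base directory and expiry age are assumptions, matching the symlink sketch above):

<?php
// Cron cleanup sketch: remove token directories and links older than the expiry age.
$base   = '/var/www/html/downloads';
$maxAge = 15 * 60;                                   // seconds a link stays valid

foreach (glob($base . '/*', GLOB_ONLYDIR) as $dir) {
    if (time() - filemtime($dir) > $maxAge) {
        foreach (glob($dir . '/*') as $link) {
            unlink($link);                           // removes the symlink, not the real file
        }
        rmdir($dir);
    }
}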
