I am trying to serve large files via a PHP script. They are not in a web-accessible directory, so this is the best way I can figure out to provide access to them.
You don't need to read the whole file into memory: just loop over it, reading, say, 32 KB chunks and sending each one as output. Better yet, use fpassthru(), which does much the same thing for you...
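For reference, the manual chunked loop might look like this (a sketch; the 32 KB buffer size is arbitrary, and the file name is the same placeholder used below):

```php
$name = 'mybigfile.zip';
$fp = fopen($name, 'rb');

// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));

// stream the file in 32 KB chunks
while (!feof($fp)) {
    echo fread($fp, 32768);
    flush(); // push each chunk out to the client as we go
}
fclose($fp);
exit;
```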
$name = 'mybigfile.zip';
$fp = fopen($name, 'rb');
// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));
// dump the file and stop the script
fpassthru($fp);
exit;
Even fewer lines if you use readfile(), which doesn't need the fopen() call...
$name = 'mybigfile.zip';
// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));
// dump the file and stop the script
readfile($name);
exit;
If you want to get even cuter, you can support range requests: the client sends a Range header, and you respond with 206 Partial Content and a Content-Range header describing the byte range you are returning. This is particularly useful for serving PDF files to Adobe Acrobat, which requests just the chunks of the file it needs to render the current page. It's a bit involved, but see this for an example.
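A minimal sketch of single-range support (no validation of out-of-bounds ranges and no multi-range handling, both of which a production version would need):

```php
$name = 'mybigfile.zip';
$size = filesize($name);
$start = 0;
$end = $size - 1;

// Parse a simple "Range: bytes=START-END" request header
if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d*)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    if ($m[1] === '') {
        $start = $size - (int)$m[2];   // suffix form "bytes=-N": last N bytes
    } else {
        $start = (int)$m[1];
        if ($m[2] !== '') $end = (int)$m[2];
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Content-Type: application/zip');
header('Accept-Ranges: bytes');
header('Content-Length: ' . ($end - $start + 1));

// Seek to the start of the range and stream only the requested bytes
$fp = fopen($name, 'rb');
fseek($fp, $start);
$left = $end - $start + 1;
while ($left > 0 && !feof($fp)) {
    $chunk = fread($fp, min(32768, $left));
    echo $chunk;
    $left -= strlen($chunk);
}
fclose($fp);
exit;
```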
While fpassthru() has been my first choice in the past, the PHP manual actually recommends* using readfile() instead if you are just dumping the file as-is to the client.
*
"If you just want to dump the contents of a file to the output buffer, without first modifying it or seeking to a particular offset, you may want to use the readfile(), which saves you the fopen() call." —PHP manual
The PHP answers are all good. But is there any reason you can't make a web-accessible directory containing symbolic links to the actual files? It may take some extra server configuration, but it ought to work.
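A sketch of that approach (the paths are hypothetical, and the web server must be configured to follow symlinks, e.g. Options FollowSymLinks in Apache):

```php
// One-off setup: expose a private file through a public directory via a symlink
$private = '/var/data/private/mybigfile.zip';     // hypothetical source path
$public  = '/var/www/html/downloads/mybigfile.zip'; // hypothetical web-accessible path

if (!file_exists($public)) {
    symlink($private, $public);
}
// The file is now downloadable at /downloads/mybigfile.zip
```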
Have a look at fpassthru(). In more recent versions of PHP this should serve the files without keeping them in memory, as this comment states.
One of the benefits of fpassthru() is that it works not only with files but with any valid handle. A socket, for example.
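For instance, a sketch that fetches a resource over a raw socket and streams the rest of the handle straight to output (the host is a placeholder):

```php
// Open a socket, send a request, and pass everything that comes back through
$fp = fsockopen('www.example.com', 80, $errno, $errstr, 30);
if ($fp) {
    fwrite($fp, "GET / HTTP/1.0\r\nHost: www.example.com\r\nConnection: close\r\n\r\n");
    // Dumps all remaining data on the handle, HTTP response headers included
    fpassthru($fp);
}
```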
And readfile() should be a little faster, because it can take advantage of the OS caching mechanism where possible (as file_get_contents() does).
One more tip: fpassthru() holds the handle open until the client has received the content (which may take quite a long time on a slow connection), so you must use some locking mechanism if parallel writes to the file are possible.
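For example, a shared lock while serving, assuming any writers take an exclusive lock (a sketch):

```php
$name = 'mybigfile.zip';
$fp = fopen($name, 'rb');

if (flock($fp, LOCK_SH)) {   // writers should use LOCK_EX on the same file
    header('Content-Type: application/zip');
    header('Content-Length: ' . filesize($name));
    fpassthru($fp);          // the lock is held until the client has the data
    flock($fp, LOCK_UN);
}
fclose($fp);
exit;
```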
The best way to send big files with PHP is the X-Sendfile header. It allows the web server to serve files much faster through zero-copy mechanisms like sendfile(2). It is supported by lighttpd, and by Apache with a plugin (mod_xsendfile).
Example:
$file = "/absolute/path/to/file"; // can be protected by .htaccess
header('X-Sendfile: '.$file);
header('Content-type: application/octet-stream');
header('Content-Disposition: attachment; filename="'.basename($file).'"');
// other headers ...
exit;
The server intercepts the X-Sendfile header and sends out the file itself.
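For Apache with mod_xsendfile, the matching server configuration might look like this (a config sketch; the path is a placeholder and must cover the directory your script points at):

```apache
# httpd.conf or virtual host: enable X-Sendfile and whitelist the directory
XSendFile On
XSendFilePath /absolute/path/to
```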