Question
I have a PHP script that you can upload very large files with (up to 500MB), and the file's content is stored in a MySQL database. Currently I do something like this:
mysql_query("INSERT INTO table VALUES('')");
$uploadedfile = fopen($_FILES['file']['tmp_name'], 'rb');
while (!feof($uploadedfile)) {
    $line = mysql_escape_string(fgets($uploadedfile, 4096));
    mysql_query("UPDATE table SET file = CONCAT(file, '$line') WHERE something = something");
}
fclose($uploadedfile);
This of course does a bloody lot of SQL queries.
I did that rather than something like
$file = file_get_contents($_FILES['file']['tmp_name']);
mysql_query("INSERT INTO table VALUES('$file')");
because that would use as much memory as the file is large, and it seemed better to do more SQL queries than to use 500 MB of memory.
However, there must be a better way. Should I go ahead and do it the file_get_contents way or is there a better way than CONCAT, or is the way I'm doing it now the lesser of all evils?
Answer 1:
I always store my files on the server, and store their location in the database.
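A minimal sketch of that approach, assuming a hypothetical files(name, path) table and a writable uploads directory (paths, table names and credentials are placeholders, not from the answer):

// move the upload out of the temp dir and record only its location in the database
$uploadDir = '/var/www/uploads/';
$target    = $uploadDir . uniqid('upload_', true);
if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
    $origName = $_FILES['file']['name'];
    $db   = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
    $stmt = $db->prepare('INSERT INTO files (name, path) VALUES (?, ?)');
    $stmt->bind_param('ss', $origName, $target);
    $stmt->execute();
    $stmt->close();
}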
Answer 2:
You are right that in some cases a filesystem cannot do the job: databases have features such as locking, replication, integrity constraints, and no limitation on the number of rows, which do not exist in a filesystem.
Also, backing up, restoring, or migrating the system becomes more complicated and cannot be done safely on a running server (risk of inconsistency and data loss); or at least guaranteeing this is very difficult in a DB+FS configuration.
What about migrating from an OS that uses "/" as the path separator to one that uses "\"? You would need to update all your paths.
Your method seems to be correct, but the 4096-byte slicing is way too small. MySQL will have no trouble working with 256 KB slices, for instance.
Also, I would not concatenate, but rather store each slice as its own record. The database may have trouble storing huge files in a single record, and this may hit the limitations mentioned in other answers.
Keeping the data sliced also allows streaming the content without ever holding the whole file in memory, for instance. This way, there is virtually no limit to the stored file size.
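A hedged sketch of that slicing idea, using mysqli prepared statements instead of the question's mysql_* calls; the file_chunks table (file_id, seq, data MEDIUMBLOB), the credentials, and the 256 KB chunk size are assumptions for illustration:

$db        = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
$fileId    = 123;               // id of the file's metadata row (hypothetical)
$chunkSize = 256 * 1024;        // 256 KB slices, as suggested above
$fh   = fopen($_FILES['file']['tmp_name'], 'rb');
$stmt = $db->prepare('INSERT INTO file_chunks (file_id, seq, data) VALUES (?, ?, ?)');
$seq = 0;
while (!feof($fh)) {
    $chunk = fread($fh, $chunkSize);
    if ($chunk === false || $chunk === '') {
        break;
    }
    // each slice becomes its own row; only one chunk is ever held in memory
    $stmt->bind_param('iis', $fileId, $seq, $chunk);
    $stmt->execute();
    $seq++;
}
fclose($fh);
$stmt->close();

Reading the file back out can then SELECT the rows ordered by seq and echo each chunk, so neither direction ever needs the whole file in memory.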
Answer 3:
This wouldn't actually work (by default) with MySQL, because it would produce a 500 MB query:
$file = file_get_contents($_FILES['file']['tmp_name']);
mysql_query("INSERT INTO table VALUES('$file')");
The max_allowed_packet setting is 16777216 (16 MB), so you would either have to increase it or split the file into chunks smaller than 16 MB (minus roughly 500-1000 bytes for the rest of the query string).
You can find out the max_allowed_packet value of your MySQL server by running the query
SELECT @@global.max_allowed_packet
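If you want to check this from PHP before attempting a single big INSERT, something along these lines works (connection details and the headroom figure are placeholders):

// compare the server's packet limit against the upload size before choosing an approach
$db  = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
$row = $db->query('SELECT @@global.max_allowed_packet AS max_packet')->fetch_assoc();
$maxPacket = (int) $row['max_packet'];
$fileSize  = filesize($_FILES['file']['tmp_name']);
if ($fileSize + 1024 > $maxPacket) {    // leave headroom for the rest of the query text
    // too big for a single statement: split into chunks smaller than $maxPacket,
    // or raise max_allowed_packet in the server configuration
}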
Answer 4:
I would imagine that the most effective way to do this would be to do all the validation in the script up to the point of the insert, then shell out and move the uploaded $_FILES temp file so it can be piped into a MySQL command-line insert query. You'd want someone better at bash than me to validate that, but it seems like it would pretty much remove the memory issue?
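One hedged way to read that suggestion (the details below are guesses, not from the answer): move the temp file somewhere the MySQL server itself can read and let the mysql command-line client do the insert via LOAD_FILE(), so PHP never buffers the contents:

// rough sketch only; it assumes MySQL runs on the same host, the connected user has
// the FILE privilege, and secure_file_priv allows reading from /var/tmp
$dest = '/var/tmp/upload-' . uniqid();
move_uploaded_file($_FILES['file']['tmp_name'], $dest);   // validation happens before this point
chmod($dest, 0644);                                       // mysqld must be able to read the file
$sql = sprintf("INSERT INTO files (data) VALUES (LOAD_FILE('%s'))", $dest);
$cmd = sprintf('mysql -u dbuser -pdbpass mydb -e %s', escapeshellarg($sql));
shell_exec($cmd);                                          // PHP never holds the file contents
unlink($dest);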
Answer 5:
I have yet to see an application that actually needs to store files in a relational database.
There are a significant number of freely available, powerful databases out there that are designed and optimized specifically for storing and retrieving files. They're called filesystems.
Store your files in your filesystem, and your metadata in the RDBMS.
You're worried about using up 500MB of memory while inserting, and it's not clear why. You're eventually going to want to get those files back out of the database, and I don't think you'll find a way to read the file data out in chunks.
Answer 6:
You can use this:
http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_load-file It also gives a straightforward query example.
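For reference, such a query could be issued from PHP roughly like this; the files table and the path are made up, and LOAD_FILE() reads the file on the database server and requires the FILE privilege:

// illustrative only; table name, path and credentials are assumptions
$db   = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
$path = $db->real_escape_string('/tmp/uploaded_file');
$db->query("INSERT INTO files (data) VALUES (LOAD_FILE('$path'))");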
Source: https://stackoverflow.com/questions/3894282/better-way-to-store-large-files-in-a-mysql-database