I have a PHP script that runs a MySQL query, then loops over the result set, and inside that loop runs several more queries:
$sqlstr = \"SELECT * FROM user_pred WHERE
Part of the reason you may be seeing memory usage grow on every iteration is that PHP hasn't (yet) garbage collected the values that are no longer referenced.
From the php.net memory_get_usage manual:
Parameters
real_usage: Set this to TRUE to get the real size of memory allocated from the system. If not set or FALSE, only the memory used by emalloc() is reported.
With this parameter set to TRUE, the script showed no increase in memory, as I expected.
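To see the difference between the two modes yourself, here is a minimal sketch (the string allocations are just a stand-in for whatever your loop creates):

    <?php
    // Compare emalloc()-level usage with real system allocation inside a loop.
    $rows = [];
    for ($i = 0; $i < 10000; $i++) {
        $rows[] = str_repeat('x', 100); // stand-in for per-iteration allocations

        if ($i % 2000 === 0) {
            printf(
                "emalloc: %d bytes, real: %d bytes\n",
                memory_get_usage(false), // memory used by emalloc()
                memory_get_usage(true)   // memory allocated from the system
            );
        }
    }

The emalloc figure climbs with every allocation, while the real figure only steps up when PHP has to request another block from the OS.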
The best way is probably to get all the userIds and flush them to a file. Then run a new script that forks, with pipes, into x worker processes, and hand each worker a small list of userIds to process as it completes each list. With multiple CPUs/cores/servers you can finish the task faster. If one worker fails, just start a new one. To use other servers as workers, you can call them with curl/fopen/SOAP/etc. from a worker process.
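A rough sketch of that worker layout, assuming the pcntl extension (CLI only) and a hypothetical process_user_ids() helper that runs the actual updates for one chunk:

    <?php
    // Fork a fixed number of workers; each child processes its share of the ids.
    $allUserIds = file('userids.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $workers    = 4;
    $shares     = array_chunk($allUserIds, (int) ceil(count($allUserIds) / $workers));

    foreach ($shares as $share) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            die("fork failed\n");
        }
        if ($pid === 0) {             // child
            process_user_ids($share); // hypothetical helper: runs the updates
            exit(0);
        }
    }

    while (pcntl_waitpid(0, $status) !== -1) {
        // parent: wait for all children; relaunch failed shares here if needed
    }

Each child should open its own MySQL connection after the fork, never before, since forked processes can't safely share one.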
I think you should try calling mysql_free_result() at some point during the loop. — From the comments:
It's worth noting that mysql_query() only returns a resource for SELECT, SHOW, EXPLAIN, and DESCRIBE queries, so there is no result to free for an UPDATE query.
Anyway, your approach is not the best to begin with. Try mysqli parameterized statements instead, or (even better) update the rows directly in the database. It looks like all of the SQL in the loop could be handled with one single UPDATE statement.
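For instance, a sketch of the prepared-statement version, assuming a mysqli connection in $db and the id list in $userIds (both names are placeholders):

    <?php
    // Prepare once, execute per id; bind_param() binds $userId by reference.
    $stmt = $db->prepare(
        'UPDATE user_roundscores SET ursUpdDate = NOW() WHERE ursUserTeamIdFK = ?'
    );
    $stmt->bind_param('i', $userId);

    foreach ($userIds as $userId) {
        $stmt->execute(); // reuses the prepared statement on every iteration
    }
    $stmt->close();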
This memory leak would only be a problem if it's killing the script with a "memory exhausted" error. PHP will happily garbage collect any unused objects/variables on its own, but the collector won't kick in until it has to: garbage collection can be a very expensive operation.
It's normal to see memory usage climb even if you're constantly reusing the same objects/variables - it's not until memory usage exceeds a certain level that the collector will fire up and clean house.
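If the climb does become a problem mid-run, you can force a collection pass yourself (PHP 5.3+); a minimal sketch, with $userIds standing in for your result set:

    <?php
    foreach ($userIds as $i => $userId) {
        // ... run the per-user queries here ...

        if ($i % 1000 === 0) {
            gc_collect_cycles();               // collect unreachable cycles now
            echo memory_get_usage(true), "\n"; // watch usage level off
        }
    }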
I suspect you could make things run much faster if you batched the userIDs into groups and issued fewer updates, changing more records with each one. For example:
UPDATE user_roundscores SET ursUpdDate=NOW() WHERE ursUserTeamIdFK IN (id1, id2, id3, id4, id5, etc...)
instead of doing it one-update-per-user. Fewer round-trips through the DB interface layer and more time on the server = faster running.
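A sketch of that batching in PHP, assuming the ids were collected into $userTeamIds and a mysqli connection in $db (both names are placeholders):

    <?php
    // One UPDATE per 500 ids instead of one per user.
    foreach (array_chunk($userTeamIds, 500) as $batch) {
        $ids = implode(',', array_map('intval', $batch)); // ints only, safe to inline
        $db->query(
            "UPDATE user_roundscores SET ursUpdDate = NOW()
             WHERE ursUserTeamIdFK IN ($ids)"
        );
    }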
As well, consider the impact of expanding this to millions of users, as you say in a comment. A million individual updates will take a non-trivial amount of time to run, so the NOW() will not be a "constant": if the full run takes 5 minutes, you're going to get a wide spread of ursUpdDate timestamps. You may want to consider caching a single NOW() call in a server-side variable and issuing the updates against that variable:
SELECT @cachednow := NOW();
UPDATE .... SET ursUpdDate = @cachednow WHERE ....;
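Since @cachednow is scoped to the connection session, both statements have to run over the same connection; a two-line sketch, reusing the assumed $db handle and $ids list from the batching example above:

    <?php
    $db->query('SELECT @cachednow := NOW()'); // set once for the whole run
    $db->query("UPDATE user_roundscores SET ursUpdDate = @cachednow
                WHERE ursUserTeamIdFK IN ($ids)");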
The unset() call is pointless/irrelevant here. Try mysql_free_result() though; it might have some effect.