Debugging a long-running PHP script

我寻月下人不归 2021-02-19 06:46

I have a PHP script running as a cron job that makes extensive use of third-party code. The script itself is a few thousand LOC. Basically it's a data import / processing script. (JSON to

11 answers
  • 2021-02-19 07:10

    Profiling tool:

    There is a PHP profiling tool called Blackfire, which is currently in public beta. There is specific documentation on how to profile CLI applications. Once you have collected a profile, you can analyze the application's control flow with time measurements in a nice UI.

    Suspicious memory consumption:

    Memory seems not to be a problem, usage growing as it should, without unexpected peaks.

    A growing memory usage actually sounds suspicious! If the current dataset does not depend on all previous datasets of the import, then growing memory usage most probably means that all imported datasets are kept in memory, which is bad. PHP may also frequently try to garbage collect, only to find that there is nothing to remove from memory. Long-running CLI tasks are especially affected, so be sure to read the blog post that discovered this behavior.
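
    A minimal sketch of how you might watch for this inside the import loop (the batch structure and the fetchNextBatch()/processBatch() helpers are assumptions for illustration, not the original script):

    $lastUsage = memory_get_usage(TRUE);
    while (($batch = fetchNextBatch()) !== NULL)   // hypothetical helper
    {
      processBatch($batch);                        // hypothetical helper

      unset($batch);          // drop the reference so the batch can be collected
      gc_collect_cycles();

      $usage = memory_get_usage(TRUE);
      if ($usage > $lastUsage)
      {
        // Memory keeps growing across batches: something is holding on to old data.
        error_log(sprintf('Memory grew from %d to %d bytes', $lastUsage, $usage));
      }
      $lastUsage = $usage;
    }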

  • 2021-02-19 07:10

    Use strace to see what the program is actually doing from the system's perspective. Is it hanging in I/O operations, etc.? strace should be the first thing you try when encountering performance problems with any kind of Linux application. Nobody can hide from it! ;)

    If you find that the program hangs in network-related calls like connect, recvfrom and friends, meaning the network communication hangs at some point while connecting or waiting for responses, then you can use tcpdump to analyze this.

    Using the above methods you should be able to find the most common performance problems. Note that you can even attach to an already running task with strace using -p PID.


    If the above methods don't help, I would profile the script using Xdebug. You can analyse the profiler output using tools like KCachegrind.

  • 2021-02-19 07:10

    I've run into strange slowdowns when doing network-heavy work in the past. Basically, what I found was that during manual testing the system was very fast, but when left to run unattended it would not get as much done as I had hoped.

    In my case the issue I found was that I had default network timeouts in place and many web requests would simply time out.
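
    If that turns out to be the cause, a minimal sketch of making those timeouts explicit might look like the following (the 10-second values and the example URL are illustrative assumptions, not the original code):

    ini_set('default_socket_timeout', '10');   // affects fopen()/file_get_contents() wrappers

    $ch = curl_init('https://example.com/api/data.json');   // placeholder URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);   // give up if the connection itself stalls
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // give up if the whole request takes too long

    $response = curl_exec($ch);
    if ($response === FALSE)
    {
      error_log('Request failed: ' . curl_error($ch));   // log it so slow hosts show up in the log
    }
    curl_close($ch);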

    In general, though it is not an external tool, you can use the difference between two microtime(TRUE) calls to time sections of code. To keep the logging small, set a limit flag and only log the timing while that flag has not yet been decremented to zero, decrementing it once for each slow event. You can have individual flags for individual code segments, or even for different time limits within a code segment.

    $flag['name'] = 10;  // How many more times to log this section before going quiet
    $slow['name'] = 0.5; // How long in seconds before it's a problem?

    $start = microtime(TRUE);
    do_something($parameters);
    $used  = microtime(TRUE) - $start;
    if ( $flag['name'] && $used >= $slow['name'] )
    {
      logit($parameters);   // your own logging helper
      $flag['name']--;      // stop logging once the quota is used up
    }
    

    If you output which URL, or other data/event, took too long to process, then you can dig into that particular item later to see how it is causing trouble in your code.

    Of course, this assumes that individual items are causing your problem and not simply a general slowdown over time.

    EDIT:

    I (now) see it's a production server. That makes editing the code less enjoyable. You'd probably want to keep the integration with the code minimal, putting the testing logic, and possibly the supported tags/flags and quantities, in an external file.

    setStart('flagname');
    // Do stuff to be checked for speed here
    setStop('flagname',$moredata);
    

    For maximum robustness the methods/functions would have to ensure they handle unknown tags, missing parameters, and so forth.
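
    A minimal sketch of what such helpers could look like (the storage array, the threshold values and the log quota are assumptions; only the setStart()/setStop() names come from the idea above):

    $GLOBALS['timing'] = array(
      'starts'    => array(),                     // flagname => start timestamp
      'slow'      => array('flagname' => 0.5),    // flagname => seconds considered "slow"
      'remaining' => array('flagname' => 10),     // flagname => how many slow events to log
    );

    function setStart($tag)
    {
      $GLOBALS['timing']['starts'][$tag] = microtime(TRUE);
    }

    function setStop($tag, $moredata = NULL)
    {
      $t = &$GLOBALS['timing'];

      if ( !isset($t['starts'][$tag]) )   // unknown tag: ignore rather than warn
      {
        return;
      }

      $used  = microtime(TRUE) - $t['starts'][$tag];
      $slow  = isset($t['slow'][$tag])      ? $t['slow'][$tag]      : 1.0;
      $quota = isset($t['remaining'][$tag]) ? $t['remaining'][$tag] : 0;

      if ( $quota > 0 && $used >= $slow )
      {
        error_log(sprintf('[%s] took %.3fs %s', $tag, $used, json_encode($moredata)));
        $t['remaining'][$tag] = $quota - 1;
      }

      unset($t['starts'][$tag]);
    }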

  • 2021-02-19 07:16

    Although it is not stated explicitly, if my guess is correct you seem to be dealing with records one at a time, but all within one big cron job.

    i.e. grab record #1, munge it somehow, add value to it, reformat it, then save it, and then move on to record #2.

    I would consider breaking the big cron down, i.e.:

    Cron #1: grab all the records and cache all the salient data locally (on that server). Set a flag when this stage is achieved.

    Cron #2: now that you have the data you need, munge it and add value, then cache that output. Set a flag when this stage is achieved.

    Cron #3: reformat that data and store it. Delete all the cache files.

    This kind of "divide and conquer" approach will ease your debugging woes, lead to a better understanding of what is actually going on, and as a bonus give you the opportunity to rerun, say, cron #2 on its own.
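
    One minimal sketch of the flag idea, assuming simple flag files on disk (the paths and the mungeAndAddValue() helper are placeholders, not the real import code):

    // Hypothetical stage runner for cron #2: only run once cron #1 has finished,
    // and leave a flag of its own for cron #3.
    $cacheDir = '/var/cache/import';   // placeholder path

    if ( !file_exists($cacheDir . '/stage1.done') )
    {
      exit(0);   // cron #1 has not finished yet; try again on the next run
    }
    if ( file_exists($cacheDir . '/stage2.done') )
    {
      exit(0);   // stage 2 already completed; nothing to do
    }

    $records = json_decode(file_get_contents($cacheDir . '/stage1.json'), TRUE);
    $output  = mungeAndAddValue($records);   // stand-in for the real processing

    file_put_contents($cacheDir . '/stage2.json', json_encode($output));
    touch($cacheDir . '/stage2.done');       // flag for cron #3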

    I've had to do this many times, and for me logging is the key to identifying weaknesses in your code, identifying poor assumptions about data quality, and hinting at where latency is causing a problem.

  • 2021-02-19 07:19

    Regular "top" command can show you, if CPU usage by php or mysql is bottleneck. If not, then delays may be caused by http calls.

    If CPU usage by mysqld is low, but constant, then it may be disk usage bottleneck.

    Also, you can check your bandwidth usage by installing and using "speedometer", or other tools.
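
    If you want the same numbers in the cron's own log so you can correlate them with what top showed, a small sketch (the label and the call sites are assumptions) could be:

    function logResources($label)
    {
      $load = sys_getloadavg();   // 1/5/15-minute load averages
      error_log(sprintf(
        '%s: load=%.2f/%.2f/%.2f mem=%.1fMB peak=%.1fMB',
        $label,
        $load[0], $load[1], $load[2],
        memory_get_usage(TRUE) / 1048576,
        memory_get_peak_usage(TRUE) / 1048576
      ));
    }

    logResources('after import batch');   // call at interesting points in the script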
