What are the disadvantages of using persistent connections in PDO?

囚心锁ツ 2020-11-22 14:41

In PDO, a connection can be made persistent using the PDO::ATTR_PERSISTENT attribute. According to the PHP manual -

Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials.

What are the disadvantages of turning this on?
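
For reference, a persistent connection is requested by passing PDO::ATTR_PERSISTENT in the driver-options array of the constructor; a minimal example (the DSN and credentials here are placeholders):

$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass, array(
    PDO::ATTR_PERSISTENT => true  // reuse a cached connection with the same credentials if one exists
));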

8 Answers
  • 2020-11-22 15:35

    In my tests, connecting to my localhost took over a second, which at first suggested I should use a persistent connection. Further testing showed the real problem was the hostname 'localhost':

    Test results in seconds (measured by php microtime):

    • hosted web: connectDB: 0.0038912296295166
    • localhost: connectDB: 1.0214691162109 (over one second: do not use localhost!)
    • 127.0.0.1: connectDB: 0.00097203254699707

    Interestingly, the following code is just as fast as using 127.0.0.1 directly:

    // Resolve 'localhost' to an IP address up front; this avoids the slow
    // host-name handling that made the 'localhost' connection take over a second.
    $host = gethostbyname('localhost');
    // echo "<p>$host</p>";
    $db = new PDO("mysql:host=$host;dbname=" . DATABASE . ';charset=utf8', $username, $password,
        array(PDO::ATTR_EMULATE_PREPARES => false,
              PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
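
    A rough sketch of how such a timing can be taken (the connectDB label, the DATABASE constant and the credentials are assumed to be defined elsewhere, as in the snippet above):

    // Measure how long the PDO constructor itself takes for a given host.
    $start = microtime(true);
    $db = new PDO("mysql:host=$host;dbname=" . DATABASE . ';charset=utf8', $username, $password);
    echo 'connectDB: ' . (microtime(true) - $start) . ' seconds';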
    
  • 2020-11-22 15:37

    Please be sure to read this answer below, which details ways to mitigate the problems outlined here.


    The same drawbacks exist with PDO as with any other PHP database interface that does persistent connections: if your script terminates unexpectedly in the middle of database operations, the next request that gets the leftover connection will pick up where the dead script left off. The connection is held open at the process-manager level (Apache for mod_php, the current FastCGI process if you're using FastCGI, etc.), not at the PHP level, and PHP doesn't tell the parent process to let the connection die when the script terminates abnormally.

    If the dead script locked tables, those tables will remain locked until the connection dies or the next script that gets the connection unlocks the tables itself.

    If the dead script was in the middle of a transaction, that can block a multitude of tables until the deadlock timer kicks in, and even then, the deadlock timer can kill the newer request instead of the older request that's causing the problem.

    If the dead script was in the middle of a transaction, the next script that gets that connection also gets the transaction state. It's very possible (depending on your application design) that the next script might not actually ever try to commit the existing transaction, or will commit when it should not have, or roll back when it should not have.

    This is only the tip of the iceberg. It can all be mitigated to an extent by always trying to clean up after a dirty connection on every single script request, but that can be a pain depending on the database. Unless you have identified creating database connections as the one thing that is a bottleneck in your script (this means you've done code profiling using xdebug and/or xhprof), you should not consider persistent connections as a solution to anything.
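
    As an illustration only (MySQL-flavoured, and assuming $db is the persistent PDO handle handed to the request by your bootstrap code), the kind of per-request cleanup referred to above might look like this:

    // Defensively reset a possibly dirty persistent connection (sketch, not a complete fix).
    // ROLLBACK discards any transaction a dead script left open;
    // UNLOCK TABLES releases any explicit table locks it may still hold.
    $db->exec('ROLLBACK');
    $db->exec('UNLOCK TABLES');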

    Further, most modern databases (including PostgreSQL) have their own preferred ways of performing connection pooling that don't have the immediate drawbacks that plain vanilla PHP-based persistent connections do.


    To clarify a point, we use persistent connections at my workplace, but not by choice. We were encountering weird connection behavior, where the initial connection from our app server to our database server was taking exactly three seconds, when it should have taken a fraction of a fraction of a second. We think it's a kernel bug. We gave up trying to troubleshoot it because it happened randomly and could not be reproduced on demand, and our outsourced IT didn't have the concrete ability to track it down.

    Regardless, when the folks in the warehouse were processing a few hundred incoming parts, and each part was taking three and a half seconds instead of half a second, we had to take action before they kidnapped us all and made us help them. So, we flipped a few switches in our home-grown ERP/CRM/CMS monstrosity and experienced all of the horrors of persistent connections first-hand. It took us weeks to track down all the subtle little problems and bizarre behavior that happened seemingly at random. It turned out that those once-a-week fatal errors that our users diligently squeezed out of our app were leaving locked tables, abandoned transactions and other unfortunate wonky states.

    This sob-story has a point: It broke things that we never expected to break, all in the name of performance. The tradeoff wasn't worth it, and we're eagerly awaiting the day we can switch back to normal connections without a riot from our users.
