PHP-MySQLi connection randomly fails with “Cannot assign requested address”


For about two weeks I've been dealing with one of the weirdest problems on a LAMP stack. Long story short: connections to the MySQL server randomly fail with the error message:

    Cannot assign requested address

5 Answers
  • 2021-02-04 14:17

    I had this problem and solved it using persistent connection mode, which can be activated in mysqli by prefixing the database hostname with 'p:':

    $link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');
    

    From http://php.net/manual/en/mysqli.persistconns.php:

    The idea behind persistent connections is that a connection between a client process and a database can be reused by a client process, rather than being created and destroyed multiple times. This reduces the overhead of creating fresh connections every time one is required, as unused connections are cached and ready to be reused. ...

    To open a persistent connection you must prepend p: to the hostname when connecting.
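
    A quick, non-authoritative way to check that connections are actually being reused once 'p:' is in place is to count sockets to MySQL grouped by TCP state; with persistent connections you should see a stable handful of ESTAB entries and few in TIME-WAIT. This is a sketch assuming MySQL on port 3306 and `ss` from iproute2 (`netstat -ant` works similarly):

```shell
# Count TCP sockets to the MySQL port, grouped by state.
# ESTAB dominating over TIME-WAIT suggests connections are being reused.
ss -ant 2>/dev/null | awk '$4 ~ /:3306$/ {count[$1]++} END {for (s in count) print s, count[s]}'
```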

  • 2021-02-04 14:31

    We had the same problem. Although the "tcp_fin_timeout" and "ip_local_port_range" tweaks worked, the real problem was a poorly written PHP script that opened a new connection for almost every query it sent to the database. Rewriting the script to connect just once solved all the trouble. Please be aware that lowering "tcp_fin_timeout" may be dangerous, as some code may depend on the DB connection still being there some time after connecting. It is duct tape and bubble gum rather than a real solution.
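
    One way to spot this kind of connection churn before reading the code is to watch TIME-WAIT sockets to the database accumulate (a sketch, assuming Linux, `ss` from iproute2, and MySQL on port 3306): a count that grows quickly between two samples suggests a script is opening a fresh connection per query.

```shell
# Sample TIME-WAIT sockets to MySQL twice, one second apart.
before=$(ss -ant 2>/dev/null | grep -c 'TIME-WAIT.*:3306') || true
sleep 1
after=$(ss -ant 2>/dev/null | grep -c 'TIME-WAIT.*:3306') || true
echo "TIME-WAIT sockets to :3306: ${before:-0} -> ${after:-0}"
```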

  • 2021-02-04 14:34

    Vicidial servers regularly require increasing the connection limit in MySQL. Many installations (and we've seen and worked on a lot of them) have had to do this by raising the max_connections limit in the MySQL configuration.

    Additionally, there have been reports of nf_conntrack_max requiring an increase:

    /sbin/sysctl -w net.netfilter.nf_conntrack_max=196608
    

    when the problem turns out to be networking related.

    Also note that Vicidial has some specific suggested settings and even some enterprise settings for mysql configuration. Have a look in my-bigvici.cnf in /usr/src/astguiclient/conf for some configuration ideas that may open your mysql server up a bit.

    So far, no problems have resulted from increasing connection limits, just additional resources used. Since the purpose of the server is to make this application work, dedicating resources to this application does not seem like a problem. LOL
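
    For reference, the connection limit is the max_connections server variable, and raising it is a one-line change in the MySQL configuration. The value below is purely illustrative; size it to your RAM and workload:

```ini
[mysqld]
# Illustrative only -- tune to your hardware; each connection costs memory.
max_connections = 2048
```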

  • 2021-02-04 14:39

    With Vicidial I have run into the same problem frequently. Due to the kind of programming used, new MySQL connections have to be established (very) frequently from a number of Vicidial components; we have systems hammering the DB server with over 10,000 connections per second, most of which are serviced within a few ms and closed within a second or less. From experience I can tell you that in a local network, with close to no lost packets, tcp_fin_timeout can be reduced all the way down to 3 with no problems showing up.

    Typical Linux commands to diagnose whether connections waiting to be closed are your problem:

    netstat -anlp | grep :3306 | grep TIME_WAIT -wc
    

    which will show you the number of connections that are waiting to be closed completely.

    netstat -nat | awk '{print $5}' | cut -d ":" -f1 | sort | uniq -c | sort -n
    

    which will show the number of connections per connected host, allowing you to identify which host is flooding your system if there are multiple candidates.

    To test the fix you can just

    cat /proc/sys/net/ipv4/tcp_fin_timeout
    echo "3" > /proc/sys/net/ipv4/tcp_fin_timeout
    

    which will temporarily set the tcp_fin_timeout to 3 sec and tell you how many seconds it was before, so you can revert to the old value for testing.

    As a permanent fix I would suggest you add the following line to /etc/sysctl.conf

    net.ipv4.tcp_fin_timeout=3
    

    Within a good local network this should not cause any trouble; if you do run into problems, e.g. because of packet loss, you can try

    net.ipv4.tcp_tw_reuse=1
    net.ipv4.tcp_tw_recycle=0
    net.ipv4.tcp_fin_timeout=10
    

    This allows more time for the connection to close and tries to reuse the same ip:port combinations for new connections to the same host:service combination.

    OR

    net.ipv4.tcp_tw_reuse=1
    net.ipv4.tcp_tw_recycle=1
    net.ipv4.tcp_fin_timeout=10
    

    This will try even more aggressively to reuse connections, which can however create new problems with other applications, for example with your webserver. So try the simple solution first; in most cases it will already fix your problem without any bad side effects!
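
    Before changing any of these knobs it is worth recording the current values so you can revert. A minimal sketch reading the live kernel settings (tcp_tw_recycle is left out because newer kernels removed it entirely, so that file may not exist on your machine):

```shell
# Print current values of the tunables discussed above, for later rollback.
for f in tcp_fin_timeout tcp_tw_reuse; do
  printf '%s = %s\n' "$f" "$(cat /proc/sys/net/ipv4/$f)"
done
```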

    Good Luck!

  • 2021-02-04 14:40

    MySQL: Using giant number of connections

    What are the dangers of frequent connects?
    It works well, with the exception of some extreme cases. If you get hundreds of connects per second from the same box, you may run out of local port numbers. Ways to fix this: decrease "/proc/sys/net/ipv4/tcp_fin_timeout" on Linux (this bends the TCP/IP standard, but you might not care on your local network), or increase "/proc/sys/net/ipv4/ip_local_port_range" on the client. Other OSes have similar settings. You can also use more web boxes or multiple IPs for the same database host to work around the problem. I've really seen this in production.
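
    As a back-of-envelope illustration of that limit (assuming Linux, and treating tcp_fin_timeout as the time a closed socket keeps its local port reserved, as this answer does):

```shell
# Estimate how many new connections per second to one remote ip:port this
# box can sustain before running out of local ports.
read low high < /proc/sys/net/ipv4/ip_local_port_range
linger=$(cat /proc/sys/net/ipv4/tcp_fin_timeout)
ports=$((high - low + 1))
echo "usable local ports: $ports"
echo "max sustained connects/sec: $((ports / linger))"
```

    With the common defaults (ports 32768-60999, a 60 second timeout) this works out to roughly 28232 / 60, i.e. a few hundred connects per second, which is exactly the "hundreds of connects per second" danger zone described above.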

    Some background about this problem:
    A TCP/IP connection is identified by localip:localport remoteip:remoteport. The MySQL IP and port, as well as the client IP, are fixed in this case, so we can only vary the local port, which has a finite range. Note that even after you close a connection, the TCP/IP stack has to keep the port reserved for some time; this is where tcp_fin_timeout comes in.
