How to solve MySQL max_user_connections error

花落未央 2020-12-03 15:37

I'm getting the following error when I try to log onto phpMyAdmin.

User ** already has more than 'max_user_connections' active connections

Could anyone tell me how to resolve this?

5 Answers
  • 2020-12-03 15:54

    Read the max_connections documentation to solve your problem:

    If clients encounter Too many connections errors when attempting to connect to the mysqld server, all available connections are in use by other clients.

    The permitted number of connections is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server. To support more connections, set max_connections to a larger value.

    First, check your database's current max_connections variable:

    SHOW VARIABLES LIKE 'max_connections';
    +-----------------+-------+
    | Variable_name   | Value |
    +-----------------+-------+
    | max_connections | 151   |
    +-----------------+-------+
    

    Then try to increase the max_connections parameter, either by running a command like:

     SET GLOBAL max_connections = 300;
    

    Or set this parameter in my.cnf, which is located at /etc/my.cnf:

    vi /etc/my.cnf

    [mysqld]
    max_connections = 300
    

    Finally, restart the MySQL service.
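
    Note that the error in the question actually mentions max_user_connections rather than max_connections, and the same kind of check applies to that per-user limit. A minimal sketch (the value 50 is only an illustrative assumption; 0 means "no per-user limit", and a runtime change is lost on restart unless it is also added to my.cnf):

    -- Check the current per-user connection cap (0 = unlimited per user)
    SHOW VARIABLES LIKE 'max_user_connections';

    -- Raise it at runtime (illustrative value)
    SET GLOBAL max_user_connections = 50;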

  • 2020-12-03 15:55

    Your best bet is to increase max_connections. For a MySQL instance serving multiple different web apps (raw PHP, WordPress, phpBB), you probably want a value of at least 60 for this.

    Issue this command and you'll find out how many global connections you have available:

    show global variables like '%connections%'
    

    You can find out how many connections are in use at any given moment like this:

    show status like '%connected%'
    

    You can find out what each connection is doing like this:

    show full processlist
    

    I would try for a global value of at least 100 connections if I were you. Your service provider ought to be able to help you if you don't have access to do this. It needs to be done in MySQL's my.cnf configuration file. Don't set it too high, or you run the risk of your MySQL server process gobbling up all your RAM.

    A second approach allows you to allocate those overall connections to your different MySQL users. If you have different MySQL usernames for each of your web apps, this approach will work for you. It is written up here: https://www.percona.com/blog/2014/07/29/prevent-mysql-downtime-set-max_user_connections/
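
    As a sketch of that per-user approach, MySQL can cap connections per account with MAX_USER_CONNECTIONS. The account name and the value 40 below are only placeholders, and the ALTER USER form assumes MySQL 5.7 or later (older versions use GRANT ... WITH MAX_USER_CONNECTIONS instead):

    -- Cap one application account at 40 simultaneous connections (placeholder account and value)
    ALTER USER 'wordpress_app'@'localhost' WITH MAX_USER_CONNECTIONS 40;

    -- Confirm the limit now stored in the privilege tables
    SELECT user, host, max_user_connections FROM mysql.user WHERE user = 'wordpress_app';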

    The final approach to controlling this problem is more subtle. You're probably using the Apache web server as the underlying tech. You can reduce the number of Apache tasks running at the same time to, paradoxically, increase throughput. That's because Apache queues up requests: if it has a few tasks efficiently banging through the queue, that is often faster than lots of tasks, because there's less contention. It also requires fewer MySQL connections, which will solve your immediate problem. That's explained here: Restart Mysql automatically when ubuntu on EC2 micro instance kills it when running out of memory

    By the way, web apps like WordPress use a persistent connection pool. That is, they establish connections to the MySQL database, hold them open, and reuse them. If your apps are busy, each connection's lifetime ought to be several minutes.

  • 2020-12-03 15:56

    It looks like queries are stuck on the server. Restart MySQL and everything will be OK. If you are on shared hosting, just contact your hosting provider and they will fix it. I'm a Namecheap user and they solved it for me.

  • 2020-12-03 16:16

    First, this is a hack, but it works, especially on a shared host.
    We all have bad "neighbors" sometimes, right?

    If you have access to your /etc/ directory, increase the limit from 30 to 50 in your my.cnf, or set it through the information schema.

    1. To suppress the error message the visitor might see, use @mysql_connect().
    2. If there are more than 30 MUCs (max_user_connections), use the "or die()" statement to stop the query.
    3. Replace the "or die" message with die(header("Location: THIS PAGE")) and be sure to call mysql_close();

    Yes, it will cause a delay in page loading. But it's better to load than to show a white screen of death, or worse, error messages that visitors have no understanding of.

  • 2020-12-03 16:16

    This happens because of a limit specified in the MySQL configuration: the system variable max_user_connections.
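
    To see which account is actually using up the connections, here is a quick sketch (it assumes you can still open one connection and read information_schema):

    -- Count current connections per MySQL user
    SELECT user, COUNT(*) AS connections
    FROM information_schema.processlist
    GROUP BY user
    ORDER BY connections DESC;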

    Solutions

    Killing the queries that are stuck in the backend is a solution I would only suggest for SELECT queries. Queries that change data, like UPDATE/DELETE/INSERT, should not be killed.

    Second, you can use the command mysqladmin processlist to check what is going on inside MySQL.
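
    The same information is available from a SQL session. As a sketch (the thread id 123 is a placeholder, and per the advice above you would only kill stuck SELECT queries):

    -- List running threads and the statements they are executing
    SHOW FULL PROCESSLIST;

    -- Kill one stuck thread by its Id column (placeholder id)
    KILL 123;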

    If locking is causing your problem, you can check which storage engine you are using and change it to another; IBM's SolidDB documentation on table locks might help you. There may also be another reason for this (for example, your queries are taking too long because they are unoptimized, the table is too big, or your database has been spammed).
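
    For example, a minimal sketch of checking and switching a table's storage engine (the table name is a placeholder; converting a large table can take a while and briefly locks it):

    -- See which engine the table currently uses
    SHOW TABLE STATUS LIKE 'my_table';

    -- Switch it to InnoDB, which uses row-level locking instead of table locks
    ALTER TABLE my_table ENGINE=InnoDB;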
