mysqld service stops once a day on EC2 server

孤城傲影 2020-12-12 12:53

Environment Details:

Server: Amazon EC2 Linux
Web Server: Apache
Web Framework: Django with mod_wsgi

I found the following in the MySQL error log:

6 Answers
  • 2020-12-12 13:34

I found an answer in this discussion that worked for me: https://www.digitalocean.com/community/questions/mysql-server-keeps-stopping-unexpectedly?answer=26016

You have to do both: set innodb_buffer_pool_size to something reasonable like 32M in /etc/mysql/my.cnf, and you may also need to modify /etc/apache2/mods-enabled/mpm_prefork.conf to reduce the number of worker processes Apache starts (a matching my.cnf fragment follows the Apache block below):

    <IfModule mpm_prefork_module>
        StartServers     3
        MinSpareServers  3
        MaxSpareServers  5
        MaxRequestWorkers 25
        MaxConnectionsPerChild  0
    </IfModule>
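
    For the my.cnf side, a minimal fragment might look like this (the value is the one suggested above; tune it to your box):

    # /etc/mysql/my.cnf
    [mysqld]
    innodb_buffer_pool_size = 32M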
    
  • 2020-12-12 13:37

    Use 50% of available RAM to test:

You can decrease innodb_buffer_pool_size to a very low value to see if it helps:

# /etc/my.cnf
    innodb_buffer_pool_size = 1M
    

A rule of thumb is to set innodb_buffer_pool_size to 50% of available RAM for your low-memory testing: start the server and everything except MySQL, see how much RAM is left over, then give 50% of that to InnoDB.
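
    A quick way to check, with everything except MySQL running (halve the "available" figure; column names vary a bit between procps versions):

    free -m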

    To try many low-memory settings at once:

    • http://paragasu.wordpress.com/2008/12/02/very-low-memory-mysql-5-mycnf-configuration/

    A more likely culprit is whatever else is on that server, such as a webserver.

    Apache?

    Are you using Apache and/or another webserver? If so, try to decrease its RAM usage. For example in Apache conf, consider low RAM settings like these:

    StartServers 1
    MinSpareServers 1
    MaxSpareServers 5
    MaxClients 5
    

    And cap the requests like this:

    MaxRequestsPerChild 300
    

    Then restart Apache.
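
    For example (the service name depends on the distro):

    sudo service httpd restart    # Amazon Linux / RHEL
    sudo service apache2 restart  # Debian / Ubuntu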

    mod_wsgi:

    If you're using Apache with mod_python, switch to Apache with mod_wsgi.

    Pympler:

If it's still happening, possibly your Django process's memory use is steadily growing. Try profiling Django memory with Pympler:

    • http://www.rkblog.rk.edu.pl/w/p/profiling-django-object-size-and-memory-usage-pympler/
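
    A minimal Pympler sketch (assumes pympler is installed; where you hook it in, such as a Django view or a management command, is up to you):

    # pip install pympler
    from pympler import muppy, summary

    all_objects = muppy.get_objects()      # snapshot all live Python objects
    rows = summary.summarize(all_objects)  # aggregate sizes by type
    summary.print_(rows)                   # print the biggest memory consumers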

    SAR:

    Your report of once-per-day failures, then once-per-week failures, could point to some kind of cron job running daily or weekly. For example, perhaps there's a batch process that takes up a lot of RAM, or a database dump, etc.

    To track RAM use and look for RAM spikes in the hour before MySQL dies, take a look at SAR, which is a great tool: http://www.thegeekstuff.com/2011/03/sar-examples/
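
    For example, assuming the sysstat package is installed (the history file path is typical of Amazon Linux / RHEL, and the day-of-month suffix here is illustrative):

    sar -r 1 10                  # sample memory usage every second, 10 times
    sar -r -f /var/log/sa/sa15   # memory history for the 15th of the month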

  • 2020-12-12 13:39

Increasing the available memory by adding new swap space might also help. The steps are below.

Make sure that you create /swapfile with a size smaller than the available space shown by:

    df -h
    

For example, for me the output of df -h was:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.8G  1.2G  6.3G  16% /
    none            4.0K     0  4.0K   0% /sys/fs/cgroup
    udev            492M   12K  492M   1% /dev
    tmpfs           100M  336K   99M   1% /run
    

So I created a 2G swap file:

    sudo fallocate -l 2G /swapfile
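    # Creating the file alone is not enough; standard swap setup also
    # needs permissions, formatting, and activation (not shown in the
    # original answer):
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile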
    

And then just restart the service:

    sudo /etc/init.d/mysql restart
    

    Hope this helps. All the best.

  • 2020-12-12 13:51

Once I got stuck in a similar issue, and I was really frustrated that my users were seeing the ugly "Error Establishing DB Connection" message. Instead of resolving the underlying problem, I found this repo, which worked like a charm for me (temporarily). After that a friend of mine debugged it and fine-tuned my server with some configuration changes. But I've still kept the script in my crontab: every 10 minutes it checks whether the server has crashed (in my case it eventually crashed whenever I ran VNCServer on the server) and restarts it if so.
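
    The repo itself isn't linked above, but a minimal watchdog in the same spirit could be a crontab entry like this (the service name and the exact check are illustrative):

    # every 10 minutes: if MySQL doesn't answer a ping, restart it
    */10 * * * * mysqladmin ping >/dev/null 2>&1 || sudo service mysql restart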

  • 2020-12-12 13:56

You have to decrease your innodb_buffer_pool_size; it should be less than 60-80% of your main memory.

    Solution for Innodb Error:

    110603  7:34:15 [ERROR] Plugin 'InnoDB' init function returned error.
    110603  7:34:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    110603  7:34:15 [ERROR] Unknown/unsupported storage engine: InnoDB
    110603  7:34:15 [ERROR] Aborting

    110603  7:34:15 [Note] /usr/sbin/mysqld: Shutdown complete

    I moved ib_logfile0 and ib_logfile1 aside as backups and started MySQL again. This time it worked fine:

    [root@xxx mysql]# mv ib_logfile0 ib_logfile0-bak
    [root@xxx mysql]# mv ib_logfile1 ib_logfile1-bak
    

    Source: http://www.onaxer.com/tag/error-plugin-innodb-init-function-returned-error/

  • 2020-12-12 13:57

Like others have mentioned, the problem appears to be that your system is running low on RAM and MySQL is being killed as a result. Below is how to narrow down where your system's memory is going and how to recover when the database does go down.

Take a look at collectd and its plugins. Some of the applicable ones may be the processes plugin and the memory plugin. With those you can see your system's memory usage and which processes are taking up most of it.

    Depending on how you are running Django, you can configure the worker processes to only process a certain number of requests and then terminate. That way if there is some sort of memory leak in your application it will not persist past that number of requests. For example, if you use Gunicorn, you can use the --max-requests option. Setting it to 500 will drop the worker after it has processed 500 requests.
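
    For example (the module path and worker counts are illustrative; mod_wsgi daemon mode has an equivalent maximum-requests option):

    # Gunicorn: recycle each worker after 500 requests
    gunicorn --workers 3 --max-requests 500 mysite.wsgi:application

    # mod_wsgi equivalent, in the Apache config
    WSGIDaemonProcess mysite processes=2 threads=15 maximum-requests=500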

    The above combined with stats collection will show you some interesting memory usage trends.

As for the database going down, you can set up process supervision so that if MySQL does die, it is relaunched automatically. MySQL in recent versions of Ubuntu uses Upstart to do just that: if the process dies, Upstart brings it back up immediately. If you're using another distro that doesn't have this built in, take a look at Supervisor. While this doesn't fix the problem, it at least mitigates its effects; it should not be seen as the fix, but rather a way to keep your application running in case something does go wrong.
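
    A minimal Supervisor sketch (paths and options are illustrative, not a drop-in config):

    ; /etc/supervisor/conf.d/mysqld.conf
    [program:mysqld]
    command=/usr/sbin/mysqld
    user=mysql
    autorestart=true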

    0 讨论(0)