Socket accept - “Too many open files”

情歌与酒 2020-11-28 02:48

I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to

13 answers
  • 2020-11-28 03:09

    For future reference, I ran into a similar problem; I was creating too many file descriptors (FDs) by creating too many files and sockets (on Unix OSs, everything is an FD). My solution was to raise the FD limit at runtime with setrlimit().

    First I got the FD limits, with the following code:

    // This goes somewhere in your code
    #include <sys/resource.h>   // getrlimit(), setrlimit(), struct rlimit
    #include <iostream>

    struct rlimit rlim;

    if (getrlimit(RLIMIT_NOFILE, &rlim) == 0) {
        std::cout << "Soft limit: " << rlim.rlim_cur << std::endl;
        std::cout << "Hard limit: " << rlim.rlim_max << std::endl;
    } else {
        std::cout << "Unable to get file descriptor limits" << std::endl;
    }
    

    After running getrlimit(), I could confirm that on my system the soft limit was 256 FDs and the hard limit was unlimited (this varies depending on your distro and configuration). Since I was creating more than 300 FDs between files and sockets, my code was crashing.

    In my case I couldn't decrease the number of FDs, so I decided to increase the FD soft limit instead, with this code:

    // This goes somewhere in your code
    // NEW_SOFT_LIMIT and NEW_HARD_LIMIT are placeholders for the values you want;
    // note that only a privileged process (CAP_SYS_RESOURCE) may raise the hard limit.
    struct rlimit rlim;

    rlim.rlim_cur = NEW_SOFT_LIMIT;
    rlim.rlim_max = NEW_HARD_LIMIT;

    if (setrlimit(RLIMIT_NOFILE, &rlim) == -1) {
        std::cout << "Unable to set file descriptor limits" << std::endl;
    }
    

    Note that you can also get the number of FDs that you are using, and what each one points to; one way to do this on Linux is sketched below.
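
    A minimal sketch, assuming Linux's /proc filesystem (this is not the code the answer originally linked to): it walks /proc/self/fd, prints what each descriptor points to, and counts them.

    // Sketch: list and count this process's open file descriptors (Linux only).
    #include <dirent.h>
    #include <unistd.h>
    #include <limits.h>
    #include <cstdio>

    int main() {
        DIR *dir = opendir("/proc/self/fd");
        if (dir == nullptr) {
            std::perror("opendir(/proc/self/fd)");
            return 1;
        }
        int count = 0;
        struct dirent *entry;
        while ((entry = readdir(dir)) != nullptr) {
            if (entry->d_name[0] == '.')
                continue;  // skip "." and ".."
            char path[PATH_MAX], target[PATH_MAX];
            std::snprintf(path, sizeof(path), "/proc/self/fd/%s", entry->d_name);
            ssize_t len = readlink(path, target, sizeof(target) - 1);
            if (len > 0) {
                target[len] = '\0';
                std::printf("fd %s -> %s\n", entry->d_name, target);  // file, socket, pipe, ...
            }
            ++count;
        }
        closedir(dir);
        // Note: the count includes the descriptor opendir() itself is using.
        std::printf("open file descriptors: %d\n", count);
        return 0;
    }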

    Also, you can find more information on getrlimit() and setrlimit() in the getrlimit(2)/setrlimit(2) man pages.

  • 2020-11-28 03:10

    There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

    You can check the following:

    cat /proc/sys/fs/file-max
    

    That will give you the system-wide limit on file descriptors.

    On the shell level, this will tell you your personal limit:

    ulimit -n
    

    This can be changed in /etc/security/limits.conf - it's the nofile param.

    However, if you're closing your sockets correctly, you shouldn't receive this unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately; I would verify that they are being handled properly (see the sketch below).
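
    For reference, a minimal sketch of a blocking accept loop that closes every accepted connection (the port number and the single-threaded design are just assumptions for illustration, not the asker's actual server). If the close() call is skipped, each connection leaks one descriptor and accept() eventually fails with EMFILE, which is exactly the "Too many open files" error.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>

    int main() {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        if (listen_fd < 0) { std::perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);              // example port
        if (bind(listen_fd, (sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(listen_fd, SOMAXCONN) < 0) {
            std::perror("bind/listen");
            return 1;
        }

        for (;;) {
            int conn_fd = accept(listen_fd, nullptr, nullptr);
            if (conn_fd < 0) {
                if (errno == EMFILE)              // per-process FD limit reached
                    std::fprintf(stderr, "accept: too many open files\n");
                continue;
            }
            const char reply[] = "hello\n";
            write(conn_fd, reply, sizeof(reply) - 1);
            close(conn_fd);                       // without this, FDs leak
        }
    }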

  • 2020-11-28 03:12

    I hit a similar issue on Ubuntu 18 running on vSphere. The cause: the nginx.conf config file referenced too many log files and sockets (sockets are treated as files in Linux). When running nginx -s reload or sudo service nginx start/restart, the "Too many open files" error appeared in error.log.

    The NGINX worker processes were launched as the nginx user, whose ulimit (soft and hard) was 65536. Raising the ulimit and setting limits.conf did not work, because services started by systemd do not go through PAM, so limits.conf does not apply to them.

    The rlimit setting in nginx.conf did not help either: worker_rlimit_nofile 65536;

    The solution that worked was:

    $ mkdir -p /etc/systemd/system/nginx.service.d
    $ nano /etc/systemd/system/nginx.service.d/nginx.conf
        [Service]
        LimitNOFILE=30000
    $ systemctl daemon-reload
    $ systemctl restart nginx.service
    
  • 2020-11-28 03:13

    This error means that the maximum number of simultaneously open files has been reached.

    Solved:

    At the end of the file /etc/security/limits.conf you need to add the following lines:

    * soft nofile 16384
    * hard nofile 16384
    

    In the current console, as root (sudo does not work), run:

    ulimit -n 16384
    

    This step is optional if it is possible to restart the server instead.

    In /etc/nginx/nginx.conf, set the new value of worker_connections equal to 16384 divided by the value of worker_processes.

    If you did not run ulimit -n 16384, you will need to reboot; after that the problem will go away.

    PS:

    If, after the fix, the error accept() failed (24: Too many open files) is still visible in the logs, then set the following in the nginx configuration (for example):

    worker_processes 2;
    
    worker_rlimit_nofile 16384;
    
    events {
      worker_connections 8192;
    }
    
  • 2020-11-28 03:17

    Use lsof -u `whoami` | wc -l to find how many open files the user has

  • 2020-11-28 03:22

    I had a similar problem. A quick solution is:

    ulimit -n 4096
    

    The explanation is as follows: each server connection is a file descriptor. In CentOS, Red Hat and Fedora, and probably others, the per-user file limit is 1024 (no idea why). It can easily be seen when you type: ulimit -n

    Note that this has little relation to the system-wide maximum number of files (/proc/sys/fs/file-max); it is the per-process limit that the sketch below runs into.
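
    As an illustration (not part of the original answer), the following sketch keeps opening sockets until socket() fails. With a soft limit of 1024, it reports EMFILE after roughly 1021 sockets, since stdin, stdout and stderr already occupy three descriptors.

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        int opened = 0;
        for (;;) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                // EMFILE = per-process limit reached, ENFILE = system-wide limit
                std::printf("socket() failed after %d sockets: %s\n",
                            opened, std::strerror(errno));
                return 0;
            }
            ++opened;
        }
    }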

    In my case it was a problem with Redis, so I did:

    ulimit -n 4096
    redis-server -c xxxx
    

    In your case, instead of Redis, you need to start your own server.
