Socket accept - “Too many open files”

情歌与酒 2020-11-28 02:48

I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to apache by running some tests against it. I am using autobench to

13 answers
  • 2020-11-28 02:56

    On macOS, show the current limits:

    launchctl limit maxfiles
    

    The output looks like: maxfiles 256 1000

    If the numbers (the soft and hard limits) are too low, raise them:

    sudo launchctl limit maxfiles 65536 200000
    
  • 2020-11-28 02:59

    One more note for CentOS: when the process is launched via systemctl, you have to raise the limit in its unit file, /usr/lib/systemd/system/processName.service, by adding this line:

    LimitNOFILE=50000
    

    Then reload the systemd configuration:

    systemctl daemon-reload
    
  • 2020-11-28 02:59

    I had the same problem, and I hadn't been checking the return values of my close() calls. When I started checking the return value, the problem mysteriously vanished.

    I can only assume it was an optimisation glitch in the compiler (gcc in my case): that it treated close() calls as side-effect-free and omitted them when their return values weren't used.
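    Whatever the cause was, checking the return value of every close() is good practice anyway; a minimal sketch (the wrapper name checked_close is mine):

```cpp
#include <unistd.h>
#include <cstdio>

// Close fd and report any error instead of silently discarding it.
// Note: even when close() fails, the descriptor is released on Linux,
// so the call is deliberately not retried.
int checked_close(int fd)
{
    if (close(fd) == -1) {
        perror("close");
        return -1;
    }
    return 0;
}
```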

  • 2020-11-28 03:00

    When your program has more open descriptors than the open-files ulimit (`ulimit -a` lists it), the kernel will refuse to open any more file descriptors. First make sure you don't have a file-descriptor leak: run the server for a while, then stop it and check whether extra fds are still open while it's idle. If the limit itself is the problem, raise the nofile ulimit for your user in /etc/security/limits.conf.
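    The same check can also be done from inside the server with getrlimit/setrlimit: a process may raise its own soft limit up to the hard limit without privileges. A minimal sketch (the function name is mine):

```cpp
#include <sys/resource.h>
#include <cstdio>

// Print the current RLIMIT_NOFILE limits and raise the soft limit to
// the hard cap. Raising the hard limit itself would need privileges
// (CAP_SYS_RESOURCE on Linux).
int raise_nofile_limit()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return -1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;   // bump soft limit to the hard cap
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}
```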

  • 2020-11-28 03:05

    I had this problem too. You have a file handle leak. You can debug this by printing out a list of all the open file handles (on POSIX systems):

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>
    #include <iostream>
    
    using std::cerr;
    
    void showFDInfo( int fd );
    
    void showFDInfo()
    {
       // getdtablesize() is the legacy way to get the fd table size;
       // sysconf(_SC_OPEN_MAX) is the modern equivalent.
       int numHandles = getdtablesize();
    
       for ( int i = 0; i < numHandles; i++ )
       {
          int fd_flags = fcntl( i, F_GETFD );
          if ( fd_flags == -1 ) continue;  // fd not open
    
          showFDInfo( i );
       }
    }
    
    void showFDInfo( int fd )
    {
       int fd_flags = fcntl( fd, F_GETFD );
       if ( fd_flags == -1 ) return;
    
       int fl_flags = fcntl( fd, F_GETFL );
       if ( fl_flags == -1 ) return;
    
       char path[256];
       snprintf( path, sizeof path, "/proc/self/fd/%d", fd );
    
       char buf[256];
       memset( buf, 0, sizeof buf );
       ssize_t s = readlink( path, buf, sizeof buf - 1 );  // keep NUL terminator
       if ( s == -1 )
       {
          cerr << " (" << path << "): " << "not available\n";
          return;
       }
       cerr << fd << " (" << buf << "): ";
    
       if ( fd_flags & FD_CLOEXEC )  cerr << "cloexec ";
    
       // file status flags
       if ( fl_flags & O_APPEND   )  cerr << "append ";
       if ( fl_flags & O_NONBLOCK )  cerr << "nonblock ";
    
       // access mode: O_RDONLY is 0, so it can't be tested with &;
       // mask with O_ACCMODE and compare instead
       switch ( fl_flags & O_ACCMODE )
       {
          case O_RDONLY: cerr << "read-only ";  break;
          case O_WRONLY: cerr << "write-only "; break;
          case O_RDWR:   cerr << "read-write "; break;
       }
    
       if ( fl_flags & O_DSYNC )  cerr << "dsync ";
       if ( fl_flags & O_SYNC  )  cerr << "sync ";
    
       // advisory locks held by other processes
       struct flock fl;
       memset( &fl, 0, sizeof fl );
       fl.l_type = F_WRLCK;
       fl.l_whence = SEEK_SET;
       if ( fcntl( fd, F_GETLK, &fl ) != -1 && fl.l_type != F_UNLCK )
       {
          if ( fl.l_type == F_WRLCK )
             cerr << "write-locked";
          else
             cerr << "read-locked";
          cerr << " (pid:" << fl.l_pid << ") ";
       }
       cerr << "\n";
    }
    

    By dumping out all the open files you will quickly figure out where your file handle leak is.

    If your server spawns subprocesses, e.g. if it is a 'fork'-style server or it launches other programs (say, via CGI), you have to make sure to create your file handles with close-on-exec ("cloexec") set, both for real files and for sockets.

    Without cloexec, every time you fork or spawn, all open file handles are cloned in the child process.
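    Setting the flag atomically at creation time avoids a race with a concurrent fork. A minimal sketch (the helper names are mine; O_CLOEXEC is POSIX.1-2008, SOCK_CLOEXEC is Linux-specific):

```cpp
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

// Open a file with close-on-exec set at open() time, so forked
// children do not inherit the descriptor across exec.
int open_private(const char *path)
{
    return open(path, O_RDONLY | O_CLOEXEC);
}

// Same idea for sockets: SOCK_CLOEXEC sets the flag atomically
// in the socket() call itself.
int socket_private()
{
    return socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
}
```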

    It is also really easy to fail to close network sockets - e.g. just abandoning them when the remote party disconnects. This will leak handles like crazy.

  • 2020-11-28 03:08

    A closed socket can take a while to be really freed up (it lingers in TCP's TIME_WAIT state).

    Use lsof to list open files.

    Use cat /proc/sys/fs/file-max to see if there's a system-wide limit.
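    For a listening socket, SO_REUSEADDR lets the server rebind its port immediately after a restart even while old connections linger in TIME_WAIT. A minimal sketch (the helper name is mine):

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Create a loopback listener whose port can be rebound right away,
// without waiting for lingering TIME_WAIT connections to expire.
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) return -1;

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);   // port 0 picks an ephemeral port

    if (bind(fd, (sockaddr *)&addr, sizeof addr) == -1 ||
        listen(fd, 128) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}
```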
