Docker Ignores limits.conf (trying to solve “too many open files” error)

梦毁少年i 2021-02-01 06:11

I'm running a web server that is handling many thousands of concurrent web socket connections. For this to be possible on Debian Linux (my base image is google/debian:wheezy), the open file descriptor limit needs to be much higher than the default, but my changes to /etc/security/limits.conf don't seem to take effect inside the Docker container.

3 Answers
  • 2021-02-01 06:50

    I was able to mitigate this issue with the following configuration:

    I used Ubuntu 14.04 Linux for both the Docker machine and the host machine.

    On the host machine you need to (see the sketch after this list):

    • update /etc/security/limits.conf to include the line: * - nofile 64000
    • add to your /etc/sysctl.conf: fs.file-max = 64000
    • reload the sysctl settings: sudo sysctl -p
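
    A rough consolidated sketch of those host-side changes (64000 is just the value used above; the echo/tee commands are one way to append the lines, not the only one):

    # /etc/security/limits.conf: per-user open file limit
    # ("*" = all users, "-" = both soft and hard limit)
    echo '* - nofile 64000' | sudo tee -a /etc/security/limits.conf

    # /etc/sysctl.conf: system-wide file handle ceiling
    echo 'fs.file-max = 64000' | sudo tee -a /etc/sysctl.conf

    # reload sysctl settings without rebooting
    sudo sysctl -p

    # verify (the limits.conf change only applies to new login sessions)
    ulimit -n
    cat /proc/sys/fs/file-max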
  • 2021-02-01 06:50

    You can pass the limit as an argument when running the container. That way you don't have to modify the host's limits or give the container too much power. Here is how:

    docker run --ulimit nofile=5000:5000 <image-tag>
    
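    To confirm the limit actually takes effect inside the container, a quick check (assuming the image provides a POSIX sh) is:

    # run a throwaway container and print its soft nofile limit
    docker run --rm --ulimit nofile=5000:5000 <image-tag> sh -c 'ulimit -n'
    # expected output: 5000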
  • 2021-02-01 07:02

    With docker-compose you could configure ulimits.

    https://docs.docker.com/compose/compose-file/#ulimits

    You can add soft/hard limits as a mapping under the service definition, for example (the web service name and <image-tag> image are placeholders):

    services:
      web:                  # placeholder service name
        image: <image-tag>  # placeholder image
        ulimits:
          nproc: 65535
          nofile:
            soft: 20000
            hard: 40000
    
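    Once the service is up, the limits can be checked the same way as with docker run (sketch; web is the placeholder service name from the snippet above):

    docker-compose up -d
    # print the soft and hard nofile limits inside the running service
    docker-compose exec web sh -c 'ulimit -S -n; ulimit -H -n'
    # expected output: 20000 and 40000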

    Although not ideal, you could run the container with the privileged option (mostly a quick, non-optimal solution for a dev environment; not recommended if security is a concern).

    docker run --privileged <image-tag>
    
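    Because a privileged container keeps capabilities such as CAP_SYS_RESOURCE, it is allowed to raise its own limits at runtime. A rough sketch of that pattern (your-server stands in for the real command):

    # raise the open file limit inside the container, then start the server
    docker run --privileged --rm <image-tag> sh -c 'ulimit -n 64000 && exec your-server'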

    Please see:

    https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
