How do I limit the memory resource of a group of docker containers?

Submitted by ♀尐吖头ヾ on 2020-01-15 01:48:40

Question


I understand that I can use --memory and --memory-swap to limit memory per container. But how do I limit memory for a group of containers?

My system has 8 GB of RAM and runs 2 Docker containers. I want the two containers together to be limited to 8 GB. I do not want to give each container a fixed 4 GB limit because:

  1. A container may need more than 4 GB of memory.
  2. Both containers are unlikely to use 4 GB at the same time, so it makes sense to let container B use memory that container A leaves unused.

Things I have tried

  1. The default parent cgroup for Docker containers is "docker", so "docker" is the parent cgroup of my containers. I tried setting a limit on the parent cgroup:

    echo 8000000000 > /sys/fs/cgroup/memory/docker/memory.limit_in_bytes

  2. I checked the memory usage of the parent cgroup "docker", but it is 0 rather than the sum of its children's usage:

    cat /sys/fs/cgroup/memory/docker/memory.usage_in_bytes # returns 0
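(Editor's note: a diagnostic sketch, assuming a cgroup-v1 host with the default mount paths; image names below are placeholders. On cgroup v1, a parent's counters only aggregate its children when hierarchical accounting is enabled, which is worth checking. Docker can also place containers under an explicit parent cgroup via the docker run flag --cgroup-parent, so several containers can share a single limit set on that parent.)

```shell
# Parent usage includes children only when hierarchical accounting
# is on (1 = enabled):
cat /sys/fs/cgroup/memory/docker/memory.use_hierarchy

# Hierarchical totals are also reported in memory.stat:
grep hierarchical /sys/fs/cgroup/memory/docker/memory.stat

# Alternatively, create a shared parent cgroup with one limit and
# start both containers under it:
mkdir /sys/fs/cgroup/memory/shared-pool
echo 8000000000 > /sys/fs/cgroup/memory/shared-pool/memory.limit_in_bytes
docker run -d --cgroup-parent=/shared-pool --name app-a my-image-a
docker run -d --cgroup-parent=/shared-pool --name app-b my-image-b
```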

I appreciate your help.


Answer 1:


I think there are two ways to solve your problem: you can set memory limits on each single container, or on the Compose project as a whole.

Like you mentioned, you can pass the options --memory and --memory-swap to each container (these are docker run flags, not Dockerfile instructions).
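For instance, a minimal sketch of a per-container limit with docker run (the container and image names here are placeholders):

```shell
# Limit the container to 4 GB of RAM; setting --memory-swap to the
# same value disallows any additional swap on top of that limit.
docker run -d --name app-a --memory=4g --memory-swap=4g my-image
```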

In your docker-compose.yml file you can also set options to limit or reserve resources.

e.g.:

version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M

You can do this for every service in your Compose file.
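One caveat: with the version 3 file format, the deploy.resources section is applied by docker stack deploy (swarm mode); the classic docker-compose CLI only honors these limits when started with its --compatibility flag:

```shell
# Translate deploy.resources limits into their non-swarm equivalents
docker-compose --compatibility up -d
```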

DOCS: https://docs.docker.com/compose/compose-file/#resources



Source: https://stackoverflow.com/questions/52030217/how-do-i-limit-the-memory-resource-of-a-group-of-docker-containers
