Question
I have set up Docker and used a completely different block device to store Docker's system data:
[root@blink1 /]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
other_args="-H tcp://0.0.0.0:9367 -H unix:///var/run/docker.sock -g /disk1/docker"
Note that /disk1
is on a completely different hard drive, /dev/xvdi:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 5.1G 2.6G 67% /
devtmpfs 1.9G 108K 1.9G 1% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/xvdi 20G 5.3G 15G 27% /disk1
/dev/dm-1 9.8G 1.7G 7.6G 18% /disk1/docker/devicemapper/mnt/bb6c540bae25aaf01aedf56ff61ffed8c6ae41aa9bd06122d440c6053e3486bf
/dev/dm-2 9.8G 1.7G 7.7G 18% /disk1/docker/devicemapper/mnt/c85f756c59a5e1d260c3cdb473f3f4d9e55ac568967abe190eeaf9c4087afeac
The problem is that as I keep downloading Docker images and running containers, it seems the other hard drive, /dev/xvda1,
also fills up.
I can verify this by removing some Docker images: after I removed a few images, /dev/xvda1
had some extra free space again.
Am I missing something?
My docker info output:
[root@blink1 /]# docker info
Containers: 2
Images: 42
Storage Driver: devicemapper
Pool Name: docker-202:1-275421-pool
Pool Blocksize: 64 Kb
Data file: /disk1/docker/devicemapper/devicemapper/data
Metadata file: /disk1/docker/devicemapper/devicemapper/metadata
Data Space Used: 3054.4 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 4.7 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.14.20-20.44.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
Answer 1:
It's a kernel problem with devicemapper, which affects the Red Hat family of operating systems (RHEL, Fedora, CentOS, and Amazon Linux). Deleted containers don't free up mapped disk space, which means that on the affected OSes you'll slowly run out of space as you start and restart containers.
The Docker project is aware of this, and the kernel is supposedly fixed upstream (https://github.com/docker/docker/issues/3182).
A work-around of sorts is to give Docker its own volume to write to ("When Docker eats up your disk space"). This doesn't actually stop it from eating space, just from taking down other parts of your system after it does.
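For reference, that work-around is roughly what the question is already doing: mount a dedicated filesystem and point the daemon's data directory at it. A minimal sketch for the sysconfig-style setup used on Amazon Linux at the time (-g is the legacy spelling of the data-directory option; the /disk1 mount point is taken from the question):
# /etc/sysconfig/docker -- assumes /disk1 is its own mount
other_args="-g /disk1/docker"
# then restart the daemon
sudo service docker restart
This only changes where the leaked space accumulates; it does not stop devicemapper from leaking it.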
My solution was to uninstall docker, then delete all its files, then reinstall:
sudo yum remove docker
sudo rm -rf /var/lib/docker
sudo yum install docker
This got my space back, but it's not much different from just launching a replacement instance. I have not found a nicer solution.
Answer 2:
Deleting my entire /var/lib/docker is not acceptable for me. Here are safer ways:
Solution 1:
The following commands from the issue cleared up space for me, and they are a lot safer than deleting /var/lib/docker. (On Windows, check your disk image location instead.)
Before:
docker info
Example output:
Metadata file:
Data Space Used: 53.38 GB
Data Space Total: 53.39 GB
Data Space Available: 8.389 MB
Metadata Space Used: 6.234 MB
Metadata Space Total: 54.53 MB
Metadata Space Available: 48.29 MB
Command for newer versions of Docker, e.g. 17.x+:
docker system prune -a
It will show a warning that it will remove all stopped containers, unused networks, unused images, and the build cache. Generally this is safe to remove. (The next time you run a container, it may pull the image from the Docker registry again.)
Example output:
Total reclaimed space: 1.243GB
You can then run docker info again to see what has been cleaned up
docker info
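If you would rather keep recently created images and containers, newer Docker releases also accept a time filter on prune (a hedged variant; exact filter support depends on your Docker version, roughly 17.06+):
# only prune objects created more than 72 hours ago
docker system prune -a --filter "until=72h"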
Solution 2:
Along with this, make sure the programs running inside your Docker containers are not writing many huge files to the filesystem.
Check the disk usage of your running containers:
docker ps -s #may take minutes to return
or, for all containers, including exited ones:
docker ps -as #may take minutes to return
You can then delete the offending container(s):
docker rm <CONTAINER ID>
Find the possible culprit that may be using gigabytes of space:
docker exec -it <CONTAINER ID> "/bin/sh"
du -h
In my case the program was writing gigs of temp files.
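To narrow that down faster inside the container, a small sketch using standard GNU tools (sort -h needs GNU sort, so a minimal busybox-based image may not have it):
# list the 20 largest paths first
du -ah / 2>/dev/null | sort -rh | head -n 20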
(Nathaniel Waisbrot mentioned this issue in the accepted answer, and I got some of this info from that issue.)
OR
Commands for older versions of Docker, e.g. 1.13.x (run as root, not with sudo):
# Delete 'exited' containers
docker rm -v $(docker ps -a -q -f status=exited)
# Delete 'dangling' images (if there are no dangling images you will get an error: "rmi" requires a minimum of 1 argument)
docker rmi $(docker images -f "dangling=true" -q)
# Delete 'dangling' volumes (if there are no dangling volumes you will get an error: "volume rm" requires a minimum of 1 argument)
docker volume rm $(docker volume ls -qf dangling=true)
After:
> docker info
Metadata file:
Data Space Used: 1.43 GB
Data Space Total: 53.39 GB
Data Space Available: 51.96 GB
Metadata Space Used: 577.5 kB
Metadata Space Total: 54.53 MB
Metadata Space Available: 53.95 MB
Answer 3:
I had a similar problem, and I think it happens when you don't have enough disk space for all your Docker images. I had 6 GB reserved for Docker images, which turned out not to be enough in my case. Anyway, even after removing every image and container, the disk still looked full. Most of the space was being used by /var/lib/docker/devicemapper and /var/lib/docker/tmp.
This command didn't work for me:
# docker ps -qa | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/
First, I stopped the Docker service:
sudo service docker stop
Then I did what somebody suggested in https://github.com/docker/docker/issues/18867#issuecomment-232301073:
Remove the existing Docker metadata:
sudo rm -rf /var/lib/docker
Pass the following options to the Docker daemon: -s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.mountopt=discard
Start the Docker daemon.
For the last two steps, I ran:
sudo dockerd -s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.mountopt=discard
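As a side note, rather than typing those flags on the dockerd command line each time, the same settings can usually live in /etc/docker/daemon.json (a sketch, assuming a daemon new enough to read that file):
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.fs=xfs",
    "dm.mountopt=discard"
  ]
}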
Answer 4:
Move the /var/lib/docker
directory.
This assumes the /data
directory has enough room; if not, substitute one that does:
sudo systemctl stop docker
sudo mv /var/lib/docker /data
sudo ln -s /data/docker /var/lib/docker
sudo systemctl start docker
This way, you don't have to reconfigure docker.
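If you would rather not leave a symlink behind, newer daemons can be pointed at the new path directly (a sketch, assuming Docker 17.05+ where the daemon.json key is data-root; older releases used the -g/--graph option instead), and docker info confirms the result:
sudo systemctl stop docker
sudo mv /var/lib/docker /data/docker
# add to /etc/docker/daemon.json:  { "data-root": "/data/docker" }
sudo systemctl start docker
docker info | grep -i "docker root dir"   # should now print /data/docker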
Answer 5:
As mentioned in GitHub issue #18867 ("Delete data in a container devicemapper can not free used space"),
try running the command below:
# docker ps -qa | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/
It uses the fstrim tool to trim the devicemapper thinly-provisioned disk.
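You can gauge how much the trim reclaimed by comparing the devicemapper counters in docker info before and after (the field name is as shown in the question's output):
docker info | grep "Data Space Used"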
Answer 6:
By default, docker system prune does not remove volumes.
You can try something like
docker system prune --volumes
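Before running it, you can preview roughly which volumes would be affected; the dangling filter below is the same one the older cleanup commands in answer 2 use:
# volumes not referenced by any container
docker volume ls -f dangling=true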
Answer 7:
Maybe you can try docker system prune
to remove unused data (stopped containers, dangling images, unused networks, and build cache).
Answer 8:
I had this problem occur once in a while on macOS with Engine 19.03.2. In my case, rebuilding the image takes a long time, so deleting the images or pruning wasn't a feasible option.
The solution was to save the image to a tar file, delete the image, quit Docker, then reload the image from the tar file. The commands for each step are below.
docker save -o <name>.tar <image-name>
docker rmi <image-name>
- Quit Docker (observe the Docker.qcow2 file shrinking after this)
- Restart Docker
docker load -q -i <name>.tar
Try these steps for all the images if the size does not go down for a single image. My suggestion is to start with older images rather than newer ones. (You can save and delete several at once.)
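If you have many images, the save-and-delete half can be batched with a small loop (a sketch; the image names are placeholders, and the reload loop runs only after Docker has been restarted):
# save and remove a batch of images (names are examples)
for img in repo/app:1.0 repo/worker:1.0; do
    docker save -o "$(echo "$img" | tr '/:' '__').tar" "$img"
    docker rmi "$img"
done
# ...quit and restart Docker, then reload everything:
for f in *.tar; do docker load -q -i "$f"; done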
Reference: https://dbaontap.com/2017/07/18/clean-qcow2-docker-macbook/
Answer 9:
Yes, Docker uses the /var/lib/docker folder to store the layers. There are ways to reclaim the space and to move the storage to some other directory.
You can mount a bigger disk, move the contents of /var/lib/docker to the new mount location, and create a symlink.
There is a detailed explanation of how to do this here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html
You can remove the intermediate layers, too:
https://github.com/vishalvsh1/docker-image-cleanup
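On Docker 1.13 and later, dangling intermediate layers can also be pruned directly, which is a rough built-in equivalent of the linked cleanup script:
# remove dangling (untagged) image layers
docker image prune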
Source: https://stackoverflow.com/questions/27853571/why-is-docker-image-eating-up-my-disk-space-that-is-not-used-by-docker