Question
I am running a container on a VM. By default, the container writes its logs to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log until the disk is full.
Currently, I have to delete this file manually to keep the disk from filling up. I read that Docker 1.8 will add a parameter to rotate the logs. What would you recommend as a workaround in the meantime?
Answer 1:
Docker 1.8 has been released with a log rotation option. Adding:
--log-opt max-size=50m
when the container is launched does the trick. You can learn more at: https://docs.docker.com/engine/admin/logging/overview/
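For instance, a minimal sketch (the image and container name are placeholders, and max-file is optional but useful to also cap the number of rotated files):
# run with the default json-file driver, rotating at 50 MB and keeping at most 5 files
docker run -d --name my-app --log-opt max-size=50m --log-opt max-file=5 my-image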
Answer 2:
CAUTION: This is for docker-compose version 2 only
Example:
version: '2'
services:
  db:
    container_name: db
    image: mysql:5.7
    ports:
      - 3306:3306
    logging:
      options:
        max-size: 50m
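As a quick check (a sketch assuming the db service above), you can confirm the options were applied once the stack is up:
docker-compose up -d db
# prints the effective log driver and options for the container
docker inspect --format '{{json .HostConfig.LogConfig}}' db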
Answer 3:
Caution: this post relates to docker versions < 1.8 (which don't have the --log-opt option).
Why don't you use logrotate (which also supports compression)?
/var/lib/docker/containers/*/*-json.log {
  hourly
  rotate 48
  compress
  dateext
  copytruncate
}
Configure it either directly on your CoreOS node or deploy a container (e.g. https://github.com/tutumcloud/logrotate) which mounts /var/lib/docker to rotate the logs.
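A sketch of testing such a config on the host, assuming the stanza above was saved as /etc/logrotate.d/docker-containers (an example path):
# dry run: verify the config parses and matches the log files, without rotating anything
sudo logrotate -d /etc/logrotate.d/docker-containers
# force an immediate rotation to test it end to end
sudo logrotate -f /etc/logrotate.d/docker-containers
Note that the hourly directive only has an effect if logrotate itself is invoked at least hourly; the stock cron job on many distros runs it daily.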
Answer 4:
Pass the log options when running a container. For example:
sudo docker run -ti --name visruth-cv-container --log-opt max-size=5m --log-opt max-file=10 ubuntu /bin/bash
where --log-opt max-size=5m specifies the maximum log file size to be 5 MB, and --log-opt max-file=10 specifies the maximum number of files kept in rotation.
Answer 5:
Example for docker-compose version 1:
mongo:
  image: mongo:3.6.16
  restart: unless-stopped
  log_opt:
    max-size: 1m
    max-file: "10"
Answer 6:
[This answer covers current versions of docker for those coming across the question long after it was asked.]
To set the default log limits for all newly created containers, you can add the following in /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
Then reload docker with systemctl reload docker if you are using systemd (otherwise use the appropriate restart command for your install).
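To sanity-check that the new default took effect (a sketch; the nginx image and log-test name are placeholders), inspect a container created after the reload, since existing containers keep the log settings they were created with:
# the daemon-wide default driver
docker info --format '{{.LoggingDriver}}'
# the effective driver and options on a freshly created container
docker run -d --name log-test nginx
docker inspect --format '{{json .HostConfig.LogConfig}}' log-test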
You can also switch to the local logging driver with a similar file:
{
  "log-driver": "local",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
The local logging driver stores the log contents in an internal format (I believe protobufs), so you will get more log contents in the same size logfile (or take less file space for the same logs). The downside of the local driver is that external tools, like log forwarders, may not be able to parse the raw logs. Be aware that docker logs only works when the log driver is set to json-file, local, or journald.
The max-size is a limit on the docker log file, so it includes the json or local log formatting overhead. The max-file is the number of logfiles docker will maintain. Once the size limit is reached on one file, the logs are rotated, and the oldest logs are deleted when you exceed max-file.
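As a rough worked example, a max-size of 10m with a max-file of 3 caps each container at about 3 × 10 MB ≈ 30 MB of log data on disk, including the formatting overhead. To see what the json-file logs currently occupy on a host, something like:
# total size of all json-file container logs, including rotated ones (needs root to read them)
sudo du -ch /var/lib/docker/containers/*/*-json.log* | tail -n 1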
For more details, docker has documentation on all the drivers at: https://docs.docker.com/config/containers/logging/configure/
I also have a presentation covering this topic. Use P to see the presenter notes: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#logs
Source: https://stackoverflow.com/questions/31829587/docker-container-logs-taking-all-my-disk-space