Heroku describes logs in its Twelve-Factor App manifesto as simple event streams:
Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services.
Docker 1.6 introduced the notion of logging drivers to offer more control over log output. The --log-driver flag configures where stdout and stderr from the process running in a container should be directed. See also Configuring Logging drivers.
Several drivers are available. Note that all of them except json-file disable the use of docker logs to gather container logs.

json-file (the default): writes each container's output to /var/lib/docker/containers/<container-id>/<container-id>-json.log on the host.
syslog: use --log-opt to direct log messages to a specified syslog endpoint via TCP, UDP or a Unix domain socket. Also disables docker logs.
Additional drivers were introduced in Docker 1.8 and Docker 1.9.
For example:
docker run --log-driver=syslog --log-opt syslog-address=tcp://10.0.0.10:1514 ...
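If you are not sure which driver a given container ended up with, docker inspect can tell you. A quick check, using a hypothetical container name myapp:

    docker inspect --format '{{ .HostConfig.LogConfig.Type }}' myapp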
This is the Docker-recommended solution for software that writes its log messages to stdout and stderr. Some software, however, does not write log messages to stdout/stderr; it writes to log files or to syslog instead. In those cases, some of the details from the original answer below still apply. To recap:
If the app writes to a local log file, mount a volume from the host (or use a data-only container) into the container and have the app write its log messages to that location (see the sketch after this list).
If the app writes to syslog, there are several options. One is to send messages to the host's syslog by mounting the host's syslog socket (/dev/log) into the container using -v /dev/log:/dev/log.
Don't forget that any logs within a container should be rotated just as they would be on a host OS.
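As a sketch of the two recap options above (the host paths, container name and image name are made up for illustration):

    # App writes to a file: mount a host directory over the app's log directory
    docker run -d --name myapp -v /var/log/myapp:/var/log/myapp myimage

    # App writes to syslog: expose the host's syslog socket inside the container
    docker run -d --name myapp -v /dev/log:/dev/log myimage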
Is it safe to rely on Docker's own log facility (docker logs)?
docker logs prints the entire stream each time, not just new log lines, so it's not appropriate on its own. docker logs --follow gives tail -f-like functionality, but then you have a docker CLI command running all the time. So while it is safe to run docker logs, it's not optimal.
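For instance, with a hypothetical container named myapp:

    docker logs myapp            # prints the whole log from the beginning each time
    docker logs --follow myapp   # streams new output, similar to tail -f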
Is it safe to run docker undetached and consider its output as the logging stream?
You can start containers with systemd without daemonizing them, thus capturing all of stdout in the systemd journal, which can then be managed by the host however you'd like.
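A minimal unit file along those lines might look like the following sketch; the container name myapp, the image name myimage and the docker binary path are assumptions, so adjust them for your system:

    [Unit]
    Description=myapp container
    Requires=docker.service
    After=docker.service

    [Service]
    # Remove any stale container, then run in the foreground so stdout goes to the journal
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --rm --name myapp myimage
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target

Because ExecStart runs docker run undetached, everything the container writes to stdout/stderr ends up in journalctl -u on the host.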
Can stdout be redirected to a file directly (disk space)?
You could do this with docker run ... > logfile, of course, but it feels brittle and harder to automate and manage.
If using a file, should it be inside the docker image or a bound volume (docker run --volume=[])?
If you write inside the container, then you need to run logrotate or something similar in the container to manage the log files. Better to mount a volume from the host and control the logs using the host's log rotation daemon.
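For example, if the container's logs land in a host-mounted directory such as /var/log/myapp (a made-up path), a logrotate rule on the host could look roughly like this:

    # /etc/logrotate.d/myapp  (on the host)
    /var/log/myapp/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }

copytruncate is handy here because the process inside the container keeps its file handle open and is awkward to signal to reopen its log file.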
Is logrotation required?
Sure, if the app writes logs you need to rotate them just as you would in a native OS environment. But it's harder if you write inside the container, since the log file location isn't as predictable. If you rotate on the host, the log file lives under a storage-driver-specific path, for example /var/lib/docker/devicemapper/mnt/ with devicemapper as the storage driver, and some ugly wrapper would be needed to have logrotate find the logs under that path.
Is it safe to redirect stdout directly into a log shipper (and which log shipper)?
Better to use syslog and let the log collector deal with syslog.
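As an illustration of the collector side, an rsyslog instance on the collector host could accept the TCP traffic from the earlier --log-driver=syslog example with a snippet like this (the port mirrors that example; the file name and location are arbitrary assumptions):

    # /etc/rsyslog.d/10-docker.conf on the collector (10.0.0.10 in the example above)
    module(load="imtcp")
    input(type="imtcp" port="1514")

Received messages are then handled by the normal rsyslog rules on that host.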
Is a named pipe (aka FIFO) an option?
A named pipe isn't ideal because if the reading end of the pipe dies, the writer (the container) will get a broken pipe. Even if that event is handled by the app, it will be blocked until there is a reader again. Plus it circumvents docker logs
.
See also this post on using fluentd with Docker.
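Since Docker 1.8 there is also a native fluentd logging driver. A sketch, assuming a fluentd instance listening on localhost:24224 and a hypothetical image myimage:

    docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 myimage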
See Jeff Lindsay's tool logspout that collects logs from running containers and routes them however you want.
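At the time of writing, logspout is started along these lines (check its README for the exact invocation; the syslog endpoint below is a placeholder):

    docker run -d --name logspout \
        -v /var/run/docker.sock:/var/run/docker.sock \
        gliderlabs/logspout syslog://logs.example.com:514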
Finally, note that with the default json-file driver, stdout from the container is logged to a file on the host under /var/lib/docker/containers/.
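If you just want to eyeball that file for a container named, say, myapp (a placeholder), one way with the default json-file driver is:

    # Resolve the full container ID, then tail its JSON log on the host
    CID=$(docker inspect --format '{{ .Id }}' myapp)
    sudo tail -f /var/lib/docker/containers/$CID/$CID-json.log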