I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose?
There are various options:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
(You can also just reference previously exported variables, see USER below.)
The one answering your question about an .env file is:
$ cat env.list
# This is a comment
VAR1=value1
VAR2=value2
USER

$ docker run --env-file env.list ubuntu env | grep VAR
VAR1=value1
VAR2=value2

$ docker run --env-file env.list ubuntu env | grep USER
USER=denis
You can also load the environment variables from a file. This file should use the syntax variable=value (which sets the variable to the given value) or variable (which takes the value from the local environment), and # for comments.
Regarding the difference between variables needed at (image) build time or (container) runtime, and how to combine ENV and ARG for dynamic build arguments, you might try this: ARG or ENV, which one to use in this case?
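As a rough sketch of that combination (APP_VERSION is just a made-up example variable, not from the question), a Dockerfile can declare a build argument and copy it into a runtime environment variable:

# Dockerfile sketch
# ARG is only visible while the image is being built
ARG APP_VERSION=dev
# ENV persists the value into every container started from the image
ENV APP_VERSION=$APP_VERSION
CMD ["sh", "-c", "echo running version $APP_VERSION"]

Built and run with something like:

docker build --build-arg APP_VERSION=1.2.3 -t versioned-app .
docker run versioned-app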
Yes, there are a couple of ways you can do this.
In Docker Compose, you can supply environment variables in the file itself, or point to an external env file:
# docker-compose.yml
version: '2'
services:
  service-name:
    image: service-app
    environment:
      - GREETING=hello
    env_file:
      - .env
Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment):
# docker-compose-dev.yml
version: '2'
services:
  service-name:
    environment:
      - GREETING=goodbye
You can then run it thus:
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
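If you want to check what the merged configuration looks like before starting anything, docker-compose can print it for you:

docker-compose -f docker-compose.yml -f docker-compose-dev.yml config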
To do this in Docker only, use your entrypoint or command to run an intermediate script, thus:
#Dockerfile
....
ENTRYPOINT ["sh", "bin/start.sh"]
And then in your start script:
#!/bin/sh
# `source` is a bashism; plain sh uses `.` to read a file into the current shell
. ./.env
python /manager.py
I've used this related answer as a helpful reference for myself in the past.
To amplify my remark in the comments, if you make your entry point a shell or Python script, it is likely that Unix signals (stop, kill, etc.) will not be passed on to your process. This is because that script becomes process ID 1, the parent of all other processes in the container. In Linux/Unix there is an expectation that PID 1 will forward signals to its children, but unless you explicitly implement that, it won't happen.
To rectify this, you can install an init system. I use dumb-init from Yelp. This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
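As a rough sketch of how that can be wired up (assuming a Debian/Ubuntu-based image where the dumb-init package is available via apt, and reusing the start.sh launcher from above):

# Dockerfile sketch
FROM python:3-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends dumb-init \
    && rm -rf /var/lib/apt/lists/*
COPY . /app
WORKDIR /app
# dumb-init runs as PID 1 and forwards signals to the launcher's process group
ENTRYPOINT ["/usr/bin/dumb-init", "--", "sh", "bin/start.sh"]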
If you need environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple export statements and then launches your process.
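A minimal sketch of such a launcher (the variable names are placeholders, and /manager.py is borrowed from the example above):

#!/bin/sh
# export whatever the application expects to find in its environment
export DB_HOST=db.example.com
export DB_PORT=5432
# exec replaces the shell so the application becomes the running process
exec python /manager.py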
If you need them at build time, have a look at the ARG and ENV statements. You'll need one per variable.
I really like @halfer's approach, but this could also work: docker run takes an optional parameter called --env-file, which is super helpful.
So your Dockerfile could look like this:
COPY .env .env
and then in a build script use:
docker build -t my_docker_image . && docker run --env-file .env my_docker_image