I'm new to Docker, and it's unclear how to access an external database from a container. Is the best way to hard-code the connection string?
Using docker-compose, you can inherit environment variables in docker-compose.yml, and subsequently in any Dockerfile(s) called by docker-compose to build images. This is useful when the Dockerfile's RUN command should execute commands specific to the environment (for example, when your shell already has RAILS_ENV=development set).
docker-compose.yml:
version: '3.1'
services:
  my-service:
    build:
      # $RAILS_ENV references the shell environment's RAILS_ENV variable
      # and passes it to the Dockerfile ARG RAILS_ENV.
      # The syntax below ensures that the RAILS_ENV arg defaults to
      # production if the shell variable is empty or unset.
      # Note that if dockerfile: is not specified, the file name Dockerfile is assumed.
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}
    environment:
      - RAILS_ENV=${RAILS_ENV:-production}
Dockerfile:
FROM ruby:2.3.4
# give the RAILS_ENV ARG a default value of production
ARG RAILS_ENV=production
# assign the RAILS_ENV build arg to the RAILS_ENV ENV variable so that it can be
# accessed by the subsequent RUN command within the container
ENV RAILS_ENV $RAILS_ENV
# the subsequent RUN command reads the RAILS_ENV ENV variable within the container
RUN if [ "$RAILS_ENV" = "production" ] ; then echo "production env"; else echo "non-production env: $RAILS_ENV"; fi
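If the image is built directly with docker build instead of through compose, the same ARG can be supplied explicitly; this is a minimal sketch, and the image tag is just an example:
docker build --build-arg RAILS_ENV=staging -t my-image .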
With the compose setup above, I don't need to specify environment variables in files or on the docker-compose build/up command lines:
docker-compose build
docker-compose up
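To use something other than the production default without editing any files, the variable can be set just for that invocation (a minimal sketch, assuming the compose file above):
# compose substitutes the shell value into ${RAILS_ENV:-production} and forwards it to the Dockerfile ARG
RAILS_ENV=development docker-compose build
RAILS_ENV=development docker-compose up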
There is a handy hack for piping host machine environment variables into a Docker container:
env > env_file && docker run --env-file env_file image_name
Use this technique very carefully, because env > env_file will dump ALL of the host machine's environment variables to env_file and make them accessible in the running container.
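If only a few variables are needed, a safer variant is to filter the dump first; this sketch assumes the variables of interest share a common prefix (RAILS_ is only an example):
# pass only variables whose names start with RAILS_, not the whole host environment
env | grep '^RAILS_' > env_file && docker run --env-file env_file image_name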
The problem I had was that I was putting --env-file at the end of the command:
docker run -it --rm -p 8080:80 imagename --env-file ./env.list
Fix:
docker run --env-file ./env.list -it --rm -p 8080:80 imagename
You can pass environment variables to your containers with the -e flag.
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
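One way to confirm the variables actually reached the running container is to inspect its environment with docker exec (reusing container_name from the command above):
sudo docker exec container_name env | grep POSTGRES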
Or, if you don't want to have the value on the command line, where it will be displayed by ps, etc., -e can pull in the value from the current environment if you just give it without the =:
sudo PASSWORD='foo' docker run [...] -e PASSWORD [...]
If you have many environment variables and especially if they're meant to be secret, you can use an env-file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
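A minimal ./env.list for the command above could look like this; the variable names are borrowed from the earlier examples and are only placeholders:
# lines starting with # are comments and are ignored
REDIS_NAMESPACE=staging
SITE_URL=staging.mysite.com
POSTGRES_ENV_POSTGRES_USER=bar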
Use the -e or --env flag to set environment variables (default []).
An example from a startup script:
docker run -e myhost='localhost' -it busybox sh
If you want to pass multiple environment variables from the command line, use the -e flag before each variable.
Example:
sudo docker run -d -t -i -e NAMESPACE='staging' -e PASSWORD='foo' busybox sh
Note: Make sure to put the image name after the environment variables, not before them.
If you need to set up many variables, use the --env-file flag.
For example,
$ docker run --env-file ./my_env ubuntu bash
For any other help, look at the Docker help:
$ docker run --help
Official documentation: https://docs.docker.com/compose/environment-variables/
Another way is to use the powers of /usr/bin/env
:
docker run ubuntu env DEBUG=1 path/to/script.sh
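Inside the container, whatever env sets is visible to the script like any other environment variable; a hypothetical path/to/script.sh might read it like this:
#!/bin/sh
# DEBUG was injected by `env DEBUG=1` on the docker run command line
if [ "$DEBUG" = "1" ]; then
    echo "debug output enabled"
else
    echo "debug output disabled"
fi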