How do I set up linkage between Docker containers so that restarting won't break it?

隐瞒了意图╮  2020-12-02 05:40

I have a few Docker containers running:

  • Nginx
  • Web app 1
  • Web app 2
  • PostgreSQL

Since Nginx needs to connect to the web app servers, and the web apps need to talk to PostgreSQL, I link the containers together. The problem is that a linked container gets a new IP address whenever it is restarted, which breaks the existing links. How do I set up the linkage so that restarting a container won't break it?

11 Answers
  • 2020-12-02 06:09

    You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:

    postgres volume:

    $ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
    

    postgres-container:

    $ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
    

    ambassador-container for postgres:

    $ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
    

    Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):

    $ docker run --rm -t -i paintedfox/postgresql /bin/bash
    root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
    root@b94251eac8be:/# echo $PGHOST
    172.17.42.1
    root@b94251eac8be:/#
    root@b94251eac8be:/# psql -h $PGHOST --user postgres
    Password for user postgres: 
    psql (9.3.4)
    SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
    Type "help" for help.
    
    postgres=#
    postgres=# select 6*7 as answer;
     answer 
    --------
         42
    (1 row)
    
    postgres=# 
    

    Now you can restart the ambassador container without having to restart the client.
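
    For example, as a quick check (a sketch reusing the container names from the commands above), the ambassador can be bounced while the client container keeps running:

    # restart only the ambassador; the gateway address the client uses stays the same
    $ docker restart pg_ambassador

    # back in the still-running client container, simply reconnect via the gateway
    root@b94251eac8be:/# psql -h $PGHOST --user postgres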

  • 2020-12-02 06:10

    If anyone is still curious: use the host entries in the /etc/hosts file of each Docker container, and do not depend on the environment variables, as they are not updated automatically.

    There will be a hosts file entry for each of the linked containers, alongside the environment variables of the form LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP, etc.

    The following is from the Docker docs:

    Important notes on Docker environment variables

    Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.

    These environment variables are only set for the first process in the container. Some daemons, such as sshd, will scrub them when spawning shells for connection.
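
    A minimal sketch of that idea, assuming the postgres container from the other answers and linking it under the alias db (both names are just examples):

    $ docker run --rm -t -i --link postgres:db paintedfox/postgresql /bin/bash
    # the link shows up as a hosts entry that Docker keeps current across restarts
    root@client:/# grep db /etc/hosts
    172.17.0.5    db
    # connect by name instead of via the *_PORT_* environment variables
    root@client:/# psql -h db --user postgres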

  • 2020-12-02 06:16

    A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service for the whole network, unlike link aliases accessible only from one container.

    It does not add any kind of dependency between containers: they can communicate as long as both are running, regardless of restarts, replacements, and launch order. It uses DNS internally, I believe, rather than /etc/hosts.

    Use it like this: docker run --net=some_user_defined_nw --net-alias=postgres ..., and you can connect to it using that alias from any container on the same network.

    Unfortunately, it does not work on the default network; you have to create one with docker network create <network> and then use it with --net=<network> for every container (Compose supports it as well).
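
    A minimal sketch of the whole flow (the network name backend and the image are placeholders chosen for this example):

    # create a user-defined network once
    $ docker network create backend

    # start postgres with a network-scoped alias
    $ docker run -d --name pg1 --net=backend --net-alias=postgres -e USER=postgres -e PASS='postgres' paintedfox/postgresql

    # any container on the same network can reach it by the alias,
    # even if pg1 is later replaced by another container carrying the same alias
    $ docker run --rm -t -i --net=backend paintedfox/postgresql psql -h postgres --user postgres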

    Besides a container being down and hence unreachable by its alias, multiple containers can also share an alias, in which case it is not guaranteed to resolve to the right one. But in some cases that can help with a seamless upgrade.

    It's all not very well documented yet and hard to figure out just by reading the man page.

  • 2020-12-02 06:20

    With the OpenSVC approach, you can work around this by:

    • using a service with its own IP address/DNS name (the one your end users will connect to)
    • telling Docker to expose ports on this specific IP address (the "--ip" docker option)
    • configuring your apps to connect to the service IP address

    Each time you replace a container, you can be sure that it will connect to the correct IP address.
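
    For illustration only (10.0.0.50 stands in for the service's dedicated IP address, which OpenSVC manages for you), the port publishing would look roughly like this:

    # bind the published port to the service's stable IP instead of 0.0.0.0,
    # so clients always target the same address no matter which container backs it
    $ docker run -d --name postgres -p 10.0.0.50:5432:5432 paintedfox/postgresql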

    Tutorial here => Docker Multi Containers with OpenSVC

    don't miss the "complex orchestration" part at the end of tuto, which can help you start/stop containers in the correct order (1 postgresql subset + 1 webapp subset + 1 nginx subset)

    The main drawback is that you expose the webapp and PostgreSQL ports on a public address, while actually only the nginx TCP port needs to be exposed publicly.

  • 2020-12-02 06:21

    Another alternative is to use the --net container:$CONTAINER_ID option.

    Step 1: Create "network" containers

    docker run --name db_net ubuntu:14.04 sleep infinity
    docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
    docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
    docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity
    

    Step 2: Inject services into "network" containers

    docker run --name db --net container:db_net pgsql
    docker run --name app1 --net container:app1_net app1
    docker run --name app2 --net container:app2_net app2
    docker run --name nginx --net container:nginx_net nginx
    

    As long as you do not touch the "network" containers, the IP addresses of your links should not change.
