Docker Networking - nginx: [emerg] host not found in upstream

Submitted by 牧云@^-^@ on 2019-11-29 21:17:13

It is possible to use "volumes_from" as a workaround until the depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as follows:

nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles

One big caveat of this approach is that the volumes of php are exposed to nginx, which is not desirable. But at the moment this is one Docker-specific workaround that could be used.

depends_on feature: this would be a forward-looking answer, because the functionality is not yet implemented in Docker Compose (as of 1.9).

There is a proposal to introduce "depends_on" alongside the new networking feature introduced by Docker, but there is a long-running debate about it at https://github.com/docker/compose/issues/374 Hence, once it is implemented, depends_on could be used to order container start-up, but at the moment you would have to resort to one of the following:

  1. make nginx retry until the php server is up - I would prefer this one
  2. use the volumes_from workaround as described above - I would avoid this, because of the volume leakage into containers that don't need them.

This can be solved with the depends_on directive mentioned above, since it is implemented now (2016):

version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php

  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo

  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles

Successfully tested with:

$ docker-compose version
docker-compose version 1.8.0, build f3628c7

Find more details in the documentation.

There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
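
As a rough illustration of what that article describes (a sketch only, assuming Compose file format 2.1 or later; the nc-based healthcheck command is an assumption about the php image, not part of the original answers), depends_on can be made to wait for a healthcheck rather than for mere container start:

version: '2.1'
services:
  nginx:
    image: nginx
    depends_on:
      php:
        condition: service_healthy   # wait for the healthcheck below, not just "started"

  php:
    build: config/docker/php
    healthcheck:
      # Replace with any command that proves php-fpm is ready; nc may not exist in your image.
      test: ["CMD-SHELL", "nc -z localhost 9000"]
      interval: 5s
      timeout: 3s
      retries: 5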

You can set the max_fails and fail_timeout parameters in nginx to indicate that nginx should retry a given number of connection attempts to the container before treating the upstream server as unavailable.

You can tune these two numbers to suit your infrastructure and the speed at which the whole setup comes up. You can read more details in the health checks section of http://nginx.org/en/docs/http/load_balancing.html

The following is an excerpt from http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server:

max_fails=number

sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable for a duration also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream, scgi_next_upstream, and memcached_next_upstream directives.

fail_timeout=time

sets the time during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable; and the period of time the server will be considered unavailable. By default, the parameter is set to 10 seconds.

To be precise, your modified nginx config file should look as follows (this config assumes that all the containers are up within about 25 seconds; if not, change the fail_timeout or max_fails values in the upstream section below). Note: I didn't test this myself, so give it a try!

upstream phpupstream {
    server waapi_php_1:9000 fail_timeout=5s max_fails=5;
}
server {
    listen  80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass phpupstream;

        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;

        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;

        fastcgi_param HTTPS off;
    }
}

Also, per the following note from Docker (https://github.com/docker/compose/blob/master/docs/networking.md), it is clear that the retry logic for checking the health of other containers is not Docker's responsibility; the containers should do the health checks themselves.

Updating containers

If you make a configuration change to a service and run docker-compose up to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working.

If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.

I believe nginx doesn't take the Docker resolver (127.0.0.11) into account, so could you try adding:

resolver 127.0.0.11;

in your nginx configuration file?
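
If the resolver alone does not help, a common companion trick (a sketch only, assuming the php service name and port 9000 from the compose files above) is to combine the resolver with a variable, which defers DNS resolution from startup to request time:

server {
    listen 80;

    # Docker's embedded DNS server; valid=30s is an arbitrary cache lifetime.
    resolver 127.0.0.11 valid=30s;

    location ~ ^/.+\.php(/|$) {
        # Putting the upstream in a variable makes nginx resolve the name per request,
        # so it no longer refuses to start when the php container is not up yet.
        set $php_upstream "php:9000";
        fastcgi_pass $php_upstream;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
    }
}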

If you are still lost after reading the previous answers, I have reached another solution.

The main problem is the way you named the services.

In this case, if the php service in your docker-compose.yml is called "api" or something like that, you must ensure that the line in nginx.conf that begins with fastcgi_pass uses the same name as the php service, i.e. fastcgi_pass api:9000;

I had the same problem while my docker-compose.yml defined two networks: backend and frontend. When running all containers on the same default network, everything works fine.
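
As an illustration (a sketch only; the network name appnet is made up), attaching both services to one user-defined network is enough for the service name to resolve:

version: '2'
services:
  nginx:
    image: nginx
    networks:
      - appnet   # nginx and php must share a network so the php hostname resolves
  php:
    build: config/docker/php
    networks:
      - appnet

networks:
  appnet: {}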

I had the same problem and solved it. Please add the following lines to the nginx section of docker-compose.yml:

links:
  - php:waapi_php_1

The host referenced in the fastcgi_pass directive of the nginx config should be linked inside the nginx section of docker-compose.yml.

With links there is an order of container startup being enforced. Without links the containers can start in any order (or really all at once).

I think the old setup could have hit the same issue, if the waapi_php_1 container was slow to start up.

I think to get it working, you could create an nginx entrypoint script that polls and waits for the php container to be started and ready.

I'm not sure if nginx has any way to retry the connection to the upstream automatically, but if it does, that would be a better option.
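
For what it's worth, here is a minimal sketch of such an entrypoint (untested; it assumes nc is available in the nginx image and that the php-fpm service is reachable as php on port 9000, as in the compose files above):

#!/bin/sh
# Hypothetical entrypoint: wait until the php upstream accepts connections,
# then start nginx in the foreground as the image normally would.
until nc -z php 9000; do
  echo "waiting for php:9000 ..."
  sleep 1
done

exec nginx -g 'daemon off;'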

You have to use something like docker-gen to dynamically update nginx configuration when your backend is up.

See:

I believe Nginx+ (premium version) contains a resolve parameter too (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream)

Perhaps the best choice to avoid container-linking issues is the Docker networking feature.

But to make this work, Docker creates entries in /etc/hosts inside each container, mapping the name assigned to each container.

With docker-compose --x-networking up, the assigned name is something like [docker_compose_folder]-[service]-[incremental_number].

To avoid depending on unexpected changes in these names, you should use the parameter

container_name

in your docker-compose.yml as follows:

php:
  container_name: waapi_php_1
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

Make sure it is the same name used for this service in your nginx configuration file. I'm pretty sure there are better ways to do this, but it is a good approach to start with.

My Workaround (after much trial and error):

  • In order to get around this issue, I had to get the full name of the 'upstream' Docker container, found by running docker network inspect my-special-docker-network and getting the full name property of the upstream container as such:

    "Containers": {
         "39ad8199184f34585b556d7480dd47de965bc7b38ac03fc0746992f39afac338": {
              "Name": "my_upstream_container_name_1_2478f2b3aca0",
    
  • Then I used this in the NGINX my-network.local.conf file, in the proxy_pass line of the location block (note the GUID-like suffix appended to the container name):

    location / {
        proxy_pass http://my_upstream_container_name_1_2478f2b3aca0:3000;
    }

As opposed to the previously working, but now broken:

    location / {
        proxy_pass http://my_upstream_container_name_1:3000;
    }

The most likely cause is a recent change to Docker Compose's default container naming scheme, as listed here.

This seems to be happening for me and my team at work with the latest versions of the Docker nginx image:

  • I've opened issues with them on the docker/compose GitHub here

(New to nginx.) In my case, the problem was a wrong folder name.

For config

upstream serv {
    server ex2_app_1:3000;
}

make sure the app folder is inside the ex2 folder, since Compose prefixes container names with the project directory name:

ex2/app/...

Two things worth mentioning:

  • Using the same network bridge
  • Using links to add host name resolution

My example:

version: '3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    container_name: mysql
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: tima@123
    network_mode: bridge
  ghost:
    image: ghost:2
    restart: always
    container_name: ghost
    depends_on:
      - mysql
    links:
      - mysql
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: xxxxxxxxx
      database__connection__database: ghost
      url: https://www.itsfun.tk
    volumes:
      - ./ghost-data:/var/lib/ghost/content
    network_mode: bridge
  nginx:
    image: nginx
    restart: always
    container_name: nginx
    depends_on:
      - ghost
    links:
      - ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
       - ./nginx/nginx.conf:/etc/nginx/nginx.conf
       - ./nginx/conf.d:/etc/nginx/conf.d
       - ./nginx/letsencrypt:/etc/letsencrypt
    network_mode: bridge

If you don't specify a special network bridge, all of them will use the same default one.

Add the links section to your nginx container configuration.

You have to make the php container visible to the nginx container.

nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  links:
    - php:waapi_php_1

This is how I got around it; it's the best way I can think of.

ADD root /

# Add placeholder /etc/hosts entries so that `nginx -t` succeeds at build time
# (the upstream containers don't exist yet), then restore the original file.
RUN cp /etc/hosts /etc/hosts.tmp && \
    echo -e "\
127.0.0.1 code_gogs_1 \n\
127.0.0.1 pm_zentao_1 \n\
127.0.0.1 ci_drone_1 \n\
    " >> /etc/hosts && \
    nginx -t && \
# mv: can't rename '/etc/hosts.tmp': Resource busy
# mv /etc/hosts.tmp /etc/hosts
    cat /etc/hosts.tmp > /etc/hosts && \
    rm /etc/hosts.tmp