Can (or should) 2 docker containers interact with each other via localhost?

Asked 2021-02-20 06:28

We're dockerizing our microservices app, and I ran into some service-discovery issues.

The app is configured as follows:

When a service is started in 'non-local'

3 Answers
  • 2021-02-20 06:53

    In production, never use docker or docker-compose alone. Use an orchestrator (Rancher, Docker Swarm, Kubernetes, ...) and deploy your stack there; the orchestrator will take care of the networking. Your containers can link to each other, so you can reach them directly by name (without worrying too much about IPs).

    Locally, use docker-compose to start up your containers and use links. Do not use a local port but the name of the link: if container A needs to access container B on port 1234, link B into A under the name BBBB and use tcp://BBBB:1234 to reach B from A (see the sketch at the end of this answer).

    If you really want to bind ports to your localhost and use those, access the ports via your host's IP, not localhost.
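
    A minimal sketch of that linking setup in v1 docker-compose syntax (the build paths ./a and ./b are placeholders, and BBBB is just the alias from the example above):

    a:
      build: ./a        # the service that needs to reach b
      links:
        - "b:BBBB"      # inside a, the hostname BBBB resolves to container b
    b:
      build: ./b        # the service listening on port 1234
      expose:
        - "1234"

    With this in place, code running in a can connect to tcp://BBBB:1234 and reach b.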

  • 2021-02-20 06:56

    If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports in each local container to the required services in the other containers.

    This would create some complications, though, because you would have to set up ssh in each of your containers and manage the corresponding keys.

    Come to think of it, if encryption is not an issue, ssh is not necessary; using socat or redir would probably be enough. For example, the following listens on local port 61001 and forwards each incoming connection to the same port on another container:

    socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001
    
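    If the containers are started with docker-compose, one way to wire such a forward in is to run it from the service's command. This is only a sketch: the service names, the ./start-service entrypoint, and the assumption that socat is installed in the image are all illustrative.

    servicea:
      build: ./servicea
      links:
        - serviceb
      # start a background socat forward so the hard-coded tcp://localhost:61001
      # inside this container actually reaches serviceb, then start the real service
      command: sh -c 'socat TCP4-LISTEN:61001,fork TCP4:serviceb:61001 & exec ./start-service'
    serviceb:
      expose:
        - "61001"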
  • 2021-02-20 07:06

    "But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work."

    Actually, they can. You are right that tcp://localhost:61001 will not work, because localhost within a container refers to the container itself, just as localhost does on any system by default. As far as networking goes, each container is its own host, so your services do not share one. If you wanted them to, you could run both services in a single container, but that really isn't the best design, since it defeats one of the main purposes of Docker Compose.

    The ideal way to do it is with docker-compose links. The guide you referenced shows how to define them; to actually use one, put the linked container's name in your URLs, as if that name had an IP mapping in the original container's /etc/hosts (not that it actually does, but just so you get the idea). If you want to use something different from the linked container's name, use a link alias, which is explained in the same guide.

    For example, with a docker-compose.yml file like this:

    a:
      expose:
        - "9999"
    b:
      links:
        - a
    

    With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run

    ping a
    

    which would send ping requests to the a container from the b container.

    So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that

    tcp://<container_name>:61001
    

    should work instead of

    tcp://localhost:61001
    

    Just make sure you define the link in docker-compose.yml.
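
    For instance, applied to the port from the question (the service names discovery and worker are only illustrative):

    discovery:
      expose:
        - "61001"    # the service the others currently try to reach on localhost:61001
    worker:
      links:
        - discovery

    From inside worker, the request URL then becomes tcp://discovery:61001 instead of tcp://localhost:61001.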

    Hope this helps
