docker-swarm-mode

docker service update vs docker stack deploy with existing stack

て烟熏妆下的殇ゞ · Submitted on 2019-12-10 13:24:04
Question: I have a question about using Docker swarm mode commands to update existing services after deploying a set of services with `docker stack deploy`. As far as I understand, every service is pinned to the SHA256 digest of its image at creation time, so if you rebuild and push an image (with the same tag) and then run `docker service update`, the service image is not updated (even though the SHA256 digest has changed). By contrast, if you run `docker stack deploy` again, all the services are updated with
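A commonly cited workaround is to pass the tag explicitly to `docker service update`, which makes the Engine re-resolve the tag to its current digest. A minimal sketch, assuming a service named `web` and a rebuilt image `myrepo/myimage:latest` (both names are illustrative):

```shell
# Re-resolve the tag to its current digest and roll the service
docker service update --image myrepo/myimage:latest --with-registry-auth web

# Alternatively, force a redeploy of all tasks even if Docker
# believes nothing has changed
docker service update --force web
```

Both commands trigger the service's normal rolling-update behavior, so update-parallelism and update-delay settings still apply.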

What's the main advantage of using replicas in Docker Swarm Mode?

。_饼干妹妹 · Submitted on 2019-12-10 12:44:33
Question: I'm struggling to understand the idea of replica instances in Docker swarm mode. I've read that this feature helps with high availability. However, Docker automatically starts a new task on a different node if a node goes down, even with only 1 replica defined for the service, which also provides high availability. So what is the advantage of having 3 replica instances rather than 1 for an arbitrary service? My assumption was that with more replicas, Docker spends less time creating a
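The difference shows up during failures and rolling updates: with a single replica there is a window with zero running tasks while the replacement is scheduled, pulled, and started, whereas with several replicas the remaining tasks keep serving traffic. A sketch of both sides (service and image names are illustrative):

```shell
# Three replicas: a single node failure still leaves 2 tasks serving
docker service create --name api --replicas 3 myrepo/api:1.0

# Rolling update one task at a time keeps 2 of 3 replicas up throughout
docker service update --update-parallelism 1 --update-delay 10s \
  --image myrepo/api:1.1 api
```

Multiple replicas also spread request load across nodes via the routing mesh, which a single replica cannot do.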

Is network security / encryption provided by default in docker swarm mode?

半城伤御伤魂 · Submitted on 2019-12-10 09:57:24
Question: This document says: "Overlay networking for Docker Engine swarm mode comes secure out of the box. You can also encrypt data exchanged between containers on different nodes on the overlay network. To enable encryption, when you create an overlay network pass the `--opt encrypted` flag:"

```shell
$ docker network create --opt encrypted --driver overlay my-multi-host-network
```

So if all the containers are running on my-multi-host-network, is all the traffic between the containers encrypted
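One way to check whether a given overlay network was created with data-plane encryption is to inspect its driver options (the network name is taken from the question):

```shell
# The "encrypted" key appears under Options when IPsec is enabled
docker network inspect my-multi-host-network --format '{{json .Options}}'
```

Note the distinction the documentation draws: swarm control-plane (management) traffic is encrypted by default, while `--opt encrypted` additionally enables IPsec encryption of application data between containers on that overlay network.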

Docker Swarm - Can’t pull from private registry

我与影子孤独终老i · Submitted on 2019-12-10 05:26:35
Question: I'm running a service on a swarm cluster, thanks to `docker stack deploy --with-registry-auth` and this compose file:

```yaml
version: "3.1"
services:
  builder-consumer:
    image: us.gcr.io/my-gcloud-project/my/image:123
    stop_grace_period: 30m
    volumes:
      - [...]
    environment:
      - [...]
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == worker
    secrets:
      - [...]
secrets: [...]
```

This works fine when I deploy, but when I later add a worker node to the swarm, the new worker can't pull the image required to
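A workaround often suggested for this situation (a sketch; the stack name `mystack` is an assumption) is to re-run the deploy with `--with-registry-auth` after the new worker has joined, so that fresh registry credentials are distributed to all nodes, including the new one:

```shell
# On a manager node, after the new worker has joined the swarm:
docker login us.gcr.io   # refresh the local credentials first
docker stack deploy --with-registry-auth -c docker-compose.yml mystack
```

The credentials sent to workers are a point-in-time copy, so they must be re-sent whenever they expire or when nodes join after the original deploy.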

How do I get host networking to work with docker swarm mode

偶尔善良 · Submitted on 2019-12-07 06:11:09
Question: From this PR, recently merged into Docker's 17.06 release candidate, we now have support for host networking with swarm services. However, trying a very similar command, I'm seeing an error:

```shell
$ docker service create --name nginx-host --network host nginx
Error response from daemon: could not find the corresponding predefined swarm network: network host not found
```

I'm running the 17.06 release candidate:

```
$ docker version
Client:
 Version:      17.06.0-ce-rc2
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   402dd4a
 Built:        Wed Jun 7 10:07:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.0
```
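When the goal is simply to expose a service port directly on each node (bypassing the routing mesh), publishing individual ports in host mode is an alternative that works on these Engine versions without attaching the whole service to the host network (the port numbers here are illustrative):

```shell
# Publish port 80 of each task directly on its node's port 8080,
# skipping the ingress routing mesh
docker service create --name nginx-host \
  --publish mode=host,target=80,published=8080 \
  nginx
```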


Host environment variables with docker stack deploy

两盒软妹~` · Submitted on 2019-12-04 12:13:48
Question: I was wondering if there is a way to use environment variables taken from the host where the container is deployed, instead of from the machine where the `docker stack deploy` command is executed. For example, imagine the following docker-compose.yml launched on a three-node swarm cluster:

```yaml
version: '3.2'
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=$
```
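One mechanism worth knowing here (support depends on the Engine version): swarm services accept Go templates in a few flags, including `--env`, so per-node values such as the node hostname can be substituted at scheduling time rather than at deploy time. A sketch, with the variable name chosen for illustration:

```shell
# {{.Node.Hostname}} is expanded on the node where each task runs
docker service create --name kafka --mode global \
  --env NODE_HOSTNAME="{{.Node.Hostname}}" \
  wurstmeister/kafka
```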

How is load balancing done in Docker-Swarm mode

与世无争的帅哥 · Submitted on 2019-12-04 08:05:07
Question: I'm working on a project to set up a cloud architecture using Docker swarm. I know that with swarm I can deploy replicas of a service, which means multiple containers of that image will be running to serve requests. I've also read that Docker has an internal load balancer that manages this request distribution. However, I need help understanding the following: say I have a container that exposes a service as a REST API, or say it's a web app. And if I have multiple containers (replicas)
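Swarm offers two endpoint modes, which can be seen side by side. In the default VIP mode, the service gets one virtual IP and the kernel's IPVS load-balances connections across the tasks; in DNS round-robin mode, DNS lookups of the service name return the individual task IPs instead. Service and network names below are illustrative:

```shell
# Default (VIP): one virtual IP, connections spread across replicas by IPVS
docker service create --name web --replicas 3 --network mynet nginx

# DNS round-robin: lookups of "web-rr" resolve to each task's IP in turn
docker service create --name web-rr --replicas 3 --network mynet \
  --endpoint-mode dnsrr nginx
```

Externally published ports additionally go through the ingress routing mesh, so a request hitting any node is forwarded to some healthy task.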

How to directly mount NFS share/volume in container using docker compose v3

泪湿孤枕 · Submitted on 2019-12-04 07:33:54
Question: I have a v3 compose file in which three services share the same volume. While using swarm mode we need to create extra containers and volumes to manage our services across the cluster. I am planning to use an NFS server so that a single NFS share is mounted directly on all the hosts within the cluster. I have found the two ways below of doing it, but each needs extra steps to be performed on the Docker host: mount the NFS share using `fstab` or the `mount` command on the host, and then use
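With the `local` volume driver's NFS options, the share can instead be declared directly in the compose file, so each host mounts it on demand when a task starts, without touching fstab. A sketch, where the server address, export path, and service are placeholders:

```yaml
version: "3.2"
services:
  app:
    image: alpine
    volumes:
      - nfs-data:/data
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"
      device: ":/exported/path"
```

The NFS client utilities still need to be installed on every node, and the export must be reachable from all of them.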

How to share volumes across multiple hosts in docker engine swarm mode?

坚强是说给别人听的谎言 · Submitted on 2019-12-03 16:30:17
Question: Can we share a common/single named volume across multiple hosts in Docker Engine swarm mode? What's the easiest way to do it?

Answer 1: If you have an NFS server set up, you can use an NFS folder as a volume from Docker Compose like this:

```yaml
volumes:
  grafana:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xxx.xx,rw
      device: ":/PathOnServer"
```

Answer 2: Out of the box, Docker does not support this by itself. You must use additional components, either a Docker plugin which would provide you with a new
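The same NFS-backed volume can also be created ahead of time with the CLI (a sketch equivalent to the compose snippet above; the address and export path are the placeholders from the answer):

```shell
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.xxx.xx,rw \
  --opt device=:/PathOnServer \
  grafana
```

Because `local`-driver volumes are per-node, this command has to be run on every host where a task might be scheduled, which is why declaring it in the stack file is usually more convenient.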