docker-swarm-mode

Docker swarm mode with Ubuntu and Mac

魔方 西西 submitted on 2020-01-25 10:13:36
Question: I ran docker swarm init on the master node, then ran docker swarm join --token SWMTKN-1-xxxx 192.168.1.105:2377 from the worker nodes. I have 5 nodes in total (3 Ubuntu, 2 Mac). I deploy with docker stack deploy -c docker-compose-worker.yml --with-registry-auth PL. The command above starts a container on each node. However, docker network inspect PL_default shows only 3 peers (all Ubuntu). The 2 Mac nodes can't ping the master or any other node using the IPs listed under "Containers": {
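
The usual culprit when overlay peers are missing from docker network inspect is blocked swarm ports, so below is a minimal sketch of the checks involved; the ufw commands are an assumption about the Ubuntu nodes, not something taken from the question. On the Mac side it is also worth remembering that Docker Desktop runs the engine inside a VM, so its data-path ports may not be reachable from the other hosts at all.

# Swarm overlay networking needs these ports reachable between every pair of nodes:
#   2377/tcp          cluster management (managers)
#   7946/tcp + /udp   node-to-node gossip
#   4789/udp          VXLAN overlay data path
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp

# From a node that is missing as a peer, confirm the manager's gossip port
# is reachable (the manager address comes from the question):
nc -zv 192.168.1.105 7946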

How to use a private registry with docker swarm and traefik in docker

天大地大妈咪最大 submitted on 2020-01-03 04:59:06
Question: I am running a single-node swarm and using traefik to manage all my external connections, and I want to run a registry such that I can connect to it at registry.myhost.com. All the examples I can find suggest creating the registry as a normal container rather than a service; however, when I do this I cannot add it to my traefik network and thus make it reachable externally. Do I need to create another internal network and connect both traefik and the registry to it, and if so,
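
A hedged sketch, not taken from the question, of the usual alternative: run the registry itself as a swarm service attached to the same overlay network traefik uses, and let traefik route registry.myhost.com to it. The network name traefik-net, the Traefik v2 label syntax, and the assumption that traefik's docker provider runs with swarm mode enabled are all mine.

# registry:2 listens on 5000 by default; the labels tell traefik how to route it.
docker service create \
  --name registry \
  --network traefik-net \
  --label traefik.enable=true \
  --label 'traefik.http.routers.registry.rule=Host(`registry.myhost.com`)' \
  --label traefik.http.services.registry.loadbalancer.server.port=5000 \
  --mount type=volume,source=registry-data,target=/var/lib/registry \
  registry:2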

Jenkins service in Docker swarm stays at 0/1 replicas

浪子不回头ぞ submitted on 2019-12-12 09:48:54
Question: I'm trying to run a fault-tolerant Jenkins in a Docker swarm using the following command: docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home jenkins:alpine But when I check the service status and running containers, I see that the replicas stay at 0. ubuntu@ip-172-30-3-81:~$ docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home
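
A hedged guess at the usual cause rather than an answer from the thread: with --mount, the type defaults to volume, so src=/home/ubuntu/jenkins_home is read as a volume name rather than a host path, and with a bind mount the directory must exist on the node and be writable by the jenkins user (UID 1000 in the official images). The commands below assume that setup.

# Prepare the host path on every node that may run the task:
sudo mkdir -p /home/ubuntu/jenkins_home
sudo chown -R 1000:1000 /home/ubuntu/jenkins_home

# Make the bind mount explicit:
docker service create --replicas 1 --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  --mount type=bind,src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home \
  jenkins:alpine

# Inspect why a replica is not starting; the ERROR column usually names the
# missing path or permission problem:
docker service ps --no-trunc jenkins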

Docker Swarm - Can’t pull from private registry

孤者浪人 submitted on 2019-12-12 09:39:14
Question: I'm running a service on a Swarm cluster, thanks to docker stack deploy --with-registry-auth and this compose file:

version: "3.1"
services:
  builder-consumer:
    image: us.gcr.io/my-gcloud-project/my/image:123
    stop_grace_period: 30m
    volumes:
      - [...]
    environment:
      - [...]
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == worker
    secrets:
      - [...]
secrets: [...]

This works fine when I deploy, but when I add a worker node to the swarm later on, the new worker can't pull the image required to
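
A hedged sketch of what is commonly done here, not an answer from the post: swarm only ships registry credentials to workers when a deploy or update runs with --with-registry-auth, so after a new worker joins they can be refreshed from a manager. The stack name builder (and therefore the service name builder_builder-consumer) and the compose file name are placeholders, since the real ones are cut off in the excerpt.

# On a manager, log in to the registry (with whatever credential flow GCR
# uses in this setup), then push fresh credentials to all nodes:
docker login us.gcr.io
docker service update --with-registry-auth --force builder_builder-consumer

# Re-running the original deploy resends the auth material as well:
docker stack deploy -c docker-compose.yml --with-registry-auth builder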

Mount rexray/ceph volume in multiple containers on Docker swarm

主宰稳场 submitted on 2019-12-11 15:43:19
Question: What I have done: I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow a container to move to another host in the event of a failure, I need resilient shared storage across the swarm. After looking into the various options, I have implemented the following: installed a Ceph storage cluster across all nodes of the swarm and created a RADOS Block Device (RBD) (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/); installed Rexray on each node and
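
A minimal sketch, assuming the rexray/rbd Docker plugin is what provides the volume driver here (the post is cut off before its exact configuration), of how such a volume is typically created and attached to a swarm service. One caveat: an RBD image is a block device, not a shared filesystem, so it is normally mounted read-write on only one host at a time.

# Install the REX-Ray RBD plugin on each node (plugin options vary by setup):
docker plugin install --grant-all-permissions rexray/rbd

# Create the volume once; any node with the plugin can then attach it:
docker volume create --driver rexray/rbd appdata

# Attach it to a single-replica service so the data follows the task:
docker service create \
  --name app \
  --replicas 1 \
  --mount type=volume,source=appdata,target=/data,volume-driver=rexray/rbd \
  nginx:alpine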

Docker swarm tries to parse the value of an ENV variable in my compose file (because it has a Go template in it) and gives me an error

时光怂恿深爱的人放手 submitted on 2019-12-11 01:29:52
Question: The error: I try to launch a logspout container and set the log format (an ENV variable) via a docker-compose file. Not too difficult, and if I launch it with docker-compose up, everything works fine. But when I try to launch it with docker swarm init and docker stack deploy -c docker-compose.yml mystack, I get an error: Error response from daemon: rpc error: code = InvalidArgument desc = expanding env failed: expanding env "RAW_FORMAT={ \"container\" : \"{{ .Container.Name }}\", \"labels\":
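
A hedged illustration of the escaping that usually gets literal Go-template braces past swarm's own env templating; the same expansion that docker stack deploy applies also runs for docker service create --env, so it can be tried there directly. The actions {{"{{"}} and {{"}}"}} render as literal {{ and }}, so logspout receives {{ .Container.Name }} unchanged. The service options below are assumptions, not the poster's compose file.

# The escaped braces survive swarm's template expansion as literal {{ and }}:
docker service create --name logspout \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --env 'RAW_FORMAT={{"{{"}} .Container.Name {{"}}"}}' \
  gliderlabs/logspout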