Docker Swarm can manage two types of storage: volume and bind. While bind is not suggested by the Docker documentation, since it creates a binding between a local directory (on each swarm node) and a task, the implementation of the volume method is not described, so I don't understand how volumes are shared between tasks.
How does Docker Swarm share volumes between nodes? Where are volumes saved (on a manager? and what if there is more than one manager?)?
Is there any problem between nodes if they run on different machines on different networks? Does it create a VPN?
What you're asking about is a common question. Volume data and the features of what that volume can do are managed by a volume driver. Just as you can use different network drivers like overlay, bridge, or host, you can use different volume drivers.
Docker and Swarm only come with the standard local driver out of the box. It doesn't have any awareness of Swarm, and it will just create new volumes for your data on whichever node your service tasks are scheduled on. This is usually not what you want.
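For example, consider a minimal stack file (a sketch; the service and volume names are mine). Deploying it on a multi-node swarm gives each node its own independent, initially empty copy of the volume, created wherever a task happens to be scheduled:

```yaml
# Sketch: with the stock "local" driver, this volume is NOT shared across
# nodes; each node that runs a task creates its own empty copy.
version: "3.7"
services:
  web:
    image: nginx
    volumes:
      - data:/usr/share/nginx/html
volumes:
  data: {}   # no driver specified, so the node-local "local" driver is used
```

If a task is rescheduled from one node to another, it starts against that other node's (empty) copy of `data`, which is exactly the surprise described above.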
You want a 3rd party driver plugin that is Swarm aware, and will ensure the volume you created for a service task is available on the right node at the right time. Options include using "Docker for AWS/Azure" and its included CloudStor driver, or the popular open source REX-Ray solution.
There are lots of 3rd party volume drivers, which you can find on the Docker Store.
Swarm Mode itself does not do anything different with volumes; it runs whatever volume mount command you provide on the node where the container is running. If your volume mount is local to that node, then your data will be saved locally on that node. There is no built-in functionality to move data between nodes automatically.
There are some software-based distributed storage solutions like GlusterFS, and Docker has one called Infinit, which is not yet GA; development on it has taken a back seat to the Kubernetes integration in Docker EE.
The typical result is that you either need to manage replication of storage within your application (e.g. etcd and other raft-based algorithms) or you perform your mounts on an external storage system (hopefully with its own HA). Mounting an external storage system has two options, block or file based. Block-based storage (e.g. EBS) typically comes with higher performance, but can only be mounted on a single node. For this, you will typically need a 3rd party volume plugin driver to give your docker node access to that block storage. File-based storage (e.g. EFS) has lower performance, but is more portable and can be simultaneously mounted on multiple nodes, which is useful for a replicated service.
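As an illustrative sketch of the block-storage route (not part of the original answer; it assumes the REX-Ray `rexray/ebs` plugin is installed on every node, and the service and volume names are mine), a stack file would declare the driver per volume:

```yaml
# Sketch only: requires the rexray/ebs volume plugin on each swarm node.
version: "3.7"
services:
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
    driver: rexray/ebs   # block storage: the EBS volume is attached to the
                         # single node where the task is currently scheduled
    driver_opts:
      size: 10           # plugin-specific option (size in GiB)
```

Because EBS is block storage, only one node can have the volume attached at a time, so this pattern suits single-replica stateful services, not replicated ones.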
The most common file-based network storage is NFS (this is the same protocol used by EFS). And you can mount that without any 3rd party plugin drivers. The unfortunately named "local" volume plugin driver that docker ships with gives you the option to pass any values you want to the mount command with driver options, and with no options, it defaults to storing volumes in the docker directory /var/lib/docker/volumes. With options, you can pass it the NFS parameters, and it will even perform a DNS lookup on the NFS hostname (something you don't normally have with NFS). Here's an example of the different ways to mount an NFS filesystem using the local volume driver:
# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=nfsvers=4,addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=192.168.1.1\",volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=192.168.1.1\",volume-opt=device=:/host/path \
  foo
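The quoting in the --mount examples above is easy to get wrong: the o= option itself contains commas, so that one key/value pair has to be wrapped in escaped quotes to keep the comma-separated --mount parser from splitting it. A small sketch (the variable names are mine) that assembles and prints the flag value makes the structure visible:

```shell
# Build the --mount value for an NFS-backed "local" volume.
# The inner \" quotes protect the commas inside the o= option list
# from the CSV parsing of --mount.
NFS_ADDR=192.168.1.1
NFS_OPTS="nfsvers=4,addr=${NFS_ADDR},rw"
MOUNT_SPEC="type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=${NFS_OPTS}\",volume-opt=device=:/host/path"
echo "$MOUNT_SPEC"
```

You would then pass it as, e.g., `docker service create --mount "$MOUNT_SPEC" <image>`.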
# inside a docker-compose file
...
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=192.168.1.1,rw
      device: ":/path/to/dir"
...
My solution for AWS EFS, which works:
- Create the EFS (don't forget to open NFS port 2049 in the security group)
- Install the nfs-common package:
  sudo apt-get install -y nfs-common
- Check that your EFS works:
  mkdir efs-test-point
  sudo chmod go+rw efs-test-point
  sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
  touch efs-test-point/1.txt
  sudo umount efs-test-point/
  ls -la efs-test-point/
  (the directory must be empty)
  sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
  ls -la efs-test-point/
  (the file 1.txt must exist)
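To avoid retyping the long mount command during these checks, it can be wrapped in a tiny helper (a sketch; the function name and the example DNS name are mine) that just prints the command to run with sudo:

```shell
# Print the EFS mount command for a given DNS name and mount point;
# the options match the verification steps above.
efs_mount_cmd() {
  echo "mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $1:/ $2"
}

# Example (hypothetical filesystem ID):
efs_mount_cmd fs-12345678.efs.us-east-1.amazonaws.com /mnt/efs
```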
- Configure the docker-compose.yml file:

services:
  sidekiq:
    volumes:
      - uploads_tmp_efs:/home/application/public/uploads/tmp
...
volumes:
  uploads_tmp_efs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=[YOUR_EFS_DNS],nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
      device: [YOUR_EFS_DNS]:/
After searching in the documentation and in the docker discussions I was able to find the following information regarding this problem:
- bind mounting a host directory in a container (docker run -v /some/host/dir/:/container/path/) uses the files that are present on the host. If the host directory doesn't exist, a new, empty directory is created on the host and mounted in the container (this will change in the future, and an error will be shown instead)
- using a "nameless" volume (docker run -v /container/path) will create a new volume and copy the contents of /container/path into that volume
- using a "named" volume (docker run -v somename:/container/path) will create a new volume, named "somename", or use the existing "somename" volume, and use the files that are present inside that volume. If the volume is newly created, it will be empty.
Source: Discussion on Github
The reason for all this is:

It's not a bug; it behaves this way because it is supposed to. For an anonymous volume, docker knows that the volume is fully controlled by itself, so docker can do anything it thinks correct (here, copying the files in the image to the volume). But a named volume is designed for volume plugins, so docker does not know what it should do, and does nothing.
Source: Related discussion on Github
So you have to use a volume driver that supports this, which can indeed be found on the Docker Store.
Source: https://stackoverflow.com/questions/47756029/how-does-docker-swarm-implement-volume-sharing