How to directly mount an NFS share/volume in a container using Docker Compose v3


After discovering that this is massively undocumented, here's the correct way to mount an NFS volume using a stack and Docker Compose.

The most important thing is that you need to be using version: "3.2" or higher. You will get strange and non-obvious errors if you don't.

The second issue is that volumes are not automatically updated when their definition changes. This can lead you down a rabbit hole of thinking that your changes aren't correct, when they just haven't been applied. Make sure you run docker volume rm VOLUMENAME on every node where it could possibly exist, because if the volume already exists, its definition won't be re-validated.
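For example, a minimal sketch of clearing out a stale volume (assuming the stack namespace rsyslog and the volume example from the full example below; run this on every swarm node the stack may have been scheduled on):

  docker volume ls | grep example
  docker volume rm rsyslog_example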

The third issue is more of an NFS issue - the exported NFS directory will not be created on the server if it doesn't exist. This is just the way NFS works. You need to make sure it exists before you do anything.
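For example, on the NFS server you might prepare the export roughly like this (run as root; the path, client subnet, and export options are illustrative and should match your own setup):

  mkdir -p /docker/example
  echo "/docker/example 10.40.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
  exportfs -ra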

(Don't remove 'soft' and 'nolock' unless you're sure you know what you're doing - these options stop Docker from freezing if your NFS server goes away.)

Here's a complete example:

[root@docker docker-mirror]# cat nfs-compose.yml
version: "3.2"

services:
  rsyslog:
    image: jumanjiman/rsyslog
    ports:
      - "514:514"
      - "514:514/udp"
    volumes:
      - type: volume
        source: example
        target: /nfs
        volume:
          nocopy: true
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"



[root@docker docker-mirror]# docker stack deploy --with-registry-auth -c nfs-compose.yml rsyslog
Creating network rsyslog_default
Creating service rsyslog_rsyslog
[root@docker docker-mirror]# docker stack ps rsyslog
ID                  NAME                IMAGE                       NODE                DESIRED STATE       CURRENT STATE                     ERROR               PORTS
tb1dod43fe4c        rsyslog_rsyslog.1   jumanjiman/rsyslog:latest   swarm-4             Running             Starting less than a second ago
[root@docker docker-mirror]#

Now, on swarm-4:

root@swarm-4:~# docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS               NAMES
d883e0f14d3f        jumanjiman/rsyslog:latest   "rsyslogd -n -f /e..."   6 seconds ago       Up 5 seconds        514/tcp, 514/udp    rsyslog_rsyslog.1.tb1dod43fe4cy3j5vzsy7pgv5
root@swarm-4:~# docker exec -it d883e0f14d3f df -h /nfs
Filesystem                Size      Used Available Use% Mounted on
:/docker/example          7.2T      5.5T      1.7T  77% /nfs
root@swarm-4:~#

This volume will be created (but not destroyed) on any swarm node that the stack is running on.

root@swarm-4:~# docker volume inspect rsyslog_example
[
    {
        "CreatedAt": "2017-09-29T13:53:59+10:00",
        "Driver": "local",
        "Labels": {
            "com.docker.stack.namespace": "rsyslog"
        },
        "Mountpoint": "/var/lib/docker/volumes/rsyslog_example/_data",
        "Name": "rsyslog_example",
        "Options": {
            "device": ":/docker/example",
            "o": "addr=10.40.0.199,nolock,soft,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
root@swarm-4:~#

Yes, you can directly reference an NFS share from the compose file:

volumes:
   db-data:
      driver: local
      driver_opts:
        type: nfs
        o: addr=$SOMEIP,rw
        device: ":$PathOnServer"

And in an analogous way you could create an NFS volume on each host:

  docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=$SomeIP,rw \
    --opt device=:$DevicePath \
    --name nfs-docker
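Once created, that volume can be referenced by name like any other named volume (the image and container path here are illustrative):

  docker run -it --rm -v nfs-docker:/data alpine ls /data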

My solution for AWS EFS, which works:

  1. Create EFS (don't forget to open NFS port 2049 in the security group)
  2. Install nfs-common package:

    sudo apt-get install -y nfs-common

  3. Check if your EFS works:

    mkdir efs-test-point
    sudo chmod go+rw efs-test-point
    sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
    touch efs-test-point/1.txt
    sudo umount efs-test-point/
    ls -la efs-test-point/

    the directory must be empty

    sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point

    ls -la efs-test-point/

    the file 1.txt must exist

  4. Configure docker-compose.yml file:

    services:
      sidekiq:
        volumes:
          - uploads_tmp_efs:/home/application/public/uploads/tmp
      ...
    volumes:
      uploads_tmp_efs:
        driver: local
        driver_opts:
          type: nfs
          o: addr=[YOUR_EFS_DNS],nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
          device: [YOUR_EFS_DNS]:/

Depending on how I need to use the volume, I have the following 3 options.

First, you can create the named volume directly and use it as an external volume in compose, or as a named volume in a docker run or docker service create command.

  # create a reusable volume
  $ docker volume create --driver local \
      --opt type=nfs \
      --opt o=nfsvers=4,addr=nfs.example.com,rw \
      --opt device=:/path/to/dir \
      foo
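That pre-created volume can then be referenced from a compose file as an external volume. A sketch of this (the service name, image, and container path are illustrative):

  version: "3.2"
  services:
    example-app:
      image: alpine
      volumes:
        - "foo:/data"
  volumes:
    foo:
      external: true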

Next, there is the --mount syntax that works from docker run and docker service create. This is a rather long option, and when you are embedding a comma-delimited option within another comma-delimited option, you need to pass some quotes (escaped so the shell doesn't remove them) to the command being run. I tend to use this for a one-off container that needs to access NFS (e.g. a utility container to set up NFS directories):

  # or from the docker run command
  $ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
    foo

  # or to create a service
  $ docker service create \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
    foo

Lastly, you can define the named volume inside your compose file. One important note when doing this: the named volume only gets created once and is not updated with any later changes. So if you ever need to modify the named volume definition, you'll want to give it a new name.

  # inside a docker-compose file
  ...
  services:
    example-app:
      volumes:
      - "nfs-data:/data"
  ...
  volumes:
    nfs-data:
      driver: local
      driver_opts:
        type: nfs
        o: nfsvers=4,addr=nfs.example.com,rw
        device: ":/path/to/dir"
  ...
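For example, if the NFS path ever changes, a hedged approach is to add the definition under a new name (nfs-data-v2 and the new path below are purely illustrative) and point the service at that name, rather than editing the existing volume in place:

  volumes:
    nfs-data-v2:
      driver: local
      driver_opts:
        type: nfs
        o: nfsvers=4,addr=nfs.example.com,rw
        device: ":/new/path/to/dir"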

In each of these examples:

  • Type is set to nfs, not nfs4. This is because docker provides some nice functionality on the addr field, but only for the nfs type.
  • The o field contains the options that get passed to the mount syscall. One difference between the mount syscall and the mount command in Linux is that the portion of the device before the : is moved into the addr option.
  • nfsvers is used to set the NFS version. This avoids delays as the OS tries other NFS versions first.
  • addr may be a DNS name when you use type=nfs, rather than only an IP address. This is very useful if you have multiple VPCs with different NFS servers using the same DNS name, or if you want to change the NFS server in the future without updating every volume mount.
  • Other options like rw (read-write) can be passed to the o option.
  • The device field is the path on the remote NFS server. The leading colon is required. This is an artifact of how the mount command moves the IP address to the addr field for the syscall. This directory must exist on the remote host prior to the volume being mounted into a container.
  • In the --mount syntax, the dst field is the path inside the container. For named volumes, you set this path on the right side of the volume mount (in the short syntax) on your docker run -v command.

If you get permission issues accessing a remote NFS volume, a common cause I've encountered is containers running as root, with the NFS server set to root squash (changing all root access to the nobody user). You either need to configure your containers to run as a well-known non-root UID that has access to the directories on the NFS server, or disable root squash on the NFS server.
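For the first approach, a minimal compose sketch looks like this (the UID/GID and paths are illustrative; the account must actually have access to the exported directories):

  services:
    example-app:
      image: alpine
      user: "1001:1001"
      volumes:
        - "nfs-data:/data"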


My problem was solved by changing the driver option type to nfs4.

volumes:
  my-nfs-share:
    driver: local
    driver_opts:
      type: "nfs4"
      o: "addr=172.24.0.107,rw"
      device: ":/mnt/sharedwordpress"