How to deal with persistent storage (e.g. databases) in Docker

Asked by 野的像风 on 2020-11-22 11:54

How do people deal with persistent storage for your Docker containers?

I am currently using this approach: build the image, e.g. for PostgreSQL, and then start the container.

14 Answers
  • 2020-11-22 12:02

    I recently wrote about a potential solution and an application demonstrating the technique. I find it to be pretty efficient during development and in production. Hope it helps or sparks some ideas.

    Repo: https://github.com/LevInteractive/docker-nodejs-example
    Article: http://lev-interactive.com/2015/03/30/docker-load-balanced-mongodb-persistence/

  • 2020-11-22 12:03

    I'm just using a predefined directory on the host to persist data for PostgreSQL. Also, this way it is possible to easily migrate existing PostgreSQL installations to Docker containers: https://crondev.com/persistent-postgresql-inside-docker/

  • 2020-11-22 12:05

    Docker 1.9.0 and above

    Use volume API

    docker volume create --name hello
    docker run -d -v hello:/container/path/for/volume container_image my_command
    

    This means that the data-only container pattern must be abandoned in favour of the new volumes.

    In fact, the volume API is just a better way to achieve what the data-container pattern provided.

    If you create a container with a -v volume_name:/container/fs/path Docker will automatically create a named volume for you that can:

    1. Be listed through docker volume ls
    2. Be identified through docker volume inspect volume_name
    3. Be backed up as a normal directory
    4. Be backed up, as before, through a --volumes-from connection

    The new volume API adds a useful command that lets you identify dangling volumes:

    docker volume ls -f dangling=true
    

    And then remove a dangling volume by its name:

    docker volume rm <volume name>
    

    As @mpugach underlines in the comments, you can get rid of all the dangling volumes with a nice one-liner:

    docker volume rm $(docker volume ls -f dangling=true -q)
    # Or using 1.13.x
    docker volume prune
    

    Docker 1.8.x and below

    The approach that seems to work best for production is to use a data-only container.

    The data-only container is based on a barebones image and does nothing except expose a data volume.

    Then you can run any other container to have access to the data container volumes:

    docker run --volumes-from data-container some-other-container command-to-execute
    
    • Here you can get a good picture of how to arrange the different containers.
    • Here there is a good insight on how volumes work.

    In this blog post there is a good description of the so-called container-as-volume pattern, which clarifies the main point of having data-only containers.

    The Docker documentation now has the definitive description of the container-as-volume(s) pattern.

    Following is the backup/restore procedure for Docker 1.8.x and below.

    BACKUP:

    sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
    
    • --rm: remove the container when it exits
    • --volumes-from DATA: attach to the volumes shared by the DATA container
    • -v $(pwd):/backup: bind mount the current directory into the container, where the tar file will be written
    • busybox: a small, simple image - good for quick maintenance
    • tar cvf /backup/backup.tar /data: create an uncompressed tar file of all the files in the /data directory

    RESTORE:

    # Create a new data container
    $ sudo docker run -v /data --name DATA2 busybox true
    # Untar the backup files into the new container's data volume
    $ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
    data/
    data/sven.txt
    # Compare to the original container
    $ sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox ls /data
    sven.txt
    
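
    The tar round trip that busybox performs here is just standard tar, so the backup/restore mechanics can be sanity-checked locally, without Docker at all. A minimal sketch (the directory names and file contents are made up for illustration):

    ```shell
    #!/bin/sh
    set -e

    # Work in a throwaway directory (hypothetical path from mktemp)
    workdir=$(mktemp -d)
    cd "$workdir"

    # Simulate the DATA container's /data volume
    mkdir data
    echo "hello" > data/sven.txt

    # BACKUP: the same tar invocation the busybox container runs
    tar cvf backup.tar data

    # RESTORE: untar into a fresh location, as DATA2 would receive it
    mkdir restore
    tar xvf backup.tar -C restore

    # Verify the restored file matches the original
    cmp data/sven.txt restore/data/sven.txt && echo "restore OK"
    ```

    The only thing Docker adds on top of this is the plumbing: --volumes-from puts the volume's files where tar can see them, and the bind mount puts the resulting backup.tar on the host.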

    Here is a nice article from the excellent Brian Goff explaining why it is good to use the same image for a container and a data container.

  • 2020-11-22 12:05

    There are several levels of managing persistent data, depending on your needs:

    • Store it on your host
      • Use the flag -v host-path:container-path to persist container directory data to a host directory.
      • Backups/restores happen by running a backup/restore container (such as tutumcloud/dockup) mounted to the same directory.
    • Create a data container and mount its volumes to your application container
      • Create a container that exports a data volume, use --volumes-from to mount that data into your application container.
      • Backup/restore the same as the above solution.
    • Use a Docker volume plugin that backs an external/third-party service
      • Docker volume plugins allow your datasource to come from anywhere - NFS, AWS (S3, EFS, and EBS)
      • Depending on the plugin/service, you can attach single or multiple containers to a single volume.
      • Depending on the service, backups/restores may be automated for you.
      • While this can be cumbersome to do manually, some orchestration solutions - such as Rancher - have it baked in and simple to use.
      • Convoy is the easiest solution for doing this manually.
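
    For the first two options, a docker-compose file makes the choice explicit in one place. A minimal sketch, in which the service name "db", the volume name "pgdata", and the ./pg-backups path are hypothetical:

    ```yaml
    # Sketch only: "db", "pgdata" and ./pg-backups are illustrative names.
    version: "2"
    services:
      db:
        image: postgres:9.6
        volumes:
          # Option 1: bind mount a host directory into the container
          - ./pg-backups:/backups
          # Option 2: a named volume created and managed by Docker
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata: {}
    ```

    The bind mount ties the data to a specific host path; the named volume leaves placement to Docker (or to whichever volume plugin is configured).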
  • 2020-11-22 12:07

    Use a Persistent Volume Claim (PVC) from Kubernetes, which is a container management and scheduling tool:

    Persistent Volumes

    The advantages of using Kubernetes for this purpose are that:

    • You can use any backing storage, such as NFS; even when a node goes down, the storage does not have to go down with it.
    • Moreover, the data in such volumes can be configured to be retained even after the container itself is destroyed, so that it can be reclaimed, if necessary, by another container.
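
    A minimal PVC plus a pod that mounts it might look like the following; all names, the image, and the requested size are illustrative assumptions, not from the answer above:

    ```yaml
    # Hypothetical example: claim 1Gi of storage and mount it in a pod.
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: db-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data
    ```

    Because the claim is a separate object from the pod, deleting the pod leaves the data in place, and a replacement pod can reclaim it by referencing the same claimName.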
  • 2020-11-22 12:08

    If you want to move your volumes around you should also look at Flocker.

    From the README:

    Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.

    This means that you can run your databases, queues and key-value stores in Docker and move them around as easily as the rest of your application.
