How to include files outside of Docker's build context?


How can I include files from outside of Docker's build context using the "ADD" command in the Dockerfile?

From the Docker documentation:

The <src> path must be inside the context of the build; you cannot ADD ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
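
For example, with a hypothetical layout like the one below, the build fails because ../config.json sits outside the context:

    # hypothetical layout:
    #   project/docker/Dockerfile   (contains: ADD ../config.json /app/)
    #   project/config.json
    cd project/docker
    docker build -t myapp .   # fails: ../config.json is outside the build context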

14 Answers
  • 2020-11-22 07:35

    If you read the discussion in issue 2745, Docker may never support symlinks, and may never support adding files outside your context either. It seems to be a design philosophy: files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable from well-known URLs or files shipped with the Docker container.

    I prefer to build from a version-controlled source - i.e. docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.

    fundamentally, no.... -- SvenDowideit, Docker Inc

    Just my opinion but I think you should restructure to separate out the code and docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.

    Alternatively, use Docker as your fundamental code deployment artifact, and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for setup specific to your code; a sketch follows.
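
    A minimal sketch of that parent/child split, with hypothetical image and file names:

    # Dockerfile in the "system" repository, published as e.g. myorg/base
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
        && rm -rf /var/lib/apt/lists/*

    # Dockerfile at the root of the code repository
    FROM myorg/base
    COPY . /app
    WORKDIR /app
    CMD ["./run.sh"]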

  • 2020-11-22 07:38

    The trick is to recognize that you can specify the build context in the build command to include files from the parent directory, as long as you also tell Docker where the Dockerfile lives with -f. I'd change my Dockerfile to look like this:

    ...
    COPY ./ /dest/
    ...
    

    Then my build command can look like this:

    docker build -t TAG -f DOCKER_FILE_PATH CONTEXT
    

    From the project directory:

    docker build -t username/project[:tag] -f ./docker/Dockerfile .
    

    From project/docker (note that -f is resolved relative to where you run the command):

    docker build -t username/project[:tag] -f ./Dockerfile ..
    
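    As a concrete sketch of why this works (layout and file names hypothetical): COPY paths resolve against the context you pass, not against the Dockerfile's location:

    # hypothetical layout:
    #   project/docker/Dockerfile
    #   project/shared/config.json
    # built from project/ with: docker build -f docker/Dockerfile .
    FROM alpine
    COPY shared/config.json /dest/config.json
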
  • 2020-11-22 07:39

    I often find myself using the --build-arg option for this purpose. For example, after putting the following in the Dockerfile:

    ARG SSH_KEY
    RUN mkdir -p /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa
    

    You can just do:

    docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
    

    But note the following warning from the Docker documentation:

    Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.

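    If BuildKit is available, a safer alternative is a build secret: the file is mounted only for the RUN step that needs it and is never stored in an image layer. A minimal sketch, with hypothetical repo and secret-id names:

    # syntax=docker/dockerfile:1
    FROM alpine
    RUN apk add --no-cache git openssh-client
    # the key is readable only while this RUN step executes
    RUN --mount=type=secret,id=ssh_key \
        GIT_SSH_COMMAND="ssh -i /run/secrets/ssh_key -o StrictHostKeyChecking=no" \
        git clone git@github.com:example/private-repo.git /src

    Built with:

    DOCKER_BUILDKIT=1 docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa -t some-app .
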
  • 2020-11-22 07:39

    In my case, my Dockerfile is written like a template containing placeholders, which I replace with real values using my configuration file.

    So I couldn't specify this file directly; instead, I pipe it into the docker build like this:

    sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
    

    Piping the Dockerfile on its own (docker build -) breaks the COPY command, because no build context is sent. The command above solves that with -f - (read the Dockerfile from stdin) while still passing . as the context, so COPY works. With only - and no -f flag, the Dockerfile comes from stdin and no context is provided at all, which is the caveat.
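
    For illustration, a minimal template this command could be applied to (only the %email_address% placeholder comes from the sed command above; everything else is hypothetical):

    FROM alpine
    LABEL maintainer="%email_address%"
    # COPY still works because "." is passed as the build context
    COPY app/ /app/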

  • 2020-11-22 07:44

    One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences. If you're working in a microservices architecture that looks like this:

    ./Code/Repo1
    ./Code/Repo2
    ...
    

    You can set the build context to the parent Code directory and then access everything, but with a large number of repositories the build can take a long time, because the entire context is sent to the Docker daemon on every build.

    An example situation could be that another team maintains a database schema in Repo1, and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course). The second approach is hacky but gets around the issue of long builds:

    Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:

    #!/bin/bash
    # remove the stale copy, then pull a fresh one from the other team's repo
    rm -rf ./db/schema
    
    cp -r ../Repo1/db/schema ./db/schema
    
    docker-compose -f docker-compose.yml down
    docker container prune -f
    docker-compose -f docker-compose.yml up --build
    

    In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path (see the sketch below). Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.
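
    A minimal sketch of the compose service and Dockerfile that step assumes (image and path choices are hypothetical); adding db/schema to .gitignore covers the accidental-commit risk mentioned above:

    # docker-compose.yml in ./Code/Repo2
    version: '3'
    services:
      db:
        build:
          context: .
          dockerfile: db/Dockerfile

    # db/Dockerfile: the copied schema now sits inside the build context
    FROM postgres:13
    COPY db/schema/ /docker-entrypoint-initdb.d/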

  • Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.

    You don't have to use docker-compose to accomplish this, but it makes life a bit easier

    # docker-compose.yml
    
    version: '3'
    services:
      stage:
        image: alpine
        volumes:
          - /host/machine/path:/tmp/container/path
        # alpine ships with sh, not bash
        command: sh -c "cp -r /tmp/container/path /final/container/path"
      setup:
        image: stage
    
    # setup.sh
    
    # Start "stage" service
    docker-compose up stage
    
    # Commit changes to an image named "stage"
    docker commit $(docker-compose ps -q stage) stage
    
    # Start setup service off of stage image
    docker-compose up setup
    
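    After the commit, later builds can treat "stage" like any base image; a minimal sketch (the /app/data destination is hypothetical):

    # Dockerfile for a later build
    FROM stage
    # the staged data is already baked into the image; move it where the app expects it
    RUN mkdir -p /app && cp -r /final/container/path /app/data
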