What is the best way to pass AWS credentials to a Docker container?

Asked 2020-12-02 07:12 by 无人及你

I am running a Docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?

5 Answers
  • 2020-12-02 07:34

    You could create ~/aws_env_creds containing:

    touch ~/aws_env_creds
    chmod 777 ~/aws_env_creds
    vi ~/aws_env_creds
    

    Add these values (replace the keys with your own):

    AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
    AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
    

    Press Esc, then type :wq to save the file and exit vi.

    Add the file to your compose service as an env_file, then run and test the container:

     my_service:
          build: .
          image: my_image
          env_file:
            - ~/aws_env_creds
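
    To bring the service up and check that the variables actually reach the container, something like this should work (using the my_service name from the snippet above):

    docker-compose up -d my_service
    docker-compose exec my_service env | grep AWS_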
    
  • 2020-12-02 07:36

    A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.

    First, specifically with AWS credentials on containers already running inside the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then upvote his answer and skip the rest of this.


    Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:

    1. Environment variables: when these are defined on a container, every process inside the container has access to them; they are visible via /proc; apps may dump their environment to stdout, where it gets stored in the logs; and most importantly, they appear in clear text when you inspect the container (see the sketch after this list).

    2. In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
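
    As a quick illustration of the first point, anything passed with -e (or baked in with ENV) is visible to anyone who can inspect the container; the container name and secret value below are just placeholders:

    docker run -d --name env-test -e AWS_SECRET_ACCESS_KEY=supersecret alpine sleep 300
    docker inspect env-test --format '{{.Config.Env}}'
    # the second command prints the secret in clear text along with the rest of the environment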


    So what other options are there for secrets in Docker containers?

    Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
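
    A minimal sketch of that pattern, assuming the credentials come in as build args and the S3 path is only a placeholder:

    # first stage: receives the credentials, uses them, and is never pushed
    FROM python:3 AS fetcher
    ARG AWS_ACCESS_KEY_ID
    ARG AWS_SECRET_ACCESS_KEY
    RUN pip install awscli
    RUN aws s3 cp s3://example-bucket/app.tar.gz /tmp/app.tar.gz

    # release stage: copies only the artifact, not the credentials
    FROM python:3
    COPY --from=fetcher /tmp/app.tar.gz /app/app.tar.gz

    Built with something like:

    docker build -t your_image \
      --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
      --build-arg AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" .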

    Option B: Also during build time, if you can use BuildKit, which was released in 18.09, there are currently experimental features that allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during the build without worrying that it will be pushed to a public registry server. The resulting Dockerfile looks like:

    # syntax = docker/dockerfile:experimental
    FROM python:3
    RUN pip install awscli
    RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
    

    And you build it with a command in 18.09 or newer like:

    DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
    

    Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read-only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials onto the docker host, separately from the deploy of the container. (Note: anyone with the ability to run containers on that host can view your credential, since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)

    For a docker run, this looks like:

    docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
    

    Or for a compose file, you'd have:

    version: '3'
    services:
      app:
        image: your_image
        volumes:
        - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
    

    Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:

    version: '3.7'
    
    secrets:
      aws_creds:
        external: true
    
    services:
      app:
        image: your_image
        secrets:
        - source: aws_creds
          target: /home/user/.aws/credentials
          uid: '1000'
          gid: '1000'
          mode: 0700
    

    You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
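
    Put together as commands (stack_name is just whatever you want to call the stack):

    # turn the node into a single-node swarm manager
    docker swarm init
    # create the secret from the local credentials file
    docker secret create aws_creds $HOME/.aws/credentials
    # deploy the compose file above as a stack
    docker stack deploy -c docker-compose.yml stack_name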

    I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update

    Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time-limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give it the ability to request those time-limited secrets for as long as it can reach the vault server. That reduces the risk if a secret is ever taken out of your network, since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
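
    As a rough sketch of what that flow looks like with the Vault CLI (the role name is hypothetical and would be configured by whoever administers Vault):

    # enable the AWS secrets engine (done once by an admin)
    vault secrets enable aws
    # an application with a valid token requests short-lived AWS credentials
    vault read aws/creds/my-app-role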

  • 2020-12-02 07:38

    Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.

    services:
      web:
        build: .
        environment:
          - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
          - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
          - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
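
    For the ${...} substitution to work, the variables have to be set in the shell that runs docker-compose, for example (the key values are the fake ones from the first answer and the region is only an example):

    export AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
    export AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
    export AWS_DEFAULT_REGION=us-east-1
    docker-compose up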
    
  • 2020-12-02 07:44

    Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java) look for the default profile in the ~/.aws/credentials file.

    If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the docker-compose command.

    export AWS_PROFILE=some_other_profile_name

    version: '3'
    
    services:
      service-name:
        image: docker-image-name:latest
        environment:
          - AWS_PROFILE=${AWS_PROFILE}
        volumes:
          - ~/.aws/:/root/.aws:ro
    

    In this example, I used the root user in the container. If you are using another user, just change /root/.aws to that user's home directory.

    :ro stands for a read-only docker volume.

    It is very helpful when you have multiple profiles in the ~/.aws/credentials file and you are also using MFA. It is also helpful when you want to test a docker container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
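
    To check that the mounted profile actually works inside the container, you can run something like this (assuming the AWS CLI is installed in the image, and using the service name from the snippet above):

    docker-compose exec service-name aws sts get-caller-identity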

  • 2020-12-02 07:56

    The best way is to use an IAM role and not deal with credentials at all (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).

    Credentials can be retrieved from http://169.254.169.254..... Since this is a link-local address, it is accessible only from EC2 instances.
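
    For reference, the instance metadata paths the SDKs use look roughly like this (the role name in the second call is whatever the first call returns):

    # list the IAM role attached to the instance
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
    # fetch the temporary access key, secret key and session token for that role
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name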

    All modern AWS client libraries "know" how to fetch, refresh and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role and you are good to go.

    As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).

    You can access these environment variables by running printenv at the terminal.
