How do I deploy updated Docker images to Amazon ECS tasks?

灰色年华 2021-01-29 22:00

What is the right approach to make my Amazon ECS tasks update their Docker images, once said images have been updated in the corresponding registry?

12 answers
  • 2021-01-29 22:55

    Since there has not been any progress on the AWS side, here is a simple Python script that performs exactly the steps described in the highly rated answers from Dima and Samuel Karp.

    First push your image to your ECR registry (a sketch of the push step is shown at the end of this answer), then run the script:

    import boto3, time
    
    client = boto3.client('ecs')
    cluster_name = "Example_Cluster"
    service_name = "Example-service"
    reason_to_stop = "obsolete deployment"
    
    # Create a new deployment; the ECS service pulls the image from the registry and starts a new task
    response = client.update_service(cluster=cluster_name, service=service_name, forceNewDeployment=True)
    
    # Wait for ecs agent to start new task
    time.sleep(10)
    
    # Get all Service Tasks
    service_tasks = client.list_tasks(cluster=cluster_name, serviceName=service_name)
    
    # Get meta data for all Service Tasks
    task_meta_data = client.describe_tasks(cluster=cluster_name, tasks=service_tasks["taskArns"])
    
    # Extract creation date
    service_tasks = [(task_data['taskArn'], task_data['createdAt']) for task_data in task_meta_data["tasks"]]
    
    # Sort according to creation date
    service_tasks = sorted(service_tasks, key= lambda task: task[1])
    
    # Get obsolete task arn
    obsolete_task_arn = service_tasks[0][0]
    print("stop ", obsolete_task_arn)
    
    # Stop obsolete task
    stop_response = client.stop_task(cluster=cluster_name, task=obsolete_task_arn, reason=reason_to_stop)
    

    This code:

    1. creates a new task in the service, running the new image
    2. stops the obsolete task that is still running the old image
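
    The "push your image to ECR" step that the script assumes can look roughly like this (a sketch, assuming a recent AWS CLI; the account ID, region, repository, and image names are placeholders):

    # Authenticate Docker against your ECR registry
    aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
    # Tag the local image with the repository URI and push it
    docker tag my-app:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
    docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest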
  • 2021-01-29 22:58

    Every time you start a task (either through the StartTask and RunTask API calls, or one started automatically as part of a Service), the ECS Agent performs a docker pull of the image you specify in your task definition. If you use the same image name (including tag) each time you push to your registry, you should be able to run the new image by running a new task. Note that if Docker cannot reach the registry for any reason (e.g., network or authentication issues), the ECS Agent will fall back to a cached image; if you want to prevent cached images from being used when you update your image, push a different tag to your registry each time and update your task definition accordingly before running the new task.

    Update: This behavior can now be tuned through the ECS_IMAGE_PULL_BEHAVIOR environment variable set on the ECS agent. See the documentation for details. As of the time of writing, the following settings are supported:

    The behavior used to customize the pull image process for your container instances. The following describes the optional behaviors:

    • If default is specified, the image is pulled remotely. If the image pull fails, then the container uses the cached image on the instance.

    • If always is specified, the image is always pulled remotely. If the image pull fails, then the task fails. This option ensures that the latest version of the image is always pulled. Any cached images are ignored and are subject to the automated image cleanup process.

    • If once is specified, the image is pulled remotely only if it has not been pulled by a previous task on the same container instance or if the cached image was removed by the automated image cleanup process. Otherwise, the cached image on the instance is used. This ensures that no unnecessary image pulls are attempted.

    • If prefer-cached is specified, the image is pulled remotely if there is no cached image. Otherwise, the cached image on the instance is used. Automated image cleanup is disabled for the container to ensure that the cached image is not removed.
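
    On EC2-backed container instances, this variable is set in the ECS agent configuration file before the agent starts; restart the agent for a change to take effect. A minimal example (always is just one of the options listed above):

    # /etc/ecs/ecs.config
    ECS_IMAGE_PULL_BEHAVIOR=always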

  • 2021-01-29 23:00

    If your task is running under a service, you can force a new deployment. This forces the task definition to be re-evaluated and the new container image to be pulled.

    aws ecs update-service --cluster <cluster name> --service <service name> --force-new-deployment
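
    If you want to block until the replacement tasks are running, the CLI also provides a waiter for this (same cluster and service placeholders):

    # Returns once the service has reached a steady state, or fails if the waiter times out
    aws ecs wait services-stable --cluster <cluster name> --services <service name>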
    
  • 2021-01-29 23:01

    I am also trying to find an automated way of doing this: push the changes to ECR and have the latest tag picked up by the service. For now you can do it manually by stopping the task for your service in your cluster; the new tasks will then pull the updated image from ECR.
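
    The manual variant can also be scripted from the CLI (a sketch; cluster and service names are placeholders):

    # List the tasks currently running for the service
    aws ecs list-tasks --cluster <cluster name> --service-name <service name>
    # Stop one of them; the service scheduler starts a replacement task, which pulls the image again
    aws ecs stop-task --cluster <cluster name> --task <task arn> --reason "redeploy updated image"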

  • 2021-01-29 23:03

    I created a script for deploying updated Docker images to a staging service on ECS, so that the corresponding task definition refers to the current versions of the Docker images. I don't know for sure if I'm following best practices, so feedback would be welcome.

    For the script to work, you need either a spare ECS instance or a deploymentConfiguration.minimumHealthyPercent value so that ECS can steal an instance to deploy the updated task definition to.

    My algorithm is like this:

    1. Tag Docker images corresponding to containers in the task definition with the Git revision.
    2. Push the Docker image tags to the corresponding registries.
    3. Deregister old task definitions in the task definition family.
    4. Register new task definition, now referring to Docker images tagged with current Git revisions.
    5. Update service to use new task definition.

    My code is pasted below:

    deploy-ecs

    #!/usr/bin/env python3
    import subprocess
    import sys
    import os.path
    import json
    import re
    import argparse
    import tempfile
    
    _root_dir = os.path.abspath(os.path.normpath(os.path.dirname(__file__)))
    sys.path.insert(0, _root_dir)
    from _common import *
    
    
    def _run_ecs_command(args):
        run_command(['aws', 'ecs', ] + args)
    
    
    def _get_ecs_output(args):
        return json.loads(run_command(['aws', 'ecs', ] + args, return_stdout=True))
    
    
    def _tag_image(tag, qualified_image_name, purge):
        log_info('Tagging image \'{}\' as \'{}\'...'.format(
            qualified_image_name, tag))
        log_info('Pulling image from registry in order to tag...')
        run_command(
            ['docker', 'pull', qualified_image_name], capture_stdout=False)
        # 'docker tag -f' was removed in newer Docker releases; plain 'docker tag' overwrites an existing tag
        run_command(['docker', 'tag', qualified_image_name, '{}:{}'.format(
            qualified_image_name, tag), ])
        log_info('Pushing image tag to registry...')
        run_command(['docker', 'push', '{}:{}'.format(
            qualified_image_name, tag), ], capture_stdout=False)
        if purge:
            log_info('Deleting pulled image...')
            run_command(
                ['docker', 'rmi', '{}:latest'.format(qualified_image_name), ])
            run_command(
                ['docker', 'rmi', '{}:{}'.format(qualified_image_name, tag), ])
    
    
    def _register_task_definition(task_definition_fpath, purge):
        with open(task_definition_fpath, 'rt') as f:
            task_definition = json.loads(f.read())
    
        task_family = task_definition['family']
    
        tag = run_command([
            'git', 'rev-parse', '--short', 'HEAD', ], return_stdout=True).strip()
        for container_def in task_definition['containerDefinitions']:
            image_name = container_def['image']
            _tag_image(tag, image_name, purge)
            container_def['image'] = '{}:{}'.format(image_name, tag)
    
        log_info('Finding existing task definitions of family \'{}\'...'.format(
            task_family
        ))
        existing_task_definitions = _get_ecs_output(['list-task-definitions', ])[
            'taskDefinitionArns']
        for existing_task_definition in [
            td for td in existing_task_definitions if re.match(
                r'arn:aws:ecs:[^:]+:[^:]+:task-definition/{}:\d+'.format(
                    task_family),
                td)]:
            log_info('Deregistering task definition \'{}\'...'.format(
                existing_task_definition))
            _run_ecs_command([
                'deregister-task-definition', '--task-definition',
                existing_task_definition, ])
    
        with tempfile.NamedTemporaryFile(mode='wt', suffix='.json') as f:
            task_def_str = json.dumps(task_definition)
            f.write(task_def_str)
            f.flush()
            log_info('Registering task definition...')
            result = _get_ecs_output([
                'register-task-definition',
                '--cli-input-json', 'file://{}'.format(f.name),
            ])
    
        return '{}:{}'.format(task_family, result['taskDefinition']['revision'])
    
    
    def _update_service(service_fpath, task_def_name):
        with open(service_fpath, 'rt') as f:
            service_config = json.loads(f.read())
        services = _get_ecs_output(['list-services', ])[
            'serviceArns']
        for service in [s for s in services if re.match(
            r'arn:aws:ecs:[^:]+:[^:]+:service/{}'.format(
                service_config['serviceName']),
            s
        )]:
            log_info('Updating service with new task definition...')
            _run_ecs_command([
                'update-service', '--service', service,
                '--task-definition', task_def_name,
            ])
    
    
    parser = argparse.ArgumentParser(
        description="""Deploy latest Docker image to staging server.
    The task definition file is used as the task definition, whereas
    the service file is used to configure the service.
    """)
    parser.add_argument(
        'task_definition_file', help='Your task definition JSON file')
    parser.add_argument('service_file', help='Your service JSON file')
    parser.add_argument(
        '--purge_image', action='store_true', default=False,
        help='Purge Docker image after tagging?')
    args = parser.parse_args()
    
    task_definition_file = os.path.abspath(args.task_definition_file)
    service_file = os.path.abspath(args.service_file)
    
    os.chdir(_root_dir)
    
    task_def_name = _register_task_definition(
        task_definition_file, args.purge_image)
    _update_service(service_file, task_def_name)
    

    _common.py

    import sys
    import subprocess
    
    
    __all__ = ['log_info', 'handle_error', 'run_command', ]
    
    
    def log_info(msg):
        sys.stdout.write('* {}\n'.format(msg))
        sys.stdout.flush()
    
    
    def handle_error(msg):
        sys.stderr.write('* {}\n'.format(msg))
        sys.exit(1)
    
    
    def run_command(
            command, ignore_error=False, return_stdout=False, capture_stdout=True):
        if not isinstance(command, (list, tuple)):
            command = [command, ]
        command_str = ' '.join(command)
        log_info('Running command {}'.format(command_str))
        try:
            if capture_stdout:
                stdout = subprocess.check_output(command)
            else:
                subprocess.check_call(command)
                stdout = None
        except subprocess.CalledProcessError as err:
            if not ignore_error:
                handle_error('Command failed: {}'.format(err))
        else:
            return stdout.decode() if return_stdout else None
    
  • Registering a new task definition and updating the service to use it is the approach recommended by AWS. The easiest way to do this in the console is as follows (a CLI sketch of the same flow appears after the steps):

    1. Navigate to Task Definitions
    2. Select the correct task
    3. Choose create new revision
    4. If you're already pulling the latest version of the container image with something like the :latest tag, then just click Create. Otherwise, update the version number of the container image and then click Create.
    5. Expand Actions
    6. Choose Update Service (twice)
    7. Then wait for the service to be restarted
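
    For reference, roughly the same flow from the CLI (a sketch; the task definition JSON file, cluster, service, and family names are placeholders):

    # Register a new revision of the task definition (the JSON contains the updated image reference)
    aws ecs register-task-definition --cli-input-json file://task-definition.json
    # Point the service at the family's latest ACTIVE revision; ECS then replaces the running tasks
    aws ecs update-service --cluster <cluster name> --service <service name> --task-definition <task family>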

    This tutorial has more detail and describes how the above steps fit into an end-to-end product development process.

    Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.
