Best way to manage code changes for application in Amazon EC2 with Auto Scaling

你的背包 2021-01-31 19:49

I have multiple instances running behind a load balancer with Auto Scaling in AWS.

Now, if I have to push some code changes to these instances and to any new instances that Auto Scaling may launch later, what is the best way to do that?

3 Answers
  • 2021-01-31 20:25

    The way I do my code changes is to have a master server on which I edit the code. All the slave servers that scale then rsync over ssh, via a cron job, to bring all their files up to date. All the servers sync every 30 minutes, plus or minus a few random seconds, so they don't hit the master at the exact same second. (Note: I leave the master off the load balancer so users are always served the same code.) Similarly, when I decide to publish my code changes, I do an rsync from my test server to my master server.

    Using this approach, you merely have to put the sync command in the startup script, and you don't have to worry about what the code state was on the slave image, as it will be up to date after it boots.
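
    Below is a minimal sketch, in Python, of what such a startup/cron sync step could look like; the master hostname, user, and directory paths are placeholders, not details from this answer.

        #!/usr/bin/env python3
        """Pull the latest application code from the master server over ssh.

        A sketch of the cron/startup sync described above; the hostname and
        paths are hypothetical placeholders.
        """
        import random
        import subprocess
        import time

        MASTER = "deploy@master.internal.example.com"  # hypothetical master host
        REMOTE_DIR = "/var/www/app/"                   # hypothetical code directory on the master
        LOCAL_DIR = "/var/www/app/"                    # where the code lives on this slave


        def sync_from_master():
            # Random jitter so the slaves don't all hit the master at the same second.
            time.sleep(random.uniform(0, 30))
            # Mirror the master's code directory; --delete removes files that were
            # deleted on the master so this slave matches it exactly.
            subprocess.run(
                ["rsync", "-az", "--delete",
                 "-e", "ssh -o StrictHostKeyChecking=no",
                 f"{MASTER}:{REMOTE_DIR}", LOCAL_DIR],
                check=True,
            )


        if __name__ == "__main__":
            sync_from_master()

    A crontab entry such as */30 * * * * /usr/local/bin/sync_from_master.py would run it every 30 minutes; the same script can be invoked once from the instance's startup sequence.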

    EDIT: We have stopped using this method now and started using the new service AWS CodeDeploy, which is made for this exact purpose:

    http://aws.amazon.com/codedeploy/
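
    For illustration only, a deployment could then be kicked off from a boto3 script roughly like the one below; the application name, deployment group, and S3 revision location are placeholders rather than anything from this answer.

        # Trigger a CodeDeploy deployment with boto3 (the AWS SDK for Python).
        # All names below are hypothetical.
        import boto3

        codedeploy = boto3.client("codedeploy", region_name="us-east-1")

        response = codedeploy.create_deployment(
            applicationName="my-web-app",              # hypothetical application
            deploymentGroupName="production-asg",      # hypothetical deployment group
            revision={
                "revisionType": "S3",
                "s3Location": {
                    "bucket": "my-deploy-bucket",      # hypothetical bucket
                    "key": "releases/app-1.2.3.zip",
                    "bundleType": "zip",
                },
            },
            deploymentConfigName="CodeDeployDefault.OneAtATime",
        )
        print("Started deployment:", response["deploymentId"])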

    Hope this helps.

  • 2021-01-31 20:29

    It appears you can manually double the Auto Scaling group size; it will create new EC2 instances using the AMI from the current Launch Configuration. If you then decrease the Auto Scaling group back to its previous size, the old instances will be terminated and only the instances created from the new AMI will survive.
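
    A hedged sketch of that double-then-shrink trick with boto3 is below; the group name is a placeholder. Which instances get removed on scale-in depends on the group's termination policy; the default policy prefers instances with the older launch configuration, which is what makes this trick work.

        # Double the Auto Scaling group, then shrink it back (sketch only).
        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")
        GROUP = "my-web-asg"  # hypothetical Auto Scaling group name

        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[GROUP]
        )["AutoScalingGroups"][0]
        current = group["DesiredCapacity"]

        # New instances launch from the current Launch Configuration / AMI.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=GROUP, DesiredCapacity=current * 2, HonorCooldown=False
        )

        # ...wait for the new instances to pass health checks, then shrink back...
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=GROUP, DesiredCapacity=current, HonorCooldown=False
        )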

  • 2021-01-31 20:31

    We configure our Launch Configuration to use a "clean" off-the-shelf AMI - we use these: http://aws.amazon.com/amazon-linux-ami/

    One of the features of these AMIs is CloudInit - https://help.ubuntu.com/community/CloudInit

    This feature lets us deliver some data to the newly spawned, plain-vanilla EC2 instance; specifically, we give the instance a script to run as user data.
    In a nutshell, the script does the following (a sketch of wiring such a script into a Launch Configuration follows the list):

    1. Upgrades itself (to make sure all security patches and bug fixes are applied).
    2. Installs Git and Puppet.
    3. Clones a Git repo from Github.
    4. Applies a puppet script (which is part of the repo) to configure itself. Puppet installs the rest of the needed software modules.
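
    As a rough illustration (not the authors' actual code), a script like that could be delivered as cloud-init user data on a Launch Configuration via boto3; the AMI ID, key name, security group, and repository URL below are all placeholders.

        # Register a Launch Configuration whose user data bootstraps the instance.
        import boto3

        # Shell script executed by cloud-init on first boot (placeholder repo URL).
        USER_DATA = """#!/bin/bash
        set -e
        yum -y update                                  # 1. apply security patches and bug fixes
        yum -y install git puppet                      # 2. install Git and Puppet
        git clone https://github.com/example/infra.git /opt/infra   # 3. clone the repo
        puppet apply /opt/infra/manifests/site.pp      # 4. let Puppet configure the instance
        """

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")
        autoscaling.create_launch_configuration(
            LaunchConfigurationName="web-lc-v42",      # hypothetical name
            ImageId="ami-xxxxxxxx",                    # stock Amazon Linux AMI (placeholder)
            InstanceType="t2.micro",                   # hypothetical instance type
            KeyName="my-key",                          # hypothetical key pair
            SecurityGroups=["web-sg"],                 # hypothetical security group
            UserData=USER_DATA,
        )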

    It does take longer than booting from a pre-configured AMI, but we skip the process of actually building those AMIs every time we update the software (a couple of times a week), and the servers are always "clean": no manual patches, all software modules up to date, etc.

    Now, to upgrade the software, we use a local boto script. The script terminates the servers running the old code one by one; the Auto Scaling mechanism then launches new (and upgraded) servers in their place.

    Make sure to use as-terminate-instance-in-auto-scaling-group because using ec2-terminate-instance will cause the ELB to continue to send traffic to the shutting-down instance, until it fails the health check.
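
    For illustration, a rolling replacement along those lines might look roughly like this with boto3 (terminate_instance_in_auto_scaling_group is the API behind that CLI command); the group name is a placeholder and the wait is simplified to a fixed sleep.

        # Replace running instances one by one; Auto Scaling launches upgraded ones.
        import time

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")
        GROUP = "my-web-asg"  # hypothetical Auto Scaling group name

        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[GROUP]
        )["AutoScalingGroups"][0]

        for instance in group["Instances"]:
            # Terminate via Auto Scaling (so the ELB stops sending it traffic)
            # while keeping the desired capacity, which makes the group launch
            # a fresh replacement from the current Launch Configuration.
            autoscaling.terminate_instance_in_auto_scaling_group(
                InstanceId=instance["InstanceId"],
                ShouldDecrementDesiredCapacity=False,
            )
            # Crude pause for the replacement to boot, bootstrap, and pass health checks.
            time.sleep(300)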

    Interesting related blog post: http://blog.codento.com/2012/02/hello-ec2-part-1-bootstrapping-instances-with-cloud-init-git-and-puppet/
