Question
I am not sure my question is relevant, as I may be trying to mix tools (Capistrano and Docker) that should not be mixed.
I have recently dockerized an application that is deployed with Capistrano. Docker Compose is used for both the development and staging environments.
This is what my project looks like (the application files are not shown):
Capfile
docker-compose.yml
docker-compose.staging.yml
config/
  deploy.rb
  deploy/
    staging.rb
The Docker Compose files create all the necessary containers (Nginx, PHP, MongoDB, Elasticsearch, etc.) to run the app in the development or staging environment (hence the staging-specific parameters defined in docker-compose.staging.yml).
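For illustration, a minimal sketch of what such an override file could contain; the service names and values below are hypothetical, not taken from the question:

# docker-compose.staging.yml (hypothetical override, merged on top of docker-compose.yml)
version: '2'
services:
  nginx:
    ports:
      - "8080:80"          # staging-specific published port
  php:
    environment:
      APP_ENV: staging     # staging-specific environment variable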
The app is deployed to the staging environment with this command:
cap staging deploy
The folder structure on the server is the standard Capistrano one:
current
releases
  20160912150720
  20160912151003
  20160912153905
shared
The following command has been run in the current directory of the staging server to instantiate all the necessary containers to run the app:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So far so good. Things get more complicated on the next deploy: the current symlink will point to a new directory inside the releases directory:
- If deploy.rb defines commands that need to be executed inside containers (like docker-compose exec php composer install for PHP), Docker reports that the containers don't exist yet (because the existing ones were created in the previous release folder).
- If a docker-compose up -d command is executed as part of the Capistrano deployment process, I get errors because of port conflicts (the previous containers still exist).
Do you have any ideas on how to solve this issue? Should I move away from Capistrano and do something different?
The idea would be to keep the (near) zero-downtime deployment that Capistrano offers, together with the flexibility of Docker containers (providing several PHP versions for various apps on the same server, for instance).
Answer 1:
As far as I understand, you are using Capistrano on the host to redeploy the whole application stack, i.e. the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.
Basically, when running cap deploy, you:
- build the app (based on the current code base you pulled on the host), which probably even includes gulp/grunt build tasks
- then "package" it into your image using volume mounts
- during that, start/replace the containers
You do so to get a 'nearly' zero downtime deployment.
If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for:
- packaging / CI
- deployment / distribution
I do not think Capistrano can or should be one of the tools you use in this strategy. Capistrano is meant for deploying an application directly onto a server, using SSH and git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.
packaging / building
Use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised per "release", let's say you have db and app as containers/services: app includes your source code and changes regularly with each release.
Thus it is a CI/CD process to build a new app image (release) off-site on your CI server: pull the source code of your application and package it into your image using COPY, then use whatever RUN statements you need to compile your assets (npm / gulp / grunt, whatever). All of that happens not on the production server, but on the CI/CD agent. Using multi-stage builds for slim images is encouraged.
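As a rough illustration, a multi-stage Dockerfile for such a release image could look like the sketch below; the base images, paths and build commands are assumptions, not taken from the answer:

# Stage 1: install dependencies / build the release on the CI agent (hypothetical)
FROM composer:2 AS vendor
WORKDIR /build
COPY . .
RUN composer install --no-dev --optimize-autoloader

# Stage 2: slim runtime image containing only the app and its dependencies
FROM php:8.2-fpm
WORKDIR /var/www/app
COPY --from=vendor /build /var/www/app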
Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new "version" for deployment.
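On the CI agent, that could look roughly like this (the tag is a made-up example):

docker build -t yourregistry.com/yourapp:20160912-1 .
docker push yourregistry.com/yourapp:20160912-1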
deployment
with downtime (easy)
To deploy to your production or staging server WITH downtime, you would simply run docker-compose pull && docker-compose up -d: this pulls the newer image and then starts it in your stack, and your app is upgraded. Using tagged images for releases would require changing the image tag in docker-compose.yml for each release.
The server should of course be able to pull from your private registry.
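A minimal sketch of that flow on the server, assuming the service is called app and the tag comes from an environment variable (both names are hypothetical):

# docker-compose.yml excerpt (hypothetical):
#   app:
#     image: yourregistry.com/yourapp:${APP_TAG}

export APP_TAG=20160912-1   # the release tag pushed by the CI server
docker-compose pull app     # fetch the new release image from the registry
docker-compose up -d app    # recreate the app container with the new image (brief downtime)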
without downtime (more effort)
To achieve a zero-downtime deployment you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the app's public port directly, but rather expose it through the proxy's public port. Your current live system might be running on a random port 21231, with the proxy forwarding from 443 to 21231.
We use random ports to avoid conflicts while deploying the "second" system, which covers one of the issues you mentioned.
When redeploying, you only start a "new" container based on the new app image in addition to the old one; it gets a new random port, 12312. If you like, run your integration tests against 12312 directly (do not use the proxy). When you are done and happy, reconfigure the proxy to forward to 12312, then remove the old container (21231).
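A rough shell sketch of that flow, assuming a single app container published on a random host port; the container names, health-check URL and nginx upstream file are all hypothetical:

# start the new release next to the old one; Docker picks a free host port for container port 80
docker run -d --name app_green -p 127.0.0.1::80 yourregistry.com/yourapp:20160912-1
NEW_PORT=$(docker port app_green 80 | cut -d: -f2)

# test the new container directly, bypassing the proxy
curl -fsS "http://127.0.0.1:${NEW_PORT}/health"

# switch the proxy upstream to the new port, reload it, then remove the old container
sed -i "s/server 127.0.0.1:[0-9]*/server 127.0.0.1:${NEW_PORT}/" /etc/nginx/conf.d/upstream_app.conf
nginx -s reload
docker rm -f app_blue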
If you would like to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator; this makes random ports much more practical and makes it easy to reconfigure your proxy (be it nginx/haproxy) while it is running. Tools would be, for example:
- consul
- consul watch + consul-template or tiller on the proxy to update the proxy config
- Registrator for centralized registration, or consul agent client mode with a service-configuration.json (depends on your choice)
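For instance, consul-template can watch the Consul catalog and rewrite the nginx upstream whenever containers (and their random ports) change; the template and output paths below are made up:

# regenerate the nginx config from a Consul template and reload nginx on every change
consul-template \
  -consul-addr 127.0.0.1:8500 \
  -template "/etc/consul-template/upstream_app.ctmpl:/etc/nginx/conf.d/upstream_app.conf:nginx -s reload"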
Answer 2:
I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.
https://github.com/capistrano/sshkit/pull/368
@EugenMayer does a better job of explaining a "normal" way of using Docker.
Source: https://stackoverflow.com/questions/39457603/how-to-integrate-capistrano-with-docker-for-deployment