A simple Python deployment problem - a whole world of pain

无人共我 asked 2020-12-22 16:34

We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line.

8 Answers
  • 2020-12-22 17:15

    It sounds like what you want is a build script. So write one, using shell, Python, ant, or your favorite build tool. If you don't like writing in XML, pant allows you to write ant scripts in Python. Several people have mentioned Buildout, but I don't have any experience with it.

    First define your steps. It sounds like you want to:

    1. SVN export from your production tag (you don't want to have a working copy in prod)
    2. set up a virtualenv
    3. easy_install or pip install required packages (or probably use pre-downloaded & tested versions)
    4. copy production configuration files to your target (it's not a good idea to keep this information in your source repo -- though you could have them versioned separately)
    5. restart your server & do any other setup task
    6. run smoke tests and rollback on failure

    If you're doing load balancing or depending on other services in production, you might want to figure out a way to roll out in limited scope so all your customers aren't affected at once. If you have a production-like staging environment, that might also satisfy that need.
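
    As a rough illustration, here is a minimal Python sketch of those steps. Every path, URL, and command below is a placeholder to adapt, not a drop-in script:

        # deploy.py -- minimal sketch of the steps above; all names are hypothetical
        import subprocess
        import sys

        SVN_TAG = "https://svn.example.com/myapp/tags/%s"  # assumed repository layout
        TARGET = "/srv/myapp"

        def sh(cmd):
            # run a shell command, aborting the deploy on the first failure
            print cmd
            subprocess.check_call(cmd, shell=True)

        def deploy(tag):
            sh("svn export %s %s/src" % (SVN_TAG % tag, TARGET))  # 1. export, no working copy in prod
            sh("virtualenv --no-site-packages %s/env" % TARGET)   # 2. fresh, isolated environment
            sh("%s/env/bin/pip install -r %s/src/requirements.txt"
               % (TARGET, TARGET))                                # 3. pinned, pre-tested versions
            sh("cp /etc/myapp/production.ini %s/src/" % TARGET)   # 4. config lives outside the repo
            sh("/etc/init.d/myapp restart")                       # 5. restart the service
            sh("%s/env/bin/python %s/src/smoke_test.py"
               % (TARGET, TARGET))                                # 6. nonzero exit here -> roll back

        if __name__ == "__main__":
            deploy(sys.argv[1])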

  • 2020-12-22 17:20

    I've been working on implementing this for our work projects. There are a few different parts involved.

    First, we customize virtualenv.py, using its bootstrap abilities to add our own custom post-creation functions and flags. These let us define common types of projects and also give us a single command to create a new virtualenv, check out a project from the git repository, and install any requirements into the virtualenv using pip and requirements.txt files.

    So our command looks like: python venv.py --no-site-packages -g $git_proj -t $tag_num $venv_dir

    http://pypi.python.org/pypi/virtualenv
    http://pip.openplans.org/
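
    For reference, old-style virtualenv exposes create_bootstrap_script() for exactly this kind of customization. Here is a simplified sketch of generating such a venv.py; the git/tag handling is a stand-in for illustration, not the poster's actual script:

        # make_venv.py -- generate a customized venv.py bootstrap (simplified sketch)
        import virtualenv

        EXTRA = """
        import os, subprocess

        def extend_parser(parser):
            # add the custom -g/-t flags used in the command above
            parser.add_option('-g', dest='git_proj', help='git repository to clone')
            parser.add_option('-t', dest='tag', help='tag to check out')

        def after_install(options, home_dir):
            # post-creation hook: clone the project and install its requirements
            src = os.path.join(home_dir, 'src')
            subprocess.call(['git', 'clone', options.git_proj, src])
            if options.tag:
                subprocess.call(['git', 'checkout', options.tag], cwd=src)
            pip = os.path.join(home_dir, 'bin', 'pip')
            subprocess.call([pip, 'install', '-r',
                             os.path.join(src, 'requirements.txt')])
        """

        open('venv.py', 'w').write(virtualenv.create_bootstrap_script(EXTRA))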

    That gets us through the initial checkout of an existing project. As we work on and update the project, we use fabric commands within each project to build releases and then to deploy them:

    http://docs.fabfile.org/0.9.0/

    I've got a fab command, make_tag, which checks for unused commits, opens the files that need version strings updated, builds and uploads the Sphinx docs, and then commits the final tag to the repository.

    The flip side is a fab deploy command which will, over ssh, do a git checkout of the specified tag, run a pip update on any new requirements, run any database migrations needed, and then restart the web server if this is a web application.
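
    A stripped-down sketch of what such a deploy task can look like in the fabric 0.9 style; the host, paths, and the migration/restart commands are made up for illustration:

        # fabfile.py -- simplified deploy task (fabric 0.9 style; names are hypothetical)
        from fabric.api import cd, env, run

        env.hosts = ['www1.example.com']  # assumed production host(s)

        def deploy(tag):
            with cd('/srv/myapp/src'):
                run('git fetch && git checkout %s' % tag)          # check out the release tag
                run('../env/bin/pip install -r requirements.txt')  # install any new requirements
                run('../env/bin/python migrate.py')                # hypothetical migration step
            run('/etc/init.d/myapp restart')                       # bounce the web server

    You would then invoke it as, e.g., fab deploy:1.0.2.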

    Here's an example of the tagging function: http://www.google.com/codesearch/p?hl=en#9tLIXCbI4vU/fabfile.py&q=fabfile.py%20git%20tag_new_version&sa=N&cd=1&ct=rc&l=143

    There are a ton of good fabric files you can browse through using the google code search. I know I cheat-sheeted a few for my own use.

    It's definitely complicated, with several parts that have to mesh to get things running smoothly. Once you get it running, though, the flexibility and speed are just awesome.

  • 2020-12-22 17:23

    This is really not hard. You mostly need to play with Buildout and supervisord, IMO.

    Learning Buildout may take a little time, but it's worth it, given the amount of pain it saves in repeated setups.

    About nohup: the nohup approach does not suit serious deployments. I have had very good experience with supervisord. It is an excellent solution for running production Python applications and is very easy to set up.
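
    For example, a minimal program stanza in supervisord.conf looks roughly like this (the paths and the paster command are placeholders for whatever starts your app):

        [program:myapp]
        command=/srv/myapp/env/bin/paster serve /srv/myapp/production.ini
        directory=/srv/myapp
        autostart=true
        autorestart=true
        stderr_logfile=/var/log/myapp.err.log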

    Some specific answers below.

    1. Single command to deploy: Buildout is the answer. We have been using it for a couple of years without many problems.
    2. Usually you check out the source, then run Buildout. It may not be a good idea to let the setup copy into site-packages; it's better to keep environments separate.
    3. Configs would not be overwritten.
    4. You may/should consider building egg(s) for the common packages. For example, you can build an egg for a package (say, commonlib), upload it to your code repository, and then specify it as a dependency in your buildout.cfg (a minimal sketch follows this list).
    5. Buildout can build most essential packages completely separately from the central/top-level install. However, in my experience, Python packages with C extensions are much easier to deal with if installed as OS packages.
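
    To make that concrete, a minimal buildout.cfg might look roughly like this; the names are placeholders, and zc.recipe.egg is the standard recipe for installing eggs and their console scripts:

        [buildout]
        parts = app
        # assumption: a private index hosting your own eggs (e.g. commonlib)
        find-links = https://eggs.example.com/

        [app]
        recipe = zc.recipe.egg
        eggs =
            myapp
            commonlib

    After a one-time python bootstrap.py, a single bin/buildout run rebuilds the whole environment, which is what covers point 1 above.
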
  • 2020-12-22 17:26

    I would use rsync to synchronize outwards from your production "prime" server to the others, and from your "beta test" platform to your production "prime" server.

    rsync has the benefit of copying only those files which changed, transferring only the parts of files that changed, and verifying at the end that all machines have identical content. An update that gets partway through and is interrupted can easily be resumed later, making your deployment more robust.
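
    For example (the host and paths are placeholders):

        rsync -az --partial --delete /srv/myapp/ deploy@www2.example.com:/srv/myapp/

    Here -a preserves permissions and timestamps, -z compresses in transit, --partial keeps partially transferred files so an interrupted run can resume, and --delete removes files that no longer exist on the source.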

    Subversion or Mercurial would not be a bad idea in this case either. Mercurial has the advantage of letting you "pull" or "push" instead of just updating from one central source. You might find interesting cases where a decentralized model (Mercurial) works better.
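
    For example (host and path are placeholders), pushing from a staging box:

        hg push ssh://deploy@www1.example.com//srv/myapp

    or, on each production machine, pulling and updating in one step:

        hg pull -u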

  • 2020-12-22 17:34

    Have a look at Buildout for reproducible deployments.

  • 2020-12-22 17:34

    Another vote for fabric (I haven't tried Buildout yet). We've been using it successfully for a couple of months now.

    If you're having trouble with fabric, another option is Capistrano. It works great (even for non-Rails apps). We only stopped using it because it feels weird to use Ruby to deploy Python apps ;)
