We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line.
It sounds like what you want is a build script. So write one, using shell, Python, Ant, or your favorite build tool. If you don't like writing in XML, pant allows you to write Ant scripts in Python. Several people have mentioned buildout, but I don't have any experience with it.
First, define your steps; a sketch of such a script appears below. If you're doing load balancing or depending on other services in production, you might want to figure out a way to roll out in limited scope so all your customers aren't affected at once. If you have a production-like staging environment, that might also satisfy that need.
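For example, a minimal hand-rolled deploy script might look like this. Every step, path, and service name here is a placeholder for illustration, not something from the question:

    # build.py -- a sketch of a hand-rolled deploy script (Python 2.6).
    # All steps, paths, and service names below are placeholders.
    import subprocess
    import sys

    STEPS = [
        ['git', 'pull'],                               # fetch the new release
        ['pip', 'install', '-r', 'requirements.txt'],  # update dependencies
        ['python', 'migrate.py'],                      # assumed migration step
        ['sudo', '/etc/init.d/myapp', 'restart'],      # assumed service restart
    ]

    def main():
        for step in STEPS:
            print 'running:', ' '.join(step)
            if subprocess.call(step) != 0:
                sys.exit('step failed: %s' % ' '.join(step))

    if __name__ == '__main__':
        main()

The point is less the tool than having the steps written down and repeatable.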
I've been working on implementing this for our work projects. There are a few different parts involved.
First, we customize virtualenv.py using its bootstrap abilities to add in our own custom post-creation functions and flags. These allow us to define common types of projects and also give us a single command to create a new virtualenv, check out a project from the git repository, and install any requirements into the virtualenv using pip and requirements.txt files.
So our command looks like:

    python venv.py --no-site-packages -g $git_proj -t $tag_num $venv_dir
http://pypi.python.org/pypi/virtualenv
http://pip.openplans.org/
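For reference, here is a sketch of what that kind of customized bootstrap can look like. The hook names (extend_parser, after_install) are virtualenv's documented bootstrap hooks; the flags, paths, and file names are our conventions, so treat them as placeholders:

    # make_venv_script.py -- generates venv.py with our custom hooks baked in.
    import virtualenv

    EXTRA = """
    import os, subprocess

    def extend_parser(parser):
        # Our extra flags on top of virtualenv's own options.
        parser.add_option('-g', '--git', dest='git_proj',
                          help='git repository to clone into the new env')
        parser.add_option('-t', '--tag', dest='tag_num',
                          help='tag to check out after cloning')

    def after_install(options, home_dir):
        # Runs once the virtualenv exists: clone the project, check out
        # the tag, and pip-install the pinned requirements.
        proj_dir = os.path.join(home_dir, 'project')
        subprocess.check_call(['git', 'clone', options.git_proj, proj_dir])
        if options.tag_num:
            subprocess.check_call(['git', 'checkout', options.tag_num],
                                  cwd=proj_dir)
        pip = os.path.join(home_dir, 'bin', 'pip')
        subprocess.check_call([pip, 'install', '-r',
                               os.path.join(proj_dir, 'requirements.txt')])
    """

    # Writes a self-contained venv.py embedding virtualenv plus the hooks above.
    open('venv.py', 'w').write(virtualenv.create_bootstrap_script(EXTRA))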
Now that gets us through the initial checkout of an existing project. As we work on and update the project, we use Fabric commands within each project to build releases and then to deploy them:
http://docs.fabfile.org/0.9.0/
I've got a fab command, make_tag, which checks for unused commits, opens files that need version strings updated, builds and uploads the Sphinx docs, and then commits the final tag to the repository.
The flip side is a fab deploy command which will, over SSH, do a git checkout of the specified tag, run a pip update on any new requirements, run any database migrations needed, and then reset the web server if this is a web application.
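A stripped-down fabfile along those lines might look like this with Fabric 0.9; the host, paths, migration command, and restart mechanism are all assumptions for illustration:

    # fabfile.py -- a sketch of the deploy task described above.
    from fabric.api import cd, env, run

    env.hosts = ['www.example.com']  # assumed target host

    def deploy(tag, app_dir='/srv/myapp'):
        # Check out the tag, update requirements, migrate, reset the server.
        with cd(app_dir):
            run('git fetch && git checkout %s' % tag)
            run('pip install -r requirements.txt')  # pick up new requirements
            run('python migrate.py')                # assumed migration script
            run('touch app.wsgi')                   # assumed way to reload the app

Invoked as something like: fab deploy:tag=1.2.0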
Here's an example of the tagging function: http://www.google.com/codesearch/p?hl=en#9tLIXCbI4vU/fabfile.py&q=fabfile.py%20git%20tag_new_version&sa=N&cd=1&ct=rc&l=143
There are a ton of good fabric files you can browse through using Google Code Search. I know I cribbed from a few for my own use.
It's definitely complicated, with several parts involved in getting things running smoothly. Once you get it running, though, the flexibility and speed are just awesome.
This is really not hard. You need to play mostly with buildout and supervisord IMO.
Learning buildout may take a little time, but it's worth it, given the amount of pain it saves in repeated setups.
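To give a feel for it, a minimal buildout.cfg looks something like this (the part and egg names are placeholders):

    [buildout]
    parts = myapp

    [myapp]
    recipe = zc.recipe.egg
    eggs = MyApp
    interpreter = python

The usual workflow is to run python bootstrap.py once, then bin/buildout, and the same environment gets rebuilt on every box.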
About nohup: the nohup approach does not suit serious deployments. I have had very good experience with supervisord. It is an excellent solution for running production Python applications and is very easy to set up.
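A supervisord program section for one of these long-running processes might look like this; the program name and paths are placeholders:

    [program:myapp]
    command=/srv/myapp/env/bin/python /srv/myapp/run.py
    directory=/srv/myapp
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/myapp.out.log
    stderr_logfile=/var/log/myapp.err.log

Then supervisorctl restart myapp replaces the whole nohup-and-hope routine, and crashed processes come back on their own.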
I would use rsync to synchronize outwards from your production "prime" server to the others, and from your "beta test" platform to your production "prime" server.
rsync has the benefit of copying only the files that changed, transferring only the changed parts of files, and verifying at the end that all machines have identical content. An update that gets partway through and is interrupted can easily be continued later, making your deployment more robust.
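Something along these lines, driven from Python to keep it scriptable; the hosts, paths, and exact flags are illustrative:

    # push_release.py -- a sketch of rsyncing from the prime server outward.
    import subprocess

    HOSTS = ['web1.example.com', 'web2.example.com']  # assumed production nodes
    SRC = '/srv/myapp/'
    DEST = '/srv/myapp/'

    for host in HOSTS:
        # -a preserves permissions and times, -z compresses, -c checksums
        # content, --partial lets an interrupted transfer resume later, and
        # --delete removes files that no longer exist on the prime server.
        subprocess.check_call(['rsync', '-azc', '--partial', '--delete',
                               SRC, '%s:%s' % (host, DEST)])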
Subversion or Mercurial would not be a bad idea in this case either. Mercurial has the advantage of allowing you to "pull" or "push" instead of just updating from one central source. You might find interesting cases where the decentralized model (Mercurial) works better.
Have a look at Buildout for reproducible deployments.
Another vote for fabric (haven't tried Buildout yet). We've been using it successfully for a couple of months now.
If you're having trouble with fabric, another option is Capistrano. Works great (even for non-rails apps). Only stopped using it because it feels weird to use Ruby to deploy Python apps ;)