I'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deployments simple.
If you're just setting up a development environment, use a pip freeze requirements file, because that keeps the git repo clean.
Then, for production deployment, check in the whole venv folder. That will make your deployment more reproducible, remove the need for those libxxx-dev packages, and avoid internet-access issues.
So there are two repos: one for your main source code, which includes a requirements.txt, and an env repo, which contains the whole venv folder.
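A rough sketch of a deployment using those two repos (every name here, such as myapp, myapp-env and the venv folder, is hypothetical; note also that a checked-in virtualenv generally only works if it is restored to the same path it was built at):

    # Fetch the application source (with requirements.txt) and the env repo
    git clone https://example.com/myapp.git
    git clone https://example.com/myapp-env.git

    # Use the pre-built environment straight from the env repo;
    # nothing needs to be downloaded or compiled at deploy time
    . myapp-env/venv/bin/activate
    python myapp/manage.py runserver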
I use pip freeze to get the packages I need into a requirements.txt file and add that to my repository. I tried to think of a reason why you would want to store the entire virtualenv, but I could not.
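The round trip is just two commands (a minimal sketch, run inside an activated virtualenv):

    # Record exactly what is installed in the current environment...
    pip freeze > requirements.txt

    # ...and recreate it anywhere else from that file
    pip install -r requirements.txt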
I think the best approach is to install the virtual environment in a path inside the repository folder; it may even be better to use a subdirectory dedicated to the environment (I accidentally deleted my entire project when force-installing a virtual environment in the repository root folder; luckily I had the latest version of the project saved on GitHub).
Either the automated installer or the documentation should indicate the virtualenv path as a relative path; this way you won't run into problems when sharing the project with other people. As for the packages, the ones used should be saved with pip freeze > requirements.txt.
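One way to set that up (a sketch; the subdirectory name .venv is just an illustration, not prescribed by the answer):

    # Keep the environment in its own subdirectory inside the repo,
    # never in the repository root
    virtualenv .venv

    # Refer to it only by relative path so any checkout location works
    . ./.venv/bin/activate
    pip install -r requirements.txt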
I used to do the same until I started using libraries that are compiled differently depending on the environment, such as PyCrypto. My PyCrypto build for the Mac wouldn't work on Cygwin, and the Cygwin build wouldn't work on Ubuntu.
It becomes an utter nightmare to manage the repository.
Either way, I found it easier to manage pip freeze and a requirements file than to have it all in git. It's cleaner too, since you avoid the commit spam from thousands of files as those libraries get updated...
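If you go that route, the environment typically stays out of version control entirely; for example (the directory name is an assumption):

    # Only requirements.txt is committed; the environment itself is ignored
    echo ".venv/" >> .gitignore
    git add .gitignore requirements.txt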
I use what is basically David Sickmiller's answer with a little more automation. I create a (non-executable) file at the top level of my project named activate with the following contents:
    # This relies on $BASH_SOURCE, so it must be sourced (.) from Bash.
    [ -n "$BASH_SOURCE" ] \
        || { echo 1>&2 "source (.) this with Bash."; exit 2; }
    (
        # Work from the directory containing this file, in a subshell so the
        # caller's working directory is left untouched.
        cd "$(dirname "$BASH_SOURCE")"
        # On first use, build the virtualenv and install the requirements.
        [ -d .build/virtualenv ] || {
            virtualenv .build/virtualenv
            . .build/virtualenv/bin/activate
            pip install -r requirements.txt
        }
    )
    # Activate the (now existing) virtualenv in the caller's shell.
    . "$(dirname "$BASH_SOURCE")/.build/virtualenv/bin/activate"
(As per David's answer, this assumes you're doing a pip freeze > requirements.txt to keep your list of requirements up to date.)
The above gives the general idea; the actual activate script (documentation) that I normally use is a bit more sophisticated, offering a -q (quiet) option, using python when python3 isn't available, etc.
This can then be sourced from any current working directory and will properly activate, first setting up the virtual environment if necessary. My top-level test script usually has code along these lines so that it can be run without the developer having to activate first:
    cd "$(dirname "$0")"
    # Source ./activate unless $VIRTUAL_ENV already points at this directory
    [[ $VIRTUAL_ENV = $(pwd -P) ]] || . ./activate
Sourcing ./activate, not activate, is important here because the latter will find any other activate in your path before it will find the one in the current directory.
If you know which operating systems your application will be running on, I would create one virtualenv for each system and include it in my repository. Then I would make my application detect which system it is running on and use the corresponding virtualenv.
The system could e.g. be identified using the platform module.
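A rough sketch of how that selection could look (illustrative only; the envs/<system> layout and the wrapper snippet are my assumptions, not part of the answer):

    # Ask Python's platform module which system this is
    # (e.g. "Linux", "Darwin", "Windows")
    system=$(python -c 'import platform; print(platform.system())')

    # Activate the virtualenv that was checked in for this system
    . "envs/$system/bin/activate"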
In fact, this is what I do with an in-house application I have written, and I can quickly add a new system's virtualenv to it if one is needed. This way, I do not have to rely on pip being able to successfully download the software my application requires. I also do not have to worry about compiling e.g. psycopg2, which I use.
If you do not know which operating system your application may run on, you are probably better off using pip freeze as suggested in other answers here.