I tend to use SQLite when doing Django development, but on a live server something more robust is often needed (MySQL/PostgreSQL, for example). Invariably, there are other settings that need to change between development and production as well.
At the end of settings.py I have the following:
try:
    from settings_local import *
except ImportError:
    pass
This way, if I want to override default settings, I just need to put a settings_local.py file right next to settings.py.
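For instance, a settings_local.py in that layout might override just the database for the live server; the backend, name, and credentials below are placeholders rather than anything from the original setup:
# settings_local.py -- hypothetical host-specific overrides
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # placeholder backend
        'NAME': 'myproject',                                  # placeholder database name
        'USER': 'myproject',
        'PASSWORD': 'change-me',
        'HOST': 'localhost',
        'PORT': '',
    }
}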
I have my local_settings.py file in an external directory. That way, it doesn't get checked into source control or overwritten by a deploy. I put this in the settings.py file under my Django project, along with any default settings:
import sys
import os.path

def _load_settings(path):
    print "Loading configuration from %s" % (path)
    if os.path.exists(path):
        settings = {}
        # execfile can't modify globals directly, so we will load them manually
        execfile(path, globals(), settings)
        for setting in settings:
            globals()[setting] = settings[setting]

_load_settings("/usr/local/conf/local_settings.py")
Note: This is very dangerous if you can't trust local_settings.py.
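The snippet above is Python 2 (print statement and execfile). A rough Python 3 equivalent, assuming the same path and behaviour, might look like this; the same caveat about trusting the file applies:
import os.path

def _load_settings(path):
    print("Loading configuration from %s" % path)
    if os.path.exists(path):
        overrides = {}
        # exec() with a separate locals dict stands in for execfile here
        with open(path) as config_file:
            exec(compile(config_file.read(), path, "exec"), globals(), overrides)
        for name, value in overrides.items():
            globals()[name] = value

_load_settings("/usr/local/conf/local_settings.py")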
Update: django-configurations has been released, which is probably a better option for most people than doing it manually.
If you would prefer to do things manually, my earlier answer still applies:
I have multiple settings files:

settings_local.py - host-specific configuration, such as database name, file paths, etc.
settings_development.py - configuration used for development, e.g. DEBUG = True.
settings_production.py - configuration used for production, e.g. SERVER_EMAIL.

I tie these all together with a settings.py file that firstly imports settings_local.py, and then one of the other two. It decides which to load by two settings inside settings_local.py - DEVELOPMENT_HOSTS and PRODUCTION_HOSTS. settings.py calls platform.node() to find the hostname of the machine it is running on, then looks for that hostname in the lists, and loads the second settings file depending on which list the hostname appears in.

That way, the only thing you really need to worry about is keeping the settings_local.py file up to date with the host-specific configuration, and everything else is handled automatically.
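A rough sketch of such a settings.py, assuming the file names and the DEVELOPMENT_HOSTS / PRODUCTION_HOSTS lists described above:
# settings.py -- sketch of the hostname-based switching described above
import platform

from settings_local import *

_hostname = platform.node()

if _hostname in DEVELOPMENT_HOSTS:
    from settings_development import *
elif _hostname in PRODUCTION_HOSTS:
    from settings_production import *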
Well, I use this configuration:
At the end of settings.py:
# settings.py
try:
    from locale_settings import *
except ImportError:
    pass
And in locale_settings.py:
# locale_settings.py

# Proxy object that reads back the values already defined in settings.py
# at the point where locale_settings is imported.
class Settings(object):
    def __init__(self):
        import settings
        self.settings = settings

    def __getattr__(self, name):
        return getattr(self.settings, name)

settings = Settings()

INSTALLED_APPS = settings.INSTALLED_APPS + (
    'gunicorn',
)

# Deleting the helpers may not be needed, but I prefer to do it.
del settings
del Settings
The simplest way I found was:
1) use the default settings.py for local development, and
2) create a production-settings.py starting with:
import os
from settings import *
And then just override the settings that differ in production:
DEBUG = False
TEMPLATE_DEBUG = DEBUG
DATABASES = {
    'default': {
        ....
    }
}
I have two files: settings_base.py, which contains common/default settings and is checked into source control. Each deployment has a separate settings.py, which executes from settings_base import * at the beginning and then overrides as needed.
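As a sketch of that layout, a deployment's settings.py (kept out of source control) might look like this, with placeholder values for the overrides:
# settings.py for one deployment -- not checked into source control
from settings_base import *

# Hypothetical per-deployment overrides
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
TIME_ZONE = 'Europe/London'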