I'm trying to create a cron job to back up my database every night before something catastrophic happens. It looks like this command should meet my needs:
0
The problem is that pg_dump prompts for a password, which I can't type in from cron.
Correct me if I'm wrong, but if the operating-system user has the same name as the database user, PostgreSQL won't ask for a password: it relies on the operating system for authentication (peer authentication). Whether this applies is a matter of configuration in pg_hba.conf.
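For reference, peer authentication for local connections is enabled by a pg_hba.conf line roughly like the one below on many Linux distributions (the exact file location and defaults vary by distribution):
# pg_hba.conf: local Unix-socket connections by an OS user named
# "postgres" may connect as database user "postgres" without a password
local   all   postgres   peer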
Thus, when I wanted the database owner postgres to back up its databases every night, I could create a crontab for it: crontab -e -u postgres. Of course, postgres would need to be allowed to execute cron jobs; it must either be listed in /etc/cron.allow, or it must not be listed in /etc/cron.deny.
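For example, a minimal nightly entry in the postgres user's crontab might look like the following (the database name, path, and schedule are placeholders, and note that % must be escaped as \% inside a crontab):
# Edited via: crontab -e -u postgres
# Dump "mydb" every night at 03:00, relying on peer authentication
0 3 * * * pg_dump mydb > /var/lib/postgresql/backups/mydb_$(date +\%Y-\%m-\%d).sql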
This one-liner helps me when creating a dump of a single database:
PGPASSWORD="yourpassword" pg_dump -U postgres -h localhost mydb > mydb.pgsql
Back up over SSH with a password, using temporary .pgpass credentials, and push to S3:
#!/usr/bin/env bash
cd "$(dirname "$0")"
DB_HOST="*******.*********.us-west-2.rds.amazonaws.com"
DB_USER="*******"
SSH_HOST="my_user@host.my_domain.com"
BUCKET_PATH="bucket_name/backup"
if [ $# -ne 2 ]; then
    echo "Error: 2 arguments required"
    echo "Usage:"
    echo "  my-backup-script.sh <DB-name> <password>"
    echo "  <DB-name>  = The name of the DB to backup"
    echo "  <password> = The DB password, which is also used for GPG encryption of the backup file"
    echo "Example:"
    echo "  my-backup-script.sh my_db my_password"
    exit 1
fi
DATABASE=$1
PASSWORD=$2
echo "set remote PG password .."
echo "$DB_HOST:5432:$DATABASE:$DB_USER:$PASSWORD" | ssh "$SSH_HOST" "cat > ~/.pgpass; chmod 0600 ~/.pgpass"
echo "backup over SSH and gzip the backup .."
ssh "$SSH_HOST" "pg_dump -U $DB_USER -h $DB_HOST -C --column-inserts $DATABASE" | gzip > ./tmp.gz
echo "unset remote PG password .."
echo "*********" | ssh "$SSH_HOST" "cat > ~/.pgpass"
echo "encrypt the backup .."
gpg --batch --passphrase "$PASSWORD" --cipher-algo AES256 --compression-algo BZIP2 -co "$DATABASE.sql.gz.gpg" ./tmp.gz
# Backing up to AWS requires your AWS credentials to be set locally;
# EC2 instances can instead use instance permissions to push files to S3
DATETIME=$(date "+%Y%m%d-%H%M%S")
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/"$DATETIME".sql.gz.gpg
# S3 is cheap, so don't worry about a little temporary duplication here
# "latest" is always good to have because it makes it easier for dev-ops to use
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/latest.sql.gz.gpg
echo "local clean-up .."
rm ./tmp.gz
rm "$DATABASE.sql.gz.gpg"
echo "-----------------------"
echo "To decrypt and extract:"
echo "-----------------------"
echo "gpg -d ./$DATABASE.sql.gz.gpg | gunzip > tmp.sql"
echo
Just substitute the first few config lines with whatever you need. If you're not interested in the S3 part, take it out.
This script deletes the credentials in .pgpass afterward because in some environments the default SSH user can sudo without a password (for example, an EC2 instance with the ubuntu user), so using .pgpass under a different host account in order to secure those credentials might be pointless.
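To tie this back to the original cron question, a hypothetical crontab entry could run the script nightly (the path, schedule, and log file are placeholders; note that the password then sits in plain text in the crontab):
0 2 * * * /home/ubuntu/my-backup-script.sh my_db my_password >> /var/log/db-backup.log 2>&1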
You can pass a password to pg_dump directly by using a connection string:
pg_dump "host=localhost port=5432 dbname=mydb user=myuser password=mypass" > mydb_export.sql
Or set PGPASSWORD inline for a single invocation (-F c produces a custom-format archive, -b includes large objects, -v is verbose):
PGPASSWORD="mypass" pg_dump -h localhost -p 5432 -U username -F c -b -v -f dumpfilename.dump databasename
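A custom-format archive is restored with pg_restore rather than psql; a minimal sketch using the same hypothetical names (the target database must already exist):
# -d names the existing database to restore into
PGPASSWORD="mypass" pg_restore -h localhost -p 5432 -U username -d databasename -v dumpfilename.dump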
A more secure way of passing the password is to store it in a .pgpass file. The content of the .pgpass file uses this format:
db_host:db_port:db_name:db_user:db_pass
# e.g.
localhost:5432:db1:admin:tiger
localhost:5432:db2:admin:tiger
Now, store this file in the home directory of the user, with permissions u=rw (0600) or less.
To find the home directory of the user, use:
echo $HOME
Restrict the permissions of the file:
chmod 0600 /home/ubuntu/.pgpass
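With the file in place, libpq reads the password from ~/.pgpass automatically, so pg_dump runs without prompting. A minimal sketch using the example entries above:
# No password prompt: the matching ~/.pgpass line supplies it
pg_dump -h localhost -p 5432 -U admin db1 > db1.sql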