I’ve been using this shell script to back up my WordPress site and database for a while. There is nothing WordPress-specific in it, though; the script can be used for anything that uses a database. It creates a MySQL dump of the database and an ‘ls’ listing of the site directory, compares them to the previous ones, and creates a new archive of each if something has changed. What it doesn’t do is move those files off-site for an actual backup.
Works for small sites, like this one. Dumping the whole database just to see if anything changed becomes an increasingly worse idea as the site grows. Use common sense, run it on a slave for bigger setups or something.
#!/bin/sh

# basic options. CHANGE THIS
# OPT_BASIC parameters
DB_HOST='localhost'
DB_NAME='wpdbase'
DB_USER='wpuser'
DB_PASS='wppass'
LOGFILE='wpbak.log'
BAKFILE='wpbak.sql'
WWWDIR='/usr/local/www/wordpress/*'
WWWLS='wwwdir.ls'
WWWFILE="wwwbak.tbz2"

# options passed to mysqldump
#
# OPT_OPT
# --opt selects a set of options, described below. --skip-opt disables all of
# them, then they can be enabled separately. (--skip-opt --option1 --option2)
# --opt options:
# --add-drop-table    add DROP TABLE before CREATE TABLE
# --add-locks         lock table before insert, faster insert
# --create-options    MySQL-specific CREATE TABLE
# --disable-keys      create index only after INSERT is done, faster recovery
# --extended-insert   multiple-row INSERT, faster
# --lock-tables       lock before you dump
# --quick             retrieve rows one at a time, for large tables
# --set-charset       add SET NAMES def_char_set
#
# OPT_MISC
# --comments          include comments about program/server versions and host
# --complete-insert   column names in INSERT, for readability
# --dump-date         include date at the end of the dump file. on by default
# --flush-logs        flush server log files before dump. needs RELOAD privs

# remove log file if it exists, otherwise it will be appended to
#if [ -f $LOGFILE ]; then
#    rm $LOGFILE
#fi

# do the dump
OPT_BASIC="--result-file=$BAKFILE --log-error=$LOGFILE --host=$DB_HOST --user=$DB_USER --password=$DB_PASS"
OPT_OPT="--opt"
OPT_MISC="--comments --complete-insert --skip-dump-date"
mysqldump $OPT_BASIC $OPT_OPT $OPT_MISC $DB_NAME

# check if the database has changed
if [ -f $BAKFILE.last ]; then
    SHAL=`sha256 -q $BAKFILE.last`
    SHAR=`sha256 -q $BAKFILE`
    if [ "$SHAL" != "$SHAR" ]; then
        DBCHANGED="YES"
    else
        echo `date` " - Nothing changed in the database"
        DBCHANGED="NO"
    fi
else
    DBCHANGED="YES"
fi

# check if the www directory changed
ls -alR $WWWDIR > $WWWLS
if [ -f $WWWLS.last ]; then
    SHAL=`sha256 -q $WWWLS.last`
    SHAR=`sha256 -q $WWWLS`
    if [ "$SHAL" != "$SHAR" ]; then
        WWWCHANGED="YES"
    else
        echo `date` " - Nothing changed in $WWWDIR"
        WWWCHANGED="NO"
    fi
else
    WWWCHANGED="YES"
fi

if [ $DBCHANGED = "YES" ]; then
    mv $BAKFILE $BAKFILE.last
    if [ -f $BAKFILE.bz2 ]; then
        rm $BAKFILE.bz2
    fi
    echo `date` " - backing up $BAKFILE"
    bzip2 -c $BAKFILE.last > $BAKFILE.bz2
else
    rm $BAKFILE
fi

if [ $WWWCHANGED = "YES" ]; then
    mv $WWWLS $WWWLS.last
    echo `date` " - backing up $WWWDIR"
    tar -cjf $WWWFILE $WWWDIR
else
    rm $WWWLS
fi
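A script like this would typically be run from cron. An entry along these lines would work; the directory and the script name (wpbak.sh) are assumptions, adjust them to wherever you keep the script and its state files:

```shell
# run the backup nightly at 03:30; cd first so the .last and archive
# files land next to the script (path and name are placeholders)
30 3 * * *  cd /home/wp/backup && /bin/sh wpbak.sh >> cron.log 2>&1
```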
It is very incomplete, just a skeleton. I was planning to improve it, but since it’s good enough for my needs I seem to never find the time to do it.
If anything changed since the last backup, the script sets the DBCHANGED and WWWCHANGED variables to "YES". The two if blocks at the end check those variables and act accordingly. This is where code to move the backups off-server could be added. Incremental backups could also be implemented by using diff(1) to create patch files instead of archiving the whole thing every time.
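The diff(1) idea could be sketched roughly like this; make_patch is a hypothetical helper, not part of the script above:

```shell
#!/bin/sh
# make_patch OLD NEW OUT -- write a unified diff of OLD vs NEW to OUT.
# If the files are identical, remove OUT and return non-zero.
make_patch() {
    # diff exits 0 when the files match, 1 when they differ
    if diff -u "$1" "$2" > "$3"; then
        rm "$3"     # nothing changed, no patch worth keeping
        return 1
    fi
    return 0
}

# usage with the script's file names (the timestamped patch name is
# just one possible convention):
#   make_patch wpbak.sql.last wpbak.sql wpbak.sql.`date +%Y%m%d%H%M`.patch
# to restore, start from the oldest full dump and apply the patches in
# order with patch(1):
#   patch wpbak.sql < wpbak.sql.<stamp>.patch
```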
A few ideas for moving the files to another server: using something like Dropbox or Google Drive, uploading them via FTP or similar, sending them by mail as attachments using mutt, keeping them in a VCS like svn or git, or copying them to a shared directory for a different server to pull from.
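The last idea is probably the simplest to bolt onto the end of the script. A rough sketch, where publish_backups and the destination path are assumptions of mine:

```shell
#!/bin/sh
# publish_backups DEST FILE... -- copy each FILE that exists into DEST,
# a shared directory (NFS mount, synced folder, ...) that another
# server pulls from.
publish_backups() {
    dest="$1"; shift
    mkdir -p "$dest"
    for f in "$@"; do
        if [ -f "$f" ]; then
            cp "$f" "$dest/" && echo "copied $f to $dest"
        fi
    done
}

# usage after a backup run, with the script's archive names
# (destination path is a placeholder):
#   publish_backups /var/backups/outgoing wpbak.sql.bz2 wwwbak.tbz2
```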