The goal of this project is to be a really simple backup solution: drop a binary on your database server, set up a cronjob to run at regular intervals, and it will take care of performing full and incremental backups and uploading them to a cloud bucket.
This project also aims to simplify restoring backups. The goal is for restores to be hassle-free and require little mental overhead in the greatest time of need.
Currently this project performs backups of MySQL 8.0 and stores them in a DigitalOcean Space. The software that actually generates the backup file is the amazing Percona Xtrabackup.
The latest release can be found on the releases page.
ssh your-server
wget https://github.com/feederco/really-simple-db-backup/releases/download/${VERSION}/really-simple-db-backup_${VERSION}_${PLATFORM}_${ARCH}.tar.gz -O really-simple-db-backup.tar.gz
tar xvf really-simple-db-backup.tar.gz
sudo mv really-simple-db-backup /usr/bin/really-simple-db-backup
git clone [email protected]:feederco/really-simple-db-backup.git
cd really-simple-db-backup
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o build/really-simple-db-backup main.go
scp build/really-simple-db-backup my-db-host:/usr/bin/really-simple-db-backup
crontab -e
And add the following:
Set to run at 05:00 every day:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 5 * * * /usr/bin/really-simple-db-backup perform >> /var/log/really-simple-db-backup.log 2>&1
Or, to run every hour (at minute 1):
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
1 * * * * /usr/bin/really-simple-db-backup perform >> /var/log/really-simple-db-backup.log 2>&1
really-simple-db-backup THE_COMMAND
perform
perform-full
perform-incremental
restore
upload
download
finalize-restore
test-alert
list-backups
prune
Performs either a full or incremental backup by checking for previous runs.
really-simple-db-backup perform
/etc/really-simple-db-backup.json
{
"digitalocean": {
"key": "digitalocean-app-token",
"space_endpoint": "fra1.digitaloceanspaces.com",
"space_name": "my-backups",
"space_key": "auth-key-for-space",
"space_secret": "auth-secret-for-space"
},
"mysql": {
"data_path": "(optional)"
},
"persistent_storage": "(optional)",
"alerting": {
"slack": {
"webhook_url": "https://hooks.slack.com/services/<your-webhook-url>"
}
},
"retention": {
"automatically_remove_old": true,
"retention_in_days": 7,
"hours_between_full_backups": 24
}
}
really-simple-db-backup perform \
-do-key=digitalocean-app-token \
-do-space-endpoint=fra1.digitaloceanspaces.com \
-do-space-name=my-backups \
-do-space-key=auth-key-for-space \
-do-space-secret=auth-secret-for-space
To force a full backup you can use the perform-full command.
really-simple-db-backup perform-full
To force an incremental backup you can use the perform-incremental command.
really-simple-db-backup perform-incremental
If for some reason a backup failed and you were able to retrieve a backup file yourself, you can use the upload command to upload it to your DigitalOcean Space.
really-simple-db-backup upload -file /path/to/backup.xbstream
restore will download a backup to a new volume, extract it, decompress it, and run Xtrabackup's prepare command. When these steps are complete the backup is ready to be moved to the MySQL datadir, after which MySQL is ready to start again.
If you just want to download and prepare a backup (without finalizing the restore), you can use the download command.
really-simple-db-backup download -hostname my-other-host
If you have run the download command and have a fully prepared backup that you now wish to use, you can run the finalize-restore command, which runs the second half of the steps performed by the restore command.
really-simple-db-backup finalize-restore -existing-restore-directory=/mnt/my_restore_volume/really-simple-db-restore
Note: If the config option retention.automatically_remove_old is set to true, an automatic prune is run on each full backup. Backup lineages older than retention.retention_in_days (or retention.retention_in_hours) are removed.
To force a prune, the prune command can be run:
really-simple-db-backup prune
To prune on another host, the -hostname flag can be passed in:
really-simple-db-backup prune -hostname other-host
To make sure the Slack integration is set up correctly, you can use the test-alert command to run the same code path that is executed on a critical error.
really-simple-db-backup test-alert
To list all backups for the current host, run the list-backups command. This is also a good way to test that your cloud storage access tokens are correct.
really-simple-db-backup list-backups
You can also run this on another host to check the backups of a specific host:
really-simple-db-backup list-backups -hostname my-other-host
To see if there are any backups since a certain timestamp, simply pass in the timestamp (as formatted in the backup filenames: YYYYMMDDHHII):
really-simple-db-backup list-backups -timestamp 201901050000
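If you generate that timestamp in a script, something like the following works (a sketch; it assumes GNU date, which provides the -d option):

```shell
# Build a timestamp in the backup filename format (YYYYMMDDHHII,
# i.e. year month day hour minute) for 24 hours ago.
SINCE=$(date -d '24 hours ago' +%Y%m%d%H%M)
echo "$SINCE"
# Then:
#   really-simple-db-backup list-backups -timestamp "$SINCE"
```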
By default the script checks for a config file at /etc/really-simple-db-backup.json. If found, defaults are loaded from that file and can be overridden by command-line options. If the config file is located somewhere else, you can pass its path with the -config option.
really-simple-db-backup perform -config ./my-other-config.json
The following format is expected:
{
"digitalocean": {
"key": "digitalocean-app-token",
"space_endpoint": "fra1.digitaloceanspaces.com",
"space_name": "my-backups",
"space_key": "auth-key-for-space",
"space_secret": "auth-secret-for-space"
},
"mysql": {
"data_path": "(optional)"
},
"persistent_storage": "(optional)",
"alerting": {
"slack": {
"webhook_url": "https://hooks.slack.com/services/<your-webhook-url>"
}
},
"retention": {
"automatically_remove_old": true,
"retention_in_days": 7,
"hours_between_full_backups": 24
}
}
If the retention option is left empty (or null), no pruning is done.
Set this to true for pruning to run on each full backup. If it is set to false (or not set at all), you need to run really-simple-db-backup prune manually to remove old backups.
Set to the number of days a backup is kept before being considered for removal.
If you want more fine-grained control over how long backups are kept, use the retention_in_hours option instead. If either value is set to 0 (or not included in the config JSON), the other value is used.
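For example, to prune automatically with hour-level granularity, the retention block could look like this (the values are illustrative):

```json
{
  "retention": {
    "automatically_remove_old": true,
    "retention_in_hours": 36,
    "hours_between_full_backups": 24
  }
}
```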
Set to the number of hours between full backups. Note: this does not perform the actual scheduling of the command; you need to do that separately with a cronjob or similar (see the cron setup section above).
The default data directory for MySQL is normally /var/lib/mysql. If you have mounted a volume for your data and set a different datadir, pass in the option -mysql-data-path=/mnt/my_mysql_volume/mysql or set the "mysql.data_path" config property in the JSON config.
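For illustration, a config fragment setting a custom data directory might look like this (the path is an example):

```json
{
  "mysql": {
    "data_path": "/mnt/my_mysql_volume/mysql"
  }
}
```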
To save state between runs, a persistent storage directory is created that stores information about the last backup. By default this is /var/lib/backup-mysql. To change it, pass the flag -persistent-storage=/my/alternate/directory or set the "persistent_storage" config property in the JSON config.
Below is a short run-through of what this script does.
The process and code are based on the excellent guide from the DigitalOcean docs: How To Back Up MySQL Databases to Object Storage with Percona
When running, the following happens:
- Check that MySQL is installed with the expected version (8.0)
- Check that all necessary software is installed (Percona Xtrabackup 8.0)
- Decide whether a full or an incremental backup is needed by checking for previous runs of this software
- A DigitalOcean Block Storage volume is created and mounted. The volume size depends on the MySQL data directory
- Percona Xtrabackup is run and a compressed backup file is created onto the volume
- The backup file is uploaded to a DigitalOcean Space for safe storage
- Fetch all backups for the given host on the DigitalOcean Space
- Find the one that best matches the passed-in timestamp
- A DigitalOcean Block Storage volume is created and mounted. The volume size depends on the size of the backup files found in the Space
- Download & extract all pieces for the backup to this volume
- Decompress the backup
- Run Xtrabackup's prepare command, which prepares the backup for use
- Move all files back to the MySQL data path
Starting MySQL is up to you when the process is finished.
Backup failures should not happen silently. Therefore, alerting to Slack is built into this project.
You need to create a Custom Integration in your Slack workspace with the type Incoming WebHook. We recommend creating a separate channel for must-action messages.
The slack
entry in the config file can have the following options:
{
"webhook_url": "webhook URL",
"channel": "Override default channel to share to",
"username": "Override username (default: BackupsBot)",
"icon_emoji": "Override avatar of bot (default: :card_file_box:)"
}
Panic mode. Two minutes ago you accidentally ran rm -rf /var/lib/mysql on the production database. For some reason you decide to restore manually instead of using really-simple-db-backup restore.
Now, you are reading this guide. What do I need to do?
Here are the steps:
- Boot up a new server. Note: decompressing takes up a lot of extra disk space, so add extra margin to the server.
- SSH into the server and open a new screen session:
screen
- Install the right version of MySQL (the same version the backup was made on). This can be super tricky. This is what I did for Ubuntu 18:
Choose the right version here:
https://downloads.mysql.com/archives/community/
Use wget to download each file matching the right version for your platform, then run dpkg -i on each in the correct order:
dpkg -i mysql-common_*
dpkg -i mysql-community-client-core_*
dpkg -i mysql-community-client_*
dpkg -i mysql-client_*
dpkg -i mysql-community-server-core_*
dpkg -i mysql-server_*
- Stop mysql
service mysql stop
- Install percona-xtrabackup:
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
apt update
percona-release enable-only tools release
apt update
apt install percona-xtrabackup-80 qpress
- Download your backup
wget -O backup.xbstream https://secure-link-to-backup/backup.xbstream
- Extract and decompress the backup into a temporary directory.
mkdir backup
xbstream -x -C backup/ < backup.xbstream
for bf in $(find backup/ -iname "*.qp"); do qpress -d "$bf" "$(dirname "$bf")" && rm "$bf"; done
- Prepare any incremental backups.
HELP NEEDED What are the correct steps here?
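A sketch of the usual Xtrabackup incremental prepare sequence, assuming the full backup was extracted to backup/ and each incremental to its own directory (inc1/ and inc2/ are illustrative names; verify against the Percona Xtrabackup documentation for your version):

```shell
# Prepare the base backup; --apply-log-only skips the rollback
# phase so incrementals can still be applied on top
xtrabackup --prepare --apply-log-only --target-dir=backup/

# Apply each incremental in order, keeping --apply-log-only for
# all but the last one
xtrabackup --prepare --apply-log-only --target-dir=backup/ --incremental-dir=inc1/

# Apply the last incremental WITHOUT --apply-log-only so the
# final prepare rolls back uncommitted transactions
xtrabackup --prepare --target-dir=backup/ --incremental-dir=inc2/
```

If you only have a full backup (no incrementals), a single `xtrabackup --prepare --target-dir=backup/` is enough.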
- Move the backup to the right place
mv /var/lib/mysql /var/lib/mysql-old; # just to be safe :)
mv backup /var/lib/mysql
chown mysql:mysql -R /var/lib/mysql
- Start mysql
service mysql start
We use goreleaser for release management. Install goreleaser as described at goreleaser.com/install.
To release a new version you need a personal access token with the repo scope, created at github.com/settings/tokens/new. Remember that token.
To create a new release, tag a version and run goreleaser:
git tag 1.0.0
git push --tags
GITHUB_TOKEN=yourtoken goreleaser
All issue management is on our GitHub issues. Check the issues tagged help wanted for good tickets to work on.