Monday, June 15, 2009

The Backup Policy: Databases

It's getting time to revisit my old friend, the backup policy. My boss and I reviewed it last week before he left, and I'm going to spend some time refining its implementation.

Essentially, our company, like most, operates on data. The backup policy is designed to ensure that no piece of essential data is lost or unusable, and we try to accomplish that through various backups and archives (read Michael Janke's excellent guest blog entry, "Backups Suck", for more information).

The first thing listed in our backup policy is our Oracle database. It's our primary data store, and at 350GB, a real pain in the butt to transfer around. We've got our primary Oracle instance at the primary site (duh?), and it's producing archive logs. That means that every change in the database gets recorded in log files. We then ship those logs to three machines running in "standby mode", where they are replayed to bring those copies of the database up to date.
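
I won't post our actual scripts, but the shipping step amounts to something like the sketch below: push the primary's newly archived logs out to each standby host on a schedule. The paths and hostnames are made up for illustration, and it assumes rsync over ssh works between the machines.

#!/usr/bin/env python
# Illustrative only: push newly archived logs from the primary out to the
# three standby hosts. Directory names and hostnames are placeholders.
import subprocess

ARCHIVE_DIR = "/u01/oradata/PROD/arch/"   # primary's archive log destination (assumed)
STANDBYS = [
    "oracle@standby1:/u01/oradata/PROD/arch/",   # standby at the primary site
    "oracle@standby2:/u01/oradata/PROD/arch/",   # standby at the backup site
    "oracle@standby3:/u01/oradata/PROD/arch/",   # standby on the backup server
]

for dest in STANDBYS:
    # -a preserves timestamps/permissions, -z compresses over the wire,
    # --partial keeps partially transferred files so a retry can resume.
    subprocess.check_call(["rsync", "-az", "--partial", ARCHIVE_DIR, dest])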

The first standby database is also at the primary site. This enables us to switch over to another database server in an instant if the primary machine crashes with an OS problem, a hardware problem, or some similar failure that hasn't been silently corrupting the database for a significant amount of time.
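
For what it's worth, the failover itself boils down to a handful of SQL*Plus commands. Here's a rough outline, assuming managed recovery is what's applying the logs on the standby; the exact sequence varies by Oracle version, so treat this as a sketch, not our actual runbook.

#!/usr/bin/env python
# Rough outline of activating the local standby as the new primary.
# Assumes managed recovery is currently running on the standby; version-
# specific details (and our real procedure) are omitted.
import subprocess

FAILOVER_SQL = """
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
SHUTDOWN IMMEDIATE;
STARTUP;
EXIT;
"""

# Feed the commands to SQL*Plus as sysdba on the standby host.
subprocess.run(["sqlplus", "-S", "/ as sysdba"],
               input=FAILOVER_SQL, universal_newlines=True, check=True)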

The second standby database is at the backup site. We would move to it in the event that both database machines crash at the primary site (not likely), or if the primary site is rendered unusable for some other reason (slightly more likely). Ideally, we'd have a very fast link (100Mb/s+) between the two sites; that isn't the case currently, although a link like that is planned for the future.
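
Just to put the link speed in perspective, here's some back-of-the-envelope math on how long a full copy of the 350GB database would take to push between the sites (which is what re-instantiating a standby would require).

# Back-of-the-envelope: how long does a full 350GB copy take over a WAN link?
def transfer_hours(size_gb, link_mbps, efficiency=0.8):
    """Rough transfer time in hours, assuming ~80% of the link is usable."""
    bits = size_gb * 8 * 1024 ** 3
    return bits / (link_mbps * 1e6 * efficiency) / 3600.0

print("%.1f hours at 100 Mb/s" % transfer_hours(350, 100))  # roughly 10 hours
print("%.1f hours at  10 Mb/s" % transfer_hours(350, 10))   # roughly 104 hours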

The third standby database is on the backup server. The backup server is at a third location and has a 16-tape library attached to it. In addition to lots of other data that I'll cover in later articles, the Oracle database and historic transaction logs get spooled here so that we can create archives of the database.
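
The spooling itself is nothing fancy; something along these lines would do it, sweeping each week's already-applied archive logs into a dated directory so they can be bundled with that week's image when a tape gets written. The directory names are invented for the example.

#!/usr/bin/env python
# Illustrative only: sweep already-applied archive logs on the backup
# server into a per-week spool directory. Paths are placeholders.
import datetime
import glob
import os
import shutil

APPLIED_DIR = "/backup/PROD/arch_applied"   # logs the third standby has replayed
SPOOL_ROOT = "/backup/PROD/arch_spool"      # staging area for the weekly tape

week_dir = os.path.join(SPOOL_ROOT, datetime.date.today().strftime("%Y-W%W"))
if not os.path.isdir(week_dir):
    os.makedirs(week_dir)

for log in glob.glob(os.path.join(APPLIED_DIR, "*.arc")):
    shutil.move(log, week_dir)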

These archives would be useful if we found out that several months ago, an unnoticed change went through the database, like a table getting dropped, or some kind of slight corruption that wouldn't call attention to itself. With archives, we can go back and find out how long it has been that way, or even recover data from before the table was dropped.
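
In practice that means restoring one of the archived images to a scratch machine and rolling it forward to just before the suspected change. Something like the outline below; the timestamp is an example, not a real incident, and this never touches the live database.

#!/usr/bin/env python
# Outline of point-in-time recovery against a restored copy of the
# database. The timestamp is an example, not a real incident.
import subprocess

PITR_SQL = """
STARTUP MOUNT;
SET AUTORECOVERY ON;
-- Roll forward to just before the table was dropped (or the corruption appeared).
RECOVER DATABASE UNTIL TIME '2009-03-01:12:00:00' USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
EXIT;
"""

subprocess.run(["sqlplus", "-S", "/ as sysdba"],
               input=PITR_SQL, universal_newlines=True, check=True)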

Every Sunday, the second standby database is shut down and copied to a test database. After it is copied, it's activated on the test database machine, so that our operations people can test experimental software and data on it.
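
The refresh boils down to: pause recovery on the standby, copy its files to the test box, let the standby resume, and activate the copy. A stripped-down sketch is below; it glosses over controlfile and parameter-file handling, and the hostnames and paths are placeholders.

#!/usr/bin/env python
# Stripped-down outline of the Sunday test refresh. Hostnames and paths
# are placeholders, and controlfile/spfile handling is glossed over.
import subprocess

STANDBY = "oracle@standby2"     # second standby, at the backup site
TEST = "oracle@testdb"          # test database machine
DATA_DIR = "/u01/oradata/PROD/"

def remote_sql(host, sql):
    """Run a batch of commands through SQL*Plus on a remote host."""
    subprocess.run(["ssh", host, "sqlplus -S '/ as sysdba'"],
                   input=sql, universal_newlines=True, check=True)

# 1. Stop the standby so its datafiles are consistent while we copy them.
remote_sql(STANDBY, "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;\n"
                    "SHUTDOWN IMMEDIATE;\nEXIT;\n")

# 2. Copy the datafiles over to the test machine.
subprocess.check_call(["ssh", STANDBY,
                       "rsync -a --delete %s %s:%s" % (DATA_DIR, TEST, DATA_DIR)])

# 3. Bring the standby back up and let it resume applying logs.
remote_sql(STANDBY, "STARTUP NOMOUNT;\n"
                    "ALTER DATABASE MOUNT STANDBY DATABASE;\n"
                    "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;\n"
                    "EXIT;\n")

# 4. Activate the copy on the test machine so it opens read/write.
remote_sql(TEST, "STARTUP NOMOUNT;\n"
                 "ALTER DATABASE MOUNT STANDBY DATABASE;\n"
                 "ALTER DATABASE ACTIVATE STANDBY DATABASE;\n"
                 "SHUTDOWN IMMEDIATE;\nSTARTUP;\nEXIT;\n")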

In addition, a second testing environment is going to be launched at the third site, home of the backup machine. This testing environment will be fed in a similar manner from the third standby database.

Being able to activate these backups helps to ensure that our standby databases are a viable recovery mechanism.

The policy states that every Sunday an image will be created from the standby instance. This image will be paired with the archive logs from the following week (Mon-Sat) and written to tape the next Sunday, after which a new image will be created. Two images will be kept live on disk, and another two will be kept in compressed form (it's faster to uncompress an image on disk than to read it from tape).
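
For illustration, the rotation could be driven by a script along these lines; the tape device and directory layout are assumptions, not our actual setup.

#!/usr/bin/env python
# Illustrative rotation for the weekly images: write the newest completed
# image (plus its week of logs) to tape, keep two images live on disk, and
# keep two older ones compressed. Paths and the tape device are assumptions.
import glob
import os
import subprocess

IMAGE_ROOT = "/backup/PROD/images"   # holds image_YYYYMMDD directories
TAPE_DEV = "/dev/st0"                # tape drive in the library

entries = sorted(glob.glob(os.path.join(IMAGE_ROOT, "image_*")))
live = [e for e in entries if os.path.isdir(e)]             # uncompressed images
compressed = [e for e in entries if e.endswith(".tar.gz")]

# 1. Stream the newest completed image (with its spooled logs) to tape.
#    Assumes at least one completed image directory exists.
subprocess.check_call(["tar", "-cf", TAPE_DEV, live[-1]])

# 2. Keep two live images; compress anything older than that.
while len(live) > 2:
    oldest = live.pop(0)
    subprocess.check_call(["tar", "-czf", oldest + ".tar.gz", oldest])
    subprocess.check_call(["rm", "-rf", oldest])
    compressed.append(oldest + ".tar.gz")

# 3. Keep two compressed images; drop anything older.
for old in sorted(compressed)[:-2]:
    os.remove(old)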

In the future, I'd like to build in a method to regularly restore a DB image from tape, activate it, and run queries against it, similar to the testing environments. This would extend our "known good" area from the database images to include our backup media.
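
It would probably look something like this: pull an image back off tape, activate it on a scratch instance the same way the test refresh does, and run a couple of sanity queries against it. The device, paths, and table name below are placeholders.

#!/usr/bin/env python
# Sketch of a tape verification run. Device, paths, and the table queried
# are placeholders; activating the restored image works the same way as
# the test refresh above, so that step is only noted here.
import subprocess

TAPE_DEV = "/dev/st0"
RESTORE_DIR = "/restore_test"

SANITY_SQL = """
WHENEVER SQLERROR EXIT SQL.SQLCODE;
-- Trivial checks: can we connect, and does a core table look sane?
SELECT COUNT(*) FROM some_core_table;
SELECT MAX(last_updated) FROM some_core_table;  -- should be near the image date
EXIT;
"""

# 1. Read the image back off the tape into a scratch area.
subprocess.check_call(["tar", "-xf", TAPE_DEV, "-C", RESTORE_DIR])

# 2. (Activate the restored image here, as in the test refresh.)

# 3. Run the sanity queries; the WHENEVER SQLERROR line makes SQL*Plus
#    exit non-zero on failure, which check=True turns into an exception.
subprocess.run(["sqlplus", "-S", "/ as sysdba"],
               input=SANITY_SQL, universal_newlines=True, check=True)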

So that's what I'm doing to prevent data loss from the Oracle DB. I welcome any questions, and I especially welcome any suggestions that would improve the policy. Thanks!