By Gareth Griffiths, Chief Technology Officer, BridgeHead Software

I’ve been in the backup business for a long time (no, don’t ask!). I remember 7-track tapes, and floppy disks that were 8 inches across and actually floppy. Yet the basic concept remains unchanged: healthcare organisations need safe and secure copies of their data from multiple points in time.

If data damage goes unnoticed for a while, you need to be able to find a copy from before the damage occurred. If a system is destroyed (through hardware failure, user error or malicious damage, such as malware), you need a clean copy of the data to restore. What’s more, in the case of a complete system failure, you want a backup that is as recent as possible. These needs persist, but the landscape has changed.

So, what should healthcare IT departments use as a backup target today? Below are some tips and suggestions that may help…

Basic Rules for Backups

Rule 1: Safe, secure copies

The first rule is that you need copies that are protected from malware. Imagine the backup software itself running amok (for example, because malware is impersonating it). You therefore need secure copies that cannot be deleted by the backup software – and there are many options for achieving this.
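One common approach is write-once (WORM) or ‘object lock’ storage, where even a fully compromised backup server cannot delete or overwrite a copy until its retention period expires. Here is a minimal sketch – the bucket name, key and retention period are hypothetical, and the bucket is assumed to already exist with Object Lock enabled:

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical bucket and key; the bucket is assumed to have been
# created with versioning and Object Lock enabled.
BUCKET = "hospital-backups"
KEY = "nightly/ehr-db-backup.bak"

with open("ehr-db-backup.bak", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=f,
        # COMPLIANCE mode: nobody - not an administrator, and not the
        # backup software itself - can delete this object version
        # until the retention date passes.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc)
        + timedelta(days=90),
    )
```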

Rule 2: Don’t have all your backups in one location

If all your backups are on a single device or in a single location, what happens if that device or location is destroyed? The answer: keep your backups in at least two locations. One might reasonably be on-site for ease of recovery, provided there is a second copy securely off-site; and that second copy must be sufficiently independent of the first that corruption cannot spread from one to the other.

Rule 3: Think about recovery time

In backups we usually think about three metrics (there is a small worked example after the list):

  • How old is the backup? This determines how much work is lost.
  • How long does the restore itself take?
  • How long after the restore completes can you be back in production? (You may need to redo a lot of work before you can go live again.)
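As a rough illustration of how these combine – the figures below are invented – the age of the backup bounds how much work is lost, while the restore and rework times together bound the outage:

```python
from datetime import datetime, timedelta

# Invented example figures for a failed system.
last_backup = datetime(2024, 1, 15, 2, 0)   # nightly backup at 02:00
failure = datetime(2024, 1, 15, 16, 30)     # failure mid-afternoon
restore_time = timedelta(hours=4)           # copying the data back
rework_time = timedelta(hours=6)            # redoing the lost work

work_at_risk = failure - last_backup             # metric 1
time_to_production = restore_time + rework_time  # metrics 2 + 3

print(f"Work at risk: {work_at_risk}")                   # 14:30:00
print(f"Time back to production: {time_to_production}")  # 10:00:00
```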

Once we start thinking about these basic rules, we see the need for two tiers of protection – something quick to restore, and something really safe.

Tier 1 – quick and convenient backups

To be quick, your backup has to be immediately available – this pretty much rules out physical tape. You need an online device and, ideally, you want to avoid copying large amounts of data, simply because doing so takes a long time and the amount of data you manage keeps growing. So, you should consider:

  • Snapshots within storage arrays
  • Clones of virtual machines (VMs)
  • Online deduplication devices.

Snapshots

Snapshots are great, but they do depend on the underlying storage. Of the suggested choices, snapshots are the fastest to restore but also the most vulnerable – speed of restore and safety/security are usually in opposition. By their nature, snapshots are not only dependent on the array, they also consume your primary storage, which is relatively expensive, so you often can’t afford to keep much depth. They are, however, excellent for the very short term.
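To see why a snapshot is instant to take and fast to restore, yet tied to the array, here is a toy copy-on-write model (a conceptual sketch only – real arrays implement this in firmware). The snapshot starts empty and only remembers old blocks as they are overwritten, so it shares almost everything with the live volume, and is lost with it:

```python
class CowVolume:
    """Toy copy-on-write volume: snapshots keep only overwritten blocks."""

    def __init__(self, blocks):
        self.blocks = list(blocks)  # the live data
        self.snapshots = []         # each snapshot: {index: old_block}

    def snapshot(self):
        self.snapshots.append({})   # instant: nothing is copied yet
        return len(self.snapshots) - 1

    def write(self, index, data):
        for snap in self.snapshots:
            # Preserve the original block the first time it changes.
            snap.setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        # Unchanged blocks fall through to the live volume - which is
        # why losing the array loses the snapshots too.
        return self.snapshots[snap_id].get(index, self.blocks[index])


vol = CowVolume(["a", "b", "c"])
s = vol.snapshot()               # instantaneous, zero extra space
vol.write(1, "B")                # only now is the old "b" preserved
print(vol.read_snapshot(s, 1))   # -> "b"
```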

Clones of VMs

Clones of VMware or Hyper-V virtual machines are copies that can be instantly restored – no copying required, just bring them into service. These can, however, live on second-tier storage (i.e. higher capacity and lower performance) because they are normally passive; if you have to restore one, you can bring it up and then migrate it back to tier-one storage. Ideally, clones should be on deduplicating storage.

One big advantage with virtualization is that we can make very fast backups, as we only need to capture the changed blocks – both VMware and Hyper-V can track which areas have changed, so the clone copy can be ‘updated’ with just those changes (with some depth in the clones preserved using snapshots). So, clones can give fast backups and instant restores while using lower-tier storage. Potentially, the clones can even be on a deduplication appliance, if that appliance supports datastores or SMB3 shares.
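Conceptually, the changed-block mechanism works like this simplified sketch (the real facilities are VMware’s Changed Block Tracking and Hyper-V’s Resilient Change Tracking, reached through their respective APIs; the data structures here are invented for illustration):

```python
def update_clone(clone, live_disk, changed_blocks):
    """Refresh a passive clone by copying only the blocks the
    hypervisor reports as changed since the last backup."""
    for index in changed_blocks:
        clone[index] = live_disk[index]


# Invented example: a six-block virtual disk.
live = ["a", "b", "C", "d", "E", "f"]   # blocks 2 and 4 have changed
clone = ["a", "b", "c", "d", "e", "f"]  # yesterday's clone

# The hypervisor's change tracking tells us which blocks moved,
# so we copy 2 blocks instead of all 6.
update_clone(clone, live, changed_blocks=[2, 4])
assert clone == live
```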

Online deduplication backup appliances

Here we combine an appropriate tier of storage with deduplication technology, so that multiple generations of backup do not take up much capacity. With VMs we can snapshot the clones to store only the changes; with traditional physical systems a deduplication appliance achieves the same net effect, because the appliance stores only the unique data of each backup. If the appliance supports it, this can also be a target for VMware or Hyper-V clones as well as for backups.
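The core idea is content-addressed chunk storage: each backup is split into chunks, every unique chunk is stored once under its hash, and a new generation adds only the chunks that have never been seen before. A minimal sketch (fixed-size chunks for simplicity; real appliances use variable-size chunking and far more engineering):

```python
import hashlib

CHUNK_SIZE = 4096  # real appliances use variable-size chunking


class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once."""

    def __init__(self):
        self.chunks = {}   # chunk hash -> chunk bytes
        self.backups = {}  # backup name -> list of chunk hashes

    def backup(self, name, data):
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store once only
            recipe.append(digest)
        self.backups[name] = recipe

    def restore(self, name):
        return b"".join(self.chunks[d] for d in self.backups[name])


store = DedupStore()
monday = b"patient records " * 1000
tuesday = monday + b"one new admission"  # almost identical data

store.backup("mon", monday)
store.backup("tue", tuesday)

# Two full generations, but barely more than one generation of chunks.
print(len(store.chunks), "unique chunks held for 2 backups")  # -> 5
assert store.restore("tue") == tuesday
```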

Personally, I like this option (combined with storage array snapshots, if those are available) because it is separate from the array, doesn’t use the most expensive tier-one storage and, subject to the device chosen, can perform source-side deduplication, thereby reducing network traffic. I prefer the device to be accessed via an API rather than a share, for better security; and I like it even more if the appliance has some form of immutability feature to protect itself from a corrupted backup server.

Tier 2 – secure and off-site copies

Here we are most interested in safety and security. These copies should be made using the tier-1 backups as their source, so there is little or no effect on production. Good old tape in an off-site store is still an option, but tapes are vulnerable in transit (they can get lost) and are slow and labour-intensive to use. Is there a more modern alternative?

Fortunately, there are indeed new alternatives, and cloud storage is the most promising. It is much less vulnerable than tape in transit, and the major cloud providers invest heavily to make sure their stores are safe and secure – but you still need to take care of your own side of security: encrypt the stored data, protect those encryption keys, and choose an appropriate level of replication/resilience. Cloud storage, not surprisingly, requires networks. These need to be secure (data encrypted in flight) and fast enough to handle the volume, because backups are large. To make this work well, the backups should be deduplicated before they cross the network; if not much has changed, this can reduce a full backup to just a few percent of its original size.
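Putting the pieces together – deduplicate first, then send only new chunks over an encrypted connection – might look something like this sketch (the bucket name is hypothetical; boto3 talks to S3 over HTTPS, covering encryption in flight, and server-side encryption is requested per object):

```python
import hashlib

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "hospital-backup-vault"  # hypothetical bucket name


def upload_deduplicated(data, chunk_size=4 * 1024 * 1024):
    """Split a backup into chunks and upload only the chunks the
    cloud store has not already seen; returns the restore recipe."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = "chunks/" + hashlib.sha256(chunk).hexdigest()
        try:
            s3.head_object(Bucket=BUCKET, Key=key)  # already stored?
        except ClientError:
            # New chunk: upload it, encrypted at rest with KMS.
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=chunk,
                ServerSideEncryption="aws:kms",
            )
        recipe.append(key)
    return recipe
```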

Another potential alternative is replication from your on-site deduplication appliance. Most appliances can replicate deduplicated data to another appliance in a physically separate, secure location. This is a little less resilient than using a cloud secondary store, because the data all sits in one device at the remote site (what do you do if that appliance totally breaks?), but it is all under your control. If you have two datacentres, this can be a reasonable alternative: data in each site is backed up locally and the backups are replicated to the other site.

BridgeHead is now offering Cloud-Backup-as-a-Service (CBaaS) using what, in our opinion, are the best-of-breed options for these technologies. If you would like more information on CBaaS, please contact us.