The evolution of backup and disaster recovery

In this interview, Amanda Strassle, IT Senior Director of Data Center Service Delivery at Seagate Technology, talks about enterprise backup issues, illustrates how the cloud is shaping an IT department’s approach to backup and disaster recovery, and much more.

The amount of data organizations deal with has increased massively in the last few years. How has this affected the way they handle backups?
You’re correct. The amount of data we are dealing with today is significantly greater. In fact, our Business Data Centers are managing over 30% more data than they were one year ago. This has forced enterprise data centers to operate smarter and employ new technologies to curb costs. A couple of the new technologies we now utilize are data de-duplication and OpenStack Swift.

With the de-duplication solution, the resulting footprint of our backup process is significantly smaller than it would be without this technology. However, with the data growth we’ve been experiencing, optimizing backups alone is not enough to keep pace. And, just like other data centers out there in the industry, we do not have the luxury of always throwing more money at ongoing operations. So cost-effective storage solutions like OpenStack Swift allow us to keep costs at a digestible level while also keeping data online and more readily available.
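To make the de-duplication idea concrete, here is a minimal, hypothetical sketch (not Seagate’s actual tooling) of fixed-size-block de-duplication: each file is split into chunks, each chunk is identified by its SHA-256 digest, and a chunk that has already been seen is stored only once. In a real deployment the unique chunks and per-file manifests could live in an object store such as OpenStack Swift.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB fixed-size chunks; commercial products often use variable-size chunking


def dedup_store(paths):
    """Index each file as an ordered list of chunk hashes; keep each unique chunk only once."""
    chunk_store = {}   # sha256 hex digest -> chunk bytes (stand-in for an object store)
    manifests = {}     # file path -> ordered list of chunk digests
    raw_bytes = 0

    for path in paths:
        digests = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                raw_bytes += len(chunk)
                digest = hashlib.sha256(chunk).hexdigest()
                chunk_store.setdefault(digest, chunk)  # a repeated chunk costs no extra space
                digests.append(digest)
        manifests[path] = digests

    stored_bytes = sum(len(c) for c in chunk_store.values())
    print(f"raw: {raw_bytes} bytes, after de-duplication: {stored_bytes} bytes")
    return manifests, chunk_store
```

The ratio of raw bytes to stored bytes is what drives the smaller backup footprint described above; how large that ratio is depends entirely on how repetitive the data being backed up is.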

In an always-online world, what would you say is an appropriate recovery time?
The obvious answer is that less is best and always on is even better. It has been our experience that our most critical services (those application services that drive revenue or differentiate our business from our competitors) should operate at 100% availability. We utilize extremely high redundancy so failures do not impact the application service and no recovery is actually needed.

Unfortunately, a highly redundant architecture typically cannot be implemented for all of your application services due to the cost. So for these less critical services it is a matter of weighing the solution cost against the recovery time. Typically, the closer you drive to zero downtime, the more expensive the solution will be. Understand what is acceptable. Have the conversation with the consumer of the application service. Determine their recovery time expectations and what you both are willing to pay in ongoing costs. It is a delicate balance. Find the middle ground that everyone can work within.
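As an illustration of that balancing act, the toy calculation below compares the yearly solution cost of three hypothetical recovery tiers against the expected cost of downtime agreed with the service consumer. All figures are invented purely for the example.

```python
# Hypothetical recovery tiers for a non-critical service:
# (label, recovery time in hours, yearly solution cost, expected outages per year)
tiers = [
    ("active-active",        0.0, 250_000, 1),
    ("warm standby",         4.0,  90_000, 1),
    ("restore from backup", 24.0,  30_000, 1),
]
cost_per_downtime_hour = 5_000  # figure agreed with the consumer of the service

for label, rto, solution_cost, outages in tiers:
    downtime_cost = rto * outages * cost_per_downtime_hour
    print(f"{label:22s} total yearly exposure: ${solution_cost + downtime_cost:,.0f}")
```

Run with different assumptions, this kind of comparison makes the middle ground explicit: the point where paying more for a faster recovery no longer reduces the total exposure.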

What advice would you give to an organization that wants to strengthen its basic backup plan?
Most organizations have not zoomed out from their day-to-day backup operations in a long time. I suggest they first step out of the weeds. Validate why backups are being done. For example, are you archiving data for long-term retrieval in addition to supporting short-term recovery? Verify how often you are backing up and what is being backed up. Also determine the various backup and recovery methods in use.

Gathering answers to these questions will lead an organization to complete some very valuable housekeeping and also determine which backup and recovery solutions are optimal. In all honesty, your backup plan is not the most critical aspect; your recovery plan is, and it must not be ignored. So I suggest running regular recovery exercises that test every backup solution you have in place. Define failure scenarios of varying degrees of impact (from an isolated hardware failure to a complete site disaster) and then execute your recovery process. This validates that your backup and recovery process is sound and that you can meet recovery time expectations.
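One way to make such an exercise repeatable is to script the verification step. The sketch below is a generic, hypothetical example, not a Seagate procedure: it restores a tar archive into a scratch directory and checks every restored file against a SHA-256 manifest captured at backup time, so the drill produces a clear pass/fail result.

```python
import hashlib
import json
import tarfile
import tempfile
from pathlib import Path


def verify_restore(archive, manifest_path):
    """Restore a backup archive into a scratch directory and compare every file's
    SHA-256 digest against the manifest recorded when the backup was taken."""
    expected = json.loads(Path(manifest_path).read_text())  # {relative path: sha256 hex digest}
    failures = []

    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)

        for rel_path, want in expected.items():
            restored = Path(scratch, rel_path)
            got = hashlib.sha256(restored.read_bytes()).hexdigest() if restored.exists() else None
            if got != want:
                failures.append(rel_path)

    return failures  # an empty list means every file restored intact


# Example drill run (paths are placeholders):
# missing = verify_restore("nightly-db.tar.gz", "nightly-db.manifest.json")
# print("recovery exercise passed" if not missing else f"failed files: {missing}")
```

Timing how long the restore and verification take during each drill also gives you a measured recovery time to compare against the expectations agreed with the service consumer.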

One other note: we have found it easier to couple disaster recovery exercises with scheduled maintenance events whenever possible. For example, if you have to bring a portion of your network down for maintenance, test out your application service recovery by moving the impacted services to your disaster recovery site. This is a win-win situation. Recovery processes are validated and you get your maintenance done with minimal impact to your application services.

How is the cloud shaping an IT department’s approach to backup and disaster recovery?
We have found that cloud-based backup and recovery complements our other solutions. Our mainstream disaster recovery solutions have always been on-premise, matching the majority of our applications, which are also on-premise. But the amount of data we manage is growing rapidly and the use of SaaS is increasing. At the same time, core data center services like backup and recovery are always expected to be in place no matter where the application or data is being hosted.

We are finding that cloud-based backup and recovery solutions allow a growing enterprise data center like ours to adapt quickly because we are able to make use of cloud elasticity. With that said, just like many other enterprise data center organizations, we are still learning about the public cloud space. We have begun a journey. We are identifying gaps and opportunities. And we are determining the best fit for the different backup and recovery solutions. I suppose this is true of every big technology shift, but with the move to the cloud the technical aspects actually seem easier to implement than the management aspects.
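For readers who want a concrete starting point, the sketch below shows how a backup artifact might be pushed to an OpenStack Swift container using the python-swiftclient library. It is only an illustration of the general pattern; the Keystone endpoint, credentials, container, and object names are placeholders, not details from Seagate’s environment.

```python
from swiftclient import client as swift

# Hypothetical Keystone v3 credentials and endpoint; substitute your own.
conn = swift.Connection(
    authurl="https://keystone.example.com/v3",
    user="backup-svc",
    key="secret",
    auth_version="3",
    os_options={
        "project_name": "disaster-recovery",
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
)

# Ensure the target container exists, then upload one backup artifact.
conn.put_container("nightly-backups")
with open("db-dump.tar.gz", "rb") as artifact:
    etag = conn.put_object(
        "nightly-backups",
        "2024-01-15/db-dump.tar.gz",
        contents=artifact,
        content_type="application/gzip",
    )
print(f"uploaded, etag={etag}")
```

Because Swift containers grow on demand, the same few lines work whether the nightly upload is gigabytes or terabytes, which is the elasticity referred to above.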

With the cloud come new responsibilities, which drive new teams into operation, which in turn drive different ways of managing the new landscape. This is especially true in our case, where our application services and disaster recovery solutions have historically been hosted on-premise. It is a brave new world out there, to say the least. Lots of challenges and lots of opportunities.
