Back Up Data Correctly to Avoid a Disaster, Even When Disaster Strikes

As the tech landscape evolves, data storage practices need to be carefully considered and amended to meet changing requirements.

The convergence of growing volumes of data stored on company networks and stricter compliance regulations dictating how long that information must be retained has helped cloud storage explode in popularity. The cloud offers an easy-to-use, scalable and cost-effective solution for data storage. However, organizations must seriously consider how they manage that storage from a back-up and disaster recovery perspective. While there is no doubt that cloud computing can speed disaster recovery - reducing the time it takes to restore data and, because information is stored off-site, alleviating the risk posed by natural disasters - incorrectly managed storage can often prove more of a hindrance than a help. Whether an employee accidentally deletes a file or a more sinister hack on the company network takes place, for most organizations it is inevitable that data recovery will be needed at some point. Planning for disaster is essential, and an effective back-up and disaster recovery process can save headaches down the line.

As the tech landscape evolves and organizations increasingly have to adapt to new trends, such as virtualization and unstructured Big Data, data storage practices need to be carefully considered and amended to meet changing requirements. The multitude of available options can leave IT teams struggling to identify the best solution for their organizational needs. Companies often fail to consider future scenarios when making decisions and, instead, focus only on their current needs. This has the potential to cause problems down the line, particularly when it comes to back-up and disaster recovery strategies.

From hardware failure to network hacks, the potential for data loss is huge. A recent survey by independent research firm TechValidate* revealed that significant hardware failures occur far more frequently than many may believe: 52 percent of respondents had seen a failure within the last year and, of that number, 37 percent had suffered the loss within the last six months. However, the same study also revealed that 81 percent of organizations do not have a tried and tested back-up and disaster recovery strategy in place. What is alarming about these statistics is that disaster recovery will be an inevitable requirement at some point for almost every business, yet most have not prepared for the eventuality.

If more than four out of five U.S. companies have not tested their disaster recovery strategies, chances are they will have no idea how long it will take to restore their business-critical data if disaster were to strike. Where data is stored will make all the difference. While storing data all in one place may once have been the norm, this need not be the case with a cloud solution. In fact, storing everything in one environment can contradict a number of the cloud's value propositions, leading to adverse financial and disaster recovery effects. Cloud storage is a relatively cheap commodity, but storing everything - from emails about company social events to key customer information - all in one place can rapidly become expensive, even in the cloud. From a practical point of view, much of the information a company stores will never be looked at again, and while compliance initiatives dictate that data has to be retained for a certain period of time, the location is up to the organization. There is therefore no reason to store everyday essential information in the same location as the ‘never-again' information.

Further, if an outage occurs, any company will need to get its business-critical information back as close to immediately as possible. But if every piece of company information recorded over the last 10 years is being recovered at once, the process will be hindered and take far longer than necessary, or feasible, for business operations. This will not only cause serious headaches for anyone who needs access to the data, but it could also cost millions in lost revenue. Imagine a retail outlet unable to process payments because its server has gone down and cannot be brought back quickly enough, simply because less essential information is being restored at the same time. The revenue lost could be extremely detrimental.

Storing by Importance
A new approach should be considered in order to ascertain where data should be stored. A key element of any disaster recovery plan is "tiering" the data to be recovered based on its overall business importance. This allows resources to be apportioned in line with budget requirements and business impact.

The first step should be deciding which applications and data are business critical and which are not. This allows the data to be grouped by importance and a ‘storage hierarchy' to be put in place. Data that does not need to be accessed frequently can be placed in lower-cost storage that may take days to recover, while business-critical information should be placed in more expensive storage where it can be recovered quickly. In the event that a system restore is necessary, irrelevant information will not slow the process down and everything can be returned at a speed appropriate to its importance.
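The grouping step above can be sketched as a simple scoring function. The tier names, impact scale and recovery-time objectives (RTOs) below are illustrative assumptions, not figures from the article:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    business_impact: int  # 0 (never needed again) .. 10 (business stops without it)

# Hypothetical tier table: names and RTOs in hours are assumptions
# chosen for illustration only.
TIER_RTO_HOURS = {
    "critical": 1,    # expensive storage, near-instant restore
    "important": 24,  # mid-cost storage
    "archive": 72,    # cheap storage where a slow restore is acceptable
}

def assign_tier(ds: Dataset) -> str:
    """Map a dataset to a storage tier by its business-impact score."""
    if ds.business_impact >= 8:
        return "critical"
    if ds.business_impact >= 4:
        return "important"
    return "archive"
```

With thresholds like these, `assign_tier(Dataset("payment-processing", 9))` lands in the "critical" tier and would be restored first, while low-impact material falls through to "archive".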

Most companies will have vast amounts of data, and manually deciding what is stored where would be a laborious process for an individual, or even a team, after the initial segregation has taken place. Therefore, once the hierarchy has been put in place, it can be combined with an automated system that intelligently tracks and tags all data based on predefined rules, and automatically diverts it to the correct location. Not only does this free IT teams to focus on higher-value tasks, it also ensures all data is backed up, without concern that anything may have been missed.
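A minimal sketch of such rule-based routing follows. The patterns, location names and first-match-wins policy are assumptions made for illustration; a real product would offer far richer rules (content inspection, age, owner, and so on):

```python
import fnmatch

# Hypothetical predefined rules: each maps a filename pattern to a
# backup location. First matching rule wins.
RULES = [
    ("*customer*", "fast-tier"),
    ("*invoice*", "fast-tier"),
    ("*social-event*", "cold-tier"),
]
DEFAULT_LOCATION = "cold-tier"  # anything unmatched goes to cheap storage

def route(filename: str) -> str:
    """Return the backup location for a file based on the predefined rules."""
    for pattern, location in RULES:
        if fnmatch.fnmatch(filename.lower(), pattern):
            return location
    return DEFAULT_LOCATION
```

Because every file falls through to `DEFAULT_LOCATION`, nothing goes unbacked-up, which is the guarantee the paragraph above describes.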

With these systems in place, businesses can test and tweak their strategies and be sure that, in the event of an outage, their applications, data and systems are only the touch of a button away. Planning, implementing and testing data recovery techniques help ensure that the disaster itself is the only disaster.

*Survey conducted by independent research firm, TechValidate, December 2012.

More Stories By Bob Davis

With more than 25 years of software marketing and executive management experience, Bob Davis oversees Kaseya's global marketing efforts. He applies significant experience from marketing network and system management solutions to directing Kaseya's strategy, product marketing, branding, public relations, design and social networking functions. One of the original founders of the company, Davis returned to Kaseya in 2010.

